1202.4486
Deterministic Leader Election Among Disoriented Anonymous Sensors
cs.DC cs.MA
We address the Leader Election (LE) problem in networks of anonymous sensors sharing no kind of common coordinate system. Leader Election is a fundamental symmetry breaking problem in distributed computing. Its goal is to assign value 1 (leader) to one of the entities and value 0 (non-leader) to all others. In this paper, assuming n > 1 disoriented anonymous sensors, we provide a complete characterization of the sensors' positions that allows a leader to be elected deterministically, provided that all the sensors' positions are known by every sensor. More precisely, our contribution is twofold: First, assuming n anonymous sensors agreeing on a common handedness (chirality) of their own coordinate systems, we provide a complete characterization of the sensors' positions that allows a leader to be elected deterministically. Second, we also provide such a complete characterization for sensors devoid of a common handedness. Both characterizations rely on a particular object from combinatorics on words, namely Lyndon words.
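Lyndon words, on which both characterizations rest, are easy to state computationally: a word is a Lyndon word if it is strictly smaller, lexicographically, than all of its proper rotations. A minimal illustrative check of that combinatorial object (not the paper's election algorithm itself):

```python
def is_lyndon(s):
    """A word is a Lyndon word iff it is strictly smaller,
    lexicographically, than every proper rotation of itself."""
    return all(s < s[i:] + s[:i] for i in range(1, len(s)))

# "aab" is smaller than its rotations "aba" and "baa":
assert is_lyndon("aab")
# "aba" is not: its rotation "aab" is smaller.
assert not is_lyndon("aba")
```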
1202.4495
Stochastic-Based Pattern Recognition Analysis
cs.CV
In this work we review the basic principles of stochastic logic and propose its application to probabilistic pattern-recognition analysis. The proposed technique is intrinsically a parallel comparison of input data to various pre-stored categories using Bayesian techniques. We design smart pulse-based stochastic-logic blocks to provide an efficient pattern recognition analysis. The proposed architecture is applied to a specific navigation problem. The resulting system is orders of magnitude faster than processor-based solutions.
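The core trick of stochastic logic is that arithmetic on probabilities reduces to bitwise operations on random pulse streams; for instance, ANDing two independent streams multiplies the probabilities they encode. A minimal software sketch of that principle (the pulse-based blocks in the paper are hardware; all names here are illustrative):

```python
import random

def bitstream(p, n, rng):
    """Unipolar stochastic encoding: each bit is 1 with probability p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def estimate(bits):
    """Decode a stream back to a probability: the fraction of 1s."""
    return sum(bits) / len(bits)

rng = random.Random(0)
n = 100_000
a = bitstream(0.8, n, rng)
b = bitstream(0.5, n, rng)
# A single AND gate multiplies independent probabilities: 0.8 * 0.5 = 0.4
prod = [x & y for x, y in zip(a, b)]
assert abs(estimate(prod) - 0.4) < 0.01
```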
1202.4507
A Cryptographic Moving-Knife Cake-Cutting Protocol
cs.GT cs.CR cs.MA
This paper proposes a cake-cutting protocol using cryptography when the cake is a heterogeneous good that is represented by an interval on a real line. Although the Dubins-Spanier moving-knife protocol with one knife achieves simple fairness, all players must execute the protocol synchronously. Thus, the protocol cannot be executed on asynchronous networks such as the Internet. We show that the moving-knife protocol can be executed asynchronously by a discrete protocol using a secure auction protocol. The number of cuts is n-1, where n is the number of players; this is the minimum possible.
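A discrete version of the moving-knife idea can be sketched without the cryptography: each remaining player privately computes the leftmost cut giving them 1/k of their remaining value (the secure auction's role is to compare these bids without revealing them), and the lowest bidder takes the piece. A rough numeric sketch, assuming the cake is [0, 1] and valuations are given as density functions; all helper names are illustrative:

```python
def integral(density, a, b, grid=2_000):
    """Midpoint-rule integral of a valuation density over [a, b]."""
    dx = (b - a) / grid
    return sum(density(a + (j + 0.5) * dx) for j in range(grid)) * dx

def leftmost_cut(density, start, share, tol=1e-6):
    """Leftmost x with value of [start, x] equal to `share` (bisection)."""
    lo, hi = start, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if integral(density, start, mid) < share:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def moving_knife(densities):
    """Each round, every remaining player bids the cut giving them 1/k
    of the remaining cake; the lowest bidder takes [start, bid].
    Exactly n-1 cuts are made for n players."""
    players = list(range(len(densities)))
    start, alloc = 0.0, {}
    while len(players) > 1:
        bids = {i: leftmost_cut(densities[i], start,
                                integral(densities[i], start, 1.0) / len(players))
                for i in players}
        winner = min(bids, key=bids.get)
        alloc[winner] = (start, bids[winner])
        start = bids[winner]
        players.remove(winner)
    alloc[players[0]] = (start, 1.0)
    return alloc
```

With two players holding uniform valuations, each ends up with an interval of length 1/2, made with a single cut.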
1202.4509
Rich Counter-Examples for Temporal-Epistemic Logic Model Checking
cs.LO cs.MA
Model checking verifies that a model of a system satisfies a given property, and otherwise produces a counter-example explaining the violation. The verified properties are formally expressed in temporal logics. Some temporal logics, such as CTL, are branching: they allow one to express facts about the whole computation tree of the model, rather than about each single linear computation. This branching aspect is even more critical when dealing with multi-modal logics, i.e. logics expressing facts about systems with several transition relations. A prominent example is CTLK, a logic that reasons about temporal and epistemic properties of multi-agent systems. In general, model checkers produce linear counter-examples for failed properties, composed of a single computation path of the model. But some branching properties are only poorly and partially explained by a linear counter-example. This paper proposes richer counter-example structures called tree-like annotated counter-examples (TLACEs), for properties in Action-Restricted CTL (ARCTL), an extension of CTL quantifying over paths restricted in terms of the actions labeling transitions of the model. These counter-examples have a branching structure that supports a more complete description of property violations. Elements of these counter-examples are annotated with parts of the property to give a better understanding of their structure. Visualization and browsing of these richer counter-examples become a critical issue, as the number of branches and states can grow exponentially for deeply-nested properties. This paper formally defines the structure of TLACEs, characterizes adequate counter-examples w.r.t. models and failed properties, and gives a generation algorithm for ARCTL properties. It also illustrates the approach with examples in CTLK, using a reduction of CTLK to ARCTL.
The proposed approach has been implemented, first by extending the NuSMV model checker to generate and export branching counter-examples, and second by providing an interactive graphical interface to visualize and browse them.
1202.4532
Conceptual Level Design of Semi-structured Database System: Graph-semantic Based Approach
cs.SE cs.DB
This paper proposes a graph-semantic based conceptual model for semi-structured database systems, called GOOSSDM, to conceptualize the different facets of such systems in the object-oriented paradigm. The model defines a set of graph-based formal constructs, a variety of relationship types with participation constraints, and a rich set of graphical notations to specify the conceptual-level design of a semi-structured database system. The proposed design approach facilitates modeling of irregular, heterogeneous, hierarchical and non-hierarchical semi-structured data at the conceptual level. Moreover, GOOSSDM is capable of modeling XML documents at the conceptual level, with support for document-centric design, ordering and disjunction characteristics. A rule-based mechanism for transforming a GOOSSDM schema into the equivalent XML Schema Definition (XSD) is also proposed. The concepts of the proposed conceptual model have been implemented using the Generic Modeling Environment (GME).
1202.4533
Unified model of voltage/current mode control to predict saddle-node bifurcation
cs.SY math.DS nlin.CD
A unified model of voltage mode control (VMC) and current mode control (CMC) is proposed to predict the saddle-node bifurcation (SNB). Exact SNB boundary conditions are derived, and can be further simplified in various forms for design purposes. Many approaches, including steady-state, sampled-data, average, harmonic balance, and loop gain analyses, are applied to predict SNB. Each approach has its own merits and complements the other approaches.
1202.4534
Bifurcation Boundary Conditions for Switching DC-DC Converters Under Constant On-Time Control
cs.SY math.DS nlin.CD
Sampled-data analysis and harmonic balance analysis are applied to analyze switching DC-DC converters under constant on-time control. Design-oriented boundary conditions for the period-doubling bifurcation and the saddle-node bifurcation are derived. The required ramp slope to avoid the bifurcations and the assigned pole locations associated with the ramp are also derived. The derived boundary conditions are more general and accurate than those recently obtained. Those recently obtained boundary conditions become special cases under the general modeling approach presented in this paper. Different analyses give different perspectives on the system dynamics and complement each other. Under the sampled-data analysis, the boundary conditions are expressed in terms of signal slopes and the ramp slope. Under the harmonic balance analysis, the boundary conditions are expressed in terms of signal harmonics. The derived boundary conditions are useful for a designer to design a converter to avoid the occurrence of the period-doubling bifurcation and the saddle-node bifurcation.
1202.4535
Proceedings First Workshop on CTP Components for Educational Software
cs.SY cs.LO cs.MS cs.SC
The THedu'11 workshop received thirteen submissions, twelve of which were accepted and presented during the workshop. For the post-conference proceedings nine submissions were received and accepted. The submissions are within the scope of the following points, which were announced in the call for papers: CTP-based software tools for education; CTP technology combined with novel interfaces, drag and drop, etc.; technologies to access ITP knowledge relevant for a certain step of problem solving; usability considerations on representing ITP knowledge; combination of deduction and computation; formal problem specifications; effectiveness of ATP in checking user input; formats for deductive content in proof documents, geometric constructions, etc.; formal domain models for e-learning in mathematics and applications.
1202.4537
Sampled-Data and Harmonic Balance Analyses of Average Current-Mode Controlled Buck Converter
cs.SY math.DS nlin.CD
Dynamics and stability of average current-mode control of buck converters are analyzed by sampled-data and harmonic balance analyses. An exact sampled-data model is derived. A new continuous-time model "lifted" from the sampled-data model is also derived, and has a frequency response that matches experimental data reported previously. Orbital stability is studied, and it is found to be unrelated to the ripple size of the current-loop compensator output. An unstable window of the current-loop compensator pole is found by simulations, and it can be accurately predicted by sampled-data and harmonic balance analyses. A new S plot accurately predicting the subharmonic oscillation is proposed. The S plot assists pole assignment and shows the required ramp slope to avoid instability.
1202.4553
MIMO capacity for deterministic channel models: sublinear growth
cs.IT math-ph math.IT math.MP
This is the second paper of the authors in a series concerned with the development of a deterministic model for the transfer matrix of a MIMO system. Starting from the Maxwell equations, we have described in \cite{BCFM} the generic structure of such a deterministic transfer matrix. In the current paper we apply the results of \cite{BCFM} in order to study the (Shannon-Foschini) capacity behavior of a MIMO system as a function of the deterministic spread function of the environment, and the number of transmitting and receiving antennas. The antennas are assumed to fill in a given, fixed volume. Under some generic assumptions, we prove that the capacity grows much more slowly than linearly with the number of antennas. These results reinforce previous heuristic results obtained from statistical models of the transfer matrix, which also predict a sublinear behavior.
1202.4554
On the dynamics of social conflicts: looking for the Black Swan
math-ph cs.SI math.MP physics.soc-ph
This paper deals with the modeling of social competition, possibly resulting in the onset of extreme conflicts. More precisely, we discuss models describing the interplay between individual competition for wealth distribution that, when coupled with political stances coming from support or opposition to a government, may give rise to strongly self-enhanced effects. The latter may be thought of as the early stages of massive, unpredictable events known as Black Swans, although no analysis of any fully-developed Black Swan is provided here. Our approach makes use of the framework of the kinetic theory for active particles, where nonlinear interactions among subjects are modeled according to game-theoretical tools.
1202.4590
Noncontinous additive entropies of partitions
cs.IT math.IT math.PR
In a previous paper (A. Paszkiewicz, T. Sobieszek, Additive Entropies of Partitions) we gave a description of additive partition entropies, that is, real functions $I$ on the set of finite partitions that are additive on stochastically independent partitions in a given probability space. We now present an analogous result, this time without assuming continuity. As a by-product of our efforts we solve a 2-cocycle functional equation for certain subsets of convex cones.
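The additivity property in question can be stated explicitly (a standard formulation; the Shannon entropy $H(\mathcal{P}) = -\sum_{A \in \mathcal{P}} \Pr(A)\log \Pr(A)$ is the best-known function satisfying it):

```latex
I(\mathcal{P} \vee \mathcal{Q}) \;=\; I(\mathcal{P}) + I(\mathcal{Q})
\quad \text{whenever} \quad
\Pr(A \cap B) = \Pr(A)\,\Pr(B)
\ \text{ for all } A \in \mathcal{P},\ B \in \mathcal{Q},
```

where $\mathcal{P} \vee \mathcal{Q}$ denotes the common refinement $\{A \cap B : A \in \mathcal{P},\ B \in \mathcal{Q}\}$.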
1202.4591
Additive Entropies of Partitions
cs.IT math.IT math.PR
We provide, under minimal continuity assumptions, a description of \textsl{additive partition entropies}. They are real functions $I$ on the set of finite partitions that are additive on stochastically independent partitions in a given probability space.
1202.4596
Compressive Principal Component Pursuit
cs.IT math.IT
We consider the problem of recovering a target matrix that is a superposition of low-rank and sparse components, from a small set of linear measurements. This problem arises in compressed sensing of structured high-dimensional signals such as videos and hyperspectral images, as well as in the analysis of transformation invariant low-rank recovery. We analyze the performance of the natural convex heuristic for solving this problem, under the assumption that measurements are chosen uniformly at random. We prove that this heuristic exactly recovers low-rank and sparse terms, provided the number of observations exceeds the number of intrinsic degrees of freedom of the component signals by a polylogarithmic factor. Our analysis introduces several ideas that may be of independent interest for the more general problem of compressed sensing and decomposing superpositions of multiple structured signals.
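The "natural convex heuristic" referred to can be written, in the notation common to this literature (with $\lambda$ a trade-off parameter and $\mathcal{P}_Q$ the projection onto the random measurement subspace $Q$):

```latex
\min_{L,\,S}\ \|L\|_* + \lambda \|S\|_1
\quad \text{subject to} \quad
\mathcal{P}_Q[L + S] = \mathcal{P}_Q[M],
```

where $\|\cdot\|_*$ is the nuclear norm (promoting a low-rank $L$) and $\|\cdot\|_1$ is the entrywise $\ell_1$ norm (promoting a sparse $S$).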
1202.4661
Delay Asymptotics with Retransmissions and Incremental Redundancy Codes over Erasure Channels
cs.IT cs.PF math.IT
Recent studies have shown that retransmissions can cause heavy-tailed transmission delays even when packet sizes are light-tailed. Moreover, the impact of heavy-tailed delays persists even when packet sizes are upper bounded. The key question we study in this paper is how the use of coding techniques to transmit information, together with different system configurations, affects the distribution of delay. To investigate this problem, we model the underlying channel as a Markov-modulated binary erasure channel, where transmitted bits are either received successfully or erased. Erasure codes are used to encode information prior to transmission, which ensures that a fixed fraction of the bits in the codeword suffices for successful decoding. We use incremental redundancy codes, where the codeword is divided into chunks and these chunks are transmitted one at a time to provide incremental redundancy to the receiver until the information is recovered. We characterize the distribution of delay under two different scenarios: (I) the decoder uses memory to cache all previously successfully received bits; (II) the decoder does not use memory, and received bits are discarded if the corresponding information cannot be decoded. In both cases, we consider codeword lengths with infinite and finite support. From a theoretical perspective, our results provide a benchmark to quantify the tradeoff between system complexity and the distribution of delay.
1202.4664
Super-FEC Codes for 40/100 Gbps Networking
cs.IT math.IT
This paper presents a simple approach to evaluate the performance bound at very low bit-error-rate (BER) range for binary pseudo-product codes and true-product codes. Moreover it introduces a super-product BCH code that can achieve near-Shannon limit performance with very low decoding complexity. This work has been accepted by IEEE Communications Letters for future publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
1202.4707
A para-model agent for dynamical systems
math.OC cs.SY
Consider a dynamical system $u \mapsto x, \dot{x} = f_{nl}(x,u)$ where $f_{nl}$ is a nonlinear (convex or nonconvex) function, or a combination of nonlinear functions that may switch. We present, in this preliminary work, a generalization of standard model-free control that can either control the dynamical system, given an output reference trajectory, or optimize the dynamical system as a derivative-free, optimization-based "extremum-seeking" procedure. Multiple applications are presented and the robustness of the proposed method is studied in simulation.
1202.4720
Non-Stationary Random Process for Large-Scale Failure and Recovery of Power Distributions
cs.SY
A key objective of the smart grid is to improve reliability of utility services to end users. This requires strengthening resilience of distribution networks that lie at the edge of the grid. However, distribution networks are exposed to external disturbances such as hurricanes and snow storms where electricity service to customers is disrupted repeatedly. External disturbances cause large-scale power failures that are neither well-understood, nor formulated rigorously, nor studied systematically. This work studies resilience of power distribution networks to large-scale disturbances in three aspects. First, a non-stationary random process is derived to characterize an entire life cycle of large-scale failure and recovery. Second, resilience is defined based on the non-stationary random process. Closed-form analytical expressions are derived under specific large-scale failure scenarios. Third, the non-stationary model and the resilience metric are applied to a real life example of large-scale disruptions due to Hurricane Ike. Real data on large-scale failures from an operational network is used to learn time-varying model parameters and resilience metrics.
1202.4736
Diversity of MIMO Linear Precoding
cs.IT math.IT
Linear precoding is a relatively simple method of MIMO signaling that can also be optimal in certain special cases. This paper is dedicated to high-SNR analysis of MIMO linear precoding. The Diversity-Multiplexing Tradeoff (DMT) of a number of linear precoders is analyzed. Furthermore, since the diversity at finite rate (also known as the fixed-rate regime, corresponding to multiplexing gain of zero) does not always follow from the DMT, linear precoders are also analyzed for their diversity at fixed rates. In several cases, the diversity at multiplexing gain of zero is found not to be unique, but rather to depend on spectral efficiency. The analysis includes the zero-forcing (ZF), regularized ZF, matched filtering and Wiener filtering precoders. We calculate the DMT of ZF precoding under two common design approaches, namely maximizing the throughput and minimizing the transmit power. It is shown that regularized ZF (RZF) or Matched filter (MF) suffer from error floors for all positive multiplexing gains. However, in the fixed rate regime, RZF and MF precoding achieve full diversity up to a certain spectral efficiency and zero diversity at rates above it. When the regularization parameter in the RZF is optimized in the MMSE sense, the structure is known as the Wiener precoder which in the fixed-rate regime is shown to have diversity that depends not only on the number of antennas, but also on the spectral efficiency. The diversity in the presence of both precoding and equalization is also analyzed.
1202.4743
Real-time detection and tracking of multiple objects with partial decoding in H.264/AVC bitstream domain
cs.MM cs.CV
In this paper, we show that probabilistic spatiotemporal macroblock filtering (PSMF) and partial decoding processes can be applied to effectively detect and track multiple objects in real time in H.264|AVC bitstreams with a stationary background. Our contribution is that our method not only achieves fast processing times but also handles multiple moving objects that are articulated, changing in size, or internally of monotonous color, even though they contain a chaotic set of non-homogeneous motion vectors inside. In addition, our partial decoding process for H.264|AVC bitstreams makes it possible to improve the accuracy of object trajectories and overcome long occlusions by using extracted color information.
1202.4805
Fast Generation of Large Scale Social Networks with Clustering
cs.SI physics.soc-ph
A key challenge within the social network literature is the problem of network generation - that is, how can we create synthetic networks that match characteristics traditionally found in most real world networks? Important characteristics that are present in social networks include a power law degree distribution, small diameter and large amounts of clustering; however, most current network generators, such as the Chung Lu and Kronecker models, largely ignore the clustering present in a graph and choose to focus on preserving other network statistics, such as the power law distribution. Models such as the exponential random graph model have a transitivity parameter, but are computationally difficult to learn, making scaling to large real world networks intractable. In this work, we propose an extension to the Chung Lu random graph model, the Transitive Chung Lu (TCL) model, which incorporates the notion of a random transitive edge. That is, with some probability it will choose to connect to a node exactly two hops away, having been introduced to a 'friend of a friend'. In all other cases it will follow the standard Chung Lu model, selecting a 'random surfer' from anywhere in the graph according to the given invariant distribution. We prove TCL's expected degree distribution is equal to the degree distribution of the original graph, while being able to capture the clustering present in the network. The single parameter required by our model can be learned in seconds on graphs with millions of edges, while networks can be generated in time that is linear in the number of edges. We demonstrate the performance of TCL on four real-world social networks, including an email dataset with hundreds of thousands of nodes and millions of edges, showing TCL generates graphs that match the degree distribution, clustering coefficients and hop plots of the original networks.
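The edge-placement rule of TCL is simple enough to sketch directly: with probability rho the second endpoint is a 'friend of a friend' two hops from the first, otherwise it is drawn degree-weighted as in Chung Lu. A minimal, unoptimized sketch under those assumptions (parameter names and sampling details are illustrative, not the authors' implementation):

```python
import random
from collections import defaultdict

def transitive_chung_lu(degrees, rho, num_edges, rng):
    """TCL-style generator sketch: with probability rho the second
    endpoint is a friend of a friend (two hops away); otherwise both
    endpoints are drawn with probability proportional to degree."""
    nodes = list(range(len(degrees)))
    adj, edges = defaultdict(set), set()

    def degree_weighted():
        return rng.choices(nodes, weights=degrees, k=1)[0]

    while len(edges) < num_edges:
        u = degree_weighted()
        v = None
        if rng.random() < rho and adj[u]:
            friend = rng.choice(sorted(adj[u]))
            two_hops = adj[friend] - {u}
            if two_hops:
                v = rng.choice(sorted(two_hops))  # friend of a friend
        if v is None:
            v = degree_weighted()                 # plain Chung Lu step
        if u != v:
            edges.add((min(u, v), max(u, v)))
            adj[u].add(v)
            adj[v].add(u)
    return adj
```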
1202.4815
Data Mining Applications: A comparative Study for Predicting Student's performance
cs.IR cs.DB
Knowledge Discovery and Data Mining (KDD) is a multidisciplinary area focusing upon methodologies for extracting useful knowledge from data, and there are several useful KDD tools for extracting this knowledge. This knowledge can be used to increase the quality of education. However, educational institutions do not apply any knowledge discovery process to these data. Data mining can be used for decision making in educational systems. A decision tree classifier is one of the most widely used supervised learning methods for data exploration, based on the divide-and-conquer technique. This paper discusses the use of decision trees in educational data mining. Decision tree algorithms are applied to students' past performance data to generate a model, and this model can be used to predict students' performance. It helps in identifying dropouts and students who need special attention earlier, and allows the teacher to provide appropriate advising/counseling.
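The heart of any decision tree learner is the split criterion. A minimal, library-free sketch of entropy-based information gain on toy student records (the attributes and data are invented for illustration; a real system would use a full learner such as C4.5 on actual academic records):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction from splitting the rows on column `attr`."""
    n = len(rows)
    split = {}
    for row, y in zip(rows, labels):
        split.setdefault(row[attr], []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in split.values())
    return entropy(labels) - remainder

# Toy student records: (attendance, internal_marks) -> final result.
rows = [("high", "good"), ("high", "poor"), ("low", "good"), ("low", "poor")]
labels = ["pass", "pass", "fail", "fail"]
# Attendance (column 0) perfectly predicts the label here, so it is
# chosen as the root split; marks (column 1) carries no information.
best = max(range(2), key=lambda a: information_gain(rows, labels, a))
assert best == 0
```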
1202.4818
Association Rule Mining Based On Trade List
cs.DB
In this paper a new mining algorithm based on frequent itemsets is defined. The Apriori algorithm scans the database every time it looks for frequent itemsets, so it is very time consuming, and at each step it generates candidate itemsets, which for large databases takes a lot of space to store. The undirected item-set graph improves on Apriori, but it takes time and space for tree generation. The proposed algorithm scans the database only once at the start, and from that scan it generates the Trade List, which contains the information of the whole database. Given a minimum support it finds the frequent itemsets, and given a minimum confidence it generates the association rules. If the database or the minimum support changes, the new algorithm finds the new frequent itemsets by scanning the Trade List. That is why its execution efficiency is improved distinctly compared to the traditional algorithm.
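The single-scan idea can be illustrated with a vertical (tid-list) layout, one standard way to realize it: a single pass maps each item to the set of transactions containing it, and all later support counting is done by intersecting those sets, never rescanning the database. A rough sketch in that spirit (this is not the paper's exact Trade List data structure):

```python
from itertools import combinations

def build_tidlists(transactions):
    """Single database scan: map each item to the set of transaction
    ids containing it (a vertical layout, built once up front)."""
    tidlists = {}
    for tid, items in enumerate(transactions):
        for item in items:
            tidlists.setdefault(item, set()).add(tid)
    return tidlists

def frequent_itemsets(transactions, min_support):
    """Level-wise mining over tid-list intersections; the database
    itself is never rescanned after the initial pass."""
    tidlists = build_tidlists(transactions)
    freq = {frozenset([i]): t for i, t in tidlists.items()
            if len(t) >= min_support}
    level, result = list(freq.items()), dict(freq)
    while level:
        nxt = {}
        for (a, ta), (b, tb) in combinations(level, 2):
            cand, tids = a | b, ta & tb
            if len(cand) == len(a) + 1 and len(tids) >= min_support:
                nxt[cand] = tids
        level = list(nxt.items())
        result.update(nxt)
    return {s: len(t) for s, t in result.items()}
```

An association rule X => Y then has confidence support(X ∪ Y) / support(X), computable from the same tid-lists without touching the database again.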
1202.4828
Towards an Intelligent Tutor for Mathematical Proofs
cs.AI cs.LO cs.MS cs.SC
Computer-supported learning is an increasingly important form of study since it allows for independent learning and individualized instruction. In this paper, we discuss a novel approach to developing an intelligent tutoring system for teaching textbook-style mathematical proofs. We characterize the particularities of the domain and discuss common ITS design models. Our approach is motivated by phenomena found in a corpus of tutorial dialogs that were collected in a Wizard-of-Oz experiment. We show how an intelligent tutor for textbook-style mathematical proofs can be built on top of an adapted assertion-level proof assistant by reusing representations and proof search strategies originally developed for automated and interactive theorem proving. The resulting prototype was successfully evaluated on a corpus of tutorial dialogs and yields good results.
1202.4835
Isabelle/PIDE as Platform for Educational Tools
cs.LO cs.AI cs.MS
The Isabelle/PIDE platform addresses the question whether proof assistants of the LCF family are suitable as technological basis for educational tools. The traditionally strong logical foundations of systems like HOL, Coq, or Isabelle have so far been counter-balanced by somewhat inaccessible interaction via the TTY (or minor variations like the well-known Proof General / Emacs interface). Thus the fundamental question of math education tools with fully-formal background theories has often been answered negatively due to accidental weaknesses of existing proof engines. The idea of "PIDE" (which means "Prover IDE") is to integrate existing provers like Isabelle into a larger environment, that facilitates access by end-users and other tools. We use Scala to expose the proof engine in ML to the JVM world, where many user-interfaces, editor frameworks, and educational tools already exist. This shall ultimately lead to combined mathematical assistants, where the logical engine is in the background, without obstructing the view on applications of formal methods, formalized mathematics, and math education in particular.
1202.4837
The GF Mathematics Library
cs.MS cs.CL
This paper presents the Mathematics Grammar Library, a system for multilingual mathematical text processing. We explain the context in which it originated, its current design and functionality, and the current development goals. We also present two prototype services and comment on possible future applications in the area of artificial mathematics assistants.
1202.4856
Improved Linear Precoding over Block Diagonalization in Multi-cell Cooperative Networks
cs.IT math.IT
In downlink multiuser multiple-input multiple-output (MIMO) systems, block diagonalization (BD) is a practical linear precoding scheme which achieves the same degrees of freedom (DoF) as the optimal linear/nonlinear precoding schemes. However, its sum-rate performance is rather poor in the practical SNR regime due to the transmit power boost problem. In this paper, we propose an improved linear precoding scheme over BD with a so-called "effective-SNR-enhancement" technique. The transmit covariance matrices are obtained by firstly solving a power minimization problem subject to the minimum rate constraint achieved by BD, and then properly scaling the solution to satisfy the power constraints. It is proved that such approach equivalently enhances the system SNR, and hence compensates the transmit power boost problem associated with BD. The power minimization problem is in general non-convex. We therefore propose an efficient algorithm that solves the problem heuristically. Simulation results show significant sum rate gains over the optimal BD and the existing minimum mean square error (MMSE) based precoding schemes.
1202.4871
Multilevel Image Encryption
cs.CR cs.CV
With the fast evolution of digital data exchange and increased usage of multimedia images, it is essential to protect confidential image data from unauthorized access. In natural images the values and positions of neighbouring pixels are strongly correlated. The method proposed in this paper breaks this correlation, increasing the entropy of pixel positions and of pixel values using block shuffling and encryption by a chaotic sequence, respectively. The plain-image is initially shuffled row-wise and a first level of encryption is performed using an addition modulo operation. The image is divided into blocks, block-based shuffling is performed using the Arnold cat transformation, and the blocks are then uniformly scrambled across the image. Finally the shuffled image undergoes a second level of encryption by a bitwise XOR operation, and the image as a whole is shuffled column-wise to produce the ciphered image for transmission. The experimental results show that the proposed algorithm can successfully encrypt and decrypt the image with the secret keys, and the analysis of the algorithm also demonstrates that the encrypted image has good information entropy and low correlation coefficients.
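Two of the building blocks named above are easy to sketch: the Arnold cat transformation that scrambles pixel positions, and an XOR layer keyed by a chaotic (logistic-map) sequence. A toy sketch on a small square image (the key parameters are illustrative, not the paper's):

```python
def arnold_cat(img, n):
    """One iteration of the Arnold cat map on an n x n image:
    pixel (x, y) moves to ((x + y) mod n, (x + 2y) mod n).
    The map is a bijection, so positions are shuffled losslessly."""
    out = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            out[(x + 2 * y) % n][(x + y) % n] = img[y][x]
    return out

def logistic_keystream(x0, r, length):
    """Byte keystream from the chaotic logistic map x -> r*x*(1-x)."""
    ks, x = [], x0
    for _ in range(length):
        x = r * x * (1 - x)
        ks.append(int(x * 256) % 256)
    return ks

def xor_layer(pixels, keystream):
    """Bitwise-XOR encryption layer; applying it twice decrypts."""
    return [p ^ k for p, k in zip(pixels, keystream)]
```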
1202.4905
A Bi-Directional Refinement Algorithm for the Calculus of (Co)Inductive Constructions
cs.LO cs.AI
The paper describes the refinement algorithm for the Calculus of (Co)Inductive Constructions (CIC) implemented in the interactive theorem prover Matita. The refinement algorithm is in charge of giving a meaning to the terms, types and proof terms directly written by the user or generated by using tactics, decision procedures or general automation. The terms are written in an "external syntax" meant to be user-friendly, allowing omission of information, untyped binders and a certain liberal use of user-defined sub-typing. The refiner modifies the terms to obtain related well-typed terms in the internal syntax understood by the kernel of the ITP. In particular, it acts as a type inference algorithm when all the binders are untyped. The proposed algorithm is bi-directional: given a term in external syntax and a type expected for the term, it propagates as much typing information as possible towards the leaves of the term. Traditional mono-directional algorithms, instead, proceed in a bottom-up way by inferring the type of a sub-term and comparing (unifying) it with the type expected by its context only at the end. We propose some novel bi-directional rules for CIC that are particularly effective. Among the benefits of bi-directionality we have better error message reporting and better inference of dependent types. Moreover, thanks to bi-directionality, the coercion system for sub-typing is more effective and type inference generates simpler unification problems that are more likely to be solved by the inherently incomplete higher order unification algorithms implemented. Finally we introduce in the external syntax the notion of vector of placeholders, which enables omitting an arbitrary number of arguments at once. Vectors of placeholders allow a trivial implementation of implicit arguments and greatly simplify the implementation of primitive and simple tactics.
1202.4910
Distributed Private Heavy Hitters
cs.DS cs.CR cs.DB
In this paper, we give efficient algorithms and lower bounds for solving the heavy hitters problem while preserving differential privacy in the fully distributed local model. In this model, there are n parties, each of which possesses a single element from a universe of size N. The heavy hitters problem is to find the identity of the most common element shared amongst the n parties. In the local model, there is no trusted database administrator, and so the algorithm must interact with each of the $n$ parties separately, using a differentially private protocol. We give tight information-theoretic upper and lower bounds on the accuracy to which this problem can be solved in the local model (giving a separation between the local model and the more common centralized model of privacy), as well as computationally efficient algorithms even in the case where the data universe N may be exponentially large.
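In the local model the standard primitive is (generalized) randomized response: each party perturbs its own element before sending it, and the aggregator debiases the counts. A minimal sketch of that pipeline (illustrative only, not the paper's protocol, whose upper and lower bounds are information-theoretically tight):

```python
import math
import random
from collections import Counter

def randomize(value, universe, eps, rng):
    """Generalized randomized response, run locally by each party:
    report the true value with probability e^eps / (e^eps + N - 1),
    otherwise a uniformly random other value. No trusted curator
    ever sees the raw data."""
    N = len(universe)
    p = math.exp(eps) / (math.exp(eps) + N - 1)
    if rng.random() < p:
        return value
    return rng.choice([u for u in universe if u != value])

def heavy_hitter(reports, universe, eps):
    """Debias the noisy report counts and return the estimated
    most common element."""
    N, n = len(universe), len(reports)
    p = math.exp(eps) / (math.exp(eps) + N - 1)
    q = (1 - p) / (N - 1)
    counts = Counter(reports)
    estimates = {u: (counts[u] - n * q) / (p - q) for u in universe}
    return max(estimates, key=estimates.get)
```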
1202.4943
A new hybrid JPEG image compression scheme using a symbol reduction technique
cs.MM cs.CV
Lossy JPEG compression is a widely used compression technique. The standard JPEG technique uses three processes: mapping, which reduces interpixel redundancy; quantization, which is lossy; and entropy encoding, which is considered lossless. In this paper, a new technique is proposed that combines the JPEG algorithm with a symbol-reduction Huffman technique to achieve a higher compression ratio. The symbol-reduction technique reduces the number of symbols by combining symbols together to form new symbols. As a result, the number of Huffman codes to be generated is also reduced. The scheme is simple, fast and easy to implement. The results show that the performance of the standard JPEG method can be improved by the proposed method. This hybrid approach achieves about 20% more compression than standard JPEG.
1202.4959
Lossy Source Coding via Spatially Coupled LDGM Ensembles
cs.IT cond-mat.stat-mech math.IT
We study a new encoding scheme for lossy source compression based on spatially coupled low-density generator-matrix codes. We develop a belief-propagation guided-decimation algorithm, and show that this algorithm allows one to approach the optimal distortion of spatially coupled ensembles. Moreover, using the survey propagation formalism, we also observe that the optimal distortions of the spatially coupled and individual code ensembles are the same. Since regular low-density generator-matrix codes are known to achieve the Shannon rate-distortion bound under optimal encoding as the degrees grow, our results suggest that spatial coupling can be used to reach the rate-distortion bound under a low-complexity belief-propagation guided-decimation algorithm. This problem is analogous to the MAX-XORSAT problem in computer science.
1202.4961
Strongly universal string hashing is fast
cs.DB cs.DS
We present fast strongly universal string hashing families: they can process data at a rate of 0.2 CPU cycles per byte. Perhaps surprisingly, we find that these families---though they require a large buffer of random numbers---are often faster than popular hash functions with weaker theoretical guarantees. Moreover, conventional wisdom is that hash functions with fewer multiplications are faster. Yet we find that they may fail to be faster due to operation pipelining. We present experimental results on several processors including low-powered processors. Our tests include hash functions designed for processors with the Carry-Less Multiplication (CLMUL) instruction set. We also prove, using accessible proofs, the strong universality of our families.
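For intuition, the multilinear construction underlying such families can be sketched as h(s) = (r0 + sum_i r_i s_i) mod p with independently random coefficients r_i (a simplified illustration; the paper's implementations are heavily optimized, and this sketch makes no speed claims):

```python
import random

P = (1 << 61) - 1  # a Mersenne prime, a convenient modulus for this sketch

def make_multilinear_hash(max_len, seed=42):
    """Draw random coefficients once; the returned function is one member of the family."""
    rng = random.Random(seed)
    coeffs = [rng.randrange(P) for _ in range(max_len + 1)]
    def h(data: bytes):
        assert len(data) <= max_len, "input longer than the family supports"
        acc = coeffs[0]                       # r0 makes the family strongly universal
        for i, b in enumerate(data):
            acc = (acc + coeffs[i + 1] * b) % P
        return acc
    return h
```

Strong universality means that for any two distinct strings the pair of hash values is uniformly distributed over the random choice of coefficients; the large coefficient buffer the abstract mentions corresponds to `coeffs` here.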
1202.4974
How Clustering Affects Epidemics in Random Networks
math.PR cs.SI physics.soc-ph
Motivated by the analysis of social networks, we study a model of random networks that has both a given degree distribution and a tunable clustering coefficient. We consider two types of growth processes on these graphs: diffusion and the symmetric threshold model. The diffusion process is inspired by epidemic models. It is characterized by an infection probability, each neighbor transmitting the epidemic independently. In the symmetric threshold process, the interactions are still local but the propagation rule is governed by a threshold (that might vary among the different nodes). An interesting example of a symmetric threshold process is the contagion process, which is inspired by a simple coordination game played on the network. Both types of processes have been used to model the spread of new ideas, technologies, viruses or worms, and results have been obtained for random graphs with no clustering. In this paper, we are able to analyze the impact of clustering on the growth processes. While clustering inhibits the diffusion process, its impact on the contagion process is more subtle and depends on the connectivity of the graph: in a low connectivity regime, clustering also inhibits the contagion, while in a high connectivity regime, clustering favors the appearance of global cascades but reduces their size. For both diffusion and symmetric threshold models, we characterize conditions under which global cascades are possible and compute their size explicitly, as a function of the degree distribution and the clustering coefficient. Our results are applied to regular or power-law graphs with exponential cutoff and shed new light on the impact of clustering.
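The diffusion process described here, in which each neighbor transmits the epidemic independently with a fixed infection probability, can be simulated with a short percolation-style breadth-first search (an illustrative sketch, not the paper's analytical machinery):

```python
import random
from collections import deque

def simulate_diffusion(adj, seed_node, p, rng):
    """Spread an epidemic from seed_node; each edge transmits independently with probability p.

    adj maps each node to the set of its neighbors; returns the final infected set.
    """
    infected = {seed_node}
    queue = deque([seed_node])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            # each (u, v) contact is an independent Bernoulli(p) transmission attempt
            if v not in infected and rng.random() < p:
                infected.add(v)
                queue.append(v)
    return infected
```

Running this over many seeds and many random clustered graphs is the simulation counterpart of the cascade-size formulas derived in the paper.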
1202.5014
Two-way Interference Channels
cs.IT math.IT
We consider two-way interference channels (ICs) where forward and backward channels are ICs but not necessarily the same. We first consider a scenario where there are only two forward messages and feedback is offered through the backward IC for aiding forward-message transmission. For a linear deterministic model of this channel, we develop inner and outer bounds that match for a wide range of channel parameters. We find that the backward IC can be used more efficiently for feedback than for sending its own independent backward messages. As a consequence, we show that feedback can provide a net increase in capacity even when the cost of feedback is taken into consideration. Moreover, we extend this to a more general scenario with two additional independent backward messages, from which we find that interaction can provide an arbitrarily large gain in capacity.
1202.5041
Information flow in a network model and the law of diminishing marginal returns
physics.data-an cs.SI physics.soc-ph q-bio.NC
We analyze a simple dynamical network model which describes the limited capacity of nodes to process the input information. For a suitable choice of the parameters, the information flow pattern is characterized by exponential distribution of the incoming information and a fat-tailed distribution of the outgoing information, as a signature of the law of diminishing marginal returns. The analysis of a real EEG data-set shows that similar phenomena may be relevant for brain signals.
1202.5110
ISAR Image Formation Using Sequential Minimization of L0 and L2 Norms
cs.IT math.IT
A sparsity-driven algorithm of inverse synthetic aperture radar (ISAR) imaging is proposed. Based on the parametric sparse representation of the received ISAR signal, the problem of ISAR image formation is converted into the joint estimation of the target rotation rate and the sparse power distribution in the spatial domain. This goal is achieved by sequential minimization of L0 and L2 norms, which ensure the sparsest ISAR image and the minimum recovery error, respectively.
1202.5187
Sphere Decoding for Spatial Modulation Systems with Arbitrary Nt
cs.IT math.IT
Recently, three Sphere Decoding (SD) algorithms were proposed for the Spatial Modulation (SM) scheme, which focus on reducing the transmit-, receive-, and both transmit- and receive-search spaces at the receiver; they were termed the Receiver-centric SD (Rx-SD), Transmitter-centric SD (Tx-SD), and Combined SD (C-SD) detectors, respectively. The Tx-SD detector was proposed for systems with Nt \leq Nr, where Nt and Nr are the numbers of transmit and receive antennas of the system. In this paper, we show that the existing Tx-SD detector is not limited to systems with Nt \leq Nr but can be used with systems where Nr < Nt \leq 2Nr - 1 as well. We refer to this detector as the Extended Tx-SD (E-Tx-SD) detector. Further, we propose an E-Tx-SD based detection scheme for SM systems with arbitrary Nt by exploiting the Inter-Channel Interference (ICI) free property of SM systems. Our simulation results show that the proposed detectors are ML-optimal and offer significantly reduced complexity.
1202.5198
Network Theory, Cracking and Frictional Sliding
physics.geo-ph cs.CE nlin.AO
We have developed different network approaches to the complex patterns of frictional interfaces (the development of contact areas). Here, we analyze the dynamics of static friction. Under the correlation measure, we find that the fraction of triangles correlates with the detachment fronts. Moreover, for all types of loops (such as triangles), there is a universal power law between node degree and motifs, with motif frequencies following a power law. This shows that high energy localization is characterized by fast variation of the loop fraction, and that loops congest around hubs. Furthermore, the motif distributions and the modularity space of the networks (in terms of within-module degree and participation coefficient) show universal trends, indicating a common aspect of energy flow in shear ruptures. Moreover, we confirm that slow ruptures generally exhibit little localization, while regular ruptures carry a high level of energy localization. We propose that assortativity, as an index of the correlation of node degrees, can uncover acoustic features of the interfaces. We show that increasing assortativity induces a nearly silent period of fault activity. We also propose that slow ruptures result from within-module developments of the networks rather than from extra-module ones. Our approach presents a completely new perspective on the evolution of shear ruptures.
1202.5202
Secure Compressed Reading in Smart Grids
cs.IT cs.PF math.IT
Smart Grids measure energy usage in real-time and tailor supply and delivery accordingly, in order to improve power transmission and distribution. For the grids to operate effectively, it is critical to collect readings from massively-installed smart meters and deliver them to control centers in an efficient and secure manner. In this paper, we propose a secure compressed reading scheme to address this critical issue. We observe that the real-world meter data we collected exhibit strong temporal correlations, indicating that they are sparse in certain domains. We adopt the Compressed Sensing technique to exploit this sparsity and design an efficient meter data transmission scheme. Our scheme achieves the substantial efficiency offered by compressed sensing without the need to know beforehand in which domain the meter data are sparse. This is in contrast to traditional compressed-sensing based schemes, where such sparse-domain information is required a priori. We then design a dependable scheme to work with our compressed-sensing based data transmission scheme to make meter reading reliable and secure. We provide performance guarantees for the correctness, efficiency, and security of our proposed scheme. Through analysis and simulations, we demonstrate the effectiveness of our schemes and compare their performance to prior art.
1202.5216
Identifying Discriminating Network Motifs in YouTube Spam
cs.SI
Like other social media websites, YouTube is not immune from the attention of spammers. In particular, evidence can be found of attempts to attract users to malicious third-party websites. As this type of spam is often associated with orchestrated campaigns, it has a discernible network signature, based on networks derived from comments posted by users to videos. In this paper, we examine examples of different YouTube spam campaigns of this nature, and use a feature selection process to identify network motifs that are characteristic of the corresponding campaign strategies. We demonstrate how these discriminating motifs can be used as part of a network motif profiling process that tracks the activity of spam user accounts over time, enabling the process to scale to larger networks.
1202.5230
Triadic Measures on Graphs: The Power of Wedge Sampling
cs.SI cs.DM
Graphs are used to model interactions in a variety of contexts, and there is a growing need to quickly assess the structure of a graph. Some of the most useful graph metrics, especially those measuring social cohesion, are based on triangles. Despite the importance of these triadic measures, associated algorithms can be extremely expensive. We propose a new method based on wedge sampling. This versatile technique allows for the fast and accurate approximation of all current variants of clustering coefficients and enables rapid uniform sampling of the triangles of a graph. Our methods come with provable and practical time-approximation tradeoffs for all computations. We provide extensive results that show our methods are orders of magnitude faster than the state-of-the-art, while providing nearly the accuracy of full enumeration. Our results will enable more wide-scale adoption of triadic measures for analysis of extremely large graphs, as demonstrated on several real-world examples.
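The basic estimator is easy to sketch (illustrative only; the paper supplies the provable time-approximation tradeoffs): sample wedges, i.e. paths of length two, with probability proportional to the number of wedges centered at each node, and report the fraction that close into triangles:

```python
import random

def wedge_sampling_cc(adj, num_samples, rng):
    """Estimate the global clustering coefficient by uniform wedge sampling.

    adj maps each node to the set of its neighbors.
    """
    nodes = [u for u in adj if len(adj[u]) >= 2]
    # a node of degree d is the center of d*(d-1)/2 wedges
    weights = [len(adj[u]) * (len(adj[u]) - 1) // 2 for u in nodes]
    closed = 0
    for _ in range(num_samples):
        u = rng.choices(nodes, weights=weights)[0]   # pick a wedge center
        a, b = rng.sample(sorted(adj[u]), 2)         # pick two distinct neighbors
        if b in adj[a]:                              # does the wedge close into a triangle?
            closed += 1
    return closed / num_samples
```

The estimate converges to (3 x triangles) / wedges with error bars independent of the graph size, which is the source of the speedups the abstract reports.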
1202.5249
On the Power of Manifold Samples in Exploring Configuration Spaces and the Dimensionality of Narrow Passages
cs.RO cs.CG
We extend our study of Motion Planning via Manifold Samples (MMS), a general algorithmic framework that combines geometric methods for the exact and complete analysis of low-dimensional configuration spaces with sampling-based approaches that are appropriate for higher dimensions. The framework explores the configuration space by taking samples that are entire low-dimensional manifolds of the configuration space capturing its connectivity much better than isolated point samples. The contributions of this paper are as follows: (i) We present a recursive application of MMS in a six-dimensional configuration space, enabling the coordination of two polygonal robots translating and rotating amidst polygonal obstacles. In the adduced experiments for the more demanding test cases MMS clearly outperforms PRM, with over 20-fold speedup in a coordination-tight setting. (ii) A probabilistic completeness proof for the most prevalent case, namely MMS with samples that are affine subspaces. (iii) A closer examination of the test cases reveals that MMS has, in comparison to standard sampling-based algorithms, a significant advantage in scenarios containing high-dimensional narrow passages. This provokes a novel characterization of narrow passages which attempts to capture their dimensionality, an attribute that had been (to a large extent) unattended in previous definitions.
1202.5259
Sequential Coding of Markov Sources over Burst Erasure Channels
cs.IT math.IT
We study sequential coding of Markov sources under an error propagation constraint. An encoder sequentially compresses a sequence of vector-sources that are spatially i.i.d. but temporally correlated according to a first-order Markov process. The channel erases up to B packets in a single burst, but reveals all other packets to the destination. The destination is required to reproduce all the source-vectors instantaneously and in a lossless manner, except those sequences that occur in an error propagation window of length B + W following the start of the erasure burst. We define the rate-recovery function R(B, W) - the minimum achievable compression rate per source sample in this framework - and develop upper and lower bounds on this function. Our upper bound is obtained using a random binning technique, whereas our lower bound is obtained by drawing connections to multi-terminal source coding. Our upper and lower bounds coincide, yielding R(B, W), in some special cases. More generally, both the upper and lower bounds equal the rate for predictive coding plus a term that decreases as 1/(W+1), thus establishing a scaling behaviour of the rate-recovery function. For a special class of semi-deterministic Markov sources we propose a new optimal coding scheme: prospicient coding. An extension of this coding technique to Gaussian sources is also developed. For the class of symmetric Markov sources and memoryless encoders, we establish the optimality of random binning. When the destination is required to reproduce each source sequence with a fixed delay and when W = 0 we also establish the optimality of binning.
1202.5284
Elitism Levels Traverse Mechanism For The Derivation of Upper Bounds on Unimodal Functions
cs.NE cs.AI
In this article we present an Elitism Levels Traverse Mechanism designed to find bounds on population-based evolutionary algorithms solving unimodal functions. We prove its efficiency theoretically and test it on the OneMax function, deriving the bound $c \mu n \log n - O(\mu n)$. This analysis can be generalized to any similar algorithm using variants of tournament selection and genetic operators that flip or swap only one bit in each string.
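For context, the flavor of the analyzed algorithms can be reproduced with a minimal elitist (1+1)-style sketch on OneMax with single-bit flips (a simplification with population size one, not the paper's mechanism):

```python
import random

def one_plus_one_ea_onemax(n, rng, max_iters=100000):
    """(1+1) EA with single-bit flips on OneMax; returns the number of iterations used.

    Elitist selection rejects any worsening flip, so only 0 -> 1 flips are ever accepted;
    the code applies that rejection implicitly.
    """
    x = [rng.randint(0, 1) for _ in range(n)]
    fitness = sum(x)                 # OneMax: count of ones
    iters = 0
    while fitness < n and iters < max_iters:
        i = rng.randrange(n)         # flip one uniformly random bit
        if x[i] == 0:                # improving flip: accept
            x[i] = 1
            fitness += 1
        iters += 1                   # a worsening flip would be discarded by elitism
    return iters
```

A coupon-collector argument gives the familiar Theta(n log n) expected runtime for this sketch, the same n log n scaling that appears in the bound above.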
1202.5298
Min Max Generalization for Two-stage Deterministic Batch Mode Reinforcement Learning: Relaxation Schemes
cs.SY cs.LG
We study the minmax optimization problem introduced in [22] for computing policies for batch mode reinforcement learning in a deterministic setting. First, we show that this problem is NP-hard. In the two-stage case, we provide two relaxation schemes. The first relaxation scheme works by dropping some constraints in order to obtain a problem that is solvable in polynomial time. The second relaxation scheme, based on a Lagrangian relaxation where all constraints are dualized, leads to a conic quadratic programming problem. We also theoretically prove and empirically illustrate that both relaxation schemes provide better results than those given in [22].
1202.5299
Culturomics meets random fractal theory: Insights into long-range correlations of social and natural phenomena over the past two centuries
physics.soc-ph cond-mat.stat-mech cs.DL cs.SI stat.AP
Culturomics was recently introduced as the application of high-throughput data collection and analysis to the study of human culture. Here we make use of this data by investigating fluctuations in yearly usage frequencies of specific words that describe social and natural phenomena, as derived from books that were published over the course of the past two centuries. We show that the determination of the Hurst parameter by means of fractal analysis provides fundamental insights into the nature of long-range correlations contained in the culturomic trajectories, and by doing so, offers new interpretations as to what might be the main driving forces behind the examined phenomena. Quite remarkably, we find that social and natural phenomena are governed by fundamentally different processes. While natural phenomena have properties that are typical for processes with persistent long-range correlations, social phenomena are better described as nonstationary, on-off intermittent, or Lévy walk processes.
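A bare-bones rescaled-range (R/S) estimator illustrates the kind of Hurst-parameter computation involved (a textbook sketch, not the more careful fractal analysis used in the paper):

```python
import math
import random

def hurst_rs(series, chunk_sizes):
    """Estimate the Hurst parameter as the slope of log(R/S) versus log(n)."""
    logs_n, logs_rs = [], []
    for n in chunk_sizes:
        rs_vals = []
        for start in range(0, len(series) - n + 1, n):   # non-overlapping chunks
            chunk = series[start:start + n]
            mean = sum(chunk) / n
            dev = cum = lo = hi = 0.0
            for x in chunk:
                cum += x - mean                   # cumulative deviation from the mean
                lo, hi = min(lo, cum), max(hi, cum)
                dev += (x - mean) ** 2
            s = math.sqrt(dev / n)                # chunk standard deviation
            if s > 0:
                rs_vals.append((hi - lo) / s)     # rescaled range R/S
        logs_n.append(math.log(n))
        logs_rs.append(math.log(sum(rs_vals) / len(rs_vals)))
    # least-squares slope of log(R/S) against log(n)
    mn = sum(logs_n) / len(logs_n)
    mr = sum(logs_rs) / len(logs_rs)
    num = sum((a - mn) * (b - mr) for a, b in zip(logs_n, logs_rs))
    den = sum((a - mn) ** 2 for a in logs_n)
    return num / den
```

For uncorrelated noise the estimate sits near 0.5 (with a known small-sample upward bias), while persistent long-range correlations push it toward 1, which is the diagnostic the abstract relies on.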
1202.5302
Application of Steganography for Anonymity through the Internet
cs.CR cs.IT math.IT
In this paper, a novel steganographic scheme based on chaotic iterations is proposed. This research work is situated within the information hiding security framework, and applications to anonymity and privacy on the Internet are also considered. To guarantee such anonymity, it should be possible to set up a secret communication channel inside a web page that is both secure and robust. To achieve this goal, we propose an information hiding scheme that is stego-secure, which is the highest level of security in a well-defined and well-studied category of attacks called the "watermark-only attack". This category of attacks is the best context in which to study steganography-based anonymity through the Internet. The steganalysis of our steganographic process is also studied in order to show its security in a realistic test framework.
1202.5332
A Characterization of Scale Invariant Responses in Enzymatic Networks
cs.SY cs.CE q-bio.MN
A ubiquitous property of biological sensory systems is adaptation: a step increase in stimulus triggers an initial change in a biochemical or physiological response, followed by a more gradual relaxation toward a basal, pre-stimulus level. Adaptation helps maintain essential variables within acceptable bounds and allows organisms to readjust themselves to an optimum and non-saturating sensitivity range when faced with a prolonged change in their environment. Recently, it was shown theoretically and experimentally that many adapting systems, both at the organism and single-cell level, enjoy a remarkable additional feature: scale invariance, meaning that the initial, transient behavior remains (approximately) the same even when the background signal level is scaled. In this work, we set out to investigate under what conditions a broadly used model of biochemical enzymatic networks will exhibit scale-invariant behavior. An exhaustive computational study led us to discover a new property of surprising simplicity and generality, uniform linearizations with fast output (ULFO), whose validity we show is both necessary and sufficient for scale invariance of enzymatic networks. Based on this study, we go on to develop a mathematical explanation of how ULFO results in scale invariance. Our work provides a surprisingly consistent, simple, and general framework for understanding this phenomenon, and results in concrete experimental predictions.
1202.5349
Buffer-Aided Relaying with Adaptive Link Selection
cs.IT math.IT
In this paper, we consider a simple network consisting of a source, a half-duplex decode-and-forward relay, and a destination. We propose a new relaying protocol employing adaptive link selection, i.e., in any given time slot, based on the channel state information of the source-relay and relay-destination links, a decision is made as to whether the source or the relay transmits. In order to avoid data loss at the relay, adaptive link selection requires the relay to be equipped with a buffer such that data can be queued until the relay-destination link is selected for transmission. We study both delay constrained and delay unconstrained transmission. For the delay unconstrained case, we characterize the optimal link selection policy, derive the corresponding throughput, and develop an optimal power allocation scheme. For the delay constrained case, we propose to starve the buffer of the relay by choosing the decision threshold of the link selection policy smaller than the optimal one and derive a corresponding upper bound on the average delay. Furthermore, we propose a modified link selection protocol which avoids buffer overflow by limiting the queue size. Our analytical and numerical results show that buffer-aided relaying with adaptive link selection achieves significant throughput gains compared to conventional relaying protocols with and without buffers where the relay employs a fixed schedule for reception and transmission.
1202.5358
DPCube: Differentially Private Histogram Release through Multidimensional Partitioning
cs.DB
Differential privacy is a strong notion for protecting individual privacy in privacy preserving data analysis or publishing. In this paper, we study the problem of differentially private histogram release for random workloads. We study two multidimensional partitioning strategies including: 1) a baseline cell-based partitioning strategy for releasing an equi-width cell histogram, and 2) an innovative 2-phase kd-tree based partitioning strategy for releasing a v-optimal histogram. We formally analyze the utility of the released histograms and quantify the errors for answering linear queries such as counting queries. We formally characterize the property of the input data that will guarantee the optimality of the algorithm. Finally, we implement and experimentally evaluate several applications using the released histograms, including counting queries, classification, and blocking for record linkage and show the benefit of our approach.
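The baseline cell-based strategy amounts to perturbing each equi-width cell count with Laplace(1/eps) noise, since adding or removing one individual changes a single count by at most one; here is a minimal sketch (illustrative, not the DPCube implementation, and the bin layout is a made-up example):

```python
import math
import random

def private_histogram(data, bins, eps, rng):
    """Equi-width cell histogram with Laplace(1/eps) noise added to each cell count."""
    counts = [0] * len(bins)
    for x in data:
        for i, (lo, hi) in enumerate(bins):
            if lo <= x < hi:
                counts[i] += 1
                break

    def laplace(scale):
        # inverse-CDF sampling of a Laplace(0, scale) variate
        u = rng.random() - 0.5
        return -scale * (1 if u >= 0 else -1) * math.log(1 - 2 * abs(u))

    # sensitivity of each count is 1, so scale 1/eps gives eps-differential privacy
    return [c + laplace(1.0 / eps) for c in counts]
```

Linear queries such as range counts are then answered from the noisy cells, with error that grows with the number of cells touched, which is exactly the tradeoff the multidimensional partitioning in the paper is designed to manage.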
1202.5398
Mod-CSA: Modularity optimization by conformational space annealing
physics.comp-ph cs.SI physics.data-an physics.soc-ph
We propose a new modularity optimization method, Mod-CSA, based on the stochastic global optimization algorithm conformational space annealing (CSA). Our method outperforms simulated annealing in terms of both efficiency and accuracy, finding higher-modularity partitions with fewer computational resources. The modularity values found by our method are higher than, or equal to, the largest values previously reported. In addition, the method can be combined with other heuristic methods and implemented in a parallel fashion, making it applicable to large graphs with more than 10000 nodes.
1202.5413
On the Joint Error-and-Erasure Decoding for Irreducible Polynomial Remainder Codes
cs.IT math.IT math.RA
A general class of polynomial remainder codes is considered. Such codes are very flexible in rate and length and include Reed-Solomon codes as a special case. As an extension of previous work, two joint error-and-erasure decoding approaches are proposed. In particular, both decoding approaches by means of a fixed transform are treated in a way compatible with error-only decoding. In the end, a collection of gcd-based decoding algorithms is obtained, some of which appear to be new even when specialized to Reed-Solomon codes.
1202.5414
Left-Invariant Diffusion on the Motion Group in terms of the Irreducible Representations of SO(3)
math.AP cs.CV cs.NA math.RT
In this work we study the formulation of convection/diffusion equations on the 3D motion group SE(3) in terms of the irreducible representations of SO(3). Therefore, the left-invariant vector-fields on SE(3) are expressed as linear operators that are differential forms in the translation coordinate and algebraic in the rotation. In the context of 3D image processing this approach avoids the explicit discretization of SO(3) or $S_2$, respectively. This is particularly important for SO(3), where a direct discretization is infeasible due to the enormous memory consumption. We show two applications of the framework: one in the context of diffusion-weighted magnetic resonance imaging and one in the context of object detection.
1202.5447
Global $H_\infty$ Consensus of Multi-Agent Systems with Lipschitz Nonlinear Dynamics
cs.SY math.OC
This paper addresses the global consensus problems of a class of nonlinear multi-agent systems with Lipschitz nonlinearity and directed communication graphs, by using a distributed consensus protocol based on the relative states of neighboring agents. A two-step algorithm is presented to construct a protocol, under which a Lipschitz multi-agent system without disturbances can reach global consensus for a strongly connected directed communication graph. Another algorithm is then given to design a protocol which can achieve global consensus with a guaranteed $H_\infty$ performance for a Lipschitz multi-agent system subject to external disturbances. The case with a leader-follower communication graph is also discussed. Finally, the effectiveness of the theoretical results is demonstrated through a network of single-link manipulators.
1202.5469
Enhancing Navigation on Wikipedia with Social Tags
cs.IR cs.DL cs.HC cs.SI
Social tagging has become an interesting approach to improving search and navigation on the Web, since it aggregates the tags added by different users to the same resource in a collaborative way, resulting in a list of weighted tags describing the resource. Combined with a classical taxonomic classification system such as Wikipedia's, social tags can enhance document navigation and search. On the one hand, social tags suggest alternative ways of navigating, including pivot browsing, popularity-driven navigation, and filtering. On the other hand, they provide new metadata, sometimes not covered by the documents' content, that can substantially improve document search. In this work, the inclusion of an interface to add user-defined tags describing Wikipedia articles is proposed as a way to improve article navigation and retrieval. A prototype applying tags to Wikipedia is presented in order to evaluate the approach's effectiveness.
1202.5470
Convergence analysis of the FOCUSS algorithm
cs.IT math.IT
FOCal Underdetermined System Solver (FOCUSS) is a powerful tool for sparse representation and underdetermined inverse problems, and is extremely easy to implement. In this paper, we give a comprehensive convergence analysis of the FOCUSS algorithm, towards establishing a systematic convergence theory, through three primary contributions. First, we give a rigorous derivation of this algorithm by exploiting the auxiliary function. Second, we prove its convergence. Third, we systematically study its convergence rate with respect to the sparsity parameter p and demonstrate this convergence rate by numerical experiments.
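The core FOCUSS iteration is short enough to sketch: with W_k = diag(|x_k|^{1-p/2}), iterate x_{k+1} = W_k (A W_k)^+ b. The version below is illustrative (it adds a small damping constant eps for numerical safety and uses a generic pseudoinverse; it is not the exact variant analyzed in the paper):

```python
import numpy as np

def focuss(A, b, p=0.5, iters=20, eps=1e-6):
    """FOCUSS sketch: reweighted minimum-norm iterations that promote sparse solutions."""
    x = np.linalg.pinv(A) @ b                    # start from the minimum l2-norm solution
    for _ in range(iters):
        w = np.abs(x) ** (1 - p / 2.0) + eps     # diagonal of W = diag(|x|^{1-p/2}), damped
        Aw = A * w                                # A @ diag(w), via column scaling
        q = np.linalg.pinv(Aw) @ b                # minimum-norm solve of (A W) q = b
        x = w * q                                 # x = W q
    return x
```

Each iterate stays consistent with the measurements while the reweighting concentrates energy on a few coordinates, which is where the sparsity parameter p enters the convergence-rate analysis.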
1202.5471
L1-norm minimization for quaternion signals
cs.NA cs.DS cs.IT math.IT
The l1-norm minimization problem plays an important role in compressed sensing (CS) theory. In this letter, we present an algorithm for solving the l1-norm minimization problem for quaternion signals by converting it to second-order cone programming. An application example of the proposed algorithm is also given, providing practical guidelines for perfect recovery of quaternion signals. The proposed algorithm may find potential application when CS theory meets quaternion signal processing.
1202.5474
Pareto Boundary of the Rate Region for Single-Stream MIMO Interference Channels: Linear Transceiver Design
cs.IT math.IT
We consider a multiple-input multiple-output (MIMO) interference channel (IC), where a single data stream per user is transmitted and each receiver treats interference as noise. The paper focuses on the open problem of computing the outermost boundary (the so-called Pareto boundary, PB) of the achievable rate region under linear transceiver design. The Pareto boundary consists of the strict PB and the non-strict PB. For the two-user case, we compute the non-strict PB and the two end points of the strict PB exactly. For the strict PB, we formulate the problem of maximizing one rate while the other rate is fixed such that a strict PB point is reached. To solve this non-convex optimization problem, which results from the two hard-coupled transmit beamformers, we propose an alternating optimization algorithm. Furthermore, we extend the algorithm to the multi-user scenario and show convergence. Numerical simulations illustrate that the proposed algorithm computes a sequence of well-distributed operating points that serve as a reasonable and complete inner bound of the strict PB compared with existing methods.
1202.5477
Analyzing Tag Distributions in Folksonomies for Resource Classification
cs.DL cs.IR
Recent research has shown the usefulness of social tags as a data source to feed resource classification. Little is known, however, about the effect of settings on the folksonomies created on social tagging systems. In this work, we consider the settings of social tagging systems to further understand tag distributions in folksonomies. We analyze in depth the tag distributions on three large-scale social tagging datasets, and analyze the effect on a resource classification task. To this end, we study the appropriateness of applying weighting schemes based on the well-known TF-IDF for resource classification. We show that settings are of great importance in altering tag distributions. Among those settings, tag suggestions produce very different folksonomies, which condition the success of the employed weighting schemes. Our findings and analyses are relevant for researchers studying tag-based resource classification, user behavior in social networks, the structure of folksonomies and tag distributions, as well as for developers of social tagging systems in search of an appropriate setting.
1202.5509
Organizing the Aggregate: Languages for Spatial Computing
cs.PL cs.DC cs.MA
As the number of computing devices embedded into engineered systems continues to rise, there is a widening gap between the needs of the user to control aggregates of devices and the complex technology of individual devices. Spatial computing attempts to bridge this gap for systems with local communication by exploiting the connection between physical locality and device connectivity. A large number of spatial computing domain specific languages (DSLs) have emerged across diverse domains, from biology and reconfigurable computing, to sensor networks and agent-based systems. In this chapter, we develop a framework for analyzing and comparing spatial computing DSLs, survey the current state of the art, and provide a roadmap for future spatial computing DSL investigation.
1202.5514
Classification approach based on association rules mining for unbalanced data
stat.ML cs.LG
This paper deals with the binary classification task when the target class has a low probability of occurrence. In such a situation, it is not possible to build a powerful classifier by using standard methods such as logistic regression, classification trees, discriminant analysis, etc. To overcome this shortcoming of these methods, which yield classifiers with low sensitivity, we tackle the classification problem through an approach based on association rules learning. This approach has the advantage of allowing the identification of patterns that are well correlated with the target class. Association rules learning is a well-known method in the area of data mining, used on large databases for the unsupervised discovery of local patterns that express hidden relationships between input variables. In considering association rules from a supervised learning point of view, a relevant set of weak classifiers is obtained, from which one derives a classifier that performs well.
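A minimal version of the approach can be sketched as follows (function names and thresholds are hypothetical, and real association-rule miners such as Apriori are far more efficient): mine itemset -> target-class rules by support and confidence, then let any firing rule vote for the rare class:

```python
from itertools import combinations

def mine_class_rules(transactions, labels, target=1, min_support=2, min_conf=0.6, max_len=2):
    """Find itemsets correlated with the (rare) target class via support/confidence filtering.

    transactions is a list of item sets; labels gives each transaction's class.
    """
    rules = []
    items = sorted({i for t in transactions for i in t})
    for k in range(1, max_len + 1):
        for itemset in combinations(items, k):
            s = set(itemset)
            covered = [y for t, y in zip(transactions, labels) if s <= t]
            if len(covered) >= min_support:                     # support filter
                conf = sum(1 for y in covered if y == target) / len(covered)
                if conf >= min_conf:                            # confidence filter
                    rules.append((itemset, conf))
    return rules

def classify(transaction, rules, target=1, default=0):
    """Predict the target class if any mined rule fires (each rule is a weak classifier)."""
    return target if any(set(r) <= transaction for r, _ in rules) else default
```

Because each rule is judged by its confidence on the transactions it covers rather than by global accuracy, rules that single out the rare class survive even when that class is a small minority.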
1202.5517
Research Traceability using Provenance Services for Biomedical Analysis
cs.DB cs.SE
We outline the approach being developed in the neuGRID project to use provenance management techniques for capturing and preserving the provenance data that emerges in the specification and execution of workflows in biomedical analyses. In the neuGRID project a provenance service has been designed and implemented that is intended to capture, store, retrieve and reconstruct the workflow information needed to support users in conducting their analyses. We describe the architecture of the neuGRID provenance service, discuss how the CRISTAL system from CERN is being adapted to address the requirements of the project, and then consider how a generalised approach for provenance management could emerge for more generic application to the (Health)Grid community.
1202.5528
Hierarchical Resource Allocation in Femtocell Networks using Graph Algorithms
cs.IT cs.NI math.IT
This paper presents a hierarchical approach to resource allocation in open-access femtocell networks. The major challenge in femtocell networks is interference management which in our system, based on the Long Term Evolution (LTE) standard, translates to which user should be allocated which physical resource block (or fraction thereof) from which femtocell access point (FAP). The globally optimal solution requires integer programming and is mathematically intractable. We propose a hierarchical three-stage solution: first, the load of each FAP is estimated considering the number of users connected to the FAP, their average channel gain and required data rates. Second, based on each FAP's load, the physical resource blocks (PRBs) are allocated to FAPs in a manner that minimizes the interference by coloring the modified interference graph. Finally, the resource allocation is performed at each FAP considering users' instantaneous channel gain. The two major advantages of this suboptimal approach are the significantly reduced computation complexity and the fact that the proposed algorithm only uses information that is already likely to be available at the nodes executing the relevant optimization step. The performance of the proposed solution is evaluated in networks based on the LTE standard.
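The second stage described above, coloring the interference graph so that neighboring FAPs receive disjoint PRB groups, can be illustrated with a standard greedy coloring heuristic. This is a generic sketch, not the paper's modified-interference-graph algorithm.

```python
def greedy_color(adjacency):
    """Greedy graph coloring: adjacent FAPs get different colors, i.e.
    disjoint PRB groups.

    adjacency: dict node -> set of neighboring nodes.  Nodes are processed
    in decreasing-degree order, a common heuristic that tends to keep the
    number of colors (PRB groups) small.
    """
    order = sorted(adjacency, key=lambda v: len(adjacency[v]), reverse=True)
    color = {}
    for v in order:
        used = {color[u] for u in adjacency[v] if u in color}
        c = 0
        while c in used:          # smallest color not used by a neighbor
            c += 1
        color[v] = c
    return color

# Four FAPs: a triangle of mutual interferers plus one pendant FAP.
adjacency = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
color = greedy_color(adjacency)
```

The triangle forces three PRB groups, while FAP 4 can safely reuse one of the groups not assigned to its only interferer, FAP 3.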
1202.5529
On Secure Communication with Constrained Randomization
cs.IT math.IT
In this paper, we investigate how constraints on the randomization in the encoding process affect the secrecy rates achievable over wiretap channels. In particular, we characterize the secrecy capacity with a rate-limited local source of randomness and a less capable eavesdropper's channel, which shows that limited rate incurs a secrecy rate penalty but does not preclude secrecy. We also discuss a more practical aspect of rate-limited randomization in the context of cooperative jamming. Finally, we show that secure communication is possible with a non-uniform source for randomness; this suggests the possibility of designing robust coding schemes.
1202.5544
An Incremental Sampling-based Algorithm for Stochastic Optimal Control
cs.RO cs.SY math.DS math.OC math.PR
In this paper, we consider a class of continuous-time, continuous-space stochastic optimal control problems. Building upon recent advances in Markov chain approximation methods and sampling-based algorithms for deterministic path planning, we propose a novel algorithm called the incremental Markov Decision Process (iMDP) to incrementally compute control policies that approximate an optimal policy arbitrarily well in terms of the expected cost. The main idea behind the algorithm is to generate a sequence of finite discretizations of the original problem through random sampling of the state space. At each iteration, the discretized problem is a Markov Decision Process that serves as an incrementally refined model of the original problem. We show that with probability one, (i) the sequence of the optimal value functions for each of the discretized problems converges uniformly to the optimal value function of the original stochastic optimal control problem, and (ii) the original optimal value function can be computed efficiently in an incremental manner using asynchronous value iterations. Thus, the proposed algorithm provides an anytime approach to the computation of optimal control policies of the continuous problem. The effectiveness of the proposed approach is demonstrated on motion planning and control problems in cluttered environments in the presence of process noise.
1202.5597
Hybrid Batch Bayesian Optimization
cs.AI cs.LG
Bayesian Optimization aims at optimizing an unknown non-convex/concave function that is costly to evaluate. We are interested in application scenarios where concurrent function evaluations are possible. Under such a setting, BO could choose to either sequentially evaluate the function, one input at a time and wait for the output of the function before making the next selection, or evaluate the function at a batch of multiple inputs at once. These two different settings are commonly referred to as the sequential and batch settings of Bayesian Optimization. In general, the sequential setting leads to better optimization performance as each function evaluation is selected with more information, whereas the batch setting has an advantage in terms of the total experimental time (the number of iterations). In this work, our goal is to combine the strength of both settings. Specifically, we systematically analyze Bayesian optimization using Gaussian process as the posterior estimator and provide a hybrid algorithm that, based on the current state, dynamically switches between a sequential policy and a batch policy with variable batch sizes. We provide theoretical justification for our algorithm and present experimental results on eight benchmark BO problems. The results show that our method achieves substantial speedup (up to 78%) compared to a pure sequential policy, without suffering any significant performance loss.
1202.5598
Clustering using Max-norm Constrained Optimization
cs.LG stat.ML
We suggest using the max-norm as a convex surrogate constraint for clustering. We show how this yields a better exact cluster recovery guarantee than the previously suggested nuclear-norm relaxation, and study the effectiveness of our method, and of other related convex relaxations, compared to other clustering approaches.
1202.5599
On the Ingleton-Violations in Finite Groups
cs.IT math.IT
Given $n$ discrete random variables, their entropy vector is the $2^n-1$ dimensional vector obtained from the joint entropies of all non-empty subsets of the random variables. It is well known that there is a one-to-one correspondence between such an entropy vector and a certain group-characterizable vector obtained from a finite group and $n$ of its subgroups [3]. This correspondence may be useful for characterizing the space of entropic vectors and for designing network codes. If one restricts attention to abelian groups, then not all entropy vectors can be obtained. This is an explanation for the fact shown by Dougherty et al [4] that linear network codes cannot achieve capacity in general network coding problems. All abelian group-characterizable vectors, and by fiat all entropy vectors generated by linear network codes, satisfy a linear inequality called the Ingleton inequality. It is therefore of interest to identify groups that violate the Ingleton inequality. In this paper, we study the problem of finding nonabelian finite groups that yield characterizable vectors which violate the Ingleton inequality. Using a refined computer search, we find the symmetric group $S_5$ to be the smallest group that violates the Ingleton inequality. Careful study of the structure of this group, and its subgroups, reveals that it belongs to the Ingleton-violating family $PGL(2,q)$ with a prime power $q \geq 5$, i.e., the projective group of $2\times 2$ nonsingular matrices with entries in $\mathbb{F}_q$. We further interpret this family using the theory of group actions. We also extend the construction to more general groups such as $PGL(n,q)$ and $GL(n,q)$. The families of groups identified here are therefore good candidates for constructing network codes more powerful than linear network codes, and we discuss some considerations for constructing such group network codes.
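For reference, the Ingleton inequality on four random variables can be checked numerically from a table of joint entropies. The frozenset-based encoding below is an illustrative assumption; one standard form of the inequality is used.

```python
from itertools import combinations

def ingleton_satisfied(h, tol=1e-9):
    """Check the Ingleton inequality for four random variables.

    h maps frozensets of indices (non-empty subsets of {1,2,3,4}) to joint
    entropies.  One standard entropy form of the inequality is
        h12 + h13 + h14 + h23 + h24  >=  h1 + h2 + h34 + h123 + h124,
    equivalent to I(1;2) <= I(1;2|3) + I(1;2|4) + I(3;4).
    """
    H = lambda *ix: h[frozenset(ix)]
    lhs = H(1, 2) + H(1, 3) + H(1, 4) + H(2, 3) + H(2, 4)
    rhs = H(1) + H(2) + H(3, 4) + H(1, 2, 3) + H(1, 2, 4)
    return lhs + tol >= rhs

# Entropy vector of four i.i.d. fair bits: h_S = |S| bits.  Independence
# makes every mutual information zero, so Ingleton holds (with equality).
h_iid = {frozenset(s): float(len(s))
         for r in (1, 2, 3, 4)
         for s in combinations((1, 2, 3, 4), r)}
```

Any vector produced by an abelian group (or a linear network code) passes this check; the paper's contribution is finding the smallest groups, starting with $S_5$, whose group-characterizable vectors fail it.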
1202.5600
Interaction Histories and Short Term Memory: Enactive Development of Turn-taking Behaviors in a Childlike Humanoid Robot
cs.AI nlin.AO
In this article, an enactive architecture is described that allows a humanoid robot to learn to compose simple actions into turn-taking behaviors while playing interaction games with a human partner. The robot's action choices are reinforced by social feedback from the human in the form of visual attention and measures of behavioral synchronization. We demonstrate that the system can acquire and switch between behaviors learned through interaction based on social feedback from the human partner. The role of reinforcement based on a short term memory of the interaction is experimentally investigated. Results indicate that feedback based only on the immediate state is insufficient to learn certain turn-taking behaviors. Therefore some history of the interaction must be considered in the acquisition of turn-taking, which can be efficiently handled through the use of short term memory.
1202.5618
An equation-free approach to coarse-graining the dynamics of networks
cs.SI nlin.AO physics.comp-ph physics.soc-ph
We propose and illustrate an approach to coarse-graining the dynamics of evolving networks (networks whose connectivity changes dynamically). The approach is based on the equation-free framework: short bursts of detailed network evolution simulations are coupled with lifting and restriction operators that translate between actual network realizations and their (appropriately chosen) coarse observables. This framework is used here to accelerate temporal simulations (through coarse projective integration), and to implement coarse-grained fixed point algorithms (through matrix-free Newton-Krylov GMRES). The approach is illustrated through a simple network evolution example, for which analytical approximations to the coarse-grained dynamics can be independently obtained, so as to validate the computational results. The scope and applicability of the approach, as well as the issue of selecting good coarse observables, are discussed.
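The coarse projective integration idea (short bursts of detailed simulation followed by large extrapolation steps in the coarse observable) can be illustrated on a scalar toy ODE rather than a network. Everything below is a generic sketch of the equation-free framework under that simplification; step sizes are illustrative.

```python
import math

def fine_burst(x, dt, n_steps):
    """Detailed (fine-scale) simulator: small explicit-Euler steps of dx/dt = -x."""
    for _ in range(n_steps):
        x += dt * (-x)
    return x

def coarse_projective_integration(x0, t_end, dt=1e-3, burst=20, jump=0.05):
    """Alternate short fine bursts with large 'projective' extrapolation jumps."""
    x, t = x0, 0.0
    while t < t_end:
        x_a = fine_burst(x, dt, burst)       # first coarse observation
        x_b = fine_burst(x_a, dt, burst)     # second coarse observation
        slope = (x_b - x_a) / (burst * dt)   # estimated coarse time derivative
        x = x_b + jump * slope               # leap forward without simulating
        t += 2 * burst * dt + jump
    return x, t

x, t = coarse_projective_integration(1.0, 1.0)
err = abs(x - math.exp(-t))  # deviation from the exact solution e^{-t}
```

The big jumps cover most of the time horizon, so only a fraction of it is paid for with detailed simulation; in the paper the "fine" level is the full network evolution and the coarse observable is a suitable network statistic.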
1202.5657
Design of a Fractional Order Phase Shaper for Iso-damped Control of a PHWR under Step-back Condition
math.OC cs.SY
Phase shaping using fractional order (FO) phase shapers has been proposed by many contemporary researchers as a means of producing systems with iso-damped closed loop response due to a stepped variation in input. Such systems, with the closed loop damping remaining invariant to gain changes, can be used to produce dead-beat step response with only rise time varying with gain. This technique is used to achieve an active step-back in a Pressurized Heavy Water Reactor (PHWR) where it is desired to change the reactor power to a pre-determined value within a short interval keeping the power undershoot as low as possible. This paper puts forward an approach as an alternative to the present-day practice of a passive step-back mechanism where the control rods are allowed to drop during a step-back action by gravity, with release of electromagnetic clutches. The reactor under a step-back condition is identified as a system using practical test data and a suitable Proportional plus Integral plus Derivative (PID) controller is designed for it. Then the combined plant is augmented with a phase shaper to achieve a dead-beat response in terms of power drop. The fact that the identified static gain of the system depends on the initial power level at which a step-back is initiated makes this application particularly suited for using a FO phase shaper. In this paper, a model of a nuclear reactor is developed for a control rod drop scenario involving rapid power reduction in a 500MWe Canadian Deuterium Uranium (CANDU) reactor using the AutoRegressive Exogenous (ARX) algorithm. The system identification and reduced order modeling are developed from practical test data. For closed loop active control of the identified reactor model, the fractional order phase shaper along with a PID controller is shown to perform better than the present Reactor Regulating System (RRS) due to its iso-damped nature.
1202.5665
q-Gaussian based Smoothed Functional Algorithm for Stochastic Optimization
cs.SY cs.IT math.IT
The q-Gaussian distribution results from maximizing certain generalizations of Shannon entropy under some constraints. The importance of q-Gaussian distributions stems from the fact that they exhibit power-law behavior, and also generalize Gaussian distributions. In this paper, we propose a Smoothed Functional (SF) scheme for gradient estimation using q-Gaussian distribution, and also propose an algorithm for optimization based on the above scheme. Convergence results of the algorithm are presented. Performance of the proposed algorithm is shown by simulation results on a queuing model.
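A smoothed functional gradient estimator in the ordinary Gaussian case (the q → 1 limit of the q-Gaussian scheme proposed here) can be sketched as follows. The two-sided form, sample size and smoothing parameter are illustrative choices, not the paper's exact algorithm.

```python
import random

def sf_gradient(f, x, beta=0.1, n_samples=20000, rng=None):
    """Two-sided smoothed-functional gradient estimate at scalar point x.

    Perturbs x with Gaussian noise eta and averages
        eta * (f(x + beta*eta) - f(x - beta*eta)) / (2*beta),
    which converges to the gradient of a smoothed version of f as the
    smoothing parameter beta shrinks.
    """
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    total = 0.0
    for _ in range(n_samples):
        eta = rng.gauss(0.0, 1.0)
        total += eta * (f(x + beta * eta) - f(x - beta * eta)) / (2 * beta)
    return total / n_samples

# For f(x) = x^2 the true gradient at x = 1 is 2.
g = sf_gradient(lambda x: x * x, 1.0)
```

Only function evaluations are needed, no analytic derivative, which is why such estimators suit simulation-based optimization of queuing models; the q-Gaussian variant replaces the Gaussian perturbations with heavier-tailed ones.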
1202.5667
Fractional Order Phase Shaper Design with Routh's Criterion for Iso-damped Control System
cs.SY
The phase curve of an open loop system is flat if the derivative of phase with respect to frequency is zero. With a flat phase curve, the corresponding closed-loop system exhibits an iso-damped property, i.e., it maintains constant overshoot under gain changes and other parametric variations. In the recent past, fractional order (FO) phase shapers have been proposed by contemporary researchers to achieve enhanced parametric robustness. In this paper, a simple Routh tabulation based methodology is proposed to design an appropriate FO phase shaper to achieve phase flattening in a control loop comprising a system controlled by a classical PID controller. The method is demonstrated using MATLAB simulation of a second order DC motor plant and a first order plus time delay system.
1202.5674
Handling Packet Dropouts and Random Delays for Unstable Delayed Processes in NCS by Optimal Tuning of PI{\lambda}D{\mu} Controllers with Evolutionary Algorithms
cs.SY
The issues of stochastically varying network delays and packet dropouts in Networked Control System (NCS) applications have been simultaneously addressed by time domain optimal tuning of fractional order (FO) PID controllers. Different variants of evolutionary algorithms are used for the tuning process and their performances are compared. Also the effectiveness of the fractional order PI{\lambda}D{\mu} controllers over their integer order counterparts is looked into. Two standard test bench plants with time delay and unstable poles which are encountered in process control applications are tuned with the proposed method to establish the validity of the tuning methodology. The proposed tuning methodology is independent of the specific choice of plant and is also applicable for less complicated systems. Thus it is useful in a wide variety of scenarios. The paper also shows the superiority of FOPID controllers over their conventional PID counterparts for NCS applications.
1202.5677
On the Selection of Tuning Methodology of FOPID Controllers for the Control of Higher Order Processes
cs.SY
In this paper, a comparative study is done on the time and frequency domain tuning strategies for fractional order (FO) PID controllers to handle higher order processes. A new fractional order template for reduced parameter modeling of stable minimum/non-minimum phase higher order processes is introduced and its advantage in frequency domain tuning of FOPID controllers is also presented. The time domain optimal tuning of FOPID controllers has also been carried out to handle these higher order processes by performing optimization with various integral performance indices. The paper highlights practical control system implementation issues like flexibility of online autotuning, reduced control signal and actuator size, capability of measurement noise filtration, load disturbance suppression, robustness against parameter uncertainties, etc., in light of the above tuning methodologies.
1202.5680
A Novel Fractional Order Fuzzy PID Controller and Its Optimal Time Domain Tuning Based on Integral Performance Indices
cs.SY
A novel fractional order (FO) fuzzy Proportional-Integral-Derivative (PID) controller has been proposed in this paper which works on the closed loop error and its fractional derivative as the input and has a fractional integrator in its output. The fractional order differ-integrations in the proposed fuzzy logic controller (FLC) are kept as design variables along with the input-output scaling factors (SF) and are optimized with Genetic Algorithm (GA) while minimizing several integral error indices along with the control signal as the objective function. Simulation studies are carried out to control a delayed nonlinear process and an open loop unstable process with time delay. The closed loop performances and controller efforts in each case are compared with conventional PID, fuzzy PID and PI{\lambda}D{\mu} controllers subjected to different integral performance indices. Simulation results show that the proposed fractional order fuzzy PID controller outperforms the others in most cases.
1202.5683
Improved Model Reduction and Tuning of Fractional Order PI{\lambda}D{\mu} Controllers for Analytical Rule Extraction with Genetic Programming
cs.SY cs.NE
Genetic Algorithm (GA) has been used in this paper for a new approach of sub-optimal model reduction in the Nyquist plane and optimal time domain tuning of PID and fractional order (FO) PI{\lambda}D{\mu} controllers. Simulation studies show that the Nyquist based new model reduction technique outperforms the conventional H2 norm based reduced parameter modeling technique. With the tuned controller parameters and reduced order model parameter data-set, optimum tuning rules have been developed with a test-bench of higher order processes via Genetic Programming (GP). The GP performs a symbolic regression on the reduced process parameters to evolve a tuning rule which provides the best analytical expression to map the data. The tuning rules are developed for a minimum time domain integral performance index described by weighted sum of error index and controller effort. From the reported Pareto optimal front of GP based optimal rule extraction technique a trade-off can be made between the complexity of the tuning formulae and the control performance. The efficacy of the single-gene and multi-gene GP based tuning rules has been compared with original GA based control performance for the PID and PI{\lambda}D{\mu} controllers, handling four different class of representative higher order processes. These rules are very useful for process control engineers as they inherit the power of the GA based tuning methodology, but can be easily calculated without the requirement for running the computationally intensive GA every time. Three dimensional plots of the required variation in PID/FOPID controller parameters with reduced process parameters have been shown as a guideline for the operator. Parametric robustness of the reported GP based tuning rules has also been shown with credible simulation examples.
1202.5684
Fractional Order Modeling of a PHWR Under Step-Back Condition and Control of Its Global Power with a Robust PI{\lambda}D{\mu} Controller
cs.SY
Bulk reduction of reactor power within a small finite time interval under abnormal conditions is referred to as step-back. In this paper, a 500MWe Canadian Deuterium Uranium (CANDU) type Pressurized Heavy Water Reactor (PHWR) is modeled using a few variants of the Least Square Estimator (LSE) from practical test data under a control rod drop scenario in order to design a control system to achieve a dead-beat response during a stepped reduction of its global power. A new fractional order (FO) model reduction technique is attempted which increases the parametric robustness of the control loop due to lesser modeling error and ensures iso-damped closed loop response with a PI{\lambda}D{\mu} or FOPID controller. Such a controller can, therefore, be used to achieve active step-back under varying load conditions for which the system dynamics change significantly. For closed loop active control of the reduced FO reactor models, the PI{\lambda}D{\mu} controller is shown to perform better than the classical integer order PID controllers and the present operating Reactor Regulating System (RRS) due to its robustness against shifts in system parameters.
1202.5685
Information inequalities and Generalized Graph Entropies
cs.IT math.IT
In this article, we discuss the problem of establishing relations between information measures assessed for network structures. Two types of entropy based measures namely, the Shannon entropy and its generalization, the R\'{e}nyi entropy have been considered for this study. Our main results involve establishing formal relationship, in the form of implicit inequalities, between these two kinds of measures when defined for graphs. Further, we also state and prove inequalities connecting the classical partition-based graph entropies and the functional-based entropy measures. In addition, several explicit inequalities are derived for special classes of graphs.
1202.5686
Genetic Algorithm Based Improved Sub-Optimal Model Reduction in Nyquist Plane for Optimal Tuning Rule Extraction of PID and PI{\lambda}D{\mu} Controllers via Genetic Programming
cs.SY
Genetic Algorithm (GA) has been used in this paper for a new Nyquist based sub-optimal model reduction and optimal time domain tuning of PID and fractional order (FO) PI{\lambda}D{\mu} controllers. Comparative studies show that the new model reduction technique outperforms the conventional H2-norm based reduced order modeling techniques. Optimum tuning rule has been developed next with a test-bench of higher order processes via Genetic Programming (GP) with minimum value of weighted integral error index and control signal. From the Pareto optimal front which is a trade-off between the complexity of the formulae and control performance, an efficient set of tuning rules has been generated for time domain optimal PID and PI{\lambda}D{\mu} controllers.
1202.5689
Estimation, Analysis and Smoothing of Self-Similar Network Induced Delays in Feedback Control of Nuclear Reactors
cs.SY
This paper analyzes a nuclear reactor power signal that suffers from network induced random delays in the shared data network while being fed back to the Reactor Regulating System (RRS). A detailed study is carried out to investigate the self-similarity of the random delay dynamics due to network traffic in the shared medium. The fractionality or self-similarity in the network induced delay that corrupts the measured power signal coming from Self Powered Neutron Detectors (SPND) is estimated and analyzed. As any fractional order randomness is intrinsically different from conventional Gaussian randomness, these delay dynamics need to be handled efficiently before reaching the controller within the RRS. An attempt has been made to minimize the effect of the randomness in the reactor power transient data with a few classes of smoothing filters. The performance of the smoothers under fractional order noise is also investigated.
1202.5690
Embedded Network Test-Bed for Validating Real-Time Control Algorithms to Ensure Optimal Time Domain Performance
cs.SY
The paper presents a Stateflow based network test-bed to validate real-time optimal control algorithms. Genetic Algorithm (GA) based time domain performance index minimization is attempted for tuning of a PI controller to handle a balanced lag and delay type First Order Plus Time Delay (FOPTD) process over a network. The tuning performance is validated on a real-time communication network with artificially simulated stochastic delay, packet loss and out-of-order packets characterizing the network.
1202.5692
Adaptive Gain and Order Scheduling of Optimal Fractional Order PI{\lambda}D{\mu} Controllers with Radial Basis Function Neural-Network
cs.SY
Gain and order scheduling of fractional order (FO) PI{\lambda}D{\mu} controllers are studied in this paper considering four different classes of higher order processes. The mapping between the optimum PID/FOPID controller parameters and the reduced order process models are done using Radial Basis Function (RBF) type Artificial Neural Network (ANN). Simulation studies have been done to show the effectiveness of the RBFNN for online scheduling of such controllers with random change in set-point and process parameters.
1202.5693
Optimizing Continued Fraction Expansion Based IIR Realization of Fractional Order Differ-Integrators with Genetic Algorithm
cs.SY
Rational approximation of fractional order (FO) differ-integrators via Continued Fraction Expansion (CFE) is a well known technique. In this paper, the nominal structures of various generating functions are optimized using Genetic Algorithm (GA) to minimize the deviation in magnitude and phase response between the original FO element and the rationalized discrete time filter in Infinite Impulse Response (IIR) structure. The optimized filter based realizations show better approximation of the FO elements in comparison with the existing methods and is demonstrated by the frequency response of the IIR filters.
1202.5695
Training Restricted Boltzmann Machines on Word Observations
cs.LG stat.ML
The restricted Boltzmann machine (RBM) is a flexible tool for modeling complex data, however there have been significant computational difficulties in using RBMs to model high-dimensional multinomial observations. In natural language processing applications, words are naturally modeled by K-ary discrete distributions, where K is determined by the vocabulary size and can easily be in the hundreds of thousands. The conventional approach to training RBMs on word observations is limited because it requires sampling the states of K-way softmax visible units during block Gibbs updates, an operation that takes time linear in K. In this work, we address this issue by employing a more general class of Markov chain Monte Carlo operators on the visible units, yielding updates with computational complexity independent of K. We demonstrate the success of our approach by training RBMs on hundreds of millions of word n-grams using larger vocabularies than previously feasible and using the learned features to improve performance on chunking and sentiment classification tasks, achieving state-of-the-art results on the latter.
1202.5713
The warm-start bias of Yelp ratings
cs.SI
Yelp ratings are often viewed as a reputation metric for local businesses. In this paper we study how Yelp ratings evolve over time. Our main finding is that on average the first ratings that businesses receive overestimate their eventual reputation. In particular, the first review that a business receives in our dataset averages 4.1 stars, while the 20th review averages just 3.69 stars. This significant warm-start bias, which may be attributed to the limited exposure of a business in its first steps, can mask analyses performed on ratings and their reputational ramifications. Therefore, we study techniques to identify and correct for this bias. Further, we perform a case study to explore the effect of a Groupon deal on the merchant's subsequent ratings and show both that previous research has overestimated Groupon's effect on merchants' reputation and that average ratings anticorrelate with the number of reviews received. Our analysis points to the importance of identifying and removing biases from Yelp reviews.
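The per-position averaging behind the warm-start measurement (mean stars of the 1st, 2nd, ... review, across businesses) can be sketched as below; the tuple-based record layout is a hypothetical stand-in for the actual Yelp dataset schema.

```python
from collections import defaultdict

def rating_by_review_index(reviews):
    """Average star rating as a function of review position.

    reviews: iterable of (business_id, timestamp, stars).  Returns a dict
    mapping review index (1-based, chronological per business) to the mean
    rating at that position across all businesses.
    """
    per_business = defaultdict(list)
    # Group each business's reviews in chronological order.
    for biz, ts, stars in sorted(reviews, key=lambda r: (r[0], r[1])):
        per_business[biz].append(stars)
    by_index = defaultdict(list)
    for stars_seq in per_business.values():
        for i, s in enumerate(stars_seq, start=1):
            by_index[i].append(s)
    return {i: sum(v) / len(v) for i, v in by_index.items()}

# Toy data: two businesses whose later reviews are lower than their first.
reviews = [("a", 1, 5), ("a", 2, 3), ("b", 1, 4), ("b", 2, 2)]
avg = rating_by_review_index(reviews)
```

A declining curve of `avg[i]` over `i`, as in the paper's 4.1-star first review versus 3.69-star 20th review, is the signature of the warm-start bias.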
1202.5722
S3A: Secure System Simplex Architecture for Enhanced Security of Cyber-Physical Systems
cs.CR cs.SY
Until recently, cyber-physical systems, especially those with safety-critical properties that manage critical infrastructure (e.g. power generation plants, water treatment facilities, etc.) were considered to be invulnerable against software security breaches. The recently discovered 'W32.Stuxnet' worm has drastically changed this perception by demonstrating that such systems are susceptible to external attacks. Here we present an architecture that enhances the security of safety-critical cyber-physical systems despite the presence of such malware. Our architecture uses the property that control systems have deterministic execution behavior, to detect an intrusion within 0.6 {\mu}s while still guaranteeing the safety of the plant. We also show that even if an attack is successful, the overall state of the physical system will still remain safe. Even if the operating system's administrative privileges have been compromised, our architecture will still be able to protect the physical system from coming to harm.
1202.5820
Tag-Aware Recommender Systems: A State-of-the-art Survey
cs.IR cs.SI
In the past decade, Social Tagging Systems have attracted increasing attention from both the physical and computer science communities. Besides the underlying structure and dynamics of tagging systems, many efforts have been devoted to unifying tagging information to reveal user behaviors and preferences, extract the latent semantic relations among items, make recommendations, and so on. Specifically, this article summarizes recent progress on tag-aware recommender systems, emphasizing the contributions from three mainstream perspectives and approaches: network-based methods, tensor-based methods, and topic-based methods. Finally, we outline some other tag-related works and future challenges of tag-aware recommendation algorithms.
1202.5830
On Secrecy Rate of the Generalized Artificial-Noise Assisted Secure Beamforming for Wiretap Channels
cs.IT math.IT
In this paper we consider secure transmission in fast Rayleigh fading channels with full knowledge of the main channel and only the statistics of the eavesdropper's channel state information at the transmitter. For multiple-input, single-output, single-antenna-eavesdropper systems, we generalize Goel and Negi's celebrated artificial-noise (AN) assisted beamforming, which selects the directions in which to transmit AN heuristically. Our scheme may inject AN in the direction of the message, which outperforms Goel and Negi's scheme, where AN is only injected in the directions orthogonal to the main channel. The ergodic secrecy rate of the proposed AN scheme can be represented by a highly simplified power allocation problem. To attain it, we prove that the optimal transmission scheme for the message bearing signal is a beamformer aligned to the direction of the legitimate channel. After characterizing the optimal eigenvectors of the covariance matrices of signal and AN, we also provide the necessary condition for transmitting AN in the main channel to be optimal. Since the resulting secrecy rate is a non-convex power allocation problem, we develop an algorithm to efficiently solve it. Simulation results show that our generalized AN scheme outperforms Goel and Negi's, especially when the quality of the legitimate channel is much worse than that of the eavesdropper's. In particular, the regime with non-zero secrecy rate is enlarged, which can significantly improve the connectivity of the secure network when the proposed AN assisted beamforming is applied.
1202.5844
Divide-and-Conquer Method for L1 Norm Matrix Factorization in the Presence of Outliers and Missing Data
cs.NA cs.CV
The low-rank matrix factorization as a L1 norm minimization problem has recently attracted much attention due to its intrinsic robustness to the presence of outliers and missing data. In this paper, we propose a new method, called the divide-and-conquer method, for solving this problem. The main idea is to break the original problem into a series of smallest possible sub-problems, each involving only a single scalar parameter. Each of these sub-problems is proved to be convex and has a closed-form solution. By recursively optimizing these small problems in an analytical way, an efficient algorithm for solving the original problem, entirely avoiding time-consuming numerical optimization as an inner loop, can naturally be constructed. The computational complexity of the proposed algorithm is approximately linear in both data size and dimensionality, making it possible to handle large-scale L1 norm matrix factorization problems. The algorithm is also theoretically proved to be convergent. Based on a series of experimental results, it is substantiated that our method always achieves better results than the current state-of-the-art methods on L1 norm matrix factorization in both computational time and accuracy, especially on large-scale applications such as face recognition and structure from motion.
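The "smallest possible sub-problem" referred to above, fitting a single scalar under the L1 norm, indeed has a closed-form solution: a weighted median. The sketch below shows that scalar building block only (variable names assumed); the full algorithm alternates such updates over the rows and columns of the factors.

```python
def weighted_median_fit(m, v):
    """Closed-form solution of the scalar L1 sub-problem

        min_u  sum_j |m_j - u * v_j|,

    which is the weighted median of the ratios m_j / v_j with weights |v_j|.
    Here m is one row of the data matrix and v the current column factor.
    """
    pairs = sorted((mj / vj, abs(vj)) for mj, vj in zip(m, v) if vj != 0)
    total = sum(w for _, w in pairs)
    acc = 0.0
    for ratio, w in pairs:
        acc += w
        if acc >= total / 2:   # first ratio where cumulative weight crosses half
            return ratio
    return pairs[-1][0]

# m = [2, 4, 100] with v = [1, 2, 1]: the outlier 100 is ignored and the
# weighted median recovers u = 2, the value fitting the clean entries exactly.
u = weighted_median_fit([2, 4, 100], [1, 2, 1])
```

A least-squares fit on the same data would be dragged toward the outlier; the median's insensitivity to it is exactly the robustness the L1 formulation buys.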
1202.5857
Algebraic Fast-Decodable Relay Codes for Distributed Communications
cs.IT math.IT math.RA
In this paper, fast-decodable lattice code constructions are designed for the nonorthogonal amplify-and-forward (NAF) multiple-input multiple-output (MIMO) channel. The constructions are based on different types of algebraic structures, e.g. quaternion division algebras. When satisfying certain properties, these algebras provide us with codes whose structure naturally reduces the decoding complexity. The complexity can be further reduced by shortening the block length, i.e., by considering rectangular codes called less than minimum delay (LMD) codes.
1202.5895
Asymptotic behaviour of gossip processes and small world networks
math.PR cs.SI physics.soc-ph
Both small world models of random networks with occasional long range connections and gossip processes with occasional long range transmission of information have similar characteristic behaviour. The long range elements appreciably reduce the effective distances, measured in space or in time, between pairs of typical points. In this paper, we show that their common behaviour can be interpreted as a product of the locally branching nature of the models. In particular, it is shown that both typical distances between points and the proportion of space that can be reached within a given distance or time can be approximated by formulae involving the limit random variable of the branching process.
1202.5909
Closed benchmarks for network community structure characterization
physics.soc-ph cond-mat.stat-mech cs.SI
Characterizing the community structure of complex networks is a key challenge in many scientific fields. Very diverse algorithms and methods have been proposed to this end, many working reasonably well in specific situations. However, no consensus has emerged on which of these methods is the best to use in practice. In part, this is due to the fact that testing their performance requires the generation of a comprehensive, standard set of synthetic benchmarks, a goal not yet fully achieved. Here, we present a type of benchmark that we call "closed", in which an initial network of known community structure is progressively converted into a second network whose communities are also known. This approach differs from all previously published ones, in which networks evolve toward randomness. The use of this type of benchmark allows us to monitor the transformation of the community structure of a network. Moreover, we can predict the optimal behavior of the variation of information, a measure of the quality of the partitions obtained, at any moment of the process. This enables us in many cases to determine the best partition among those suggested by different algorithms. Also, since any network can be used as a starting point, extensive studies and comparisons can be performed using a heterogeneous set of structures, including random ones. These properties make our benchmarks a general standard for comparing community detection algorithms.
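The abstract describes the "closed" benchmark only at a high level; one plausible minimal sketch of the progressive conversion between two networks with known communities (all names and the interpolation rule are our assumptions):

```python
import random

def closed_benchmark(edges_a, edges_b, t, seed=None):
    """Interpolate between two networks of known community structure.

    At t = 0 the returned edge set is exactly that of network A; at
    t = 1 it is that of network B.  Edges shared by both networks are
    always kept, while a fraction t of the A-only edges is replaced by
    a matching fraction of the B-only edges, so the community structure
    morphs gradually from A's to B's instead of toward randomness.
    """
    rng = random.Random(seed)
    a, b = set(edges_a), set(edges_b)
    common = a & b
    only_a, only_b = sorted(a - b), sorted(b - a)
    rng.shuffle(only_a)
    rng.shuffle(only_b)
    keep_a = only_a[: round((1 - t) * len(only_a))]
    take_b = only_b[: round(t * len(only_b))]
    return common | set(keep_a) | set(take_b)
```

Since both endpoints have known partitions, a community-detection algorithm can be scored at every intermediate t against a predictable ground truth.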
1202.5913
Fly out-smarts man
q-bio.PE cs.CL physics.bio-ph
Precopulatory courtship is a high-cost and poorly understood mystery of the animal world. Drosophila's (=D.'s) precopulatory courtship shows marked structural similarities not only with mammalian courtship, but also with human spoken language. This suggests studying the purpose, the modalities and, in particular, the power of this language, and comparing it to human language. Following a mathematical symbolic-dynamics approach, we translate courtship videos of D.'s body language into a formal language. This approach made it possible to show that D. may use its body language to express individual information - information that may be important for evolutionary optimization - on top of sexual group membership. Here, we use Chomsky's hierarchical language classification to characterize the power of D.'s body language, and then compare it with the power of languages spoken by humans. We find that, from a formal-language point of view, D.'s body language is at least as powerful as the languages spoken by humans. From this we conclude that human intellect cannot be the direct consequence of the formal grammar complexity of human language.
1202.5938
Intelligent Car System
cs.RO
In modern life, road safety has become a core issue. A single move by a driver can cause a horrifying accident. The main goal of the intelligent car system is to communicate with other cars on the road. The system is able to control the speed, the direction and the distance between cars; it can also recognize traffic lights and take decisions accordingly. This paper presents a framework for the intelligent car system. We validate several aspects of our system using simulation.
1202.5953
On an Ethical Use of Neural Networks: A Case Study on a North Indian Raga
cs.NE cs.SD
The paper gives an artificial neural network (ANN) approach to time series modeling, the data being time instances versus notes (characterized by pitch) depicting the structure of a North Indian raga, namely Bageshree. Respecting the sentiments of the artists' community, the paper argues why it is more ethical to model a structure than to try to "manufacture" an artist by training the neural network to copy the performances of artists. Indian Classical Music centers on the ragas, where emotion and devotion are both important, and neither can be substituted by the "calculated artistry" that such ANN-generated copies ultimately amount to.
1202.5967
Joint Source-Channel Cooperative Transmission over Relay-Broadcast Networks
cs.IT math.IT
Reliable transmission of a discrete memoryless source over a multiple-relay relay-broadcast network is considered. Motivated by sensor network applications, it is assumed that the relays and the destinations all have access to side information correlated with the underlying source signal. Joint source-channel cooperative transmission is studied in which the relays help the transmission of the source signal to the destinations by using both their overheard signals, as in the classical channel cooperation scenario, as well as the available correlated side information. Decode-and-forward (DF) based cooperative transmission is considered in a network of multiple relay terminals and two different achievability schemes are proposed: i) a regular encoding and sliding-window decoding scheme without explicit source binning at the encoder, and ii) a semi-regular encoding and backward decoding scheme with binning based on the side information statistics. It is shown that both of these schemes lead to the same source-channel code rate, which is shown to be the "source-channel capacity" in the case of i) a physically degraded relay network in which the side information signals are also degraded in the same order as the channel; and ii) a relay-broadcast network in which all the terminals want to reconstruct the source reliably, while at most one of them can act as a relay.
1202.6001
Efficiently Sampling Multiplicative Attribute Graphs Using a Ball-Dropping Process
stat.ML cs.LG
We introduce a novel and efficient sampling algorithm for the Multiplicative Attribute Graph Model (MAGM - Kim and Leskovec (2010)). Our algorithm is \emph{strictly} more efficient than the algorithm proposed by Yun and Vishwanathan (2012), in the sense that our method extends the \emph{best} time complexity guarantee of their algorithm to a larger fraction of the parameter space. Both in theory and in empirical evaluation on sparse graphs, our new algorithm outperforms the previous one. To design our algorithm, we first define a stochastic \emph{ball-dropping process} (BDP). Although a special case of this process was introduced as an efficient approximate sampling algorithm for the Kronecker Product Graph Model (KPGM - Leskovec et al. (2010)), neither \emph{why} such an approximation works nor \emph{what} distribution this process actually samples from has been addressed so far, to the best of our knowledge. Our rigorous treatment of the BDP enables us to clarify the rationale behind a BDP approximation of the KPGM, and to design an efficient sampling algorithm for the MAGM.
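The special case of the BDP used for KPGM sampling can be illustrated as follows: each "ball" descends the Kronecker hierarchy one level at a time, picking a quadrant with probability proportional to the initiator-matrix entry. This is a hedged sketch of the general idea only (function name, 2x2 initiator, and duplicate handling are our assumptions, not the paper's construction):

```python
import random

def ball_drop(theta, k, num_balls, seed=None):
    """Sketch of a ball-dropping process for a KPGM-style model.

    `theta` is a 2x2 initiator matrix of non-negative weights.  Each of
    the `num_balls` balls descends k levels of the Kronecker hierarchy;
    at every level a cell (r, c) is picked with probability
    proportional to theta[r][c], refining the edge position by one
    quadrant.  Duplicate hits collapse into a single edge.
    """
    rng = random.Random(seed)
    cells = [(r, c) for r in range(2) for c in range(2)]
    weights = [theta[r][c] for r, c in cells]
    edges = set()
    for _ in range(num_balls):
        row = col = 0
        for _ in range(k):
            r, c = rng.choices(cells, weights=weights)[0]
            row, col = 2 * row + r, 2 * col + c
        edges.add((row, col))
    return edges
```

The collapsing of duplicate hits is precisely the point where the process only approximates the target edge distribution, which is the question the abstract says has not been rigorously addressed before.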
1202.6009
Marginality: a numerical mapping for enhanced treatment of nominal and hierarchical attributes
cs.AI
The purpose of statistical disclosure control (SDC) of microdata, a.k.a. data anonymization or privacy-preserving data mining, is to publish data sets containing the answers of individual respondents in such a way that the respondents corresponding to the released records cannot be re-identified and the released data are analytically useful. SDC methods are either based on masking the original data, generating synthetic versions of them or creating hybrid versions by combining original and synthetic data. The choice of SDC methods for categorical data, especially nominal data, is much smaller than the choice of methods for numerical data. We mitigate this problem by introducing a numerical mapping for hierarchical nominal data which allows computing means, variances and covariances on them.