Very recently, we proposed the row-monomial distributed orthogonal space-time block codes (DOSTBCs) and showed that they achieve approximately twice the bandwidth efficiency of the repetition-based cooperative strategy [1]. However, we imposed two limitations on the row-monomial DOSTBCs. The first was that the associated matrices of the codes must be row-monomial. The other was the assumption that the relays have no channel state information (CSI) of the channels from the source to the relays, although this CSI could be readily obtained at the relays without any additional pilot signals or feedback overhead. In this paper, we first remove the row-monomial limitation but keep the CSI limitation. For this case, we derive an upper bound on the data-rate of the DOSTBCs, which is larger than that of the row-monomial DOSTBCs in [1]. Secondly, we abandon the CSI limitation but keep the row-monomial limitation. Specifically, we propose the row-monomial DOSTBCs with channel phase information (DOSTBCs-CPI) and derive an upper bound on their data-rate. The row-monomial DOSTBCs-CPI have a higher data-rate than the DOSTBCs and the row-monomial DOSTBCs. Furthermore, we find actual row-monomial DOSTBCs-CPI which achieve the upper bound on the data-rate.
The Impact of Noise Correlation and Channel Phase Information on the Data-Rate of the Single-Symbol ML Decodable Distributed STBCs
7,700
We present a joint source-channel multiple description (JSC-MD) framework for resource-constrained network communications (e.g., sensor networks), in which one or many deprived encoders communicate a Markov source against bit errors and erasure errors to many heterogeneous decoders, some powerful and some deprived. To keep the encoder complexity at a minimum, the source is coded into K descriptions by a simple multiple description quantizer (MDQ) with neither entropy nor channel coding. The code diversity of MDQ and the path diversity of the network are exploited by decoders to correct transmission errors and improve coding efficiency. A key design objective is resource scalability: powerful nodes in the network can perform JSC-MD distributed estimation/decoding under the criteria of maximum a posteriori probability (MAP) or minimum mean-square error (MMSE), while primitive nodes resort to simpler MD decoding, all working with the same MDQ code. The application of JSC-MD to distributed estimation of hidden Markov models in a sensor network is demonstrated. The proposed JSC-MD MAP estimator is a longest-path algorithm on a weighted directed acyclic graph, while the JSC-MD MMSE decoder is an extension of the well-known forward-backward algorithm to multiple descriptions. Both algorithms simultaneously exploit the source memory, the redundancy of the fixed-rate MDQ, and the inter-description correlations. They outperform the existing hard-decision MDQ decoders by large margins (up to 8 dB). For Gaussian Markov sources, the complexity of JSC-MD distributed MAP sequence estimation can be made as low as that of typical single description Viterbi-type algorithms.
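The single-description forward-backward recursion that the JSC-MD MMSE decoder extends can be sketched as follows. This is the classical algorithm for a discrete HMM, not the paper's multiple-description extension, and the HMM parameters below are arbitrary illustrative values:

```python
import numpy as np

def forward_backward(A, B, pi, obs):
    """Classical forward-backward algorithm for a discrete HMM.
    A: state transition matrix, B: emission matrix (state x symbol),
    pi: initial state distribution, obs: observation sequence.
    Returns the per-time posterior state probabilities, the quantities
    an MMSE decoder would average over."""
    T, S = len(obs), len(pi)
    alpha = np.zeros((T, S))
    beta = np.ones((T, S))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):                      # forward pass
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    for t in range(T - 2, -1, -1):             # backward pass
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

A = np.array([[0.9, 0.1], [0.2, 0.8]])   # sticky two-state Markov source
B = np.array([[0.8, 0.2], [0.3, 0.7]])   # noisy observation channel
pi = np.array([0.5, 0.5])
post = forward_backward(A, B, pi, [0, 0, 1, 1])
```

With sticky transitions and two initial observations of symbol 0, the posterior at time 0 favors state 0, as expected.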
Networked Multiple Description Estimation and Compression with Resource Scalability
7,701
Consider a wireless MIMO multi-hop channel with n_s non-cooperating source antennas and n_d fully cooperating destination antennas, as well as L clusters containing k non-cooperating relay antennas each. The source signal traverses all L clusters of relay antennas, before it reaches the destination. When relay antennas within the same cluster scale their received signals by the same constant before the retransmission, the equivalent channel matrix H relating the input signals at the source antennas to the output signals at the destination antennas is proportional to the product of channel matrices H_l, l=1,...,L+1, corresponding to the individual hops. We perform an asymptotic capacity analysis for this channel as follows: In a first instance we take the limits n_s->infty, n_d->infty and k->infty, but keep both n_s/n_d and k/n_d fixed. Then, we take the limits L->infty and k/n_d->infty. Requiring that the H_l's satisfy the conditions needed for the Marcenko-Pastur law, we prove that the capacity scales linearly in min{n_s,n_d}, as long as the ratio k/n_d scales at least linearly in L. Moreover, we show that up to a noise penalty and a pre-log factor the capacity of a point-to-point MIMO channel is approached, when this scaling is slightly faster than linear. Conversely, almost all spatial degrees of freedom vanish for less than linear scaling.
On the Distortion of the Eigenvalue Spectrum in MIMO Amplify-and-Forward Multi-Hop Channels
7,702
The optimal decoder achieving the outage capacity under imperfect channel estimation is investigated. First, by searching into the family of nearest neighbor decoders, which can be easily implemented on most practical coded modulation systems, we derive a decoding metric that minimizes the average of the transmission error probability over all channel estimation errors. Next, we specialize our general expression to obtain the corresponding decoding metric for fading MIMO channels. According to the notion of estimation-induced outage (EIO) capacity introduced in our previous work and assuming no channel state information (CSI) at the transmitter, we characterize maximal achievable information rates, using Gaussian codebooks, associated to the proposed decoder. In the case of uncorrelated Rayleigh fading, these achievable rates are compared to the rates achieved by the classical mismatched maximum-likelihood (ML) decoder and the ultimate limits given by the EIO capacity. Numerical results show that the derived metric provides significant gains for the considered scenario, in terms of achievable information rates and bit error rate (BER), in a bit interleaved coded modulation (BICM) framework, without introducing any additional decoding complexity.
On the Outage Capacity of a Practical Decoder Accounting for Channel Estimation Inaccuracies
7,703
The Gilbert-Varshamov bound states that the maximum size A_2(n,d) of a binary code of length n and minimum distance d satisfies A_2(n,d) >= 2^n / V(n,d-1), where V(n,r) stands for the volume of a Hamming ball of radius r. Recently Jiang and Vardy showed that for binary non-linear codes this bound can be improved to A_2(n,d) >= c n 2^n / V(n,d-1) for a constant c and d/n <= 0.499. In this paper we show that certain asymptotic families of linear binary [n,n/2] random double circulant codes satisfy the same improved Gilbert-Varshamov bound.
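The bound is easy to check numerically; a minimal sketch with arbitrarily chosen parameters n and d:

```python
from math import comb

def hamming_ball(n, r):
    """V(n, r): number of binary words within Hamming distance r of a fixed word."""
    return sum(comb(n, i) for i in range(r + 1))

def gv_lower_bound(n, d):
    """Gilbert-Varshamov lower bound on A_2(n, d)."""
    return 2**n / hamming_ball(n, d - 1)

# V(10, 2) = 1 + 10 + 45 = 56, so the bound is 1024 / 56 ~ 18.3:
lb = gv_lower_bound(10, 3)
```

Any binary code of length 10 and minimum distance 3 of maximal size must therefore have at least 19 codewords.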
Asymptotic improvement of the Gilbert-Varshamov bound for linear codes
7,704
Distributed Orthogonal Space-Time Block Codes (DOSTBCs) achieving full diversity order and single-symbol ML decodability have been introduced recently for cooperative networks, and an upper bound on the maximal rate of such codes, along with code constructions, has been presented. In this report, we introduce a new class of Distributed STBCs called Semi-orthogonal Precoded Distributed Single-Symbol Decodable STBCs (S-PDSSDCs), wherein the source performs co-ordinate interleaving of the information symbols appropriately before transmitting them to all the relays. It is shown that DOSTBCs are a special case of S-PDSSDCs. A special class of S-PDSSDCs having a diagonal covariance matrix at the destination is studied and an upper bound on the maximal rate of such codes is derived. The bounds obtained are approximately twice those of the DOSTBCs. A systematic construction of S-PDSSDCs is presented when the number of relays $K \geq 4$. The constructed codes are shown to achieve the upper bound on the rate when $K$ is congruent to 0 or 3 modulo 4. For the remaining values of $K$, the constructed codes are shown to have rates higher than those of DOSTBCs. It is also shown that S-PDSSDCs cannot be constructed with any form of linear processing at the relays when the source doesn't perform co-ordinate interleaving of the information symbols.
High Rate Single-Symbol Decodable Precoded DSTBCs for Cooperative Networks
7,705
The role of multiple antennas for secure communication is investigated within the framework of Wyner's wiretap channel. We characterize the secrecy capacity in terms of generalized eigenvalues when the sender and eavesdropper have multiple antennas, the intended receiver has a single antenna, and the channel matrices are fixed and known to all the terminals, and show that a beamforming strategy is capacity-achieving. In addition, we show that in the high signal-to-noise ratio (SNR) regime the penalty for not knowing the eavesdropper's channel is small--a simple ``secure space-time code'' that can be thought of as masked beamforming and radiates power isotropically attains near-optimal performance. In the limit of a large number of antennas, we obtain a realization-independent characterization of the secrecy capacity as a function of $\beta$, the number of eavesdropper antennas per sender antenna. We show that the eavesdropper is comparatively ineffective when $\beta<1$, but that for $\beta\ge2$ the eavesdropper can drive the secrecy capacity to zero, thereby blocking secure communication to the intended receiver. Extensions to ergodic fading channels are also provided.
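A small numerical sketch of the generalized-eigenvalue characterization, assuming (as summarized above) that the secrecy capacity equals the log of the largest generalized eigenvalue of the pencil (I + P h h^H, I + P G^H G), clipped at zero; the channels and power below are arbitrary:

```python
import numpy as np

def misome_secrecy_capacity(h, G, P):
    """Secrecy capacity (bits) of a MISOME wiretap channel via the largest
    generalized eigenvalue of (I + P h h^H, I + P G^H G).
    h: intended receiver's channel (length-n_t vector), G: eavesdropper
    channel matrix, P: transmit power. Clipping at zero handles an
    eavesdropper that is stronger than the intended receiver."""
    n = len(h)
    A = np.eye(n) + P * np.outer(h, np.conj(h))
    Bm = np.eye(n) + P * G.conj().T @ G
    lam = np.max(np.real(np.linalg.eigvals(np.linalg.solve(Bm, A))))
    return max(0.0, float(np.log2(lam)))

h = np.array([1.0, 0.5])
G0 = np.zeros((1, 2))                     # eavesdropper observes nothing
c_no_eve = misome_secrecy_capacity(h, G0, P=10.0)
# with G = 0 this reduces to the ordinary MISO capacity log2(1 + P ||h||^2)
```

As a sanity check, a very strong eavesdropper (e.g. `G = 10 * np.eye(2)`) drives the clipped capacity to zero.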
Secure Transmission with Multiple Antennas: The MISOME Wiretap Channel
7,706
This paper provides a new duality between entropy functions and network codes. Given a function $g\geq 0$ defined on all proper subsets of $N$ random variables, we provide a construction for a network multicast problem which is solvable if and only if $g$ is entropic. The underlying network topology is fixed and the multicast problem depends on $g$ only through edge capacities and source rates. Relaxing the requirement that the domain of $g$ be subsets of random variables, we obtain a similar duality between polymatroids and the linear programming bound. These duality results provide an alternative proof of the insufficiency of linear (and abelian) network codes, and demonstrate the utility of non-Shannon inequalities to tighten outer bounds on network coding capacity regions.
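What it means for $g$ to be entropic can be made concrete: every entropy function is a polymatroid (zero at the empty set, monotone, submodular), which is easy to verify numerically for a small joint distribution. The distribution below is chosen arbitrarily for illustration:

```python
import numpy as np
from itertools import combinations

def entropy_function(p):
    """h(S) for every subset S of the N variables of a joint pmf p,
    given as an array of shape (2,) * N."""
    N = p.ndim
    h = {}
    for k in range(N + 1):
        for S in combinations(range(N), k):
            # marginalize out the variables not in S
            marg = p.sum(axis=tuple(i for i in range(N) if i not in S))
            q = np.asarray(marg).ravel()
            q = q[q > 0]
            h[S] = float(-(q * np.log2(q)).sum())
    return h

def is_submodular(h, N):
    """Check h(S) + h(T) >= h(S u T) + h(S n T) for all subset pairs."""
    subsets = [S for k in range(N + 1) for S in combinations(range(N), k)]
    for S in subsets:
        for T in subsets:
            u = tuple(sorted(set(S) | set(T)))
            i = tuple(sorted(set(S) & set(T)))
            if h[S] + h[T] < h[u] + h[i] - 1e-9:
                return False
    return True

rng = np.random.default_rng(0)
p = rng.random((2, 2, 2))
p /= p.sum()                 # arbitrary joint pmf of three binary variables
h = entropy_function(p)
ok = is_submodular(h, 3)
```

Functions $g$ violating one of these polymatroid axioms cannot be entropic, which is what the duality above turns into a statement about network solvability.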
Dualities Between Entropy Functions and Network Codes
7,707
In this paper, the inherent drawbacks of naive lattice decoding for MIMO fading systems are investigated. We show that using naive lattice decoding for MIMO systems has considerable deficiencies in terms of the rate-diversity trade-off. Unlike the case of maximum-likelihood decoding, even the perfect lattice space-time codes, which have the non-vanishing determinant property, cannot achieve the optimal rate-diversity trade-off. Indeed, we show that in the case of naive lattice decoding, when we fix the underlying lattice, all codes based on full-rate lattices have the same rate-diversity trade-off as V-BLAST. Also, we derive a lower bound on the symbol error probability of naive lattice decoding for fixed-rate MIMO systems (with equal numbers of receive and transmit antennas). This bound shows that asymptotically, naive lattice decoding has an unbounded loss in terms of the required SNR, compared to maximum-likelihood decoding.
On The Limitations of The Naive Lattice Decoding
7,708
Distributed Orthogonal Space-Time Block Codes (DOSTBCs) achieving full diversity order and single-symbol ML decodability have been introduced recently by Yi and Kim for cooperative networks, and an upper bound on the maximal rate of such codes, along with code constructions, has been presented. In this paper, we introduce a new class of Distributed STBCs called Semi-orthogonal Precoded Distributed Single-Symbol Decodable STBCs (S-PDSSDCs), wherein the source performs co-ordinate interleaving of the information symbols appropriately before transmitting them to all the relays. It is shown that DOSTBCs are a special case of S-PDSSDCs. A special class of S-PDSSDCs having a diagonal covariance matrix at the destination is studied and an upper bound on the maximal rate of such codes is derived. The bounds obtained are approximately twice those of the DOSTBCs. A systematic construction of S-PDSSDCs is presented when the number of relays $K \geq 4$. The constructed codes are shown to achieve the upper bound on the rate when $K$ is congruent to 0 or 3 modulo 4. For the remaining values of $K$, the constructed codes are shown to have rates higher than those of DOSTBCs. It is shown that S-PDSSDCs cannot be constructed with any form of linear processing at the relays when the source doesn't perform co-ordinate interleaving of the information symbols. Simulation results show that S-PDSSDCs have better probability of error performance than DOSTBCs.
High Rate Single-Symbol ML Decodable Precoded DSTBCs for Cooperative Networks
7,709
In this paper, detection of the primary user (PU) signal in an orthogonal frequency division multiplexing (OFDM) based cognitive radio (CR) system is addressed. According to the prior knowledge of the PU signal available to the detector, three detection algorithms based on the Neyman-Pearson philosophy are proposed. In the first case, a Gaussian PU signal with a completely known probability density function (PDF), except for its received power, is considered. The frequency band in which the PU signal resides is also assumed known. Detection is performed individually at each OFDM sub-carrier possibly interfered by the PU signal, and the results are then combined to form a final decision. In the second case, the sub-carriers in which the PU signal resides are known. Observations from all possibly interfered sub-carriers are considered jointly, to exploit the fact that the presence of a PU signal interferes with all of them simultaneously. In the last case, no prior knowledge of the PU signal is assumed to be available, and detection involves a search over the frequency axis for the interfered band. The proposed detector is able to detect an abrupt power change when tracing along the frequency axis.
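A generic Neyman-Pearson energy detector, with the threshold set empirically from noise-only samples, gives a feel for the per-sub-carrier tests; this is only a sketch of the general philosophy with arbitrary parameters, not the paper's specific detectors:

```python
import numpy as np

rng = np.random.default_rng(1)

def np_threshold(n_samples, pfa, trials=20000):
    """Empirical Neyman-Pearson threshold for an energy detector: the
    (1 - pfa) quantile of the noise-only statistic sum |w_k|^2 over
    n_samples unit-variance complex Gaussian noise samples."""
    noise = (rng.standard_normal((trials, n_samples)) +
             1j * rng.standard_normal((trials, n_samples))) / np.sqrt(2)
    stats = np.sum(np.abs(noise) ** 2, axis=1)
    return float(np.quantile(stats, 1 - pfa))

def detect(x, thr):
    """Declare the PU present if the energy statistic exceeds the threshold."""
    return np.sum(np.abs(x) ** 2) > thr

thr = np_threshold(n_samples=32, pfa=0.1)

# check the false alarm rate on fresh noise-only observations
fa = np.mean([detect((rng.standard_normal(32) +
                      1j * rng.standard_normal(32)) / np.sqrt(2), thr)
              for _ in range(2000)])
```

The empirical false alarm rate lands near the 0.1 target, which is the defining property of a Neyman-Pearson test.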
Spectrum Sensing in Wideband OFDM Cognitive Radios
7,710
In this work, we consider a partially cooperative relay broadcast channel (PC-RBC) controlled by random parameters. We provide rate regions for two different situations: 1) when side information (SI) S^n on the random parameters is non-causally known at both the source and the relay and, 2) when side information S^n is non-causally known at the source only. These achievable regions are derived for the general discrete memoryless case first and then extended to the case when the channel is degraded Gaussian and the SI is additive i.i.d. Gaussian. In this case, the source uses generalized dirty paper coding (GDPC), i.e., DPC combined with partial state cancellation, when only the source is informed, and DPC alone when both the source and the relay are informed. It appears that, even though it cannot completely eliminate the effect of the SI (in contrast to the case where both the source and the relay are informed), GDPC is particularly useful when only the source is informed.
Rate Regions for the Partially-Cooperative Relay Broadcast Channel with Non-causal Side Information
7,711
This paper is focused on the derivation of some universal properties of capacity-approaching low-density parity-check (LDPC) code ensembles whose transmission takes place over memoryless binary-input output-symmetric (MBIOS) channels. Properties of the degree distributions, graphical complexity and the number of fundamental cycles in the bipartite graphs are considered via the derivation of information-theoretic bounds. These bounds are expressed in terms of the target block/bit error probability and the gap (in rate) to capacity. Most of the bounds are general for any decoding algorithm, while some others are proved under belief propagation (BP) decoding. Proving these bounds under a certain decoding algorithm validates them automatically under any sub-optimal decoding algorithm as well. A proper modification of these bounds makes them universal for the set of all MBIOS channels which exhibit a given capacity. The bounds on the degree distributions and graphical complexity apply to finite-length LDPC codes and to the asymptotic case of an infinite block length. The bounds are compared with capacity-approaching LDPC code ensembles under BP decoding, and they are shown to be informative and easy to calculate. Finally, some interesting open problems are considered.
On Universal Properties of Capacity-Approaching LDPC Ensembles
7,712
In this paper we investigate algorithmic randomness on more general spaces than the Cantor space, namely computable metric spaces. To do this, we first develop a unified framework allowing computations with probability measures. We show that any computable metric space with a computable probability measure is isomorphic to the Cantor space in a computable and measure-theoretic sense. We show that any computable metric space admits a universal uniform randomness test (without further assumption).
Computability of probability measures and Martin-Löf randomness over metric spaces
7,713
Very recently, an operator channel was defined by Koetter and Kschischang when they studied random network coding. They also introduced constant dimension codes and demonstrated that these codes can be employed to correct errors and/or erasures over the operator channel. Constant dimension codes are equivalent to the so-called linear authentication codes introduced by Wang, Xing and Safavi-Naini when constructing distributed authentication systems in 2003. In this paper, we study constant dimension codes. It is shown that Steiner structures are optimal constant dimension codes achieving the Wang-Xing-Safavi-Naini bound. Furthermore, we show that constant dimension codes achieve the Wang-Xing-Safavi-Naini bound if and only if they are certain Steiner structures. Then, we derive two Johnson type upper bounds, referred to as bounds I and II, on constant dimension codes. The Johnson type bound II slightly improves on the Wang-Xing-Safavi-Naini bound. Finally, we point out that a family of known Steiner structures is actually a family of optimal constant dimension codes achieving both the Johnson type bounds I and II.
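Constant dimension codes are sets of k-dimensional subspaces of F_q^n, and the Gaussian binomial coefficient counting such subspaces is the basic ingredient of bounds of this kind. A quick, purely illustrative computation:

```python
def gaussian_binomial(n, k, q):
    """Number of k-dimensional subspaces of F_q^n, i.e. the Gaussian
    binomial coefficient [n choose k]_q."""
    num = den = 1
    for i in range(k):
        num *= q**(n - i) - 1      # ways to pick the next basis vector
        den *= q**(i + 1) - 1      # bases generating the same subspace
    return num // den              # always an integer

count = gaussian_binomial(4, 2, 2)   # 2-dim subspaces of F_2^4
```

For instance, a constant dimension code of dimension 2 in F_2^4 lives inside an ambient set of 35 subspaces, and the 1-dimensional subspaces of F_2^5 number 2^5 - 1 = 31.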
Johnson Type Bounds on Constant Dimension Codes
7,714
The large majority of commercially available multiple-input multiple-output (MIMO) radio channel measurement devices (sounders) is based on time-division multiplexed switching (TDMS) of a single transmit/receive radio-frequency chain into the elements of a transmit/receive antenna array. While being cost-effective, such a solution can cause significant measurement errors due to phase noise and frequency offset in the local oscillators. In this paper, we systematically analyze the resulting errors and show that, in practice, overestimation of channel capacity by several hundred percent can occur. Overestimation is caused by phase noise (and to a lesser extent frequency offset) leading to an increase of the MIMO channel rank. Our analysis furthermore reveals that the impact of phase errors is, in general, most pronounced if the physical channel has low rank (typical for line-of-sight or poor scattering scenarios). The extreme case of a rank-1 physical channel is analyzed in detail. Finally, we present measurement results obtained from a commercially employed TDMS-based MIMO channel sounder. In the light of the findings of this paper, the results obtained through MIMO channel measurement campaigns using TDMS-based channel sounders should be interpreted with great care.
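The rank-inflation mechanism described above can be illustrated with a small Monte Carlo sketch; the array size, SNR, and phase-noise level are arbitrary choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def capacity_bits(H, snr):
    """log2 det(I + (snr / n_t) H H^H) for an n_r x n_t channel matrix."""
    n_r, n_t = H.shape
    M = np.eye(n_r) + (snr / n_t) * H @ H.conj().T
    return float(np.linalg.slogdet(M)[1] / np.log(2))

n = 4
a = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
b = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
H = np.outer(a, b)                        # rank-1 line-of-sight channel

# switched (TDMS) sounding measures each entry at a different time, so
# each entry picks up an independent phase-noise rotation (5 deg rms here)
phase_err = np.deg2rad(5) * rng.standard_normal((n, n))
H_meas = H * np.exp(1j * phase_err)

c_true = capacity_bits(H, snr=100.0)
c_meas = capacity_bits(H_meas, snr=100.0)
```

The true channel is exactly rank 1, but the phase-perturbed measurement spreads energy into the remaining singular values, so the estimated capacity at high SNR exceeds the true one, in line with the overestimation the paper analyzes.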
Information-theoretic analysis of MIMO channel sounding
7,715
This paper presents a computationally efficient decoder for multiple antenna systems. The proposed algorithm can be used for any constellation (QAM or PSK) and any labeling method. The decoder is based on matrix-lifting Semi-Definite Programming (SDP). The strength of the proposed method lies in a new relaxation algorithm applied to the method of Mobasher et al. This results in a reduction of the number of variables from $(NK+1)^2$ to $(2N+K)^2$, where $N$ is the number of antennas and $K$ is the number of constellation points in each real dimension. Since the computational complexity of solving SDP is a polynomial function of the number of variables, we have a significant complexity reduction. Moreover, the proposed method offers a better performance as compared to the best quasi-maximum likelihood decoding methods reported in the literature.
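The claimed variable-count reduction is simple arithmetic; for example, with N = 4 antennas and K = 4 constellation points per real dimension (16-QAM), using the counts quoted above:

```python
def sdp_vars_previous(N, K):
    """Variable count of the earlier SDP relaxation (Mobasher et al.)."""
    return (N * K + 1) ** 2

def sdp_vars_matrix_lifting(N, K):
    """Variable count of the matrix-lifting SDP relaxation."""
    return (2 * N + K) ** 2

old = sdp_vars_previous(4, 4)        # (4*4 + 1)^2 = 289
new = sdp_vars_matrix_lifting(4, 4)  # (2*4 + 4)^2 = 144
```

Since SDP solve time is polynomial in the number of variables, roughly halving the variable count translates into a substantial complexity reduction, and the gap widens with larger N and K.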
Matrix-Lifting Semi-Definite Programming for Decoding in Multiple Antenna Systems
7,716
The main goal of coding theory is to devise efficient systems to exploit the full capacity of a communication channel, thus achieving an arbitrarily small error probability. Low Density Parity Check (LDPC) codes are a family of block codes--characterised by admitting a sparse parity check matrix--with good correction capabilities. In the present paper the orbits of subspaces of a finite projective space under the action of a Singer cycle are investigated.
LDPC codes from Singer cycles
7,717
Recently Li and Xia have proposed a transmission scheme for wireless relay networks based on the Alamouti space time code and orthogonal frequency division multiplexing to combat the effect of timing errors at the relay nodes. This transmission scheme is amazingly simple and achieves a diversity order of two for any number of relays. Motivated by its simplicity, this scheme is extended to a more general transmission scheme that can achieve full cooperative diversity for any number of relays. The conditions on the distributed space time code (DSTC) structure that admit its application in the proposed transmission scheme are identified and it is pointed out that the recently proposed full diversity four group decodable DSTCs from precoded co-ordinate interleaved orthogonal designs and extended Clifford algebras satisfy these conditions. It is then shown how differential encoding at the source can be combined with the proposed transmission scheme to arrive at a new transmission scheme that can achieve full cooperative diversity in asynchronous wireless relay networks with no channel information and no timing error knowledge at the destination node. Finally, four group decodable distributed differential space time codes applicable in this new transmission scheme are provided for the case where the number of relays is a power of two.
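The Alamouti code at the heart of the Li-Xia scheme owes its single-symbol decodability to the orthogonality of its code matrix, which is easy to verify numerically:

```python
import numpy as np

def alamouti(s1, s2):
    """2x2 Alamouti space-time block: rows are antennas, columns are
    time slots."""
    return np.array([[s1, -np.conj(s2)],
                     [s2,  np.conj(s1)]])

X = alamouti(1 + 2j, 3 - 1j)
# orthogonality: X X^H = (|s1|^2 + |s2|^2) I, which is what decouples
# the two symbols at the receiver and enables single-symbol ML decoding
G = X @ X.conj().T
```

Here |s1|^2 + |s2|^2 = 5 + 10 = 15, so G is 15 times the identity.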
Distributed Space Time Codes with Low Decoding Complexity for Asynchronous Relay Networks
7,718
We address the optimization of the sum rate performance in multicell interference-limited single-hop networks where access points are allowed to cooperate in terms of joint resource allocation. The resource allocation policies considered here combine power control and user scheduling. Although very promising from a conceptual point of view, the optimization of the sum of per-link rates hinges, in principle, on tough issues such as computational complexity and the requirement for heavy receiver-to-transmitter channel information feedback across all network cells. In this paper, we show that, in fact, distributed algorithms are actually obtainable in the asymptotic regime where the number of users per cell is allowed to grow large. Additionally, using extreme value theory, we provide scaling laws for upper and lower bounds for the network capacity (sum of single user rates over all cells), corresponding to the zero-interference and worst-case interference scenarios. We show that the scaling is either dominated by path loss statistics or by small-scale fading, depending on the regime and user location scenario. We show that the upper and lower rate bounds in fact behave identically in the asymptotic limit. This remarkable result suggests not only that distributed resource allocation is practically possible but also that the impact of multicell interference on the capacity (in terms of scaling) actually vanishes asymptotically.
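The extreme-value behaviour behind such scaling laws can be previewed in the simplest setting: the best of n i.i.d. unit-mean exponential (Rayleigh-fading power) SNRs grows like ln n. A Monte Carlo sketch with arbitrary parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_max_snr(n_users, trials=5000):
    """Monte Carlo mean of the best user's SNR among n_users i.i.d.
    unit-mean exponential (Rayleigh-fading power) draws."""
    return float(rng.exponential(size=(trials, n_users)).max(axis=1).mean())

# extreme value theory: E[max] = H_n ~ ln(n) + 0.5772 for exponential tails
m = mean_max_snr(1000)
```

For n = 1000 users the exact mean is the harmonic number H_1000 ~ ln(1000) + 0.5772 ~ 7.49, and the simulation lands close to it; this logarithmic multiuser-diversity growth is what the per-cell scheduling exploits.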
Joint power control and user scheduling in multicell wireless networks: Capacity scaling laws
7,719
In this paper, tutorial software for learning Information Theory basics in a practical way is reported. The software, called IT-tutor-UV, makes use of a modern existing Spanish corpus for modeling the source. Both source and channel coding are also included in this educational tool as part of the learning experience. Entropy values of the Spanish language obtained with the IT-tutor-UV are discussed and compared to others that were previously calculated under limited conditions.
A software for learning Information Theory basics with emphasis on Entropy of Spanish
7,720
We find the secrecy capacity of the 2-2-1 Gaussian MIMO wire-tap channel, which consists of a transmitter and a receiver with two antennas each, and an eavesdropper with a single antenna. We determine the secrecy capacity of this channel by proposing an achievable scheme and then developing a tight upper bound that meets the proposed achievable secrecy rate. We show that, for this channel, Gaussian signalling in the form of beam-forming is optimal, and no pre-processing of information is necessary.
Towards the Secrecy Capacity of the Gaussian MIMO Wire-tap Channel: The 2-2-1 Channel
7,721
Previous work on relay networks has concentrated primarily on the diversity benefits of such techniques. This paper explores the possibility of also obtaining multiplexing gain in a relay network, while retaining diversity gain. Specifically, consider a network in which a single source node is equipped with one antenna and a destination is equipped with two antennas. It is shown that, in certain scenarios, by adding a relay with two antennas and using a successive relaying protocol, the diversity multiplexing tradeoff performance of the network can be lower bounded by that of a 2 by 2 MIMO channel, when the decode-and-forward protocol is applied at the relay. A distributed D-BLAST architecture is developed, in which parallel channel coding is applied to achieve this tradeoff. A space-time coding strategy, which can bring a maximal multiplexing gain of more than one, is also derived for this scenario. As will be shown, while this space-time coding strategy exploits maximal diversity for a small multiplexing gain, the proposed successive relaying scheme offers a significant performance advantage for higher data rate transmission. In addition to the specific results shown here, these ideas open a new direction for exploiting the benefits of wireless relay networks.
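The 2 by 2 MIMO tradeoff curve used as the lower bound above is, by the Zheng-Tse result, piecewise linear through the points (k, (2-k)^2) for integer k = 0, 1, 2; a minimal sketch:

```python
import numpy as np

def dmt_2x2(r):
    """Zheng-Tse diversity-multiplexing tradeoff d*(r) for a 2x2 MIMO
    channel: piecewise linear through (0, 4), (1, 1), (2, 0)."""
    pts_r = np.array([0.0, 1.0, 2.0])
    pts_d = np.array([4.0, 1.0, 0.0])
    return float(np.interp(r, pts_r, pts_d))
```

Any relaying scheme whose tradeoff is lower bounded by this curve can, for instance, sustain diversity 2.5 at multiplexing gain 0.5, and a maximal multiplexing gain of 2, i.e. more than one.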
Cooperative Multiplexing in a Half Duplex Relay Network: Performance and Constraints
7,722
Rate and diversity impose a fundamental tradeoff in communications. This tradeoff was investigated for Intersymbol Interference (ISI) channels in [4]. A different point of view was explored in [1], where high-rate codes were designed so that they have a high-diversity code embedded within them. Such diversity embedded codes were investigated for flat fading channels, and in this paper we explore their application to ISI channels. In particular, we investigate the rate tuples achievable for diversity embedded codes for scalar ISI channels through particular coding strategies. The main result of this paper is that the diversity multiplexing tradeoff for fading ISI channels is indeed successively refinable. This implies that for fading single input single output (SISO) ISI channels one can embed a high diversity code within a high rate code without any performance loss (asymptotically). This is related to a deterministic structural observation about the asymptotic behavior of the channel's frequency response with respect to the fading strength of the time-domain taps.
On Successive Refinement of Diversity for Fading ISI Channels
7,723
In this paper we investigate the structure of the fundamental polytope used in the Linear Programming decoding introduced by Feldman, Karger and Wainwright. We begin by showing that for expander codes, every fractional pseudocodeword always has at least a constant fraction of non-integral bits. We then prove that for expander codes, the active set of any fractional pseudocodeword is smaller by a constant fraction than the active set of any codeword. We further exploit these geometrical properties to devise an improved decoding algorithm with the same complexity order as LP decoding that provably performs better, for any blocklength. It proceeds by guessing facets of the polytope, and then resolving the linear program on these facets. While the LP decoder succeeds only if the ML codeword has the highest likelihood over all pseudocodewords, we prove that the proposed algorithm, when applied to suitable expander codes, succeeds unless there exists a certain number of pseudocodewords, all adjacent to the ML codeword on the LP decoding polytope, and with higher likelihood than the ML codeword. We then describe an extended algorithm, still with polynomial complexity, that succeeds as long as there are at most polynomially many pseudocodewords above the ML codeword.
Guessing Facets: Polytope Structure and Improved LP Decoding
7,724
The optimality of uncoded transmission in estimating Gaussian sources over homogeneous/symmetric Gaussian multiple access channels (MAC) using multiple sensors has been shown lately. It remains, however, unclear whether it still holds for arbitrary networks and/or with high channel signal-to-noise ratio (SNR) and high signal-to-measurement-noise ratio (SMNR). In this paper, we first provide a joint source and channel coding approach for estimating Gaussian sources over Gaussian MAC channels, as well as a sufficient and necessary condition for restoring Gaussian sources with a prescribed distortion value. An interesting relationship between our proposed joint approach and a more straightforward separate source and channel coding scheme is then established. We then formulate constrained power minimization problems and transform them into relaxed convex geometric programming problems, whose numerical results exhibit that either the separate or the uncoded scheme becomes dominant over a linear topology network. In addition, we prove that the optimal decoding order to minimize the total transmission powers for both source and channel coding parts is solely subject to the ranking of MAC channel qualities, and has nothing to do with the ranking of measurement qualities. Finally, asymptotic results for homogeneous networks are obtained which not only confirm the existing optimality of the uncoded approach, but also show that the asymptotic SNR exponents of these three approaches are all the same. Moreover, the proposed joint approach shares the same asymptotic ratio with respect to high SNR and high SMNR as the uncoded scheme.
Energy Efficient Estimation of Gaussian Sources Over Inhomogeneous Gaussian MAC Channels
7,725
We address the error floor problem of low-density parity check (LDPC) codes on the binary-input additive white Gaussian noise (AWGN) channel, by constructing a serially concatenated code consisting of two systematic irregular repeat accumulate (IRA) component codes connected by an interleaver. The interleaver is designed to prevent stopping-set error events in one of the IRA codes from propagating into stopping set events of the other code. Simulations with two 128-bit rate 0.707 IRA component codes show that the proposed architecture achieves a much lower error floor at higher SNRs, compared to a 16384-bit rate 1/2 IRA code, but incurs an SNR penalty of about 2 dB at low to medium SNRs. Experiments indicate that the SNR penalty can be reduced at larger blocklengths.
Serially Concatenated IRA Codes
7,726
In this paper, a multiple-relay network is considered, in which $K$ single-antenna relays assist a single-antenna transmitter to communicate with a single-antenna receiver in a half-duplex mode. A new Amplify-and-Forward (AF) scheme is proposed for this network and is shown to achieve the optimum diversity-multiplexing trade-off curve.
Optimum Diversity-Multiplexing Tradeoff in the Multiple Relays Network
7,727
The downlink transmission in multi-user multiple-input multiple-output (MIMO) systems has been extensively studied from both communication-theoretic and information-theoretic perspectives. Most of these papers assume perfect/imperfect channel knowledge. In general, the problem of channel training and estimation is studied separately. However, in interference-limited communication systems with high mobility, this problem is tightly coupled with the problem of maximizing the throughput of the system. In this paper, scheduling and pre-conditioning based schemes that exploit channel reciprocity are considered to address this. In the case of homogeneous users, a scheduling scheme is proposed and an improved lower bound on the sum capacity is derived. The problem of choosing the training sequence length to maximize the net throughput of the system is studied. In the case of heterogeneous users, a modified pre-conditioning method is proposed and an optimized pre-conditioning matrix is derived. This method is combined with a scheduling scheme to further improve the net achievable weighted-sum rate.
Scheduling and Pre-Conditioning in Multi-User MIMO TDD Systems
7,728
In wireless data networks, communication is particularly susceptible to eavesdropping due to its broadcast nature. Security and privacy systems have become critical for wireless providers and enterprise networks. This paper considers the problem of secret communication over the Gaussian broadcast channel, where a multi-antenna transmitter sends independent confidential messages to two users with information-theoretic secrecy. That is, each user would like to obtain its own confidential message in a reliable and safe manner. This communication model is referred to as the multi-antenna Gaussian broadcast channel with confidential messages (MGBC-CM). Under this communication scenario, a secret dirty-paper coding scheme and the corresponding achievable secrecy rate region are first developed based on Gaussian codebooks. Next, a computable Sato-type outer bound on the secrecy capacity region is provided for the MGBC-CM. Furthermore, the Sato-type outer bound proves to be consistent with the boundary of the secret dirty-paper coding achievable rate region, and hence the secrecy capacity region of the MGBC-CM is established. Finally, two numerical examples demonstrate that both users can achieve positive rates simultaneously under the information-theoretic secrecy requirement.
Secrecy Capacity Region of a Multi-Antenna Gaussian Broadcast Channel with Confidential Messages
7,729
In this paper we investigate the achievable rate of a system that includes a nomadic transmitter with several antennas, whose signal is received by multiple agents exhibiting independent channel gains and additive circular-symmetric complex Gaussian noise. In the nomadic regime, we assume that the agents do not have any decoding ability. These agents process their channel observations and forward them to the final destination through lossless links with a fixed capacity. We propose new achievable rates based on elementary compression and also on a Wyner-Ziv (CEO-like) processing, for both fast fading and block fading channels, as well as for general discrete channels. The simpler two-agent scheme is solved, up to an implicit equation in a single variable. Limiting the nomadic transmitter to circular-symmetric complex Gaussian signalling, new upper bounds are derived for both fast and block fading, based on the vector version of the entropy power inequality. These bounds are then compared to the achievable rates in several extreme scenarios. The asymptotic setting, with the numbers of agents and transmit antennas taken to infinity, is analyzed. In addition, the upper bounds are analytically shown to be tight in several examples, while numerical calculations reveal a rather small gap in a finite $2\times2$ setting. The advantage of the Wyner-Ziv approach over elementary compression is shown where only the former can achieve the full diversity-multiplexing tradeoff. We also consider the non-nomadic setting, with agents that can decode. Here we give an achievable rate over the fast fading channel, which combines broadcast with dirty-paper coding and the decentralized reception introduced for the nomadic setting.
Distributed MIMO receiver - Achievable rates and upper bounds
7,730
The McEliece cryptosystem is a public-key cryptosystem based on coding theory that has successfully resisted cryptanalysis for thirty years. The original version, based on Goppa codes, is able to guarantee a high level of security, and is faster than competing solutions, like RSA. Despite this, it has been rarely considered in practical applications, due to two major drawbacks: i) large size of the public key and ii) low transmission rate. Low-Density Parity-Check (LDPC) codes are state-of-the-art forward error correcting codes that make it possible to approach the Shannon limit while ensuring limited complexity. Quasi-Cyclic (QC) LDPC codes are a particular class of LDPC codes, able to combine the low-complexity encoding of QC codes with the high performance and low decoding complexity of LDPC codes. In a previous work, it was proposed to adopt a particular family of QC-LDPC codes in the McEliece cryptosystem to reduce the key size and increase the transmission rate. Recently, however, new attacks have been found that are able to exploit a flaw in the transformation from the private key to the public one. Such attacks can be effectively countered by changing the form of some constituent matrices, without altering the system parameters. This work gives an overview of the QC-LDPC codes-based McEliece cryptosystem and its cryptanalysis. Two recent versions are considered, and their ability to counter all the currently known attacks is discussed. A third version able to reach a higher security level is also proposed. Finally, it is shown that the new QC-LDPC codes-based cryptosystem scales favorably with the key length.
LDPC codes in the McEliece cryptosystem: attacks and countermeasures
7,731
We propose a new algorithm for binary quantization based on the Belief Propagation algorithm with decimation over factor graphs of Low Density Generator Matrix (LDGM) codes. This algorithm, which we call Bias Propagation (BiP), can be considered as a special case of the Survey Propagation algorithm proposed for binary quantization by Wainwright et al. [8]. It achieves the same near-optimal rate-distortion performance with a substantially simpler framework and 10-100 times faster implementation. We thus challenge the widespread belief that binary quantization based on sparse linear codes cannot be solved by simple Belief Propagation algorithms. Finally, we give examples of suitably irregular LDGM codes that work with the BiP algorithm and show their performance.
Binary quantization using Belief Propagation with decimation over factor graphs of LDGM codes
7,732
If $N=2^k > 8$ then there exist exactly $[(k-1)/2]$ pairwise nonequivalent $Z_4$-linear Hadamard $(N,2N,N/2)$-codes and $[(k+1)/2]$ pairwise nonequivalent $Z_4$-linear extended perfect $(N,2^N/2N,4)$-codes. A recurrent construction of $Z_4$-linear Hadamard codes is given.
Z4-linear Hadamard and extended perfect codes
7,733
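The counts stated in the abstract above are explicit in $k$. A trivial sketch (a direct transcription of the stated formulas, not of the paper's constructions) computing the number of pairwise nonequivalent codes for $N = 2^k > 8$, reading $[(k-1)/2]$ and $[(k+1)/2]$ as floors:

```python
# Counts of pairwise nonequivalent Z4-linear codes of length N = 2^k, N > 8,
# as stated in the abstract: floor((k-1)/2) Hadamard (N,2N,N/2)-codes and
# floor((k+1)/2) extended perfect (N,2^N/2N,4)-codes.

def z4_linear_code_counts(k: int):
    """Return (num_Hadamard, num_extended_perfect) for N = 2^k, assuming 2^k > 8."""
    if 2 ** k <= 8:
        raise ValueError("the stated counts require N = 2^k > 8")
    return (k - 1) // 2, (k + 1) // 2

for k in range(4, 9):
    h, p = z4_linear_code_counts(k)
    print(f"N = 2^{k} = {2**k:3d}: {h} Hadamard, {p} extended perfect")
```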
This paper investigates point-to-point information transmission over a wideband slow-fading channel, modeled as an (asymptotically) large number of independent identically distributed parallel channels, with the random channel fading realizations remaining constant over the entire coding block. On the one hand, in the wideband limit the minimum achievable energy per nat required for reliable transmission, as a random variable, converges in probability to a certain deterministic quantity. On the other hand, the exponential decay rate of the outage probability, termed the wideband outage exponent, characterizes how the number of parallel channels, {\it i.e.}, the ``bandwidth'', should asymptotically scale in order to achieve a targeted outage probability at a targeted energy per nat. We examine two scenarios: when the transmitter has no channel state information and adopts uniform transmit power allocation among parallel channels; and when the transmitter is endowed with one-bit channel state feedback for each parallel channel and accordingly allocates its transmit power. For both scenarios, we evaluate the wideband minimum energy per nat and the wideband outage exponent, and discuss their implications for system performance.
On Outage Behavior of Wideband Slow-Fading Channels
7,734
This paper introduces a new counting code. Its design was motivated by distributed video coding, where error correction methods are applied at the decoder to improve predictions. These error corrections sometimes fail, which results in decoded values worse than the initial prediction. Our code exploits the fact that bit errors are relatively unlikely events: more than a few bit errors in a decoded pixel value are rare. With a carefully designed counting code combined with a prediction, those bit errors can be corrected and the original pixel value sometimes recovered. The error correction performance improves significantly. Our new code not only maximizes the Hamming distance between adjacent (or "near 1") codewords but also between nearby (for example, "near 2") codewords. This is why our code differs significantly from the well-known maximal counting sequences, which have maximal average Hamming distance. Fortunately, the new counting code can be derived from Gray codes for every codeword length (i.e., bit depth).
New Counting Codes for Distributed Video Coding
7,735
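The abstract above states that the proposed counting code can be derived from Gray codes at every bit depth. As a building block only (the authors' code itself differs: it maximizes rather than minimizes adjacent distances), a minimal sketch of the standard reflected binary Gray code, whose consecutive codewords differ in exactly one bit:

```python
# Reflected binary Gray code: the i-th codeword is i XOR (i >> 1).
# Consecutive codewords (cyclically) are at Hamming distance exactly 1.

def gray_code(n_bits: int):
    """List of the 2**n_bits codewords of the reflected binary Gray code."""
    return [i ^ (i >> 1) for i in range(1 << n_bits)]

def hamming(a: int, b: int) -> int:
    """Hamming distance between two codewords viewed as bit strings."""
    return bin(a ^ b).count("1")

codes = gray_code(4)
dists = [hamming(codes[i], codes[(i + 1) % len(codes)]) for i in range(len(codes))]
print(dists)  # all ones
```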
`Tree pruning' (TP) is an algorithm for probabilistic inference on binary Markov random fields. It was recently derived by Dror Weitz and used to construct the first fully polynomial approximation scheme for counting independent sets up to the `tree uniqueness threshold.' It can be regarded as a clever method for pruning the belief propagation computation tree in such a way as to account exactly for the effect of loops. In this paper we generalize the original algorithm to make it suitable for decoding linear codes, and discuss various schemes for pruning the computation tree. Further, we present the outcomes of numerical simulations on several linear codes, showing that tree pruning allows one to interpolate continuously between belief propagation and maximum a posteriori decoding. Finally, we discuss theoretical implications of the new method.
TP Decoding
7,736
In this paper, we propose a new coding scheme for the general relay channel. This coding scheme is in the form of a block Markov code. The transmitter uses a superposition Markov code. The relay compresses the received signal and maps the compressed version of the received signal into a codeword conditioned on the codeword of the previous block. The receiver performs joint decoding after it has received all of the B blocks. We show that this coding scheme can be viewed as a generalization of the well-known Compress-And-Forward (CAF) scheme proposed by Cover and El Gamal. Our coding scheme provides options for preserving the correlation between the channel inputs of the transmitter and the relay, which is not possible in the CAF scheme. Thus, our proposed scheme may potentially yield a larger achievable rate than the CAF scheme.
A New Achievability Scheme for the Relay Channel
7,737
This report introduces a multichannel algorithm based on generalized positional numeration systems (GPN). The notions of internal, external, and mixed accounting are introduced. The concept of the GPN and its classification as a decomposition of an integer into integer summands are discussed. A realization of the multichannel algorithm on the basis of the GPN is presented. In particular, some properties of the Fibonacci multichannel algorithm are discussed.
Multichannel algorithm based on generalized positional numeration system
7,738
We solve the problem of designing powerful low-density parity-check (LDPC) codes with iterative decoding for the block-fading channel. We first study the case of maximum-likelihood decoding, and show that the design criterion is rather straightforward. Unfortunately, optimal constructions for maximum-likelihood decoding do not perform well under iterative decoding. To overcome this limitation, we then introduce a new family of full-diversity LDPC codes that exhibit near-outage-limit performance under iterative decoding for all block-lengths. This family competes with multiplexed parallel turbo codes suitable for nonergodic channels and recently reported in the literature.
Low-Density Parity-Check Codes for Nonergodic Block-Fading Channels
7,739
In this paper we formalize the notions of information elements and information lattices, first proposed by Shannon. Exploiting this formalization, we identify a comprehensive parallelism between information lattices and subgroup lattices. Qualitatively, we demonstrate isomorphisms between information lattices and subgroup lattices. Quantitatively, we establish a decisive approximation relation between the entropy structures of information lattices and the log-index structures of the corresponding subgroup lattices. This approximation extends the approximation for joint entropies carried out previously by Chan and Yeung. As a consequence of our approximation result, we show that any continuous law holds in general for the entropies of information elements if and only if the same law holds in general for the log-indices of subgroups. As an application, by constructing subgroup counterexamples we find surprisingly that common information, unlike joint information, obeys neither the submodularity nor the supermodularity law. We emphasize that the notion of information elements is conceptually significant--formalizing it helps to reveal the deep connection between information theory and group theory. The parallelism established in this paper admits an appealing group-action explanation and provides useful insights into the intrinsic structure among information elements from a group-theoretic perspective.
A Group Theoretic Model for Information
7,740
Convergence properties of Shannon Entropy are studied. In the differential setting, it is shown that weak convergence of probability measures, or convergence in distribution, is not enough for convergence of the associated differential entropies. A general result for the desired differential entropy convergence is provided, taking into account both compactly and uncompactly supported densities. Convergence of differential entropy is also characterized in terms of the Kullback-Leibler discriminant for densities with fairly general supports, and it is shown that convergence in variation of probability measures guarantees such convergence under an appropriate boundedness condition on the densities involved. Results for the discrete setting are also provided, allowing for infinitely supported probability measures, by taking advantage of the equivalence between weak convergence and convergence in variation in this setting.
On Convergence Properties of Shannon Entropy
7,741
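The first claim of the abstract above — that weak convergence does not force convergence of differential entropies — can be illustrated by a standard textbook-style counterexample (not taken from the paper; stated here under the usual mixture-entropy bound):

```latex
% Counterexample: weak convergence without differential entropy convergence.
% Let $\varphi$ denote the standard Gaussian density and set
\[
  f_n(x) \;=\; \Bigl(1-\tfrac{1}{n}\Bigr)\varphi(x)
           \;+\; \tfrac{1}{n}\,\sigma_n^{-1}\varphi(x/\sigma_n),
  \qquad \sigma_n = e^{-n^2}.
\]
% Then $f_n \Rightarrow \mathcal{N}(0,1)$ weakly, yet the mixture bound
% $h(X) \le h(X \mid Z) + H(Z)$ (with $Z$ the mixture label) gives
\[
  h(f_n) \;\le\; \Bigl(1-\tfrac{1}{n}\Bigr) h(\varphi)
           \;+\; \tfrac{1}{n}\cdot\tfrac{1}{2}\log\!\bigl(2\pi e\,\sigma_n^2\bigr)
           \;+\; \log 2,
\]
% whose middle term behaves like $-n$, so $h(f_n) \to -\infty \ne h(\varphi)$.
```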
We consider a general stochastic input-output dynamical system with output evolving in time as the solution to an It\^{o} stochastic differential equation with functional coefficients, excited by an input process. This general class of stochastic systems encompasses not only the classical communication channel models, but also a wide variety of engineering systems appearing in a whole range of applications. For this general setting we find analogues of known relationships linking input-output mutual information and minimum mean causal and non-causal square errors, previously established in the context of additive Gaussian noise communication channels. The relationships are established not only in terms of time-averaged quantities; their time-instantaneous, dynamical counterparts are also presented. The problem of appropriately introducing a signal-to-noise ratio notion in this general framework, expressed through a signal-to-noise ratio parameter, is also addressed, and conditions for a proper and meaningful interpretation are identified.
On the Relationship between Mutual Information and Minimum Mean-Square Errors in Stochastic Dynamical Systems
7,742
The MIMOME channel is a Gaussian wiretap channel in which the sender, receiver, and eavesdropper all have multiple antennas. We characterize the secrecy capacity as the saddle-value of a minimax problem. Among other implications, our result establishes that a Gaussian distribution maximizes the secrecy capacity characterization of Csisz{\'a}r and K{\"o}rner when applied to the MIMOME channel. We also determine a necessary and sufficient condition for the secrecy capacity to be zero. Large antenna array analysis of this condition reveals several useful insights into the conditions under which secure communication is possible.
The MIMOME Channel
7,743
A multiple transmit antenna, single receive antenna (per receiver) downlink channel with limited channel feedback is considered. Given a constraint on the total system-wide channel feedback, the following question is considered: is it preferable to get low-rate feedback from a large number of receivers or to receive high-rate/high-quality feedback from a smaller number of (randomly selected) receivers? Acquiring feedback from many users allows multi-user diversity to be exploited, while high-rate feedback allows for very precise selection of beamforming directions. It is shown that systems in which a limited number of users feedback high-rate channel information significantly outperform low-rate/many user systems. While capacity increases only double logarithmically with the number of users, the marginal benefit of channel feedback is very significant up to the point where the CSI is essentially perfect.
Multi-User Diversity vs. Accurate Channel Feedback for MIMO Broadcast Channels
7,744
A clear understanding of the behavior of the error probability (EP) as a function of signal-to-noise ratio (SNR) and other system parameters is fundamental for assessing the design of digital wireless communication systems. We propose an analytical framework based on the log-concavity property of the EP, which we prove for a wide family of multidimensional modulation formats in the presence of Gaussian disturbances and fading. Based on this property, we construct a class of local bounds for the EP that improve known generic bounds in a given region of the SNR and are invertible, as well as easily tractable for further analysis. This concept is motivated by the fact that communication systems often operate with performance in a certain region of interest (ROI) and, thus, it may be advantageous to have tighter bounds within this region instead of generic bounds valid for all SNRs. We present a possible application of these local bounds, but their relevance goes beyond the example given in this paper.
Log-concavity property of the error probability with application to local bounds for wireless communications
7,745
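The log-concavity claimed in the abstract above can be probed numerically for the simplest case. The following sketch is an assumption-laden illustration, not the paper's proof: it takes BPSK over AWGN, whose bit error probability is $P_b = Q(\sqrt{2\,\mathrm{snr}})$, and checks that the second differences of $\log P_b$ as a function of SNR in dB are non-positive (the dB parametrization is our assumption here):

```python
# Numerical check of log-concavity of the BPSK error probability vs. SNR in dB.
import math

def q_function(x: float) -> float:
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bpsk_ep(snr_db: float) -> float:
    """BPSK bit error probability over AWGN at the given SNR in dB."""
    snr = 10.0 ** (snr_db / 10.0)
    return q_function(math.sqrt(2.0 * snr))

step = 0.1
grid = [-10.0 + step * i for i in range(301)]   # -10 dB to 20 dB
log_ep = [math.log(bpsk_ep(d)) for d in grid]
# Discrete second differences of log P_b; log-concavity means all <= 0.
second_diff = [log_ep[i - 1] - 2 * log_ep[i] + log_ep[i + 1]
               for i in range(1, len(grid) - 1)]
print("max second difference:", max(second_diff))
```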
We analyze a slow-fading interference network with MN non-cooperating single-antenna sources and M non-cooperating single-antenna destinations. In particular, we assume that the sources are divided into M mutually exclusive groups of N sources each, every group is dedicated to transmit a common message to a unique destination, all transmissions occur concurrently and in the same frequency band and a dedicated 1-bit broadcast feedback channel from each destination to its corresponding group of sources exists. We provide a feedback-based iterative distributed (multi-user) beamforming algorithm, which "learns" the channels between each group of sources and its assigned destination. This algorithm is a straightforward generalization, to the multi-user case, of the feedback-based iterative distributed beamforming algorithm proposed recently by Mudumbai et al., in IEEE Trans. Inf. Th. (submitted) for networks with a single group of sources and a single destination. Putting the algorithm into a Markov chain context, we provide a simple convergence proof. We then show that, for M finite and N approaching infinity, spatial multiplexing based on the beamforming weights produced by the algorithm achieves full spatial multiplexing gain of M and full per-stream array gain of N, provided the time spent "learning'' the channels scales linearly in N. The network is furthermore shown to "crystallize''. Finally, we characterize the corresponding crystallization rate.
Distributed spatial multiplexing with 1-bit feedback
7,746
Although computing the minimum distance $d_m$ of a linear code is NP-hard in theory, in this paper we propose an experimental method for finding minimum-weight codewords, i.e., codewords whose weight equals $d_m$, for LDPC codes. An existing syndrome decoding method, serial belief propagation (BP) with ordered statistic decoding (OSD), is adapted to serve our purpose. We conjecture that, among the many candidate error patterns in OSD reprocessing, the modulo-2 addition of the lightest error pattern with one of the remaining error patterns may generate a light codeword. When the decoding syndrome reaches the all-zero state, the lightest error pattern reduces to all-zero, and the lightest non-zero error pattern is a valid codeword that updates the list of lightest codewords. Given sufficiently many transmitted codewords, the surviving lightest codewords are likely to be the target. Compared with existing techniques, our method demonstrates its efficiency in simulations of several LDPC codes of interest.
Fast Reliability-based Algorithm of Finding Minimum-weight Codewords for LDPC Codes
7,747
This paper studies the performance of transmission schemes that have rate that increases with average SNR while maintaining a fixed outage probability. This is in contrast to the classical Zheng-Tse diversity-multiplexing tradeoff (DMT) that focuses on increasing rate and decreasing outage probability. Three different systems are explored: antenna diversity systems, time/frequency diversity systems, and automatic repeat request (ARQ) systems. In order to accurately study performance in the fixed outage setting, it is necessary to go beyond the coarse, asymptotic multiplexing gain metric. In the case of antenna diversity and time/frequency diversity, an affine approximation to high SNR outage capacity (i.e., multiplexing gain plus a power/rate offset) accurately describes performance and shows the very significant benefits of diversity. ARQ is also seen to provide a significant performance advantage, but even an affine approximation to outage capacity is unable to capture this advantage and outage capacity must be directly studied in the non-asymptotic regime.
Analysis of Fixed Outage Transmission Schemes: A Finer Look at the Full Multiplexing Point
7,748
Franceschetti et al. have recently shown that per-node throughput in an extended, ad hoc wireless network with $\Theta(n)$ randomly distributed nodes and multihop routing can be increased from the $\Omega({1 \over \sqrt{n} \log n})$ scaling demonstrated in the seminal paper of Gupta and Kumar to $\Omega({1 \over \sqrt{n}})$. The goal of the present paper is to understand the dependence of this interesting result on the principal new features it introduced relative to Gupta-Kumar: (1) a capacity-based formula for link transmission bit-rates in terms of received signal-to-interference-and-noise ratio (SINR); (2) hierarchical routing from sources to destinations through a system of communal highways; and (3) cell-based routes constructed by percolation. The conclusion of the present paper is that the improved throughput scaling is principally due to the percolation-based routing, which enables shorter hops and, consequently, less interference. This is established by showing that throughput $\Omega({1 \over \sqrt{n}})$ can be attained by a system that does not employ highways, but instead uses percolation to establish, for each source-destination pair, a set of $\Theta(\log n)$ routes within a narrow routing corridor running from source to destination. As a result, highways are not essential. In addition, it is shown that throughput $\Omega({1 \over \sqrt{n}})$ can be attained with the original threshold transmission bit-rate model, provided that node transmission powers are permitted to grow with $n$. Thus, the benefit of the capacity bit-rate model is simply to permit the power to remain bounded, even as the network expands.
Throughput Scaling in Random Wireless Networks: A Non-Hierarchical Multipath Routing Strategy
7,749
The word error rate (WER) of soft-decision-decoded binary block codes rarely has a closed form. Bounding techniques are widely used to evaluate the performance of the maximum-likelihood (ML) decoding algorithm, but the existing bounds are not tight enough, especially at low signal-to-noise ratios, and become looser when a suboptimum decoding algorithm is used. This paper proposes a new concept, named the square radius probability density function (SR-PDF) of the decision region, to evaluate the WER. Based on the SR-PDF, the WER of binary block codes can be calculated precisely for ML and suboptimum decoders. Furthermore, for a long binary block code, the SR-PDF can be approximated by a Gamma distribution with only two parameters that can be measured easily. Using this property, two closed-form approximate expressions are proposed that are very close to the simulated WER of the codes of interest.
Evaluate the Word Error Rate of Binary Block Codes with Square Radius Probability Density Function
7,750
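The abstract above says the SR-PDF of a long code is well approximated by a two-parameter Gamma distribution whose parameters "can be measured easily". One natural way to measure them (an assumption on our part, not necessarily the paper's estimator) is the method of moments, sketched here on synthetic data:

```python
# Method-of-moments fit of a two-parameter Gamma distribution:
#   shape k = mean^2 / var,  scale theta = var / mean.
import random

def gamma_fit_mom(samples):
    """Return (shape, scale) of the moment-matching Gamma distribution."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean * mean / var, var / mean

random.seed(0)
# Sanity check on synthetic data drawn from Gamma(shape=3, scale=2).
data = [random.gammavariate(3.0, 2.0) for _ in range(200_000)]
k_hat, theta_hat = gamma_fit_mom(data)
print(f"shape ~ {k_hat:.2f}, scale ~ {theta_hat:.2f}")
```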
This paper addresses the problem of coding a continuous random source correlated with another source which is only available at the decoder. The proposed approach is based on the extension of the channel coding concept of syndrome from the discrete into the continuous domain. If the correlation between the sources can be described by an additive Gaussian backward channel and capacity-achieving linear codes are employed, it is shown that the performance of the system is asymptotically close to the Wyner-Ziv bound. Even if such an additive channel is not Gaussian, the design procedure can fit the desired correlation and transmission rate. Experiments based on trellis-coded quantization show that the proposed system achieves a performance within 3-4 dB of the theoretical bound in the 0.5-3 bit/sample rate range for any Gaussian correlation, with a reasonable computational complexity.
Distributed Source Coding Using Continuous-Valued Syndromes
7,751
The cognitive interference channel with confidential messages is studied. Similarly to the classical two-user interference channel, the cognitive interference channel consists of two transmitters whose signals interfere at the two receivers. It is assumed that there is a common message source (message 1) known to both transmitters, and an additional independent message source (message 2) known only to the cognitive transmitter (transmitter 2). The cognitive receiver (receiver 2) needs to decode both messages, while the non-cognitive receiver (receiver 1) should decode only the common message. Furthermore, message 2 is assumed to be a confidential message which needs to be kept as secret as possible from receiver 1, which is viewed as an eavesdropper with regard to message 2. The level of secrecy is measured by the equivocation rate. A single-letter expression for the capacity-equivocation region of the discrete memoryless cognitive interference channel is established and is further explicitly derived for the Gaussian case. Moreover, particularizing the capacity-equivocation region to the case without a secrecy constraint, establishes a new capacity theorem for a class of interference channels, by providing a converse theorem.
Cognitive Interference Channels with Confidential Messages
7,752
A linear mesh network is considered in which a single user per cell communicates to a local base station via a dedicated relay (two-hop communication). Exploiting the possibly relevant inter-cell channel gains, rate splitting with successive cancellation in both hops is investigated as a promising solution to improve the rate of basic single-rate communications. Then, an alternative solution is proposed that attempts to improve the performance of the second hop (from the relays to base stations) by cooperative transmission among the relay stations. The cooperative scheme leverages the common information obtained by the relays as a by-product of the use of rate splitting in the first hop. Numerical results bring insight into the conditions (network topology and power constraints) under which rate splitting, with possible relay cooperation, is beneficial. Multi-cell processing (joint decoding at the base stations) is also considered for reference.
Capacity of Linear Two-hop Mesh Networks with Rate Splitting, Decode-and-forward Relaying and Cooperation
7,753
Inner and outer bounds are established on the capacity region of two-sender, two-receiver interference channels where one transmitter knows both messages. The transmitter with extra knowledge is referred to as being cognitive. The inner bound is based on strategies that generalize prior work, and include rate-splitting, Gel'fand-Pinsker coding and cooperative transmission. A general outer bound is based on the Nair-El Gamal outer bound for broadcast channels. A simpler bound is presented for the case in which one of the decoders can decode both messages. The bounds are evaluated and compared for Gaussian channels.
On the Capacity of Interference Channels with One Cooperating Transmitter
7,754
In this paper, we investigate the error correction capability of column-weight-three LDPC codes when decoded using the Gallager A algorithm. We prove that a necessary condition for a code to correct $k \geq 5$ errors is that its Tanner graph contain no cycles of length up to $2k$. As a consequence of this result, we show that, given any $\alpha>0$, $\exists N$ such that $\forall n>N$, no code in the ensemble of column-weight-three codes can correct all $\alpha n$ or fewer errors. We extend these results to the bit flipping algorithm.
Error Correction Capability of Column-Weight-Three LDPC Codes
7,755
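The necessary condition in the abstract above — no Tanner-graph cycles of length up to $2k$ — amounts to requiring the girth of the Tanner graph to exceed $2k$. A sketch (not from the paper) that computes the girth of the Tanner graph of a binary parity-check matrix by BFS from every vertex:

```python
# Girth of the bipartite Tanner graph of a binary parity-check matrix H.
from collections import deque

def tanner_girth(H):
    """Return the girth of the Tanner graph of H (float('inf') if acyclic)."""
    m, n = len(H), len(H[0])
    # Vertices 0..m-1 are check nodes, m..m+n-1 are variable nodes.
    adj = [[] for _ in range(m + n)]
    for i in range(m):
        for j in range(n):
            if H[i][j]:
                adj[i].append(m + j)
                adj[m + j].append(i)
    girth = float("inf")
    for root in range(m + n):
        dist, parent = {root: 0}, {root: -1}
        queue = deque([root])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v], parent[v] = dist[u] + 1, u
                    queue.append(v)
                elif v != parent[u]:
                    # Non-tree edge closes a cycle through the root; each
                    # per-root candidate is an upper bound, and the minimum
                    # over all roots equals the girth.
                    girth = min(girth, dist[u] + dist[v] + 1)
    return girth

# Two checks sharing the same two variables form a length-4 cycle.
print(tanner_girth([[1, 1], [1, 1]]))  # -> 4
```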
The problem of channel shortening equalization for optimal detection in ISI channels is considered. The problem is to choose a linear equalizer and a partial response target filter such that the combination produces the best detection performance. Instead of using the traditional approach of MMSE equalization, we directly seek all equalizer and target pairs that yield optimal detection performance in terms of the sequence or symbol error rate. This leads to a new notion of a posteriori equivalence between the equalized and target channels with a simple characterization in terms of their underlying probability distributions. Using this characterization we show the surprising existence of an infinite family of equalizer and target pairs for which any maximum a posteriori (MAP) based detector designed for the target channel is simultaneously MAP optimal for the equalized channel. For channels whose input symbols have equal energy, such as q-PSK, the MMSE equalizer designed with a monic target constraint yields a solution belonging to this optimal family of designs. Although these designs produce IIR target filters, the ideas are extended to design good FIR targets. For an arbitrary choice of target and equalizer, we derive an expression for the probability of sequence detection error. This expression is used to design optimal FIR targets and IIR equalizers and to quantify the FIR approximation penalty.
A Posteriori Equivalence: A New Perspective for Design of Optimal Channel Shortening Equalizers
7,756
In this paper, a methodology is presented for encoding information in valuations of a discrete lattice with translation-invariant constraints in an asymptotically optimal way. The method is based on finding a statistical description of such valuations and turning it into a statistical algorithm, which allows a valuation with given statistics to be constructed deterministically. Optimal statistics allow valuations with uniform distribution to be generated - we get maximum information capacity this way. It will be shown that we can reach the optimum for one-dimensional models using the maximal entropy random walk, and that in the general case we can practically get as close to the capacity of the model as we want (found numerically: a loss of 10^{-10} bit/node for the Hard Square model). A simpler alternative to the arithmetic coding method will also be presented, which can additionally be used as a cryptosystem and a data correction method.
Optimal encoding on discrete lattice with translational invariant constraints using statistical algorithms
7,757
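For the one-dimensional case mentioned in the abstract above (the 1D analogue of the Hard Square model: binary sequences with no two adjacent 1s), both the capacity and the maximal entropy random walk can be computed from the Perron eigenpair of the constraint's transfer matrix. A minimal sketch, not the authors' implementation:

```python
import numpy as np

# Transfer matrix for the 1D "no two adjacent 1s" constraint:
# entry (i, j) = 1 if symbol j may follow symbol i.
T = np.array([[1.0, 1.0],
              [1.0, 0.0]])

eigvals, eigvecs = np.linalg.eig(T)
k = np.argmax(eigvals.real)
lam = eigvals.real[k]           # Perron eigenvalue = golden ratio
v = np.abs(eigvecs[:, k].real)  # Perron eigenvector

capacity = np.log2(lam)         # ~0.6942 bit/node

# Maximal entropy random walk: P[i, j] = T[i, j] * v[j] / (lam * v[i])
P = T * v[None, :] / (lam * v[:, None])
assert np.allclose(P.sum(axis=1), 1.0)  # valid stochastic matrix
```

The stationary process of the transition matrix P attains the capacity log2 of the golden ratio, about 0.6942 bit/node, which is the 1D optimum the abstract refers to.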
Cooperative technology is expected to have a great impact on the performance of cellular or, more generally, infrastructure networks. Both multicell processing (cooperation among base stations) and relaying (cooperation at the user level) are currently being investigated. In this presentation, recent results regarding the performance of multicell processing and user cooperation under the assumption of limited-capacity interbase station and inter-user links, respectively, are reviewed. The survey focuses on related results derived for non-fading uplink and downlink channels of simple cellular system models. The analytical treatment, facilitated by these simple setups, enhances the insight into the limitations imposed by limited-capacity constraints on the gains achievable by cooperative techniques.
Cooperative Multi-Cell Networks: Impact of Limited-Capacity Backhaul and Inter-Users Links
7,758
We study the problem of the reconstruction of a Gaussian field defined in [0,1] using N sensors deployed at regular intervals. The goal is to quantify the total data rate required for the reconstruction of the field with a given mean square distortion. We consider a class of two-stage mechanisms which a) send information to allow the reconstruction of the sensor's samples within sufficient accuracy, and then b) use these reconstructions to estimate the entire field. To implement the first stage, the heavy correlation between the sensor samples suggests the use of distributed coding schemes to reduce the total rate. We demonstrate the existence of a distributed block coding scheme that achieves, for a given fidelity criterion for the reconstruction of the field, a total information rate that is bounded by a constant, independent of the number $N$ of sensors. The constant in general depends on the autocorrelation function of the field and the desired distortion criterion for the sensor samples. We then describe a scheme which can be implemented using only scalar quantizers at the sensors, without any use of distributed source coding, and which also achieves a total information rate that is a constant, independent of the number of sensors. While this scheme operates at a rate that is greater than the rate achievable through distributed coding and entails greater delay in reconstruction, its simplicity makes it attractive for implementation in sensor networks.
Distributed source coding in dense sensor networks
7,759
The wideband regime of bit-interleaved coded modulation (BICM) in Gaussian channels is studied. The Taylor expansion of the coded modulation capacity for generic signal constellations at low signal-to-noise ratio (SNR) is derived and used to determine the corresponding expansion for the BICM capacity. Simple formulas for the minimum energy per bit and the wideband slope are given. BICM is found to be suboptimal in the sense that its minimum energy per bit can be larger than the corresponding value for coded modulation schemes. The minimum energy per bit using standard Gray mapping on M-PAM or M^2-QAM is given by a simple formula and shown to approach -0.34 dB as M increases. Using the low SNR expansion, a general trade-off between power and bandwidth in the wideband regime is used to show how a power loss can be traded off against a bandwidth gain.
Bit-interleaved coded modulation in the wideband regime
7,760
Recently, the secrecy capacity of the multi-antenna wiretap channel was characterized by Khisti and Wornell [1] using a Sato-like argument. This note presents an alternative characterization using a channel enhancement argument. This characterization relies on an extremal entropy inequality recently proved in the context of multi-antenna broadcast channels, and is directly built on the physical intuition regarding the optimal transmission strategy in this communication scenario.
A Note on the Secrecy Capacity of the Multi-antenna Wiretap Channel
7,761
This paper deals with a universal coding problem for a certain kind of multiterminal source coding system that we call the complementary delivery coding system. In this system, messages from two correlated sources are jointly encoded, and each decoder has access to one of the two messages to enable it to reproduce the other message. Both fixed-to-fixed length and fixed-to-variable length lossless coding schemes are considered. Explicit constructions of universal codes and bounds of the error probabilities are clarified via type-theoretical and graph-theoretical analyses. Keywords: multiterminal source coding, complementary delivery, universal coding, types of sequences, bipartite graphs
Universal coding for correlated sources with complementary delivery
7,762
In their landmark paper, Cover and El Gamal proposed different coding strategies for the relay channel with a single relay supporting a communication pair. These strategies are the decode-and-forward and compress-and-forward approaches, as well as a general lower bound on the capacity of a relay network which relies on the mixed application of the previous two strategies. So far, only parts of their work - the decode-and-forward and the compress-and-forward strategy - have been applied to networks with multiple relays. This paper derives a mixed strategy for multiple relay networks using a combined approach of partial decode-and-forward with N+1 levels and the ideas of successive refinement with different side information at the receivers. After describing the protocol structure, we present the achievable rates for the discrete memoryless relay channel as well as Gaussian multiple relay networks. Using these results we compare the mixed strategy with some special cases, e.g., multilevel decode-and-forward, distributed compress-and-forward, and a mixed approach where one relay node operates in decode-and-forward and the other in compress-and-forward mode.
Analysis of a Mixed Strategy for Multiple Relay Networks
7,763
Single Event Upsets (SEU) as well as permanent faults can significantly affect the correct on-line operation of digital systems, such as memories and microprocessors; a memory can be made resilient to permanent and transient faults by using modular redundancy and coding. In this paper, different memory systems are compared: these systems utilize simplex and duplex arrangements with a combination of Reed Solomon coding and scrubbing. The memory systems and their operations are analyzed by novel Markov chains to characterize performance for dynamic reconfiguration as well as error detection and correction under the occurrence of permanent and transient faults. For a specific Reed Solomon code, the duplex arrangement makes it possible to cope efficiently with permanent faults, while scrubbing copes with transient faults.
On the Analysis of Reed Solomon Coding for Resilience to Transient/Permanent Faults in Highly Reliable Memories
7,764
The problem of security against timing based traffic analysis in wireless networks is considered in this work. An analytical measure of anonymity in eavesdropped networks is proposed using the information theoretic concept of equivocation. For a physical layer with orthogonal transmitter directed signaling, scheduling and relaying techniques are designed to maximize achievable network performance for any given level of anonymity. The network performance is measured by the achievable relay rates from the sources to destinations under latency and medium access constraints. In particular, analytical results are presented for two scenarios: For a two-hop network with maximum anonymity, achievable rate regions for a general m x 1 relay are characterized when nodes generate independent Poisson transmission schedules. The rate regions are presented for both strict and average delay constraints on traffic flow through the relay. For a multihop network with an arbitrary anonymity requirement, the problem of maximizing the sum-rate of flows (network throughput) is considered. A selective independent scheduling strategy is designed for this purpose, and using the analytical results for the two-hop network, the achievable throughput is characterized as a function of the anonymity level. The throughput-anonymity relation for the proposed strategy is shown to be equivalent to an information theoretic rate-distortion function.
Anonymous Networking amidst Eavesdroppers
7,765
The distributed source coding problem is considered when the sensors, or encoders, are under Byzantine attack; that is, an unknown group of sensors have been reprogrammed by a malicious intruder to undermine the reconstruction at the fusion center. Three different forms of the problem are considered. The first is a variable-rate setup, in which the decoder adaptively chooses the rates at which the sensors transmit. An explicit characterization of the variable-rate achievable sum rates is given for any number of sensors and any groups of traitors. The converse is proved constructively by letting the traitors simulate a fake distribution and report the generated values as the true ones. This fake distribution is chosen so that the decoder cannot determine which sensors are traitors while maximizing the required rate to decode every value. Achievability is proved using a scheme in which the decoder receives small packets of information from a sensor until its message can be decoded, before moving on to the next sensor. The sensors use randomization to choose from a set of coding functions, which makes it probabilistically impossible for the traitors to cause the decoder to make an error. Two forms of the fixed-rate problem are considered, one with deterministic coding and one with randomized coding. The achievable rate regions are given for both these problems, and it is shown that lower rates can be achieved with randomized coding.
Distributed Source Coding in the Presence of Byzantine Sensors
7,766
This paper deals with a universal coding problem for a certain kind of multiterminal source coding network called a generalized complementary delivery network. In this network, messages from multiple correlated sources are jointly encoded, and each decoder has access to some of the messages to enable it to reproduce the other messages. Both fixed-to-fixed length and fixed-to-variable length lossless coding schemes are considered. Explicit constructions of universal codes and the bounds of the error probabilities are clarified by using methods of types and graph-theoretical analysis.
Universal source coding over generalized complementary delivery networks
7,767
A network of $n$ wireless communication links is considered in a Rayleigh fading environment. It is assumed that each link can be active and transmit with a constant power $P$ or remain silent. The objective is to maximize the number of active links such that each active link can transmit with a constant rate $\lambda$. An upper bound is derived that shows the number of active links scales at most like $\frac{1}{\lambda} \log n$. To obtain a lower bound, a decentralized link activation strategy is described and analyzed. It is shown that for small values of $\lambda$, the number of supported links by this strategy meets the upper bound; however, as $\lambda$ grows, this number becomes far below the upper bound. To shrink the gap between the upper bound and the achievability result, a modified link activation strategy is proposed and analyzed based on some results from random graph theory. It is shown that this modified strategy performs very close to the optimum. Specifically, this strategy is \emph{asymptotically almost surely} optimum when $\lambda$ approaches $\infty$ or 0. It turns out the optimality results are obtained in an interference-limited regime. It is demonstrated that, by proper selection of the algorithm parameters, the proposed scheme also allows the network to operate in a noise-limited regime in which the transmission rates can be adjusted by the transmission powers. The price for this flexibility is a decrease in the throughput scaling law by a multiplicative factor of $\log \log n$.
Rate-Constrained Wireless Networks with Fading Channels: Interference-Limited and Noise-Limited Regimes
7,768
Aiming to bridge the gap between maximum likelihood decoding (MLD) and suboptimal iterative decoding for short or medium length LDPC codes, we present a generalized ordered statistic decoding (OSD) in the form of syndrome decoding, cascaded with belief propagation (BP) or enhanced min-sum decoding. The OSD is invoked only when the preceding iterative decoding method fails. With respect to the existing OSD, which is based on the accumulated log-likelihood ratio (LLR) metric, we extend the accumulative metric to the situation where BP decoding operates in the probability domain. Moreover, after generalizing the accumulative metric to the context of normalized or offset min-sum decoding, the OSD shows an appealing tradeoff between performance and complexity. In the OSD implementation, when deciding the true error pattern among many candidates, we propose an alternative that proves effective in reducing the number of real additions without performance loss. Simulation results demonstrate that the cascade of enhanced min-sum and OSD decoding significantly outperforms BP alone, in terms of both performance and complexity.
Generalized reliability-based syndrome decoding for LDPC codes
7,769
We consider the transmission of a memoryless bivariate Gaussian source over an average-power-constrained one-to-two Gaussian broadcast channel. The transmitter observes the source and describes it to the two receivers by means of an average-power-constrained signal. Each receiver observes the transmitted signal corrupted by a different additive white Gaussian noise and wishes to estimate the source component intended for it. That is, Receiver~1 wishes to estimate the first source component and Receiver~2 wishes to estimate the second source component. Our interest is in the pairs of expected squared-error distortions that are simultaneously achievable at the two receivers. We prove that an uncoded transmission scheme that sends a linear combination of the source components achieves the optimal power-versus-distortion trade-off whenever the signal-to-noise ratio is below a certain threshold. The threshold is a function of the source correlation and the distortion at the receiver with the weaker noise.
Broadcasting Correlated Gaussians
7,770
This article proposes a novel iterative algorithm based on Low Density Parity Check (LDPC) codes for compression of correlated sources at rates approaching the Slepian-Wolf bound. The setup considered in the article looks at the problem of compressing one source at a rate determined based on the knowledge of the mean source correlation at the encoder, and employing the other correlated source as side information at the decoder which decompresses the first source based on the estimates of the actual correlation. We demonstrate that depending on the extent of the actual source correlation estimated through an iterative paradigm, significant compression can be obtained relative to the case the decoder does not use the implicit knowledge of the existence of correlation.
LDPC-Based Iterative Algorithm for Compression of Correlated Sources at Rates Approaching the Slepian-Wolf Bound
7,771
In this paper, we derive the optimal transmitter/receiver beamforming vectors and relay weighting matrix for the multiple-input multiple-output amplify-and-forward relay channel. The analysis is accomplished in two steps. In the first step, the direct link between the transmitter (Tx) and receiver (Rx) is ignored and we show that the transmitter and the relay should map their signals to the strongest right singular vectors of the Tx-relay and relay-Rx channels. Based on the distributions of these vectors for independent identically distributed (i.i.d.) Rayleigh channels, the Grassmannian codebooks are used for quantizing and sending back the channel information to the transmitter and the relay. The simulation results show that even a small number of feedback bits can considerably increase the link reliability in terms of bit error rate. For the second step, the direct link is considered in the problem model and we derive the optimization problem that identifies the optimal Tx beamforming vector. For the i.i.d. Rayleigh channels, we show that the solution to this problem is uniformly distributed on the unit sphere and we justify the appropriateness of the Grassmannian codebook (for determining the optimal beamforming vector), both analytically and by simulation. Finally, a modified quantizing scheme is presented which introduces a negligible degradation in the system performance but significantly reduces the required number of feedback bits.
Grassmannian Beamforming for MIMO Amplify-and-Forward Relaying
7,772
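The limited-feedback step described in the abstract above amounts to quantizing the optimal beamforming direction: the receiver picks the codeword with the largest beamforming gain and feeds back only its index. A sketch with a random codebook standing in for a true Grassmannian packing (codebook size and dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    return v / np.linalg.norm(v)

# Codebook of 2^B unit-norm beamforming vectors.  Here the entries are
# random; a true Grassmannian codebook would maximize the minimum
# pairwise chordal distance between codewords.
B, Nt = 3, 4
codebook = [unit(rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt))
            for _ in range(2 ** B)]

# The unquantized optimal transmit direction for a MISO link h is
# h/||h||; limited feedback selects the codeword maximizing |h^H w|.
h = rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)
gains = [abs(np.vdot(h, w)) for w in codebook]
idx = int(np.argmax(gains))      # B-bit index fed back to the transmitter
w_tx = codebook[idx]

# Cauchy-Schwarz: quantized gain never exceeds the unquantized ||h||.
assert gains[idx] <= np.linalg.norm(h) + 1e-9
```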
We describe and present a new construction method for codes using encodings from group rings. They consist primarily of two types: zero-divisor and unit-derived codes. Previous codes from group rings focused on ideals; for example, cyclic codes are ideals in the group ring over a cyclic group. The fresh focus is on the encodings themselves, which only under very limited conditions result in ideals. We use the result that a group ring is isomorphic to a certain well-defined ring of matrices, and thus every group ring element has an associated matrix. This allows matrix algebra to be used as needed in the study and production of codes, enabling the creation of standard generator and check matrices. Group rings are a fruitful source of units and zero-divisors from which new codes result. Many code properties, such as being LDPC or self-dual, may be expressed as properties within the group ring, thus enabling the construction of codes with these properties. The methods are general, enabling the construction of codes from many types of group rings. There is no restriction on the ring, and thus codes over the integers, over matrix rings, or even over group rings themselves are possible and fruitful.
Codes from Zero-divisors and Units in Group Rings
7,773
The utility of limited feedback for coding over an individual sequence of DMCs is investigated. This study complements recent results showing how limited or noisy feedback can boost the reliability of communication. A strategy with fixed input distribution $P$ is given that asymptotically achieves rates arbitrarily close to the mutual information induced by $P$ and the state-averaged channel. When the capacity achieving input distribution is the same over all channel states, this achieves rates at least as large as the capacity of the state averaged channel, sometimes called the empirical capacity.
Zero-rate feedback can achieve the empirical capacity
7,774
A novel class of bit-flipping (BF) algorithms for decoding low-density parity-check (LDPC) codes is presented. The proposed algorithms, which are called gradient descent bit flipping (GDBF) algorithms, can be regarded as simplified gradient descent algorithms. Based on gradient descent formulation, the proposed algorithms are naturally derived from a simple non-linear objective function.
Gradient Descent Bit Flipping Algorithms for Decoding LDPC Codes
7,775
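A minimal sketch of the single-bit flipping idea behind GDBF as described in the abstract above: with bipolar variables x_k and received values y_k, the objective f(x) = sum_k x_k y_k + sum over checks of the product of incident x_j is climbed by flipping the bit whose local term is smallest. The (7,4) Hamming example and the stopping rule are illustrative, not the authors' exact algorithm:

```python
import numpy as np

def gdbf_decode(H, y, max_iter=100):
    """Single-bit gradient descent bit flipping (a sketch of the idea,
    not the authors' exact algorithm).  H: 0/1 parity-check matrix,
    y: received real values; variables are bipolar, x in {-1,+1}."""
    x = np.sign(y)                               # hard-decision start
    checks = [np.flatnonzero(row) for row in H]
    for _ in range(max_iter):
        syn = [np.prod(x[c]) for c in checks]    # +1 means check satisfied
        if all(s > 0 for s in syn):
            return x, True
        # Local gradient term for each bit: channel part plus the
        # products of all checks the bit participates in.
        delta = x * y
        for s, c in zip(syn, checks):
            delta[c] += s
        x[np.argmin(delta)] *= -1.0              # flip the least reliable bit
    return x, False

# (7,4) Hamming parity-check matrix (columns are the 7 nonzero triples).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
y = np.ones(7)
y[2] = -0.1                  # one weakly received, flipped bit
xhat, ok = gdbf_decode(H, y) # recovers the all-(+1) codeword
```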
This paper addresses the following question, which is of interest in the design of a multiuser decentralized network. Given a total system bandwidth of W Hz and a fixed data rate constraint of R bps for each transmission, how many frequency slots N of size W/N should the band be partitioned into in order to maximize the number of simultaneous links in the network? Dividing the available spectrum results in two competing effects. On the positive side, a larger N allows for more parallel, noninterfering communications to take place in the same area. On the negative side, a larger N increases the SINR requirement for each link because the same information rate must be achieved over less bandwidth. Exploring this tradeoff and determining the optimum value of N in terms of the system parameters is the focus of the paper. Using stochastic geometry, the optimal SINR threshold - which directly corresponds to the optimal spectral efficiency - is derived for both the low SNR (power-limited) and high SNR (interference-limited) regimes. This leads to the optimum choice of the number of frequency bands N in terms of the path loss exponent, power and noise spectral density, desired rate, and total bandwidth.
Bandwidth Partitioning in Decentralized Wireless Networks
7,776
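The tradeoff in the abstract above can be sketched numerically in the interference-limited regime: with N subbands, each link needs SINR threshold beta(N) = 2^{NR/W} - 1, and under the transmission-capacity scaling assumed here (per-band link density proportional to beta^{-2/alpha}), the number of supportable links scales like N * beta(N)^{-2/alpha}. The parameter values are illustrative, not the paper's:

```python
import numpy as np

W, R, alpha = 20e6, 1e6, 4.0   # bandwidth (Hz), rate (bps), path-loss exponent

def relative_density(N):
    """Relative number of supportable links with N subbands: N parallel
    bands, each requiring SINR threshold beta = 2^(N*R/W) - 1, with
    per-band link density scaling like beta^(-2/alpha) (an assumed
    transmission-capacity scaling, not the paper's exact expression)."""
    beta = 2.0 ** (N * R / W) - 1.0
    return N * beta ** (-2.0 / alpha)

Ns = np.arange(1, 101)
dens = np.array([relative_density(N) for N in Ns])
N_opt = int(Ns[np.argmax(dens)])   # interior optimum: splitting helps up
                                   # to a point, then the SINR cost wins
```

The maximizer depends on R/W and alpha exactly as the abstract indicates: a larger path-loss exponent or a smaller per-link spectral-efficiency requirement pushes the optimum toward more, narrower bands.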
In this paper, we study the number of measurements required to recover a sparse signal in ${\mathbb C}^M$ with $L$ non-zero coefficients from compressed samples in the presence of noise. For a number of different recovery criteria, we prove that $O(L)$ (an asymptotically linear multiple of $L$) measurements are necessary and sufficient if $L$ grows linearly as a function of $M$. This improves on the existing literature that is mostly focused on variants of a specific recovery algorithm based on convex programming, for which $O(L\log(M-L))$ measurements are required. We also show that $O(L\log(M-L))$ measurements are required in the sublinear regime ($L = o(M)$).
Shannon Theoretic Limits on Noisy Compressive Sampling
7,777
A codebook based limited feedback strategy is a practical way to obtain partial channel state information at the transmitter in a precoded multiple-input multiple-output (MIMO) wireless system. Conventional codebook designs use Grassmannian packing, equiangular frames, vector quantization, or Fourier based constructions. While the capacity and error rate performance of conventional codebook constructions have been extensively investigated, constructing these codebooks is notoriously difficult, relying on techniques such as nonlinear search or iterative algorithms. Further, the resulting codebooks may not have a systematic structure to facilitate storage of the codebook and low search complexity. In this paper, we propose a new systematic codebook design based on Kerdock codes and mutually unbiased bases. The proposed Kerdock codebook consists of multiple mutually unbiased unitary bases matrices with quaternary entries and the identity matrix. We propose to derive the beamforming and precoding codebooks from this base codebook, eliminating the requirement to store multiple codebooks. The proposed structure requires little memory to store and, as we show, the quaternary structure facilitates codeword search. We derive the chordal distance for two antenna and four antenna codebooks, showing that the proposed codebooks compare favorably with prior designs. Monte Carlo simulations are used to compare achievable rates and error rates for different codebook sizes.
Kerdock Codes for Limited Feedback Precoded MIMO Systems
7,778
In this work we find the capacity of a compound finite-state channel with time-invariant deterministic feedback. The model we consider involves the use of fixed length block codes. Our achievability result includes a proof of the existence of a universal decoder for the family of finite-state channels with feedback. As a consequence of our capacity result, we show that feedback does not increase the capacity of the compound Gilbert-Elliot channel. Additionally, we show that for a stationary and uniformly ergodic Markovian channel, if the compound channel capacity is zero without feedback then it is zero with feedback. Finally, we use our result on the finite-state channel to show that the feedback capacity of the memoryless compound channel is given by $\inf_{\theta} \max_{Q_X} I(X;Y|\theta)$.
Feedback Capacity of the Compound Channel
7,779
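For a memoryless compound channel, the feedback capacity formula quoted in the abstract above, inf over theta of max over Q_X of I(X;Y|theta), can be evaluated directly in simple cases. For a family of binary symmetric channels the uniform input maximizes the mutual information in every state, so the inf-max reduces to the worst-state capacity; a small sketch (the crossover values are illustrative):

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def compound_bsc_feedback_capacity(crossovers):
    """inf_theta max_{Q_X} I(X;Y|theta) for a family of BSCs with
    crossover probabilities in [0, 1/2].  The uniform input is optimal
    for every state, so the expression reduces to the capacity of the
    worst state: min_theta (1 - h2(p_theta))."""
    return min(1.0 - h2(p) for p in crossovers)

# Worst state is p = 0.2 (closest to 1/2): C = 1 - h2(0.2) ~ 0.278 bits.
C = compound_bsc_feedback_capacity([0.05, 0.11, 0.2])
```

Because the same input distribution is simultaneously optimal for all states, this family is also one where, consistent with the abstract, feedback cannot raise the compound capacity.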
The problem of error control in random linear network coding is addressed from a matrix perspective that is closely related to the subspace perspective of K\"otter and Kschischang. A large class of constant-dimension subspace codes is investigated. It is shown that codes in this class can be easily constructed from rank-metric codes, while preserving their distance properties. Moreover, it is shown that minimum distance decoding of such subspace codes can be reformulated as a generalized decoding problem for rank-metric codes where partial information about the error is available. This partial information may be in the form of erasures (knowledge of an error location but not its value) and deviations (knowledge of an error value but not its location). Taking erasures and deviations into account (when they occur) strictly increases the error correction capability of a code: if $\mu$ erasures and $\delta$ deviations occur, then errors of rank $t$ can always be corrected provided that $2t \leq d - 1 + \mu + \delta$, where $d$ is the minimum rank distance of the code. For Gabidulin codes, an important family of maximum rank distance codes, an efficient decoding algorithm is proposed that can properly exploit erasures and deviations. In a network coding application where $n$ packets of length $M$ over $F_q$ are transmitted, the complexity of the decoding algorithm is given by $O(dM)$ operations in an extension field $F_{q^n}$.
A Rank-Metric Approach to Error Control in Random Network Coding
7,780
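The correction guarantee quoted in the abstract above, 2t <= d - 1 + mu + delta, makes the value of side information explicit: each known erasure or deviation buys half a unit of rank-error correction. A trivial helper illustrating the condition (the d = 5 example is illustrative):

```python
def correctable_rank_errors(d, mu=0, delta=0):
    """Maximum rank t of errors guaranteed correctable by a subspace
    code built from a rank-metric code of minimum rank distance d,
    given mu erasures and delta deviations: the largest t satisfying
    2t <= d - 1 + mu + delta."""
    return (d - 1 + mu + delta) // 2

# A code with minimum rank distance d = 5 corrects rank-2 errors with
# no side information, but rank-3 errors once two erasures are known.
assert correctable_rank_errors(5) == 2
assert correctable_rank_errors(5, mu=2) == 3
```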
This paper provides simple lower bounds on the number of iterations which is required for successful message-passing decoding of some important families of graph-based code ensembles (including low-density parity-check codes and variations of repeat-accumulate codes). The transmission of the code ensembles is assumed to take place over a binary erasure channel, and the bounds refer to the asymptotic case where we let the block length tend to infinity. The simplicity of the bounds derived in this paper stems from the fact that they are easily evaluated and are expressed in terms of some basic parameters of the ensemble which include the fraction of degree-2 variable nodes, the target bit erasure probability and the gap between the channel capacity and the design rate of the ensemble. This paper demonstrates that the number of iterations which is required for successful message-passing decoding scales at least like the inverse of the gap (in rate) to capacity, provided that the fraction of degree-2 variable nodes of these turbo-like ensembles does not vanish (hence, the number of iterations becomes unbounded as the gap to capacity vanishes).
Bounds on the Number of Iterations for Turbo-Like Ensembles over the Binary Erasure Channel
7,781
We consider transmission of a continuous amplitude source over an L-block Rayleigh fading $M_t \times M_r$ MIMO channel when the channel state information is only available at the receiver. Since the channel is not ergodic, Shannon's source-channel separation theorem becomes obsolete and the optimal performance requires a joint source-channel approach. Our goal is to minimize the expected end-to-end distortion, particularly in the high SNR regime. The figure of merit is the distortion exponent, defined as the exponential decay rate of the expected distortion with increasing SNR. We provide an upper bound and lower bounds for the distortion exponent with respect to the bandwidth ratio between the channel and source bandwidths. For the lower bounds, we analyze three different strategies based on layered source coding concatenated with progressive, superposition or hybrid digital/analog transmission. In each case, by adjusting the system parameters we optimize the distortion exponent as a function of the bandwidth ratio. We prove that the distortion exponent upper bound can be achieved when the channel has only one degree of freedom, that is L=1, and $\min\{M_t,M_r\}=1$. When we have more degrees of freedom, our achievable distortion exponents meet the upper bound for only certain ranges of the bandwidth ratio. We demonstrate that our results, which were derived for a complex Gaussian source, can be extended to more general source distributions as well.
Joint Source-Channel Codes for MIMO Block Fading Channels
7,782
The Golden space-time trellis coded modulation (GST-TCM) scheme was proposed in \cite{Hong06} for a high rate $2\times 2$ multiple-input multiple-output (MIMO) system over slow fading channels. In this letter, we present the performance analysis of GST-TCM over block fading channels, where the channel matrix is constant over a fraction of the codeword length and varies from one fraction to another, independently. In practice, it is not useful to design such codes for specific block fading channel parameters and a robust solution is preferable. We then show both analytically and by simulation that the GST-TCM designed for slow fading channels are indeed robust to all block fading channel conditions.
On the performance of Golden space-time trellis coded modulation over MIMO block fading channels
7,783
A tree decomposition of the coordinates of a code is a mapping from the coordinate set to the set of vertices of a tree. A tree decomposition can be extended to a tree realization, i.e., a cycle-free realization of the code on the underlying tree, by specifying a state space at each edge of the tree, and a local constraint code at each vertex of the tree. The constraint complexity of a tree realization is the maximum dimension of any of its local constraint codes. A measure of the complexity of maximum-likelihood decoding for a code is its treewidth, which is the least constraint complexity of any of its tree realizations. It is known that among all tree realizations of a code that extends a given tree decomposition, there exists a unique minimal realization that minimizes the state space dimension at each vertex of the underlying tree. In this paper, we give two new constructions of these minimal realizations. As a by-product of the first construction, a generalization of the state-merging procedure for trellis realizations, we obtain the fact that the minimal tree realization also minimizes the local constraint code dimension at each vertex of the underlying tree. The second construction relies on certain code decomposition techniques that we develop. We further observe that the treewidth of a code is related to a measure of graph complexity, also called treewidth. We exploit this connection to resolve a conjecture of Forney's regarding the gap between the minimum trellis constraint complexity and the treewidth of a code. We present a family of codes for which this gap can be arbitrarily large.
On Minimal Tree Realizations of Linear Codes
7,784
The problem of channel code design for the $M$-ary input AWGN channel with additive $Q$-ary interference where the sequence of i.i.d. interference symbols is known causally at the encoder is considered. The code design criterion at high SNR is derived by defining a new distance measure between the input symbols of the Shannon's \emph{associated} channel. For the case of binary-input channel, i.e., M=2, it is shown that it is sufficient to use only two (out of $2^Q$) input symbols of the \emph{associated} channel in the encoding as far as the distance spectrum of code is concerned. This reduces the problem of channel code design for the binary-input AWGN channel with known interference at the encoder to design of binary codes for the binary symmetric channel where the Hamming distance among codewords is the major factor in the performance of the code.
Channel Code Design with Causal Side Information at the Encoder
7,785
This paper investigates downlink transmission over a quasi-static fading Gaussian broadcast channel (BC), to model delay-sensitive applications over slowly time-varying fading channels. System performance is characterized by outage achievable rate regions. In contrast to most previous work, here the problem is studied under the key assumption that the transmitter only knows the probability distributions of the fading coefficients, but not their realizations. For scalar-input channels, two coding schemes are proposed. The first scheme is called blind dirty paper coding (B-DPC), which utilizes a robustness property of dirty paper coding to perform precoding at the transmitter. The second scheme is called statistical superposition coding (S-SC), in which each receiver adaptively performs successive decoding with the process statistically governed by the realized fading. Both B-DPC and S-SC schemes lead to the same outage achievable rate region, which always dominates that of time-sharing, irrespective of the particular fading distributions. The S-SC scheme can be extended to BCs with multiple transmit antennas.
Outage-Efficient Downlink Transmission Without Transmit Channel State Information
7,786
We consider the effects of Rayleigh fading and lognormal shadowing in the physical interference model for all successful transmissions of traffic across the network. New bounds are derived for the capacity of a given random ad hoc wireless network that reflect the packet-drop or capture probability of the transmission links. These bounds are based on a simplified network topology, termed the honey-comb topology, under a given routing and scheduling scheme.
Asymptotic Capacity of Wireless Ad Hoc Networks with Realistic Links under a Honey Comb Topology
7,787
The "water-filling" solution for the quadratic rate-distortion function of a stationary Gaussian source is given in terms of its power spectrum. This formula naturally lends itself to a frequency domain "test-channel" realization. We provide an alternative time-domain realization for the rate-distortion function, based on linear prediction. This solution has some interesting implications, including the optimality at all distortion levels of pre/post filtered vector-quantized differential pulse code modulation (DPCM), and a duality relationship with decision-feedback equalization (DFE) for inter-symbol interference (ISI) channels.
Achieving the Gaussian Rate-Distortion Function by Prediction
7,788
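The frequency-domain water-filling solution mentioned above is easy to evaluate numerically. A minimal sketch (illustrative only; the function name and the discretized spectrum are assumptions) of one parametric point on the Gaussian rate-distortion curve:

```python
import math

def rate_distortion_waterfill(spectrum, theta):
    """One parametric point on the Gaussian R(D) curve by reverse water-filling.

    spectrum -- power-spectrum samples S_k on a uniform frequency grid
    theta    -- water level (per-component distortion allotment)
    Returns (D, R): average distortion and rate in bits per sample.
    """
    n = len(spectrum)
    D = sum(min(theta, s) for s in spectrum) / n
    R = sum(max(0.0, 0.5 * math.log2(s / theta)) for s in spectrum) / n
    return D, R

# Sanity check: a flat spectrum (white Gaussian source, variance 1)
# recovers the classical R(D) = 0.5 * log2(1/D).
D, R = rate_distortion_waterfill([1.0] * 8, theta=0.25)
print(D, R)  # 0.25 1.0
```

Sweeping `theta` from 0 up to the spectral peak traces out the whole rate-distortion curve.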
In wireless networks with random node distribution, the underlying point process model and the channel fading process are usually considered separately. A unified framework is introduced that permits the geometric characterization of fading by incorporating the fading process into the point process model. Concretely, assuming nodes are distributed in a stationary Poisson point process in $\mathbb{R}^d$, the properties of the point processes that describe the path loss with fading are analyzed. The main applications are connectivity and broadcasting.
A Geometric Interpretation of Fading in Wireless Networks: Theory and Applications
7,789
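A toy simulation of the construction above (not from the paper; all function names and parameter values are illustrative): drop a Poisson point process in a disk, apply i.i.d. exponential fading powers (Rayleigh amplitude) to the path loss, and keep the nodes whose faded signal at the origin clears a detection threshold.

```python
import math, random

def poisson_sample(rng, mean):
    """Knuth's method for a Poisson random variate (fine for moderate means)."""
    L = math.exp(-mean)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def connected_nodes(lam, radius, alpha, threshold, seed=1):
    """Sample a PPP of intensity lam in a disk; keep nodes whose
    Rayleigh-faded power fade * r**(-alpha) at the origin >= threshold."""
    rng = random.Random(seed)
    n = poisson_sample(rng, lam * math.pi * radius ** 2)
    kept = []
    for _ in range(n):
        # uniform point in the disk: radius via inverse CDF (guard r > 0)
        r = radius * math.sqrt(max(rng.random(), 1e-12))
        fade = rng.expovariate(1.0)        # Rayleigh fading power ~ Exp(1)
        if fade * r ** (-alpha) >= threshold:
            kept.append(r)
    return n, kept

n, kept = connected_nodes(lam=1.0, radius=5.0, alpha=4.0, threshold=0.01)
print(n, len(kept))
```

The kept distances form exactly the kind of fading-modulated point process the abstract studies for connectivity questions.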
In this article we focus on the problem of channel decoding in the presence of a-priori information. In particular, assuming that the reliability of the a-priori information is not perfectly estimated at the receiver, we derive a novel analytical framework for evaluating the decoder's performance. We obtain the important result that a "good code", i.e., a code which allows one to fully exploit the potential benefit of a-priori information, must associate information sequences with high Hamming weights to codewords with low Hamming weights. Based on the proposed analysis, we study the performance of convolutional codes, random codes, and turbo codes. Moreover, we consider the transmission of correlated binary sources from independent nodes, a problem with several practical applications, e.g., in sensor networks. In this context, we propose a very simple joint source-channel turbo decoding scheme where each decoder works by exploiting a-priori information given by the other decoder. In the case of block-fading channels, it is shown that the inherent correlation between the information signals provides a form of non-cooperative diversity, thus allowing joint source-channel decoding to outperform separation-based schemes.
Performance bounds and codes design criteria for channel decoding with a-priori information
7,790
We characterize the affine-invariant maximal extended cyclic codes. Then, by the CSS construction, we derive from these codes a family of pure quantum codes. Also, for $\mathrm{ord}_n(q)$ even, a new family of degenerate quantum stabilizer codes is derived from the classical duadic codes. This answers an open problem posed by Aly et al.
Two Families of Quantum Codes Derived from Cyclic Codes
7,791
A pattern of a sequence is a sequence of integer indices with each index describing the order of first occurrence of the respective symbol in the original sequence. In a recent paper, tight general bounds on the block entropy of patterns of sequences generated by independent and identically distributed (i.i.d.) sources were derived. In this paper, precise approximations are provided for the pattern block entropies for patterns of sequences generated by i.i.d. uniform and monotonic distributions, including distributions over the integers, and the geometric distribution. Numerical bounds on the pattern block entropies of these distributions are provided even for very short blocks. Tight bounds are obtained even for distributions that have infinite i.i.d. entropy rates. The approximations are obtained using general bounds and their derivation techniques. Conditional index entropy is also studied for distributions over smaller alphabets.
Patterns of i.i.d. Sequences and Their Entropy - Part II: Bounds for Some Distributions
7,792
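The notion of a pattern defined above is straightforward to compute. A minimal sketch (the function name is an assumption, not from the paper):

```python
def pattern(seq):
    """Replace each symbol by the order (1, 2, ...) of its first occurrence."""
    first = {}
    out = []
    for s in seq:
        if s not in first:
            first[s] = len(first) + 1
        out.append(first[s])
    return out

print(pattern("abracadabra"))  # [1, 2, 3, 1, 4, 1, 5, 1, 2, 3, 1]
```

Note the pattern is alphabet-agnostic: any two sequences related by a relabeling of symbols map to the same pattern, which is why pattern entropy can stay finite even when the i.i.d. entropy rate is infinite.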
The analysis of random coding error exponents pertaining to erasure/list decoding, due to Forney, is revisited. Instead of using Jensen's inequality as well as some other inequalities in the derivation, we demonstrate that an exponentially tight analysis can be carried out by assessing the relevant moments of a certain distance enumerator. The resulting bound has the following advantages: (i) it is at least as tight as Forney's bound, (ii) under certain symmetry conditions associated with the channel and the random coding distribution, it is simpler than Forney's bound in the sense that it involves an optimization over one parameter only (rather than two), and (iii) in certain special cases, like the binary symmetric channel (BSC), the optimum value of this parameter can be found in closed form, and so, there is no need to conduct a numerical search. We have not yet found, however, a numerical example where this new bound is strictly better than Forney's bound. This may provide additional evidence to support Forney's conjecture that his bound is tight for the average code. We believe that the technique we suggest in this paper can be useful in simplifying, and hopefully also improving, exponential error bounds in other problem settings as well.
Error Exponents of Erasure/List Decoding Revisited via Moments of Distance Enumerators
7,793
An interference alignment example is constructed for the deterministic channel model of the $K$ user interference channel. The deterministic channel example is then translated into the Gaussian setting, creating the first known example of a fully connected Gaussian $K$ user interference network with single-antenna nodes, real, non-zero, and constant channel coefficients, and no propagation delays where the degrees-of-freedom outer bound is achieved. An analogy is drawn between the propagation-delay-based interference alignment examples and the deterministic channel model, which allows similar constructions for the 2 user $X$ channel as well.
Interference Alignment on the Deterministic Channel and Application to Fully Connected AWGN Interference Networks
7,794
We investigate, in the Shannon model, the security of constructions corresponding to double and (two-key) triple DES. That is, we consider F_{k1}(F_{k2}(.)) and F_{k1}(F_{k2}^{-1}(F_{k1}(.))) with the component functions being ideal ciphers. This models the resistance of these constructions to "generic" attacks like meet-in-the-middle attacks. We obtain the first proof that composition actually increases the security of these constructions in some meaningful sense. We compute a bound on the probability of breaking the double cipher as a function of the number of computations of the base cipher made and the number of examples of the composed cipher seen, and show that the success probability is the square of that for a single-key cipher. The same bound holds for the two-key triple cipher. The first bound is tight and shows that meet-in-the-middle is the best possible generic attack against the double cipher.
Security amplification by composition: The case of doubly-iterated, ideal ciphers
7,795
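The generic meet-in-the-middle attack referred to above can be demonstrated on a toy 8-bit keyed permutation (entirely hypothetical; it merely stands in for the ideal cipher of the model). The attack recovers a consistent key pair of the double construction with roughly 2 * 2^8 cipher evaluations instead of the 2^16 an exhaustive search would need:

```python
def enc(k, x):
    """Toy 8-bit keyed permutation (hypothetical stand-in for an ideal cipher)."""
    x = (x + k) % 256
    x = ((x << 3) | (x >> 5)) & 0xFF   # rotate left by 3
    return x ^ k

def dec(k, y):
    """Inverse of enc."""
    y ^= k
    y = ((y >> 3) | (y << 5)) & 0xFF   # rotate right by 3
    return (y - k) % 256

def meet_in_the_middle(pairs):
    """Recover a key pair (k1, k2) consistent with the double cipher
    enc(k2, enc(k1, .)), using ~2*2^8 cipher calls instead of 2^16."""
    x0, y0 = pairs[0]
    forward = {}
    for k1 in range(256):                  # table of middle values from the plaintext
        forward.setdefault(enc(k1, x0), []).append(k1)
    for k2 in range(256):                  # meet them from the ciphertext side
        for k1 in forward.get(dec(k2, y0), []):
            if all(enc(k2, enc(k1, x)) == y for x, y in pairs[1:]):
                return k1, k2
    return None

k1, k2 = 42, 200
pairs = [(x, enc(k2, enc(k1, x))) for x in (1, 2, 3)]
print(meet_in_the_middle(pairs))
```

Extra known pairs are used only to filter false middle collisions; the attack may legitimately return any key pair equivalent to the one used, which is exactly the "generic" break the bound in the abstract quantifies.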
A security policy states the acceptable actions of an information system, as the actions bear on security. There is a pressing need for organizations to declare their security policies; even informal statements would be better than the current practice. But formal policy statements are preferable to support (1) reasoning about policies, e.g., for consistency and completeness, (2) automated enforcement of the policy, e.g., using wrappers around legacy systems or after the fact with an intrusion detection system, and (3) other formal manipulation of policies, e.g., the composition of policies. We present LaSCO, the Language for Security Constraints on Objects, in which a policy consists of two parts: the domain (assumptions about the system) and the requirement (what is allowed assuming the domain is satisfied). Thus policies defined in LaSCO have the appearance of conditional access control statements. LaSCO policies are specified as expressions in logic and as directed graphs, giving a visual view of policy. LaSCO has a simple semantics in first order logic (which we provide), thus permitting the policies we write, even complex ones, to be very perspicuous. LaSCO has syntax to express many of the situations we have found to be useful in policies and, more interestingly, in the composition of policies. LaSCO has an object-oriented structure, permitting it to be useful for describing policies on the objects and methods of an application written in an object-oriented language, in addition to the traditional policies on operating system objects. A LaSCO specification can be automatically translated into executable code that checks an invocation of a program with respect to a policy. The implementation of LaSCO is in Java, and generates wrappers to check Java programs with respect to a policy.
Security Policy Specification Using a Graphical Approach
7,796
We present here a generalization of the work done by Rabin and Ben-Or. We give a protocol for multiparty computation which tolerates any Q^2 active adversary structure based on the existence of a broadcast channel, secure communication between each pair of participants, and a monotone span program with multiplication tolerating the structure. The secrecy achieved is unconditional although we allow an exponentially small probability of error. This is possible due to a protocol for computing the product of two values already shared by means of a homomorphic commitment scheme which appeared originally in a paper of Chaum, Evertse and van de Graaf.
Multiparty computation unconditionally secure against Q^2 adversary structures
7,797
This paper provides a proof of the proposed Internet-standard Transport Level Security protocol using the Gong-Needham-Yahalom logic. It is intended as a teaching aid and aims to show students: the potency of a formal method for protocol design; some of the subtleties of authenticating parties on a network where all messages can be intercepted; and the design of what should be a widely accepted standard.
Transport Level Security: a proof using the Gong-Needham-Yahalom Logic
7,798
Research in the field of electronic signature confirmation has been active for some 20 years now. Unfortunately, present certificate-based solutions also come from that age, when no one knew about online data transmission. The official standardized X.509 framework also depends heavily on offline operations, one of the most complicated being certificate revocation handling. This is done via huge Certificate Revocation Lists, which are both inconvenient and expensive. Several improvements to these lists have been proposed, and in this report we analyze them briefly. We conclude that although it is possible to do better than in the original X.509 setting, none of the solutions presented thus far is good enough.
Certificate Revocation Paradigms
7,799