Nonlinear Signal Processing A Statistical Approach
Gonzalo R. Arce University of Delaware Department of Computer and Electrical Engineering
WILEY-INTERSCIENCE
A JOHN WILEY & SONS, INC., PUBLICATION
Copyright © 2005 by John Wiley & Sons, Inc. All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representation or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services please contact our Customer Care Department within the U.S. at 877-762-2974, outside the U.S. at 317-572-3993 or fax 317-572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print, however, may not be available in electronic format.
Library of Congress Cataloging-in-Publication Data:
Arce, Gonzalo R.
  Nonlinear signal processing : a statistical approach / Gonzalo R. Arce.
    p. cm.
  Includes bibliographical references and index.
  ISBN 0-471-67624-1 (cloth : acid-free paper)
  1. Signal processing--Mathematics. 2. Statistics. I. Title.

TK5102.9.A77 2004
621.382'2--dc22

Printed in the United States of America

10 9 8 7 6 5 4 3 2 1
To Catherine, Andrew, Catie, and my beloved parents.
Preface

Linear filters today enjoy a rich theoretical framework based on the early and important contributions of Gauss (1795) on least squares, Wiener (1949) on optimal filtering, and Widrow (1970) on adaptive filtering. Linear filter theory has consistently provided the foundation upon which linear filters are used in numerous practical applications, as detailed in classic treatments including those of Haykin [99], Kailath [110], and Widrow [197]. Nonlinear signal processing, however, offers significant advantages over traditional linear signal processing in applications in which the
underlying random processes are nonGaussian in nature, or when the systems acting on the signals of interest are inherently nonlinear. Practice has shown that nonlinear systems and nonGaussian
processes emerge in a broad range of applications including imaging, teletraffic, communications, hydrology, geology, and economics. Nonlinear signal processing methods in all of these applications
aim at exploiting the system’s nonlinearities or the statistical characteristics of the underlying signals to overcome many of the limitations of the traditional practices used in signal processing.
Traditional signal processing enjoys the rich and unified theory of linear systems. Nonlinear signal processing, on the other hand, lacks a unified and universal set of tools for analysis and design.
Hundreds of nonlinear signal processing algorithms have been proposed in the literature. Most of the proposed methods, although well tailored for a given application, are not broadly applicable in
general. While nonlinear signal processing is a dynamic and rapidly growing field, large classes of nonlinear signal processing algorithms can be grouped and studied in a unified framework. Textbooks
on higher- and lower-order statistics [148], polynomial filters [141], neural networks [100], and mathematical morphology have appeared recently with
the common goal of grouping a "self-contained" class of nonlinear signal processing algorithms into a unified framework of study. This book focuses on unifying the study of a broad and important
class of nonlinear signal processing algorithms that emerge from statistical estimation principles, and where the underlying signals are nonGaussian processes. Notably, by concentrating on just two
nonGaussian models, a large set of tools is developed that encompasses a large portion of the nonlinear signal processing tools proposed in the literature over the past several decades. In
particular, under the generalized Gaussian distribution, signal processing algorithms based on weighted medians and their generalizations are developed. The class of stable distributions is used as
the second nonGaussian model from which weighted myriads emerge as the fundamental estimate from which general signal processing tools are developed. Within these two classes of nonlinear signal
processing methods, a goal of the book is to develop a unified treatment on optimal and adaptive signal processing algorithms that mirror those of Wiener and Widrow, extensively presented in the
linear filtering literature. The current manuscript has evolved over several years while the author regularly taught a nonlinear signal processing course in the graduate program at the University of
Delaware. The book serves an international market and is suitable for advanced undergraduates or graduate students in engineering and the sciences, and practicing engineers and researchers. The book
contains many unique features, including:

- Numerous problems at the end of each chapter.
- Numerous examples and case studies provided throughout the book in a wide range of applications.
- A set of 60+ MATLAB software m-files allowing the reader to quickly design and apply any of the nonlinear signal processing algorithms described in the book to an application of interest. An accompanying MATLAB software guide.
- A companion PowerPoint presentation with more than 500 slides available for instruction.
The chapters in the book are grouped into three parts. Part I provides the necessary theoretical tools that are used later in the text. These include a review of nonGaussian models emphasizing the class of generalized Gaussian distributions and the class of stable distributions. The basic principles of order statistics are covered, which are of essence in the study of weighted medians. Part I closes with a chapter on maximum likelihood and robust estimation principles, which are used later in the book as the foundation upon which signal processing methods are built. Part II comprises three chapters focusing on signal processing tools developed under the generalized Gaussian model, with an emphasis on the Laplacian model. Weighted medians, L-filters, and several generalizations are studied at length.

Part III encompasses signal processing methods that emerge from parameter estimation within the stable distribution framework. The chapter sequence is thus assembled in a self-contained and unified framework of study.
Acknowledgments

The material in this textbook has benefited greatly from my interaction with many bright students at the University of Delaware. I am particularly indebted to my previous graduate students Juan
Gonzalez, Sebastian Hoyos, Sudhakar Kalluri, Yinbo Li, David Griffith, Yeong-Taeg Kim, Edwin Heredia, Alex Flaig, Zhi Zhou, Dan Lau, Karen Bloch, Russ Foster, Russ Hardie, Tim Hall, and Michael
McLoughlin. They have all contributed significantly to material throughout the book. I am very grateful to Jan Bacca and Dr. Jose-Luis Paredes for their technical and software contributions. They
have generated all of the MATLAB routines included in the book as well as the accompanying software guide. Jan Bacca has provided the much needed electronic publishing support to complete this
project. I am particularly indebted to Dr. Neal C. Gallagher of the University of Central Florida for being a lifelong mentor, supporter, and friend. It has been a pleasure working with the
Non-linear Signal Processing Board: Dr. Hans Burkhardt of the Albert-Ludwigs-University, Freiburg Germany, Dr. Ed Coyle of Purdue University, Dr. Moncef Gabbouj of the Tampere University of
Technology, Dr. Murat Kunt of the Swiss Federal Institute of Technology, Dr. Steve Marshall of the University of Strathclyde, Dr. John Mathews of the University of Utah, Dr. Yrjo Neuvo of Nokia, Dr.
Ioannis Pitas of the Aristotle University of Thessaloniki, Dr. Jean Serra of the Center of Mathematical Morphology, Dr. Giovanni Sicuranza of the University of Trieste, Dr. Akira Taguchi of the
Musashi Institute of Technology, Dr. Anastasios N. Venetsanopoulos of the University of Toronto, and Dr. Pao-Ta Yu of the National Chung Cheng University. Their contribution in the organization
of the international workshop series in this field has provided the vigor required for academic excellence. My interactions with a number of outstanding colleagues has deepened my understanding of
nonlinear signal processing. Many of these collaborators have made important contributions to the theory and practice of nonlinear signal processing. I am most grateful to Dr. Ken Barner, Dr. Charles
Boncelet, Dr. Xiang Xia, and Dr. Peter Warter, all from the University of Delaware, Dr. Jaakko Astola, Dr. Karen Egiazarian, Dr. Olli Yli-Harja, and Dr. I. Tabus, all from the Tampere University of
Technology, Dr. Visa Koivunen of the Helsinki University of Technology, Dr. Saleem Kassam of the University of Pennsylvania, Dr. Sanjit K. Mitra of the University of California, Santa Barbara, Dr.
David Munson of the University of Michigan, Dr. Herbert David of Iowa State University, Dr. Kotropoulos of the University of Thessaloniki, Dr. Yrjo Neuvo of Nokia, Dr. Alan Bovik and Dr. Ilya
Shmulevich, both of the University of Texas, Dr. Francesco Palmieri of the University of Naples, Dr. Patrick Fitch of the Lawrence Livermore National Laboratories, Dr. Thomas Nodes of TRW, Dr. Brint
Cooper of Johns Hopkins University, Dr. Petros Maragos of the University of Athens, and Dr. Y. H. Lee of KAIST University. I would like to express my appreciation for the research support I received
from the National Science Foundation and the Army Research Laboratory, under the Federated Laboratories and Collaborative Technical Alliance programs, for the many years of research that led to
this textbook. I am particularly grateful to Dr. John Cozzens and Dr. Taieb Znati, both from NSF, and Dr. Brian Sadler, Dr. Ananthram Swami, and Jay Gowens, all from ARL. I am also grateful to the
Charles Black Evans Endowment that supports my current Distinguished Professor appointment at the University of Delaware. I would like to thank my publisher George Telecki and the staff at Wiley for
their dedicated work during this project, and Heather King for establishing the first link to Wiley.

G. R. A.
Contents

Acronyms

1 Introduction
1.1 NonGaussian Random Processes
1.1.1 Generalized Gaussian Distributions and Weighted Medians
1.1.2 Stable Distributions and Weighted Myriads
1.2 Statistical Foundations
1.3 The Filtering Problem
1.3.1 Moment Theory

Part I Statistical Foundations

2 NonGaussian Models
2.1 Generalized Gaussian Distributions
2.2 Stable Distributions
2.2.1 Definitions
2.2.2 Symmetric Stable Distributions
2.2.3 Generalized Central Limit Theorem
2.2.4 Simulation of Stable Sequences
2.3 Lower-Order Moments
2.3.1 Fractional Lower-Order Moments
2.3.2 Zero-Order Statistics
2.3.3 Parameter Estimation of Stable Distributions
Problems

3 Order Statistics
3.1 Distributions of Order Statistics
3.2 Moments of Order Statistics
3.2.1 Order Statistics from Uniform Distributions
3.2.2 Recurrence Relations
3.3 Order Statistics Containing Outliers
3.4 Joint Statistics of Ordered and NonOrdered Samples
Problems

4 Statistical Foundations of Filtering
4.1 Properties of Estimators
4.2 Maximum Likelihood Estimation
4.3 Robust Estimation
Problems

Part II Signal Processing with Order Statistics

5 Median and Weighted Median Smoothers
5.1 Running Median Smoothers
5.1.1 Statistical Properties
5.1.2 Root Signals (Fixed Points)
5.2 Weighted Median Smoothers
5.2.1 The Center-Weighted Median Smoother
5.2.2 Permutation-Weighted Median Smoothers
5.3 Threshold Decomposition Representation
5.3.1 Stack Smoothers
5.4 Weighted Medians in Least Absolute Deviation (LAD) Regression
5.4.1 Foundation and Cost Functions
5.4.2 LAD Regression with Weighted Medians
5.4.3 Simulation
Problems

6 Weighted Median Filters
6.1 Weighted Median Filters with Real-Valued Weights
6.1.1 Permutation-Weighted Median Filters
6.2 Spectral Design of Weighted Median Filters
6.2.1 Median Smoothers and Sample Selection Probabilities
6.2.2 SSPs for Weighted Median Smoothers
6.2.3 Synthesis of WM Smoothers
6.2.4 General Iterative Solution
6.2.5 Spectral Design of Weighted Median Filters Admitting Real-Valued Weights
6.3 The Optimal Weighted Median Filtering Problem
6.3.1 Threshold Decomposition for Real-Valued Signals
6.3.2 The Least Mean Absolute (LMA) Algorithm
6.4 Recursive Weighted Median Filters
6.4.1 Threshold Decomposition Representation of Recursive WM Filters
6.4.2 Optimal Recursive Weighted Median Filtering
6.5 Mirrored Threshold Decomposition and Stack Filters
6.5.1 Stack Filters
6.5.2 Stack Filter Representation of Recursive WM Filters
6.6 Complex-Valued Weighted Median Filters
6.6.1 Phase-Coupled Complex WM Filter
6.6.2 Marginal Phase-Coupled Complex WM Filter
6.6.3 Complex Threshold Decomposition
6.6.4 Optimal Marginal Phase-Coupled Complex WM
6.6.5 Spectral Design of Complex-Valued Weighted Medians
6.7 Weighted Median Filters for Multichannel Signals
6.7.1 Marginal WM Filter
6.7.2 Vector WM Filter
6.7.3 Weighted Multichannel Median Filtering Structures
6.7.4 Filter Optimization
Problems

7 Linear Combination of Order Statistics
7.1 L-Estimates of Location
7.2 L-Smoothers
7.3 Lℓ-Filters
7.3.1 Design and Optimization of Lℓ-Filters
7.4 Lℓ Permutation Filters
7.5 Hybrid Median/Linear FIR Filters
7.5.1 Median and FIR Affinity Trimming
7.6 Linear Combination of Weighted Medians
7.6.1 LCWM Filters
7.6.2 Design of LCWM Filters
7.6.3 Symmetric LCWM Filters
Problems

Part III Signal Processing with the Stable Model

8 Myriad Smoothers
8.1 FLOM Smoothers
8.2 Running Myriad Smoothers
8.3 Optimality of the Sample Myriad
8.4 Weighted Myriad Smoothers
8.5 Fast Weighted Myriad Computation
8.6 Weighted Myriad Smoother Design
8.6.1 Center-Weighted Myriads for Image Denoising
8.6.2 Myriadization
Problems

9 Weighted Myriad Filters
9.1 Weighted Myriad Filters with Real-Valued Weights
9.2 Fast Real-Valued Weighted Myriad Computation
9.3 Weighted Myriad Filter Design
9.3.1 Myriadization
9.3.2 Optimization
Problems

Appendix A Software Guide
Acronyms

ADSL    Asymmetric digital subscriber line
BIBO    Bounded-input bounded-output
BR      Barrodale and Roberts' (algorithm)
CMA     Constant modulus algorithm
CWM     Center-weighted median
CWMy    Center-weighted myriad
DWMTM   Double window modified trimmed mean
DWD     Discrete Wigner distribution
FIR     Finite impulse response
FLOS    Fractional lower-order statistics
FLOM    Fractional lower-order moments
HOS     Higher-order statistics
i.i.d.  Independent and identically distributed
IIR     Infinite impulse response
LCWM    Linear combination of weighted medians
LS      Least squares
LAD     Least absolute deviation
LLS     Logarithmic least squares
LMS     Least mean square
LMA     Least mean absolute
LP      Linearity parameter
MSE     Mean square error
ML      Maximum likelihood
MAE     Mean absolute error
MTM     Modified trimmed mean
PAM     Pulse amplitude modulation
pdf     Probability density function
PLL     Phase lock loop
PSNR    Peak signal-to-noise ratio
PBF     Positive Boolean function
RTT     Round trip time
SαS     Symmetric α-stable
SSP     Sample selection probabilities
TCP/IP  Transmission control protocol/Internet protocol
TD      Threshold decomposition
WM      Weighted median
WMM     Weighted multichannel median
WD      Wigner distribution
ZOS     Zero-order statistics
1 Introduction
Signal processing is a discipline embodying a large set of methods for the representation, analysis, transmission, and restoration of information-bearing signals from various sources. As such, signal processing revolves around the mathematical manipulation of signals. Perhaps the most fundamental form of signal manipulation is filtering, which describes a rule or procedure for processing a signal with the goal of separating or attenuating a desired component of an observed signal from noise, interference, or simply from other components of the same signal. In many applications, such as communications, we may wish to remove noise or interference from the received signal. If the received signal was in some fashion distorted by the channel, one of the objectives of the receiver is to compensate for these disturbances. Digital picture processing is another application where we may wish to enhance or extract certain image features of interest. Perhaps image edges or regions of the image composed of a particular texture are most useful to the user. In all of these examples, the signal processing task calls for separating a desired component of the observed waveform from any noise, interference, or undesired component. This segregation is often done in frequency, but that is only one possibility. Filtering can thus be considered as a system with arbitrary input and output signals, and as such the filtering problem is found in a wide range of disciplines including economics, engineering, and biology. A classic filtering example, depicted in Figure 1.1, is that of bandpass filtering a frequency-rich chirp signal. The frequency components of the chirp within a selected band can be extracted through a number of linear filtering methods.
Figure 1.1b shows the filtered chirp when a linear 120-tap finite impulse response (FIR) filter is used. This figure clearly shows that linear methods in signal processing can indeed be markedly effective.

Figure 1.1 Frequency selective filtering: (a) chirp signal, (b) linear FIR filter output.

In fact, linear signal processing enjoys the rich theory of linear systems, and in many applications linear signal processing algorithms prove to be optimal. Most importantly,
linear filters are inherently simple to implement, perhaps the dominant reason for their widespread use. Although linear filters will continue to play an important role in signal processing,
nonlinear filters are emerging as viable alternative solutions. The major forces that motivate the implementation of nonlinear signal-processing algorithms are the growth of increasingly challenging
applications and the development of more powerful computers. Emerging multimedia and communications applications are becoming significantly more complex. Consequently, they require the use of
increasingly sophisticated signal-processing algorithms. At the same time, the ongoing advances of computers and digital signal processors, in terms of speed, size, and cost, make the implementation of sophisticated algorithms practical and cost effective.
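The book's accompanying software is MATLAB-based; as a rough illustration of the frequency-selective filtering just described, the following Python/NumPy sketch builds a 120-tap windowed-sinc bandpass filter and applies it to a synthetic chirp. The band edges, window choice, and chirp sweep are illustrative assumptions, not the book's actual example.

```python
import numpy as np

def bandpass_fir(num_taps, f_lo, f_hi):
    """Windowed-sinc bandpass FIR design. f_lo, f_hi are normalized
    frequencies in cycles/sample (0 < f_lo < f_hi < 0.5)."""
    n = np.arange(num_taps) - (num_taps - 1) / 2.0
    # Difference of two low-pass windowed-sinc responses gives a bandpass
    h = 2 * f_hi * np.sinc(2 * f_hi * n) - 2 * f_lo * np.sinc(2 * f_lo * n)
    return h * np.hamming(num_taps)

# Chirp whose instantaneous frequency sweeps from 0 to 0.4 cycles/sample
N = 2048
t = np.arange(N)
inst_freq = 0.4 * t / N
phase = 2 * np.pi * np.cumsum(inst_freq)
chirp = np.cos(phase)

h = bandpass_fir(120, 0.10, 0.20)            # 120-tap filter, band 0.10-0.20
filtered = np.convolve(chirp, h, mode="same")
```

Only the portion of the sweep that passes through the chosen band survives in `filtered`; its spectrum is concentrated in [0.10, 0.20] cycles/sample.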
Why Nonlinear Signal Processing?

Nonlinear signal processing offers advantages in applications in which the underlying random processes are nonGaussian. Practice has shown that nonGaussian processes
do emerge in a broad array of applications, including wireless communications, teletraffic, hydrology, geology, economics, and imaging. The common element in these applications, and many others, is
that the underlying processes of interest tend to produce more large-magnitude (outlier or impulsive) observations than those that would be predicted by a Gaussian model. That is, the underlying
signal density functions have tails that decay at rates lower than the tails of a Gaussian distribution. As a result, linear methods, which obey the superposition principle, suffer from serious
degradation upon the arrival of samples corrupted with high-amplitude noise. Nonlinear methods, on the other hand, exploit the statistical characteristics of the noise to overcome many of the
limitations of the traditional practices in signal processing.
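A quick way to see this degradation, and the robustness that nonlinear methods buy, is to smooth a signal corrupted by heavy-tailed noise with a linear moving average and with a running median. This is a Python sketch: the Student-t noise model, the window length, and the plain (unweighted) median, standing in for the filters developed later in the book, are all illustrative assumptions.

```python
import numpy as np

def running_median(x, window=7):
    """Simple running (unweighted) median smoother with edge replication."""
    half = window // 2
    padded = np.pad(x, half, mode="edge")
    return np.array([np.median(padded[i:i + window]) for i in range(len(x))])

rng = np.random.default_rng(0)
n = 1000
signal = np.sin(2 * np.pi * 0.01 * np.arange(n))

# Heavy-tailed (impulsive) noise: mostly small, occasionally huge samples
noise = rng.standard_t(df=1.2, size=n)
observed = signal + 0.1 * noise

# Linear smoother (moving average) versus the median smoother
linear_out = np.convolve(observed, np.ones(7) / 7, mode="same")
median_out = running_median(observed, window=7)

mae_linear = np.mean(np.abs(linear_out - signal))
mae_median = np.mean(np.abs(median_out - signal))
```

The moving average smears each impulse across its whole window, while the median simply discards isolated impulses, so the median smoother's error is markedly lower on this kind of data.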
Figure 1.2 Frequency selective filtering in nonGaussian noise: (a) linear FIR filter output, (b) nonlinear filter.

To illustrate the above, consider again the classic bandpass filtering example. This time, however, the chirp signal under analysis has been degraded by nonGaussian noise during the signal acquisition stage. Due to the nonGaussian noise, the linear FIR filter output is severely degraded, as depicted in Figure 1.2a. The advantages of an equivalent nonlinear filter are illustrated in Figure 1.2b, where the frequency components of the chirp within the selected band have been extracted, and the ringing artifacts and the noise have been suppressed.¹

Internet traffic provides another example of signals arising in practice that are best modeled by nonGaussian models for which
nonlinear signal processing offers advantages. Figure 1.3 depicts several round trip time (RTT) delay series, each of which measures the time that a TCP/IP packet takes to travel between two network hosts. An RTT measures the time difference between the time when a packet is sent and the time when an acknowledgment comes back to the sender. RTTs are important in retransmission transport protocols used by TCP/IP, where reliability of communications is accomplished through packet reception acknowledgments and, when necessary, packet retransmissions. In the TCP/IP protocol, the retransmission of packets is based on the prediction of future RTTs. Figure 1.3 depicts the nonstationary characteristics of RTT processes as their mean varies dramatically with the network load. These processes are also nonGaussian, indicating that nonlinear prediction of RTTs can lead to more efficient communication protocols. Internet traffic exhibits nonGaussian statistics, not only on the RTT delay data
mechanisms, but also on the data throughput. For example, the traffic data shown in Figure 1.4 corresponds to actual Gigabit (1000 Mb/s) Ethernet traffic measured on a web server of the ECE
Department at the University of Delaware. It was measured using the TCPDUMP program, which is part of the Sun Solaris operating system. To
¹The example uses a weighted median filter that is developed in later sections.
Figure 1.3 RTT time series measured in seconds between a host at the University of Delaware and hosts in (a) Australia (12:18 AM - 3:53 AM); (b) Sydney, Australia (12:30 AM - 4:03 AM); (c) Japan (2:52 PM - 6:33 PM); (d) London, UK (10:00 AM - 1:35 PM). All plots shown in 1-minute interval samples.
generate this trace, all packets coming to the server were captured and time-stamped during several hours. The figure considers byte counts (size of the transferred data) measured on 10 ms intervals, which is shown in the top plot of Figure 1.4. The overall length of the recordings is approximately four hours (precisely 14,000 s). The other plots in Figure 1.4 represent the "aggregated" data obtained by averaging the data counts on increasing time intervals. The notable fact in Figure 1.4 is that the aggregation does not smooth out the data. The aggregated traffic still appears bursty even in the bottom plot, despite the fact that each point in it is the average of one thousand successive values of the series shown in the top plot of Figure 1.4. Similar behavior in data traffic has been observed in numerous experimental setups, including Cappé et al. (2002) [42], Beran et al. (1995) [31], Leland et al. (1994) [127], and Paxson and Floyd (1995) [159]. Another example is found
in high-speed data links over telephone wires, such as Asymmetric Digital Subscriber Lines (ADSL), where noise in the communications channel exhibits impulsive characteristics. In these systems,
telephone twisted pairs
Figure 1.4 Byte counts measured over 14,000 seconds in a web server of the ECE Department at the University of Delaware viewed through different aggregation intervals: from top to bottom, 10 ms, 100 ms, 1 s, 10 s.
are unshielded, and are thus susceptible to large electromagnetic interference. Potential sources of impulsive interference include light switching and home appliances, as well as natural weather
phenomena. Severe interference is also generated by cross talk among multiple twisted pairs making up a telephone cable. The interference is inherently impulsive and nonstationary leading to poor
service reliability. The impact of impulsive noise on ADSL systems depends on the impulse energy, duration, interarrival time, and spectral characteristics. Isolated impulses can reach magnitudes
significantly larger than either additive white noise or crosstalk interference. A number of models to characterize ADSL interference have been proposed [139]. Current ADSL systems are designed
conservatively under the assumption of a worst-case scenario due to severe nonstationary and nonGaussian channel interference [204]. Figure 1.5 shows three ADSL noise signals measured at a customer's premises. These signals exhibit a wide range of spectral characteristics, burstiness, and levels of impulsiveness. In addition to channel coding, linear filtering is used to combat ADSL channel interference [204]. Figure 1.5a-c depicts the use of linear and nonlinear filtering. These figures depict the improvement attained by nonlinear filtering in removing the noise and interference.
Figure 1.5 (a-c) Different noise and interference characteristics in ADSL lines. A linear and a nonlinear filter (recursive median filter) are used to overcome the channel limitations, both with the
same window size (adapted from [204]).
The last example (Fig. 1.6) visually illustrates the advantages of nonlinear signal processing. This figure depicts an enlarged section of an image that has been JPEG compressed for storage in a Web site. Since compression reduces and often eliminates the high-frequency components, compressed images contain edge artifacts and tend to look blurred. As a result, images found on the Internet are often sharpened. Figure 1.6b shows the output of a traditional sharpening algorithm equipped with linear FIR filters. The amplification of the compression artifacts is clearly seen. Figure 1.6c depicts the sharpening output when nonlinear filters are used. Nonlinear sharpeners avoid noise and artifact amplification and are as effective as linear sharpeners in highlighting the signal edges.
The examples above suggest that significant improvements in performance can be achieved by nonlinear methods of signal processing. Unlike linear signal processing, however, nonlinear signal processing lacks a unified and universal set of tools for analysis and design. Hundreds of nonlinear signal processing algorithms have been proposed [21, 160]. While nonlinear signal processing is a dynamic, rapidly growing field, a large class of nonlinear signal processing algorithms can be studied in a unified framework. Since signal processing focuses on the analysis and transformation of signals, nonlinear filtering emerges as the fundamental building block of nonlinear signal processing. This book develops the fundamental signal-processing tools that arise when considering the filtering of nonGaussian, rather than Gaussian, random processes. By concentrating on just two nonGaussian models, a large set of tools is developed that notably encompasses a significant portion of the nonlinear signal-processing tools proposed in the literature over the past several decades.
In statistical signal processing, signals are modeled as random processes, and many signal-processing tasks reduce to the proper statistical analysis of the observed signals. Selecting the appropriate model for the application at hand is of fundamental importance. The model, in turn, determines the signal processing approach. Classical linear signal-processing methods rely on the popular Gaussian assumption. The Gaussian model appears naturally in many applications as a result of the Central Limit Theorem, first proved by De Moivre (1733) [69].

THEOREM 1.1 (CENTRAL LIMIT THEOREM) Let X1, X2, . . . be a sequence of i.i.d. random variables with zero mean and variance σ². Then, as N → ∞, the normalized sum

Z_N = (X1 + X2 + · · · + XN) / √N

converges in distribution to a zero-mean Gaussian random variable with the same variance σ² as the Xi.

Conceptually, the central limit theorem explains the Gaussian nature of processes generated from the
superposition of many small and independent effects. For example, thermal noise is generated as the superposition of a large number of random independent interactions at the molecular level. The Central Limit Theorem theoretically justifies the appearance of Gaussian statistics in real life.

Figure 1.6 (a) Enlarged section of a JPEG compressed image, (b) output of unsharp masking using FIR filters, (c) and (d) outputs of median sharpeners.

However, in a wide range of applications, the Gaussian model does not produce a good fit, which, at first, may seem to contradict the principles behind the Central
Limit Theorem. A careful review of the conditions of the Central Limit Theorem indicates that, in order for this theorem to be valid, the variance of the superimposed random variables must be finite. If the random variables possess infinite variance, it can be shown that the series in the Central Limit Theorem converges to a nonGaussian impulsive variable [65, 207]. This important generalization of the Central Limit Theorem explains the apparent contradictions of its "traditional" version, as well as the presence of nonGaussian, infinite-variance processes in practical problems. In the same way that the Gaussian model owes most of its strength to the Central Limit Theorem, the Generalized Central Limit Theorem constitutes a strong theoretical argument for the development of models that capture the impulsive nature of these signals, and of signal processing tools that are adequate in these nonGaussian environments. Perhaps the simplest approach to address
the effects of nonGaussian signals is to detect outliers that may be present in the data, reject these heuristically, and subsequently use classical signal-processing algorithms. This approach,
however, has many disadvantages. First, the detection of outliers is not simple, particularly when these are bundled together. Second, the efficiency of these methods is not optimal and is generally
difficult to measure since the methods are based on heuristics. The approach followed in this book is that of exploiting the rich theories of robust statistics and non-Gaussian stochastic processes,
such that a link is established between them leading to signal processing with solid theoretical foundations. This book considers two model families that encompass a large class of random processes.
These models described by their distributions allow the rate of tail decay to be varied: the generalized Gaussian distribution and the class of stable distributions. The tail of a distribution can be
measured by the mass of the tail, or order, defined by Pr(X > x) as x → ∞. Both distribution families are general in that they encompass a wide array of distributions with different tail
characteristics. Additionally, both the generalized Gaussian and stable distributions contain important special cases that lead directly to classes of nonlinear filters that are tractable and optimal
for signals with heavy tail distributions.
Generalized Gaussian Distributions and Weighted Medians
One approach to modeling the presence of outliers is to start with the Gaussian distribution and allow the exponential rate of tail decay to be a free parameter. This results directly in the
generalized Gaussian density function. Of special interest is the case of first order exponential decay, which yields the double exponential, or Laplacian, distribution. Optimal estimators for the
generalized Gaussian distribution take on a particularly simple realization in the Laplacian case. It turns out that weighted median filters are optimal for samples obeying Laplacian statistics, much
like linear filters are optimal for Gaussian processes. In general, weighted median filters are more efficient than linear filters in impulsive environments, which can be directly attributed to the
heavy-tailed characteristic of the Laplacian distribution. Part II of the book uncovers signal processing methods using median-like operations, or order statistics.
Stable Distributions and Weighted Myriads
Although the class of generalized Gaussian distributions includes a spectrum of impulsive processes, these are all of exponential tails. It turns out that a wide variety of processes exhibit more
impulsive statistics that are characterized with algebraic tailed distributions. These impulsive processes found in signal processing applications arise as the superposition of many small independent
effects. For example, radar clutter is the sum of many signal reflections from an irregular surface; the transmitters in a multiuser communication system generate relatively small independent
signals, the sum of which represents the ensemble at a user’s receiver; rotating electric machinery generates many impulses caused by contact between distinct parts of the machine; and standard
atmospheric noise is known to be the superposition of many electrical discharges caused by lightning activity around the Earth. The theoretical justification for using stable distribution models lies
in the Generalized Central Limit Theorem which includes the well known “traditional” Central Limit Theorem as a special case. Informally: A random variable X is stable if it can be the limit of a
normalized sum of i.i.d. random variables.
The generalized theorem states that if the sum of i.i.d. random variables with or without finite variance converges to a distribution, the limit distribution must belong to the family of stable laws
[149, 207]. Thus, non-Gaussian processes can emerge in practical applications as sums of random variables in the same way as Gaussian processes. Stable distributions include two special cases of note:
the standard Gaussian distribution and the Cauchy distribution. The Cauchy distribution is particularly important as its tails decay algebraically. Thus, the Cauchy distribution can be used to model
very impulsive processes. It turns out that for a wide range of stable-distributed signals, the so-called weighted myriad filters are optimal. Thus, weighted myriad filters emerging from the stable
model are the counterparts to linear and median filters related to the Gaussian and Laplacian environments, respectively. Part III of the book develops signal-processing methods derived from stable distributions.

1.2 STATISTICAL FOUNDATIONS

Estimation theory is a branch of statistics concerned with the problem of deriving information about the properties of random processes from a set of observed samples. As such, estimation theory lies at the heart of statistical signal processing. Given an
observation waveform {X(n)}, one goal is to extract information that is embedded within the observed signal. It turns out that the embedded information can often be modeled parametrically. That is, some parameter β of the signal represents the information of interest. This parameter may be the local mean, the variance, the local range, or some other parameter associated with the received
waveform. Of course, finding a good parametric model is critical.
Location Estimation

Because observed signals are inherently random, they are described by a probability density function (pdf), f(x_1, x_2, ..., x_N). The pdf may be parameterized by an unknown parameter β. The parameter β thus defines a class of pdfs where each member is defined by a particular value of β. As an example, if our signal consists of a single point (N = 1) and β is the mean, the pdf of the data under the Gaussian model is

f(x_1) = (1/√(2πσ²)) exp(−(x_1 − β)²/(2σ²)),

which is shown in Figure 1.7 for various values of β. Since the value of β affects the probability of X_1, intuitively we should be able to infer the value of β from the observed value of X_1. For example, if the observed value of X_1 is a large positive number, the parameter β is more likely to be equal to β_1 than to β_2 in Figure 1.7. Notice that β determines the location of the pdf. As such, β is referred to as the location parameter. Rules that infer the value of β from sample realizations of the data are known as location estimators. Although a number of parameters can be associated with a set of data, location is a parameter that plays a key role in the design of filtering algorithms. The filtering structures to be defined in later chapters have their roots in location estimation.

Figure 1.7 Estimation of parameter β based on the observation X_1.
Running Smoothers

Location estimation and filtering are intimately related. The running mean is the simplest form of filtering and is most useful in illustrating this relationship. Given the data sequence {..., X(n−1), X(n), X(n+1), ...}, the running mean is defined as

Y(n) = MEAN(X(n − N), X(n − N + 1), ..., X(n + N)).
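The running mean, and the running median introduced just below, can be sketched in a few lines. This is a minimal illustration; the edge handling (windows truncated at the signal boundaries) is an implementation choice, not something prescribed by the text.

```python
import numpy as np

def running_mean(x, N):
    """Y(n) = MEAN(X(n-N), ..., X(n+N)); windows are truncated at the edges."""
    x = np.asarray(x, dtype=float)
    return np.array([x[max(0, n - N):n + N + 1].mean() for n in range(len(x))])

def running_median(x, N):
    """Tukey's running median: the sample mean is replaced by the sample median."""
    x = np.asarray(x, dtype=float)
    return np.array([np.median(x[max(0, n - N):n + N + 1]) for n in range(len(x))])

# A flat signal with a single impulse: the running median removes the
# outlier entirely, while the running mean smears it across the window.
x = [0.0, 0.0, 0.0, 100.0, 0.0, 0.0, 0.0]
print(running_mean(x, 1))    # the impulse leaks into three outputs
print(running_median(x, 1))  # all zeros
```

The contrast previews the robustness argument developed throughout the book: the mean is the optimal location estimate under Gaussian noise but is dominated by a single impulse, while the median rejects it.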
At a given point n, the output is the average of the samples within a window centered at n. The output at n + 1 is the average of the samples within the window centered at n + 1, and so on. Thus, at each point n, the running mean computes a location estimate, namely the sample mean. If the underlying signals are not Gaussian, it would be reasonable to replace the mean by a more appropriate location estimator. Tukey (1974) [189], for instance, introduced the running median as a robust alternative to the running mean. Although running smoothers are effective in removing noise, more powerful signal processing is needed in general to adequately address the tasks at hand. To this end, the statistical foundation provided by running smoothers can be extended to define optimal filtering structures.
1.3 THE FILTERING PROBLEM

Filtering constitutes a system with arbitrary input and output signals, and consequently the filtering problem is found in a wide range of disciplines. Although filtering
theory encompasses continuous-time as well as discrete-time signals, the availability of digital computer processors is causing discrete-time signal representation to become the preferred method of
analysis and implementation. In this book, we thus consider signals as being defined at discrete moments in time where we assume that the sampling interval is fixed and small enough to satisfy the
Nyquist sampling criterion. Denote a random sequence as {X} and let X(n) be an N-element, real-valued observation vector

X(n) = [X(n), X(n − 1), ..., X(n − N + 1)]^T = [X_1(n), X_2(n), ..., X_N(n)]^T,

where X_i(n) = X(n − i + 1) and where T denotes the transposition operator. R denotes the real line. Further, assume that the observation vector X(n) is statistically related to some desired
signal denoted as D(n). The filtering problem is then formulated in terms of joint process estimation as shown in Figure 1.8. The observed vector X(n) is formed by the elements of a shifting window; the output of the filter is the estimate D̂(n) of the desired signal D(n). The optimal filtering problem thus reduces to minimizing the cost function associated with the error e(n)
under a given criterion, such as the mean square error (MSE). Under Gaussian statistics, the estimation framework becomes linear and the filter structure reduces to that of FIR linear filters. The
linear filter output is defined as

Y(n) = W_1 X_1(n) + W_2 X_2(n) + ... + W_N X_N(n),

Figure 1.8 Filtering as a joint process estimation.

where the W_i are real-valued weights assigned to each input sample. Under the Laplacian model, it will be shown that the median becomes the estimate of choice and weighted medians become the filtering structure. The output of a weighted median is defined as
Y(n) = MEDIAN(W_1 ◊ X_1(n), W_2 ◊ X_2(n), ..., W_N ◊ X_N(n)),
where the operation W_i ◊ X_i(n) replicates the sample X_i(n), W_i times. Weighting in median filters thus takes on a very different meaning than traditional weighting in linear filters. For stable processes, it will be derived shortly that the weighted myriad filter emerges as the ideal structure. In this case the filter output is defined as

Y(n) = MYRIAD(K: W_1 ◊ X_1, W_2 ◊ X_2, ..., W_N ◊ X_N),    (1.7)

where W_i ◊ X_i(n) represents a nonlinear weighting operation to be described later, and K in (1.7) is a free tunable parameter that will play an important role in weighted myriad filtering. It is the flexibility provided by K that makes the myriad filter a more powerful filtering framework than either the linear FIR or the weighted median filter frameworks.
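The replication weighting used by the weighted median can be sketched directly. This is a minimal illustration for positive integer weights only; the general real-valued weight case is a more elaborate construction than shown here.

```python
import numpy as np

def weighted_median(samples, weights):
    """Weighted median for positive integer weights: the operation W_i o X_i
    replicates the sample X_i a total of W_i times, and the ordinary median
    of the expanded sample set is returned."""
    expanded = []
    for x, w in zip(samples, weights):
        expanded.extend([x] * int(w))  # replicate X_i, W_i times
    return float(np.median(expanded))

# Unit weights reduce to the ordinary sample median:
print(weighted_median([10.0, 2.0, 7.0], [1, 1, 1]))  # 7.0
# A weight of 3 on the first sample pulls the output toward it:
print(weighted_median([10.0, 2.0, 7.0], [3, 1, 1]))  # 10.0
```

Note how a larger weight increases a sample's influence by multiplying its votes in the ordering, rather than scaling its amplitude as a linear weight would.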
1.3.1 Moment Theory

Historically, signal processing has relied on second-order moments, as these are intimately related to Gaussian models. The first-order moment

β_X = E{X(n)}

and the second-order moment characterization provided by the autocorrelation of stationary processes

R_X(k) = E{X(n) X(n + k)}
are deeply etched into traditional signal processing practice. As will be shown later, second-order descriptions do not provide adequate information to process non-Gaussian signals. One popular approach is to rely on higher-order statistics that exploit moments of order greater than two. If they exist, higher-order statistics provide information that is inaccessible to second-order moments [148]. Unfortunately, higher-order statistics become less reliable in impulsive environments, to the extent that often they cease to exist. The inadequacy of second- or higher-order moments leads to
the introduction of alternative moment characterizations of impulsive processes. One approach is to use fractional lower-order statistics (FLOS) consisting of moments of order less than two [136, 149]. Fractional lower-order statistics are not the only choice. Much like the Gaussian model naturally leads to second-order based methods, selecting a Laplacian model will lead to a different
natural moment characterization. Likewise, adopting the stable laws will lead to a different, yet natural, moment characterization.
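The appeal of fractional lower-order moments can be made concrete with a quick numerical sketch. Everything here is illustrative (the order p = 0.5, the Cauchy test data, and the sample size are choices made for the demonstration, not taken from the text): for Cauchy data, E{|X|^p} is finite only for p < 1, so a fractional moment remains usable where the second moment does not.

```python
import numpy as np

rng = np.random.default_rng(2)

def flom(x, p):
    """Sample fractional lower-order moment, an estimate of E{|X|^p}, p < 2."""
    return float(np.mean(np.abs(np.asarray(x)) ** p))

# Standard Cauchy samples via the inverse-CDF method.
x = np.tan(np.pi * (rng.uniform(size=100_000) - 0.5))

print(flom(x, 0.5))  # settles near a finite value across realizations
print(flom(x, 2.0))  # empirical second moment: dominated by a few outliers
```

The p = 0.5 estimate stays on the order of one, while the empirical second moment is orders of magnitude larger and changes wildly from realization to realization.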
Part I
Statistical Foundations
2 Non-Gaussian Models

The Gaussian distribution model is widely accepted in signal processing practice. Theoretically justified by the Central Limit Theorem, the Gaussian model has attained a
privileged place in statistics and engineering. There are, however, applications where the underlying random processes do not follow Gaussian statistics. Often, the processes encountered in practice
are impulsive in nature and are not well described by conventional Gaussian distributions. Traditionally, the design emphasis has often relied on a continuity principle: optimal processing at the ideal Gaussian model should be almost optimal nearby. Unfortunately, this reliance on continuity is unfounded, and in many cases one finds that optimum signal-processing methods can suffer drastic performance degradations, even for small deviations from the nominal assumptions. As an example, synchronization, detection, and equalization, basic in all communication systems, fail in impulsive
noise environments whenever linear processing is used. In order to model non-Gaussian processes, a wide variety of distributions with heavier-than-Gaussian tails have been proposed as viable alternatives. This chapter reviews several of these approaches and focuses on two distribution families, namely the class of generalized Gaussian distributions and the class of stable distributions. These two distribution families are parsimonious in their characterization, leading to a balanced trade-off between fidelity and complexity. On the one hand, fidelity leads to more efficient signal-processing algorithms, while the complexity issue stands for simpler models from which more tractable algorithms can be derived. The Laplacian distribution, a special case of the generalized Gaussian distribution, lays the statistical foundation for a large class of signal-processing algorithms based on the sample median. Likewise, signal processing based on the so-called sample myriad emerges from the statistical foundation laid by stable distributions.
The Central Limit Theorem provides a theoretical justification for the appearance of Gaussian processes in nature. Intimately related to the Gaussian model are linear estimation methods and, to a
large extent, a large section of signal-processing algorithms based on operations satisfying the linearity property. While the Central Limit Theorem has provided the key to understanding the
interaction of a large number of random independent events, it has also provided the theoretical burden favoring the use of linear methods, even in circumstances where the nature of the underlying signals is decidedly non-Gaussian. One approach used in the modeling of non-Gaussian processes is to start from the Gaussian model and slightly modify it to account for the appearance of clearly
inappropriate samples or outliers. The Gaussian mixture or contaminated Gaussian model follows this approach, where the ε-contaminated density function takes on the form

f(x) = (1 − ε) f_n(x) + ε f_c(x),

where f_n(x) is the nominal Gaussian density with variance σ_n², ε is a small positive constant determining the percentage of contamination, and f_c(x) is the contaminating Gaussian density with a large relative variance, such that σ_c² ≫ σ_n². Intuitively, one out of 1/ε samples is allowed to be contaminated by the higher-variance source. The advantage of the contaminated Gaussian distribution lies in its mathematical simplicity and ease of computer simulation. Gaussian mixtures, however, present drawbacks. First, dispersion and impulsiveness are characterized by three parameters, ε, σ_n, σ_c, which may be considered overparameterized. The second drawback, and perhaps the most serious, is that its sum density function formulation makes it difficult to manipulate in general estimation problems. A more accurate model for impulsive phenomena was proposed by Middleton (1977) [143]. His Class A, B, and C models are perhaps the most credited statistical-physical
characterization of radio noise. These models have a direct physical interpretation and have been found to provide good fits to a variety of noise and interference measurements. Contaminated Gaussian mixtures can in fact be derived as approximations to Middleton's Class A model. Much like Gaussian mixtures, however, Middleton's models are complicated and somewhat difficult to use in laying the foundation of estimation algorithms. Among the various extensions of the Gaussian distribution, the most popular models are those characterized by the generalized Gaussian distribution. These have been long known, with references dating back to 1923 by Subbotin [183] and 1924 by Fréchet [74]. A special case of the generalized Gaussian distribution class is the well-known Laplacian distribution, which has even older roots; Laplace introduced it
more than two hundred years ago [122]. In the generalized Gaussian distribution, the presence of outlier samples can be modeled by modifying the Gaussian distribution, allowing the exponential rate of tail decay to be a free parameter. In this manner, the tail of the generalized Gaussian density function is governed by the parameter k.
DEFINITION 2.1 (GENERALIZED GAUSSIAN DISTRIBUTION) The probability density function for the generalized Gaussian distribution is given by

f(x) = (a k / (2 Γ(1/k))) exp(−(a|x|)^k),    (2.2)

where Γ(·) is the Gamma function Γ(x) = ∫₀^∞ t^(x−1) e^(−t) dt, a is a constant defined as

a = σ^(−1) [Γ(3/k) / Γ(1/k)]^(1/2),

and σ is the standard deviation¹.
In this representation, the scale of the distribution is determined by the parameter σ > 0, whereas the impulsiveness is related to the parameter k > 0. As expected, the representation in (2.2) includes the standard Gaussian distribution as a special case for k = 2. Conceptually, the lower the value of k, the more impulsive the distribution is. For k < 2, the tails decay more slowly than in the Gaussian case, resulting in a heavier-tailed distribution. A second special case of the generalized Gaussian distribution that is of particular interest is the case k = 1, which yields the double exponential, or Laplacian, distribution

f(x) = (1/(√2 σ)) exp(−√2 |x|/σ) = (λ/2) exp(−λ|x|),    (2.3)

where the second representation is the most commonly used and is obtained by making λ = √2/σ. The effect of decreasing k on the tails of the distribution can be seen in Figures 2.1 and 2.2. As these figures show, the Laplacian distribution has heavier tails than the Gaussian distribution. One of the weaknesses of the generalized Gaussian distribution is the shape of these distributions around the origin for k < 2. The "peaky" shape of these distributions contradicts the widely accepted Winsor's principle, according to which all density functions of practical appeal are bell-shaped [87, 188].
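The generalized Gaussian density is easy to evaluate numerically. The sketch below uses a common parameterization, with the constant a chosen so that σ is the standard deviation; treat the normalization as an assumption to check against Definition 2.1. It recovers the standard Gaussian at k = 2 and the Laplacian at k = 1.

```python
import math

def generalized_gaussian_pdf(x, k, sigma=1.0):
    """Generalized Gaussian density with tail parameter k and standard
    deviation sigma: f(x) = a*k/(2*Gamma(1/k)) * exp(-(a*|x|)**k),
    with a = sqrt(Gamma(3/k)/Gamma(1/k)) / sigma."""
    a = math.sqrt(math.gamma(3.0 / k) / math.gamma(1.0 / k)) / sigma
    return a * k / (2.0 * math.gamma(1.0 / k)) * math.exp(-(a * abs(x)) ** k)

# k = 2 recovers the standard Gaussian density at the origin, 1/sqrt(2*pi):
print(generalized_gaussian_pdf(0.0, 2.0))
# Smaller k puts more mass in the tails (Laplacian vs. Gaussian at x = 4):
print(generalized_gaussian_pdf(4.0, 1.0) > generalized_gaussian_pdf(4.0, 2.0))
```

Evaluating the two special cases this way is a useful consistency check on the roles of k and σ before fitting the model to data.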
2.2 STABLE DISTRIBUTIONS

Stable distributions describe a rich class of processes that allow heavy tails and skewness. The class was characterized by Lévy in 1925 [128]. Stable distributions are described by four parameters: an index of stability α ∈ (0, 2], a scale parameter γ > 0, a skewness parameter δ ∈ [−1, 1], and a location parameter β ∈ R. The stability

¹The gamma function satisfies Γ(x) = (x − 1)Γ(x − 1) for x > 1. For positive integers it follows that Γ(x) = (x − 1)!, and for a noninteger x > 0 such that x = n + u, where 0 ≤ u < 1, Γ(x) = (x − 1)(x − 2) ⋯ (u)Γ(u). For x = 1/2, Γ(1/2) = √π.
Figure 2.1 Generalized Gaussian density functions for different values of the tail constant k.
parameter α measures the thickness of the tails of the distribution and provides this model with the flexibility needed to characterize a wide range of impulsive processes. The scale parameter γ, also called the dispersion, is similar to the variance of the Gaussian distribution; in the Gaussian case (α = 2), the variance equals twice the square of γ. When the skewness parameter is set to δ = 0, the stable distribution is symmetric about the location parameter β. Symmetric stable processes are also referred to as symmetric α-stable or simply as SαS. A stable distribution with parameter α is said to be standard if β = 0 and γ = 1. For any stable variable X with parameters α, β, γ, δ, the corresponding standardized stable variable is found as (X − β)/γ, for α ≠ 1.
Stable distributions are rapidly becoming popular for the characterization of impulsive processes for the following reasons. Firstly, good empirical fits are often found using stable distributions on
data exhibiting skewness and heavy tails. Secondly, there is solid theoretical justification that non-Gaussian stable processes emerge in practice, such as multiple access interference in a Poisson-distributed communication network [179], reflection off a rotating mirror [69], and Internet traffic [127]; see Uchaikin and Zolotarev (1999) [191] and Feller (1971) [69] for additional
examples. The third argument for modeling with stable distributions is perhaps the most significant and compelling. Stable distributions satisfy an important generalization
Figure 2.2 Tails of the generalized Gaussian density functions for different values of the tail constant k.
of the Central Limit Theorem which states that the only possible limit of normalized sums of independent and identically distributed terms is stable. A wide variety of impulsive processes found in
signal processing applications arise as the superposition of many small independent effects. While Gaussian models are clearly inappropriate, stable distributions have the theoretical underpinnings to accurately model these types of impulsive processes [149, 207]. Stable models are thus appealing, since the generalization of the Central Limit Theorem explains the apparent contradictions of its "ordinary" version, which could not naturally explain the presence of heavy-tailed signals. The Generalized Central Limit Theorem and the strong empirical evidence are used by many to justify the use of stable models. Examples in finance and economics are given in Mandelbrot (1963) [138] and McCulloch (1966) [142]; in communication systems by Stuck and Kleiner (1974) [182], Nikias and Shao (1995) [149], and Ilow and Hatzinakos (1997) [106]. A number of monographs providing in-depth discussion of stable processes have recently appeared: Zolotarev (1986) [207], Samorodnitsky and Taqqu (1994) [75], Nikias and Shao (1995) [149], Uchaikin and Zolotarev (1999) [191], Adler et al. (2002) [67], and Nolan (2002) [151].
Gaussian random variables obey the important property that the sum of any two Gaussian variables is itself a Gaussian random variable. Formally, for any two independent Gaussian random variables X₁ and X₂ and any positive constants a and b, there exist constants c and d such that

a X₁ + b X₂ =ᵈ c X + d,

where d is a real-valued constant². As their name implies, stable random variables obey this property as well.
DEFINITION 2.2 (STABLE RANDOM VARIABLES) A random variable X is stable if, for X₁ and X₂ independent copies of X and for arbitrary positive constants a and b, there are constants c and d such that

a X₁ + b X₂ =ᵈ c X + d.    (2.4)

A symmetric stable random variable distributed around 0 satisfies X =ᵈ −X.
Informally, the stability property states that the shape of X is preserved under addition, up to scale and shift. The stability property (2.4) for Gaussian random variables can be readily verified, yielding c² = a² + b² and d = (a + b − c)μ, where μ is the mean of the parent Gaussian distribution. Other well-known distributions that satisfy the stability property are the Cauchy and Lévy distributions, and as such, both distributions are members of the stable class. The density function for X ~ Cauchy(γ, β) has the form

f(x) = (1/π) γ / (γ² + (x − β)²).    (2.5)
The Lévy density function, sometimes referred to as the Pearson distribution, is totally skewed, concentrating on (0, ∞). The density function for X ~ Lévy(γ, β) has the form

f(x) = (γ/(2π))^(1/2) (x − β)^(−3/2) exp(−γ/(2(x − β))),  β < x < ∞.    (2.6)

Figure 2.3 shows the plots of the standardized Gaussian, Cauchy, and Lévy distributions. Both the Gaussian and Cauchy distributions are symmetric
and bell-shaped. The main difference between these two densities is the area under their tails, the Cauchy having much larger area, or heavier tails. In contrast to the Gaussian and Cauchy, the Lévy distribution is highly skewed, with even heavier tails than the Cauchy. General stable distributions allow for varying degrees of skewness; the influence of the parameter δ on the distribution of an
α-stable random variable is shown in Figure 2.4.

²The symbol =ᵈ denotes equality in distribution.
Figure 2.3 Density functions of standardized Gaussian (α = 2), Cauchy (α = 1), and Lévy (α = 0.5, δ = 1).
Although some practical processes might be better modeled by skewed distributions, we will focus on symmetric stable processes for several reasons. First, the processes found in a number of signal-processing applications are symmetric; second, asymmetric models can lead to a significant increase in the computational complexity of signal-processing algorithms; and, more importantly, estimating the location of an asymmetric distribution is not a well-defined problem. All of the above constitute impediments to the derivation of a general theory of nonlinear filtering.
Symmetric Stable Distributions
Symmetric α-stable or SαS distributions are defined when the skewness parameter δ is set to zero. In this case, a random variable obeying the symmetric stable distribution with scale γ is denoted as X ~ SαS(γ). Although the stability condition in Definition 2.2 is sufficient to characterize all stable distributions, a second and more practical characterization of stable random variables is through their characteristic function

φ(ω) = E exp(jωX) = ∫_{−∞}^{∞} f(x) e^{jωx} dx,    (2.7)

where f(x) is the density function of the underlying random variable.
Figure 2.4 Density functions of skewed stable variables (α = 0.5, γ = 1, β = 0).
DEFINITION 2.3 (CHARACTERISTIC FUNCTION OF SαS DISTRIBUTIONS) A random variable X is symmetric stable if and only if X =ᵈ AZ + B, where 0 < α ≤ 2, A ≥ 0, B ∈ R, and Z = Z(α) is a random variable with characteristic function

φ(ω) = e^{−γ^α |ω|^α}.    (2.8)
The dispersion parameter γ is a positive constant related to the scale of the distribution. Again, the parameter α is referred to as the index of stability. In order for (2.8) to define a characteristic function, the values of α must be restricted to the interval (0, 2]. Conceptually speaking, α determines the impulsiveness or tail heaviness of the distribution (smaller values of α indicate increased levels of impulsiveness). The limit case, α = 2, corresponds to the zero-mean Gaussian distribution with variance 2γ².³ All other values of α correspond to heavy-tailed distributions. Figure 2.5 shows plots of normalized unitary-dispersion stable densities. Note that lower values of α correspond to densities with heavier tails, as shown in Figure 2.6.

³The characteristic function of a Gaussian random variable with zero mean and variance σ² is given by φ(ω) = exp(−σ²ω²/2); from this equation and (2.8) with α = 2, the relationship shown between γ and σ² can be obtained.
Symmetric stable densities maintain many of the features of the Gaussian density. They are smooth, unimodal, symmetric with respect to the mode, and bell-shaped.
Figure 2.5 Density functions of symmetric stable distributions for different values of the tail constant α.
A major drawback to stable distribution modeling is that, with few exceptions, stable densities and their corresponding cumulative distribution functions lack closed-form expressions. There are three cases for which closed-form expressions of stable density functions exist: the Gaussian distribution (α = 2), the Cauchy distribution (α = 1), and the Lévy distribution (α = 1/2). For other values of α, no closed-form expressions are known for the density functions, making it necessary to resort to series expansions or integral transforms to describe them.
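One such integral transform is direct numerical inversion of the characteristic function. The sketch below assumes the "zero-centered", unit-dispersion symmetric case, φ(ω) = exp(−|ω|^α), for which f(x) = (1/π) ∫₀^∞ exp(−ω^α) cos(ωx) dω; the midpoint quadrature, truncation point, and step size are illustrative choices, not part of the text.

```python
import math

def sas_pdf(x, alpha, n=20000, w_max=50.0):
    """Numerical zero-centered, unit-dispersion SaS density via inversion of
    phi(w) = exp(-|w|**alpha): f(x) = (1/pi) * int_0^inf exp(-w**alpha) cos(w*x) dw.
    Midpoint rule; for small alpha, w_max must be increased."""
    dw = w_max / n
    total = 0.0
    for i in range(n):
        w = (i + 0.5) * dw
        total += math.exp(-w ** alpha) * math.cos(w * x)
    return total * dw / math.pi

# alpha = 1 is the Cauchy case with closed form f(x) = 1/(pi*(1 + x**2)):
print(sas_pdf(0.0, 1.0), 1.0 / math.pi)
# alpha = 2 is the Gaussian case with variance 2: f(0) = 1/(2*sqrt(pi)):
print(sas_pdf(0.0, 2.0), 1.0 / (2.0 * math.sqrt(math.pi)))
```

Checking the quadrature against the two closed-form cases, as above, is a cheap way to validate the truncation and step size before trusting it at intermediate α.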
DEFINITION 2.4 (SYMMETRIC STABLE DENSITY FUNCTIONS) A general, "zero-centered," symmetric stable random variable with unitary dispersion can be characterized by the power series density function representation given in [207].
Figure 2.6 Tails of symmetric stable distributions for different values of the tail constant α.
DEFINITION 2.5 (STABLE R.V.) [151] A random variable X is stable with characteristic exponent α, dispersion γ, location β, and skewness δ if X has characteristic function

φ(ω) = exp(jβω − γ^α |ω|^α [1 − jδ sgn(ω) tan(απ/2)]),  α ≠ 1.    (2.10)
EXAMPLE 2.1 (STANDARD STABLE RANDOM VARIABLES) As stated previously, if X is a stable random variable with location β and dispersion γ, the variable X' = (X − β)/γ (α ≠ 1) is standard stable. This can be demonstrated with the help of the general characteristic function. Define X' = (X − β)/γ; then

φ(ω') = E[exp(jω'X')] = exp(−jω'β/γ) E[exp(j(ω'/γ)X)].

Using (2.10), and since γ ≥ 0, so that |γ| = γ and sgn(ω'/γ) = sgn(ω'), it follows that

φ(ω') = exp(−|ω'|^α [1 − jδ sgn(ω') tan(απ/2)]),    (2.11)

which is the characteristic function of a stable random variable with γ = 1 and β = 0.
EXAMPLE 2.2 Let X ~ S(α, γ, β) be a symmetric stable random variable; then, for a ≠ 0, it is shown that

aX + b ~ S(α, |a|γ, aβ + b).

Following the procedure used in the previous example, define X' = aX + b:

φ(ω') = E[exp(jω'X')] = E[exp(jω'(aX + b))]
      = exp(jω'b) E[exp(j(ω'a)X)]
      = exp(jω'b) exp(−γ^α |ω'a|^α + jβ(ω'a))   using (2.10) with δ = 0
      = exp(−(|a|γ)^α |ω'|^α + jω'(aβ + b)),    (2.12)

which is the characteristic function of a symmetric stable random variable with dispersion |a|γ and location aβ + b.
EXAMPLE 2.3 Let X₁ ~ S(α, γ₁, β₁) and X₂ ~ S(α, γ₂, β₂) be independent symmetric stable random variables. It is shown here that X₁ + X₂ ~ S(α, γ, β), where γ^α = γ₁^α + γ₂^α and β = β₁ + β₂. Define X' = X₁ + X₂ and find the characteristic function of X' as

φ(ω') = E[exp(jω'X')] = E[exp(jω'(X₁ + X₂))]
      = E[exp(jω'X₁)] E[exp(jω'X₂)]   since the variables are independent
      = exp(−γ₁^α |ω'|^α + jβ₁ω') exp(−γ₂^α |ω'|^α + jβ₂ω')
      = exp(−(γ₁^α + γ₂^α) |ω'|^α + j(β₁ + β₂)ω'),

which is the characteristic function of a symmetric stable random variable with γ^α = γ₁^α + γ₂^α and β = β₁ + β₂.
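The additivity result of Example 2.3 can be checked empirically in the Cauchy case (α = 1), where γ^α = γ₁^α + γ₂^α reduces to γ = γ₁ + γ₂. The sample size and the quartile-based dispersion estimate below are illustrative choices made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def cauchy(gamma, size):
    """Zero-centered Cauchy(gamma) samples via the inverse-CDF method."""
    return gamma * np.tan(np.pi * (rng.uniform(size=size) - 0.5))

# For alpha = 1 the additivity result gives gamma = gamma1 + gamma2.
g1, g2 = 1.0, 2.0
s = cauchy(g1, 200_000) + cauchy(g2, 200_000)

# For a Cauchy, the quartiles sit at beta +/- gamma, so half the
# interquartile range is a consistent estimate of the dispersion.
gamma_hat = 0.5 * (np.quantile(s, 0.75) - np.quantile(s, 0.25))
print(gamma_hat)  # close to g1 + g2 = 3.0
```

Quantile-based estimates are used here precisely because moment-based ones fail: the sum is again Cauchy, so its sample variance is meaningless.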
Generalized Central Limit Theorem
Much like Gaussian signals, a wide variety of non-Gaussian processes found in practice arise as the superposition of many small independent effects. At first, this may point to a contradiction of the
Central Limit Theorem, which states that, in the limit, the sum of such effects tends to a Gaussian process. A careful revision of the conditions of the Central Limit Theorem indicates that, in order
for the Central Limit Theorem to be valid, the variance of the superimposed random variables must be finite. If the variance of the underlying random variables is infinite, an important
generalization of the Central Limit Theorem emerges. This generalization explains the apparent contradictions of its “ordinary” version, as well as the presence of non-Gaussian processes in practice.
THEOREM 2.1 (GENERALIZED CENTRAL LIMIT THEOREM [75]) Let X₁, X₂, ... be an independent, identically distributed sequence of (possibly shift-corrected) random variables. There exist constants a_n such that, as n → ∞, the sum

a_n (X₁ + X₂ + ...) → Z

if and only if Z is a stable random variable with some 0 < α ≤ 2.

In the same way as the Gaussian model owes most of its strength to the Central Limit Theorem, the Generalized Central Limit Theorem
constitutes a strong theoretical argument compelling the use of stable models in practical problems.
At first, the use of infinite variance in the definition of the Generalized Central Limit Theorem may lead to some skepticism, as infinite variance for real data having bounded range may seem inappropriate. It should be noted, however, that the variance is but one measure of spread of a distribution, and it is not appropriate for all problems. It is argued that in stable environments, γ may
be more appropriate as a measure of spread. From an applied point of view, what is important is capturing the shape of a distribution. The Gaussian distribution is, for instance, routinely used to
model bounded data, even though it has unbounded support. Although in some cases there are solid theoretical reasons for believing that a stable model is appropriate, in other more pragmatic cases
the stable model can be used if it provides a good and parsimonious fit to the data at hand.
Simulation of Stable Sequences
Computer simulation of random processes is important in the design and analysis of signal processing algorithms. To this end, Chambers, Mallows, and Stuck (1976) [43] developed an algorithm for the
generation of stable random variables. The algorithm is described in the following theorem.
THEOREM 2.2 (SIMULATION OF STABLE VARIABLES [151]) Let Θ and W be independent, with Θ uniformly distributed on (−π/2, π/2) and W exponentially distributed with mean 1. Then Z ~ S(α, δ) is generated as

Z = c(α, δ) · [ sin(α(Θ + θ0)) / (cos Θ)^{1/α} ] · [ cos(Θ − α(Θ + θ0)) / W ]^{(1−α)/α},

where c(α, δ) = (1 + (δ tan(πα/2))²)^{1/(2α)} and θ0 = α^{−1} arctan(δ tan(πα/2)). In particular, for α = 1, δ = 0 (Cauchy), Z ~ Cauchy(γ) is generated as

Z = γ tan(Θ) = γ tan( π (U − 1/2) ),   (2.16)

where U is a uniform random variable in (0, 1).
Figure 2.7 illustrates the impulsive behavior of symmetric stable processes as the characteristic exponent α is varied. Each one of the plots shows an independent and identically distributed (i.i.d.) "zero-centered" symmetric stable signal with unitary geometric power.4 In order to give a better feeling of the impulsive structure of the data, the signals are plotted twice under two different scales. As can be appreciated, the Gaussian signal (α = 2) does not show impulsive behavior. For values of α close to 2 (α = 1.7 in the figure), the structure of the signal is still similar to the Gaussian, although some impulsiveness can now be observed. As the value of α is decreased, the impulsive behavior increases progressively.

4 The geometric power is introduced in the next section as a strength indicator of processes with infinite variance.
Statistical signal processing relies, to a large extent, on the statistical characterization provided by second-order moments such as the variance Var(X) = E(X²) − (EX)², with EX being the first moment. Second-order estimation methods are sufficient whenever the underlying signals obey Gaussian statistics. The characterization of non-Gaussian processes by second-order moments is no longer optimal, and other moment characterizations may be required. To this end, higher-order statistics (HOS) exploiting third- and fourth-order moments (cumulants) have led to improved estimation algorithms in non-Gaussian environments, provided that higher-order moments exist and are finite [148]. In applications where the processes are inherently impulsive, second-order statistics and HOS may either be unreliable or may not even exist.
Fractional Lower-Order Moments
The different behavior of the Gaussian and non-Gaussian distributions is, to a large extent, caused by the characteristics of their tails. The existence of second-order moments depends on the behavior of the tail of the distribution. The tail "thickness" of a distribution can be measured by its asymptotic mass P(|X| > x) as x → ∞. Given two functions h(x) and g(x), they have asymptotic similarity (h(x) ~ g(x)) if lim_{x→∞} h(x)/g(x) = 1. The standard Gaussian distribution can be shown to have tails of exponential order, with the asymptotic similarity

P(|X| > x) ~ √(2/π) (1/x) e^{−x²/2}.   (2.17)

Second-order moments for the Gaussian distribution are thus well behaved due to the exponential order of the tails. The tails of the Laplacian distribution are heavier than those of the Gaussian distribution, but remain of exponential order, with

P(|X| > x) ~ e^{−λx}.
The tails of more impulsive non-Gaussian distributions, however, behave very differently. Infinite-variance processes that can appear in practice as a consequence of the Generalized Central Limit Theorem are modeled by probability distributions with algebraic tails, for which

P(|X| > x) ~ c x^{−α}   (2.19)

for some fixed c and α > 0. The tail heaviness of these distributions is determined by the tail constant α, with increased impulsiveness corresponding to small values of α. Stable random variables, for α < 2, are examples of processes having algebraic tails, as described by the following theorem.
Figure 2.7 Impulsive behavior of i.i.d. α-stable signals as the tail constant α is varied. Signals are plotted twice under two different scales.
THEOREM 2.3 (STABLE DISTRIBUTION TAILS [151]) Let X ~ S(α) be a symmetric, standard stable random variable with 0 < α < 2. Then, as x → ∞,

P(X > x) ~ (C_α / 2) x^{−α},   (2.20)

where C_α is a positive constant depending only on α; for α ≠ 1, C_α = (1 − α)/(Γ(2 − α) cos(πα/2)).

For stable and other distributions having algebraic tails, the following theorem is important, having a significant impact on the statistical moments that can be used to process and analyze these signals.

THEOREM 2.4 Algebraic-tailed random variables exhibit finite absolute moments only for orders less than α:

E(|X|^p) < ∞  if  p < α.   (2.21)

Conversely, if p ≥ α, the absolute moments become infinite.

Proof: The first moment of a nonnegative random variable Y can be expressed as

E(Y) = ∫_0^∞ P(Y > t) dt.   (2.22)

Replacing Y by |X|^p in the first-moment relationship (2.22) yields

E(|X|^p) = ∫_0^∞ P(|X|^p > t) dt = ∫_0^∞ p u^{p−1} P(|X| > u) du,

which, from (2.19), converges when p < α and diverges for p ≥ α for any distribution having algebraic tails.
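The dichotomy in Theorem 2.4 is easy to observe numerically. In the sketch below (illustrative, using standard Cauchy samples, for which α = 1), a fractional moment of order p = 0.3 < α settles near its finite value, while the sample second moment (p = 2 > α) is dominated by a handful of extreme samples:

```python
import numpy as np

# Standard Cauchy samples (alpha = 1), generated as tan(pi*(U - 1/2))
rng = np.random.default_rng(0)
x = np.tan(np.pi * (rng.uniform(size=200_000) - 0.5))

flom = np.mean(np.abs(x) ** 0.3)   # p = 0.3 < alpha: finite, ~1/cos(0.15*pi)
second = np.mean(x ** 2)           # p = 2 > alpha: blows up with sample size
print(flom, second)
```

Rerunning with larger sample sizes leaves `flom` essentially unchanged while `second` keeps growing, which is precisely the behavior the theorem predicts.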
Given that second-order, or higher-order, moments do not exist for algebraic-tailed processes, the result in (2.21) points to the fact that, in this case, it is better to rely on fractional lower-order moments (FLOMs): E|X|^p = ∫ |x|^p f(x) dx, which exist for 0 < p < α. FLOMs for arbitrary processes can be computed from the definitions. Zolotarev (1957) [207], for instance, derived the FLOMs of SαS random variables as follows.

PROPERTY 2.1 The FLOM of a SαS random variable with zero location parameter and dispersion γ is given by

E|X|^p = C(p, α) γ^{p/α},  for 0 < p < α,   (2.25)

where

C(p, α) = ( 2^{p+1} Γ((p + 1)/2) Γ(−p/α) ) / ( α √π Γ(−p/2) ).   (2.26)

Figure 2.8 depicts the fractional lower-order moments of the standardized SαS random variable (γ = 1, δ = 0) as functions of p for various values of α.
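Property 2.1 can be checked numerically. The sketch below (function name illustrative) evaluates C(p, α) γ^{p/α} and compares it against a Monte Carlo estimate for the standard Cauchy case (α = 1, γ = 1), where the closed form reduces to 1/cos(pπ/2):

```python
import math
import numpy as np

def flom_sas(p, alpha, gamma_disp=1.0):
    """Closed-form FLOM of a SaS variate, C(p, alpha)*gamma^(p/alpha)."""
    c = (2 ** (p + 1) * math.gamma((p + 1) / 2) * math.gamma(-p / alpha)
         / (alpha * math.sqrt(math.pi) * math.gamma(-p / 2)))
    return c * gamma_disp ** (p / alpha)

# Monte Carlo check against a standard Cauchy (alpha = 1, gamma = 1)
rng = np.random.default_rng(3)
x = np.tan(np.pi * (rng.uniform(size=300_000) - 0.5))
p = 0.4
print(flom_sas(p, 1.0), np.mean(np.abs(x) ** p))
```

Both values should agree closely; for p = 0.4 the closed form gives 1/cos(0.2π) ≈ 1.236.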
Figure 2.8 Fractional lower-order moments of the standardized SαS random variable.

2.3.2 Zero-Order Statistics
Fractional lower-order statistics do not provide a universal framework for the characterization of algebraic-tailed processes: for a given p > 0, there will always be a "remaining" class of processes (those with α ≤ p) for which the associated FLOMs do not exist. On the other hand, restricting the values of p to the valid interval (0, α) requires either prior knowledge of α or a numerical procedure to estimate it. The former may not be possible in most practical applications, and the latter may be inexact and/or computationally expensive. Unlike lower- or higher-order statistics, the advantage of zero-order statistics (ZOS) is that they provide a common ground for the analysis of basically any distribution of practical use [85, 48, 47, 50, 49]. In the same way as pth-order moments constitute the basis of FLOS and HOS techniques, zero-order statistics are based on logarithmic "moments" of the form E log |X|.
THEOREM 2.5 Let X be a random variable with algebraic or lighter tails. Then

E log |X| < ∞.

Proof: If X has algebraic or lighter tails, there exists a p > 0 such that E|X|^p < ∞. Jensen's inequality [65] guarantees that, for a concave function φ and a random variable Z, Eφ(Z) ≤ φ(EZ). Letting φ(x) = log(x)/p and Z = |X|^p leads to

E log |X| = (1/p) E log |X|^p ≤ (1/p) log E|X|^p < ∞,   (2.27)

which is the desired result.

Random processes for which Theorem 2.5 applies are referred to as being of "logarithmic order," in analogy with the term "second order" used to denote processes with finite variance. The logarithmic moment, which is finite for all logarithmic-order processes, can be used as a tool to characterize these signals. The strength of a signal is one attribute that can be characterized by logarithmic moments. For second-order processes, the power E(X²) is a widely accepted measure of signal strength. This measure, however, is always infinite when the processes exhibit algebraic tails, failing to provide useful information. To this end, zero-order statistics can be used to define an alternative strength measure referred to as the geometric power.
DEFINITION 2.6 (GEOMETRIC POWER [85]) Let X be a logarithmic-order random variable. The geometric power of X is defined as

S0 = S0(X) = e^{E log |X|}.   (2.28)
The geometric power gives a useful strength characterization for the class of logarithmic-order processes, having the advantage that it is mathematically and conceptually simple. In addition, it has a rich set of properties that can be effectively used. The geometric power is a scale parameter satisfying S0(X) ≥ 0 and S0(cX) = |c| S0(X), and as such it can be effectively used as an indicator of process strength or "power" in situations where second-order methods are inadequate. The geometric power takes on the value S0(X) = 0 if and only if P(X = 0) > 0, which implies that zero power is attained only when there is a discrete probability mass located at zero [85]. The geometric power of any logarithmic-order process can be computed by the evaluation of (2.28). The geometric power of symmetric stable random variables, for instance, can be obtained in closed form.
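A sample version of (2.28) simply replaces the expectation by a sample mean. A minimal sketch (function name illustrative), which also exhibits the scale property S0(cX) = |c| S0(X):

```python
import numpy as np

def geometric_power(x):
    """Sample geometric power: exp(mean(log|x|)), a plug-in version of (2.28)."""
    x = np.asarray(x, dtype=float)
    return float(np.exp(np.mean(np.log(np.abs(x)))))

samples = np.array([0.5, -2.0, 4.0, -0.25, 8.0])
s0 = geometric_power(samples)
# Scale property: S0(c X) = |c| S0(X)
print(s0, geometric_power(-3.0 * samples))
```

For the array above the result is 2^0.6 ≈ 1.516, the geometric mean of the magnitudes.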
PROPOSITION 2.1 (GEOMETRIC POWER OF STABLE PROCESSES) The geometric power of a symmetric α-stable variable with dispersion γ is given by

S0 = C_g^{(1/α − 1)} γ^{1/α},

where C_g = e^{C_e} ≈ 1.78 is the exponential of the Euler constant.

Proof: From [207], p. 215, the logarithmic moment of a zero-centered symmetric α-stable random variable with unitary dispersion is given by

E log |X| = C_e (1/α − 1),

where C_e = 0.5772 . . . is the Euler constant. This gives

S0 = e^{E log |X|} = (e^{C_e})^{1/α − 1} = C_g^{1/α − 1},

where C_g = e^{C_e} ≈ 1.78. If X has a non-unitary dispersion γ, it is easy to see that

S0 = C_g^{1/α − 1} γ^{1/α}.   (2.32)
The geometric power is well defined in the class of stable distributions for any value of α > 0. Being a scale parameter, it is always proportional to γ^{1/α} and, more interestingly, it is a decreasing function of α. This is an intuitively pleasant property, since we should expect to observe more process strength when the levels of impulsiveness are increased. Figure 2.9 illustrates the usefulness of the geometric power as an indicator of process strength in the α-stable framework. The scatter plot on the left side was generated from a stable distribution with α = 1.99 and geometric power S0 = 1. On the right-hand side, the scatter plot comes from a Gaussian distribution (α = 2), also with unitary geometric power. After an intuitive inspection of Figure 2.9, it is reasonable to conclude that both of the generating processes possess the same strength, in accordance with the values of the geometric power. By contrast, the values of the second-order power lead to the misleading conclusion that the process on the left is much stronger than the one on the right. A similar example can be constructed to depict the disadvantages of FLOS-based indicators of strength in the class of logarithmic-order processes. Fractional moments of order p present the same type of discontinuities as the one illustrated in Figure 2.9 for processes with tail constants close to α = p. The geometric power, on the other hand, is consistently continuous along the whole range of values of α. This "universality" of the geometric power provides a general framework for comparing the strengths of any pair of logarithmic-order signals, in the same way as the (second-order) power is used in the classical framework. The term zero-order statistics, used to describe statistical measures based on logarithmic moments, is coined after the following relationship of the geometric power with fractional-order statistics.
THEOREM 2.6 Let S_p = (E|X|^p)^{1/p} denote the scale parameter derived from the pth-order moment of X. If S_p exists for sufficiently small values of p, then

S0 = lim_{p→0} S_p.

Furthermore, S0 ≤ S_p for any p > 0.

Proof: It is enough to prove that lim_{p→0} (1/p) log E|X|^p = E log |X|. Applying L'Hôpital's rule,

lim_{p→0} (log E|X|^p) / p = lim_{p→0} E(|X|^p log |X|) / E|X|^p   (2.34)
α = 1.99: second-order power = ∞, geometric power = 1.
α = 2: second-order power = 3.56, geometric power = 1.

Figure 2.9 Comparison of second-order power vs. geometric power for i.i.d. α-stable processes. Left: α = 1.99. Right: α = 2. While the values of the geometric power give an intuitive idea of the relative strengths of the signals, the second-order power can be misleading.
= E log |X|.   (2.35)

To prove that S0 ≤ S_p, Jensen's inequality [65] guarantees that for a convex function φ and a random variable Z, φ(EZ) ≤ Eφ(Z). Making φ(x) = e^x and Z = log |X|^p, we get

S0^p = e^{E log |X|^p} ≤ E e^{log |X|^p} = E|X|^p = S_p^p,   (2.36)

which leads to the desired result.

Theorem 2.6 indicates that techniques derived from the geometric power are the limiting zero-order relatives of FLOMs.
2.3.3 Parameter Estimation of Stable Distributions

The Generalized Central Limit Theorem and the theoretical formulation of several stochastic processes justify the use of stable distribution models. In other cases, the approach can be more empirical, where large data sets exhibit skewness and heavy tails in such fashion that stable models provide parsimonious and effective characterization. Modeling a sample set by a stable probability density function thus requires estimating the parameters of the stable distribution, namely the characteristic exponent α ∈ (0, 2]; the symmetry parameter δ ∈ [−1, 1], which sets the skewness; the scale (dispersion) parameter γ > 0; and the location parameter β. The often-preferred maximum likelihood parameter estimation approach, which offers asymptotic efficiency, is not readily available, as stable distributions lack closed-form analytical expressions. This problem can be overcome by numerical solutions. Nonetheless, simpler methods may be adequate in many cases [40, 68, 135, 151]. The approach introduced by Kuruoğlu, in particular, is simple and provides adequate estimates in general [121]. In Kuruoğlu's approach, the data of a general α-stable distribution is first transformed to data satisfying certain symmetry and skewness conditions. The parameters of the transformed data can then be estimated by simple methods that use fractional lower-order statistics. Finally, the parameter estimates of the original data are obtained by using well-known relationships between these two sets of parameters. Kuruoğlu's approach is summarized next. Let X_i be independent α-stable variates that are identically distributed with parameters α, δ, γ, and β. This stable law is denoted as

S_α(δ, γ, β).
The distribution of a weighted sum of these variables, with weights a_i, can be derived as [121]

Σ_i a_i X_i ~ S_α( δ Σ_i a_i^{<α>} / Σ_i |a_i|^α ,  γ Σ_i |a_i|^α ,  β Σ_i a_i ),

where x^{<p>} denotes the signed pth power of a number x,

x^{<p>} = sign(x) |x|^p.

This provides a convenient way to generate sequences of independent variables with zero β, zero δ, or with zero values for both β and δ (except when α = 1). These are referred to as the centered, deskewed, and symmetrized sequences, respectively:

X_k^c = X_{3k−2} + X_{3k−1} − 2 X_{3k} ~ S_α( δ [2 − 2^α]/[2 + 2^α], [2 + 2^α] γ, 0 ),   (2.42)

X_k^d = X_{3k−2} + X_{3k−1} − 2^{1/α} X_{3k} ~ S_α( 0, 4γ, [2 − 2^{1/α}] β ),   (2.43)

X_k^s = X_{2k} − X_{2k−1} ~ S_α( 0, 2γ, 0 ).   (2.44)
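The three transformations are direct to apply to a data record. A minimal sketch (function names are illustrative; note that deskewing requires α, or a preliminary estimate of it):

```python
import numpy as np

def symmetrize(x):
    """X_k^s = X_{2k} - X_{2k-1}: removes both location and skewness."""
    n = len(x) - len(x) % 2
    return x[1:n:2] - x[0:n:2]

def center(x):
    """X_k^c = X_{3k-2} + X_{3k-1} - 2 X_{3k}: removes the location."""
    n = len(x) - len(x) % 3
    return x[0:n:3] + x[1:n:3] - 2.0 * x[2:n:3]

def deskew(x, alpha):
    """X_k^d = X_{3k-2} + X_{3k-1} - 2**(1/alpha) X_{3k}: removes the
    skewness; needs alpha (or a preliminary estimate of it)."""
    n = len(x) - len(x) % 3
    return x[0:n:3] + x[1:n:3] - 2.0 ** (1.0 / alpha) * x[2:n:3]

# Gaussian sanity check (alpha = 2): a location shift of +5 disappears
rng = np.random.default_rng(7)
x = rng.normal(5.0, 1.0, 90_000)
print(np.mean(center(x)), np.mean(symmetrize(x)))
```

Each transformation trades sample size (a factor of 2 or 3) for the simpler parameter structure of the output sequence.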
Using such simpler sequences, moment methods for parameter estimation can be easily applied to variates with β = 0 or δ = 0, or both, at the cost of some loss of sample size. In turn, these estimates are used to calculate the estimates of the parameters of the original α-stable distribution. Moments of a distribution provide important statistical information about the distribution. Kuruoğlu's methods, in particular, exploit fractional lower-order or negative-order moments, which, for skewed α-stable distributions, are finite for certain parameter values. First, the absolute and signed fractional-order moments of stable variates are calculated analytically as a generalization of Property 2.1 [121].
PROPERTY 2.2 Let X ~ S_α(δ, γ, 0). Then, for α ≠ 1,

A_p ≜ E|X|^p = ( Γ(1 − p/α) / (Γ(1 − p) cos(pπ/2)) ) · ( cos(pθ/α) / (cos θ)^{p/α} ) · γ^{p/α},   (2.45)

for p ∈ (−1, α), and where

θ = arctan( δ tan(πα/2) ).   (2.46)

As for the signed fractional moments of skewed α-stable distributions, the following holds [121].

PROPERTY 2.3 Let X ~ S_α(δ, γ, 0). Then

S_p ≜ E X^{<p>} = ( Γ(1 − p/α) / (Γ(1 − p) sin(pπ/2)) ) · ( sin(pθ/α) / (cos θ)^{p/α} ) · γ^{p/α}.   (2.47)

Given n independent observations of a random variate X, the absolute and signed fractional moments can be estimated by the sample statistics

Â_p = (1/n) Σ_{i=1}^n |X_i|^p,   Ŝ_p = (1/n) Σ_{i=1}^n X_i^{<p>}.   (2.48)

The presence of the gamma function in the formulae presented by the propositions hampers the direct solution of these expressions. However, by taking products and ratios of FLOMs and applying the reflection property of the gamma function,

Γ(x) Γ(1 − x) = π / sin(πx),   (2.49)

a number of simple closed-form estimators for α, δ, and γ can be obtained.
FLOM estimate for α: Noting that (2.48) gives only approximations of the absolute and signed fractional-order moments, the analytic formulas (2.45) and (2.47) are used. From (2.45), the product A_p A_{−p} is given by

A_p A_{−p} = ( Γ(1 − p/α) Γ(1 + p/α) / (Γ(1 − p) Γ(1 + p)) ) · ( cos²(pθ/α) / cos²(pπ/2) ).   (2.50)

Using (2.49), the above reduces to

A_p A_{−p} = ( sin²(pπ) Γ(p) Γ(−p) cos²(pθ/α) ) / ( sin²(pπ/α) Γ(p/α) Γ(−p/α) cos²(pπ/2) ).   (2.51)

The function Γ(·) has the property

Γ(p + 1) = p Γ(p);   (2.52)

thus, using equations (2.49) and (2.52), the following is obtained:

Γ(p) Γ(−p) = −π / ( p sin(pπ) ),   (2.53)

and

Γ(p/α) Γ(−p/α) = −απ / ( p sin(pπ/α) ).   (2.54)

Taking (2.53) and (2.54) into equation (2.51) results in

A_p A_{−p} = 2 tan(pπ/2) cos²(pθ/α) / ( α sin(pπ/α) ).   (2.55)

In a similar fashion, the product S_p S_{−p} can be shown to satisfy

S_p S_{−p} tan(pπ/2) = 2 sin²(pθ/α) / ( α sin(pπ/α) ).   (2.56)

Equations (2.55) and (2.56) combined lead to the equality

tan(pθ/α) = tan(pπ/2) √η,   (2.57)

where η = S_p S_{−p} / (A_p A_{−p}). Using the properties of Γ functions and the first two propositions, other closed-form expressions for α, δ, and γ can be derived, assuming in all cases that β = 0. These FLOM estimation relations are summarized as follows.
Sinc Estimation for α: Adding the two products in (2.55) and (2.56) eliminates θ, so α is estimated as the solution to

sinc(p/α) = 2 / ( pπ [ A_p A_{−p} cot(pπ/2) + S_p S_{−p} tan(pπ/2) ] ),   (2.58)

where sinc(x) = sin(πx)/(πx). It is suggested in [121] that, given a lower bound α_LB on α, a sensible range for p is (0, α_LB/2).

Ratio Estimate for δ: Given an estimate of α, estimate θ by solving

S_p / A_p = tan(pθ/α) / tan(pπ/2).   (2.59)

Given this estimate of θ, obtain the following estimate of δ:

δ̂ = tan(θ) / tan(πα/2).   (2.60)

FLOM Estimate for γ: Given estimates of α and θ, solve (2.45) for γ:

γ̂ = [ A_p Γ(1 − p) cos(pπ/2) (cos θ)^{p/α} / ( Γ(1 − p/α) cos(pθ/α) ) ]^{α/p}.   (2.61)

Note that the estimators above are all for zero-location cases, that is, β = 0. For the more general case where β ≠ 0, the data must first be transformed into a centered sequence by use of (2.42); the FLOM estimation method is then applied to the parameters of the centered sequence, and finally the resulting δ and γ estimates must be transformed by dividing by (2 − 2^α)/(2 + 2^α) and (2 + 2^α), respectively. There are, however, two possible problems with the FLOM method. First, since the value of a sinc function lies in a finite range, when the value of the right side of (2.58) falls outside this range, (2.58) has no solution. Second, estimating α needs a proper value of p, which in turn depends on the value of α; in practice this can lead to errors in choosing p.
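A sketch of the sinc estimator for symmetric or symmetrized data (the function name, the moment order p = 0.2, and the bisection tolerance are illustrative choices; a ValueError is raised when the right side of (2.58) falls outside the sinc range):

```python
import math
import numpy as np

def alpha_sinc_estimate(x, p=0.2):
    """Solve sinc(p/alpha) = rhs for alpha by bisection, where rhs is built
    from the sample moments A_p, A_{-p}, S_p, S_{-p} as in (2.58).
    Assumes zero-location data (center or symmetrize first)."""
    ax = np.abs(np.asarray(x, dtype=float))
    a_p, a_m = np.mean(ax ** p), np.mean(ax ** -p)        # A_p, A_{-p}
    s_p = np.mean(np.sign(x) * ax ** p)                   # S_p
    s_m = np.mean(np.sign(x) * ax ** -p)                  # S_{-p}
    t = math.tan(p * math.pi / 2)
    rhs = 2.0 / (p * math.pi * (a_p * a_m / t + s_p * s_m * t))

    def sinc(u):
        return math.sin(math.pi * u) / (math.pi * u)

    lo, hi = p + 1e-6, 2.0                                # need alpha > p
    if not (sinc(p / lo) <= rhs <= sinc(p / hi)):
        raise ValueError("right side of (2.58) out of the sinc range")
    for _ in range(80):               # sinc(p/alpha) increases with alpha
        mid = 0.5 * (lo + hi)
        if sinc(p / mid) < rhs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Standard Cauchy data (alpha = 1, delta = 0)
rng = np.random.default_rng(11)
cauchy = np.tan(np.pi * (rng.uniform(size=100_000) - 0.5))
print(alpha_sinc_estimate(cauchy))
```

For the Cauchy record above the estimate lands near the true value α = 1; bisection is used because sinc(p/α) is monotonically increasing in α on (p, 2].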
EXAMPLE 2.4 Consider the first-order modeling of the RTT time series in Figure 1.3 using the estimators of the α-stable parameters. The modeling results are shown in Table 2.1. Figure 2.10 shows histograms of the data and the pdfs associated with the estimated parameters.

Table 2.1 Estimated parameters of the distribution of the RTT time series measured between a host at the University of Delaware and hosts in Australia, Sydney, Japan, and the United Kingdom.

Host             α        δ        γ          β
Australia        1.0748            0.0010     0.2533
Sydney           1.5026   1        7.6170 x   0.2359
Japan            1.0993   0.6733   0.0025     0.2462
United Kingdom   1.2180   1        0.0014     0.1091

Figure 2.10 Histogram and estimated PDF of the RTT time series measured between a host at the University of Delaware and hosts in (a) Australia, (b) Sydney, (c) Japan, and (d) the United Kingdom.
Problems

2.1 Let β̂_p denote the L_p estimator defined by

β̂_p = arg min_β Σ_{i=1}^N |x_i − β|^p.

(a) Show that when 0 < p ≤ 1, the estimator β̂_p is always selection-type (i.e., equal to one of the input samples x_i).

(b) Define β̂_0 = lim_{p→0} β̂_p. Prove that β̂_0 is selection-type, and that it is always equal to one of the most repeated values in the sample set.

2.2 The set of well-behaved samples {−5, 5, −3, 3, −1, 1} has been contaminated with an outlier sample of value 200.

(a) Plot the value of the L_p estimator β̂_p as a function of p, for 0 ≤ p ≤ 3.

(b) Assuming that the ideal location of this distribution is β = 0, interpret the qualitative robustness of the L_p estimator as a function of p.
2.3 For X ~ Cauchy(γ), find the mean and variance of X.

2.4 Let X_1, . . . , X_N denote a set of independent and identically distributed random variables with X_i ~ Cauchy(1). Show that the sample mean

X̄ = (1/N) Σ_{i=1}^N X_i

possesses the same distribution as any of the samples X_i. What does this tell about the efficiency of X̄ in Cauchy noise? Can we say X̄ is robust?
2.5 Show that Gaussian distributions are stable (i.e., show that a² + b² = c², so α = 2).

2.6 Show that Cauchy distributions are stable (i.e., show that a + b = c, so α = 1).

2.7 Find the asymptotic order of the tails of

(a) A Gaussian distribution.

(b) A Laplacian distribution.
2.9 Find the geometric power of X = tan(U), with U ~ Uniform(−π/2, π/2).

2.10 Let U ~ Uniform(0, 1). Show that W = −ln U is exponentially distributed with mean 1.
2.11 Show that the expressions in equations (2.42), (2.43), and (2.44) generate centered, deskewed, and symmetrized sequences with the parameters indicated.
3 Order Statistics

The subject of order statistics deals with the statistical properties and characteristics of a set of variables that have been ordered according to magnitude. Represent the elements of an observation vector X = [X(n), X(n − 1), . . . , X(n − N + 1)]^T as X = [X_1, X_2, . . . , X_N]^T. If the random variables X_1, X_2, . . . , X_N are arranged in ascending order of magnitude such that

X_(1) ≤ X_(2) ≤ · · · ≤ X_(N),

we denote X_(i) as the ith order statistic, for i = 1, . . . , N. The extremes X_(N) and X_(1), for instance, are useful tools in the detection of outliers. Similarly, the range X_(N) − X_(1) is well known to be a quick estimator of the dispersion of a sample set. An example to illustrate the applications of order statistics can be found in the ranking of athletes in Olympic sports. In this case, a set of N judges, generally from different nationalities, judge a particular athlete with a score bounded by a minimum assigned to a poor performance, and a maximum for a perfect score. In order to compute the overall score for a given athlete, the scores of the judges are not simply averaged. Instead, the maximum and the minimum scores given by the set of judges are discarded, and the remaining scores are then averaged to provide the final score. This trimming of the data set is consistently done because of the possible bias of judges for a particular athlete. Since this is likely to occur in an international competition, the trimmed average has evolved into the standard method of computing Olympic scores. This simple example shows the benefit of discarding, or discriminating against, a subset of samples from a larger data set based on the information provided by the sorted data.
Sorting the elements in the observation vector X constitutes a nonlinear permutation of the input vector. Consequently, even if the statistical characteristics of the input vector are exactly known, the statistical description of the sorted elements is often difficult to obtain. Simple mathematical expressions are only possible for samples that are mutually independent. Note that even in this simple case, where the input samples X_1, . . . , X_N are statistically independent, the order statistics are necessarily dependent because of the ordering on the set. The study of order statistics originated as a result of mathematical curiosity. The appearance of Sarhan and Greenberg's edited volume (1962) [171] and H. A. David's treatise on the subject (1970) [58] changed this. Order statistics have since received considerable attention from numerous researchers. A classic and masterful survey is found in H. A. David (1981) [58]. Other important references include the work on extreme order statistics by Galambos (1978) [77], Harter's treatment in testing and estimation (1970) [96], Barnett and Lewis' (1984) [28] use of order statistics on data with outliers, and the introductory text of Arnold, Balakrishnan, and Nagaraja (1992) [16]. Parallel to the theoretical advances in the area, order statistics have also found important applications in diverse areas including life-testing and reliability, quality control, robustness studies, and signal processing. The Handbook of Statistics Vol. 17, edited by Balakrishnan and Rao (1998) [24], provides an encyclopedic survey of the field of order statistics and their applications.
When the variables are independent and identically distributed (i.i.d.), and when the parent distribution is continuous, the density of the rth order statistic is formed as follows. First, decompose the event x < X_(r) ≤ x + dx into three exclusive parts: r − 1 of the samples X_i are less than or equal to x, one is between x and x + dx, and N − r are greater than x + dx. Figure 3.1a depicts the configuration of such an event. The probability that N − r samples are greater than x + dx is simply [1 − F(x + dx)]^{N−r}, the probability that one sample is between x and x + dx is f_x(x) dx, and the probability that r − 1 samples are less than or equal to x is F(x)^{r−1}. The probability corresponding to the event of having more than one sample in the interval (x, x + dx] is of order (dx)² and is negligible as dx approaches zero. The objective is to enumerate all possible outcomes of the X_i's such that the ordering partition is satisfied. Counting all possible arrangements of the N samples into the three respective groups, and using the fact that F(x + dx) → F(x) as dx → 0, we can write

Pr{x < X_(r) ≤ x + dx} = ( N! / ((r − 1)! (N − r)!) ) F(x)^{r−1} [1 − F(x)]^{N−r} f_x(x) dx.   (3.1)
The density function of the rth order statistic, f_(r)(x), follows directly from the above. The coefficient on the right side of (3.1) is a trinomial coefficient, whose structure follows from the general multinomial coefficient described next. Given a set of N objects, with k_1 labels of type 1, k_2 labels of type 2, . . ., and k_m labels of type m, and supposing that k_1 + k_2 + · · · + k_m = N, the number of ways in which we may assign the labels to the N objects is given by the multinomial coefficient

N! / ( k_1! k_2! · · · k_m! ).   (3.2)

The trinomial coefficient in (3.1) is a special case of (3.2) with k_1 = r − 1, k_2 = 1, and k_3 = N − r.
Figure 3.1 (a) The event x < X_(r) ≤ x + dx can be seen as: r − 1 of the samples X_i are less than or equal to x, one is between x and x + dx, and N − r are greater than or equal to x + dx. (b) The event x < X_(r) ≤ x + dx and y < X_(s) ≤ y + dy can be seen as: r − 1 of the samples X_i are less than x, one of the samples is between x and x + dx, s − r − 1 of the samples X_i are less than y but greater than x, one of the samples is between y and y + dy, and finally N − s of the samples are greater than y.
The joint density function of the order statistics X_(r) and X_(s), for 1 ≤ r < s ≤ N, can be found in a similar way. In this case, for x ≤ y, the joint density is denoted as f_(r,s)(x, y) and is obtained by decomposing the event

x < X_(r) ≤ x + dx,  y < X_(s) ≤ y + dy   (3.3)

into five mutually exclusive parts: r − 1 of the samples X_i are less than x, one of the samples is between x and x + dx, s − r − 1 of the samples X_i are less than y but greater than x + dx, one of the samples is between y and y + dy, and finally N − s of the samples are greater than y + dy. The decomposition of the event in (3.3) is depicted in Figure 3.1b. The probabilities of occurrence of the five listed parts are F(x)^{r−1}, f_x(x) dx, [F(y) − F(x + dx)]^{s−r−1}, f_x(y) dy, and [1 − F(y + dy)]^{N−s}. The probability corresponding to the events of having more than one sample in either of the intervals (x, x + dx] and (y, y + dy] is negligible as dx and dy approach zero. Using the multinomial counting principle to enumerate all possible occurrences in each part, and the fact that F(x + dx) → F(x) and F(y + dy) → F(y) as dx, dy → 0, we obtain the joint density

f_(r,s)(x, y) = ( N! / ((r − 1)! (s − r − 1)! (N − s)!) ) F(x)^{r−1} f_x(x) [F(y) − F(x)]^{s−r−1} f_x(y) [1 − F(y)]^{N−s}.
These density functions, however, are only valid for continuous random variables, and a different approach must be taken to find the distribution of order statistics with discontinuous parent distributions. The following approach is valid for both continuous and discontinuous distributions. Let the i.i.d. variables X_1, X_2, . . . , X_N have a parent distribution function F(x); the distribution function of the largest order statistic X_(N) is

F_(N)(x) = Pr{X_(N) ≤ x} = Pr{all X_i ≤ x} = [F(x)]^N,

due to the independence of the input samples. Similarly, the distribution function of the minimum sample X_(1) is

F_(1)(x) = Pr{X_(1) ≤ x} = 1 − Pr{X_(1) > x} = 1 − Pr{all X_i > x} = 1 − [1 − F(x)]^N,

since X_(1) is less than, or equal to, all the samples in the set. The distribution function for the general case is

F_(r)(x) = Pr{X_(r) ≤ x}
        = Pr{at least r of the X_i are less than or equal to x}
        = Σ_{i=r}^N Pr{exactly i of the X_i are less than or equal to x}
        = Σ_{i=r}^N (N choose i) [F(x)]^i [1 − F(x)]^{N−i}.   (3.5)

Letting the joint distribution function of X_(r) and X_(s), for 1 ≤ r < s ≤ N, be denoted as F_(r,s)(x, y), then for x < y we have, for discrete and continuous random variables,

F_(r,s)(x, y) = Pr{at least r of the X_i ≤ x, at least s of the X_i ≤ y}
             = Σ_{j=s}^N Σ_{i=r}^j Pr{exactly i of X_1, X_2, . . . , X_N are at most x and exactly j of X_1, X_2, . . . , X_N are at most y}
             = Σ_{j=s}^N Σ_{i=r}^j ( N! / (i! (j − i)! (N − j)!) ) [F(x)]^i [F(y) − F(x)]^{j−i} [1 − F(y)]^{N−j}.
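The binomial sum (3.5) is straightforward to evaluate; a short sketch (function name illustrative) also confirms that it reduces to the maximum and minimum cases derived above:

```python
from math import comb

def F_r(r, N, Fx):
    """Distribution of the r-th order statistic of N i.i.d. samples at a
    point x where the parent CDF equals Fx = F(x), via the sum (3.5)."""
    return sum(comb(N, i) * Fx ** i * (1 - Fx) ** (N - i)
               for i in range(r, N + 1))

N, Fx = 7, 0.3
print(F_r(N, N, Fx), Fx ** N)            # maximum: [F(x)]^N
print(F_r(1, N, Fx), 1 - (1 - Fx) ** N)  # minimum: 1 - [1 - F(x)]^N
```

The two printed pairs agree exactly, since r = N and r = 1 collapse the sum to a single term and to the complement of a single term, respectively.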
Notice that for x ≥ y, the ordering X_(r) ≤ x with X_(s) ≤ y implies that F_(r,s)(x, y) = F_(s)(y). An alternate representation of the distribution function F_(r)(x) is possible, which will prove helpful later on in the derivations of order statistics. Define the set of N samples from a uniform distribution in the closed interval [0, 1] as U_1, U_2, . . . , U_N. The order statistics of these variates are then denoted as U_(1), U_(2), . . . , U_(N). For any distribution function F(x), we define its corresponding inverse distribution function, or quantile function, F^{−1} as

F^{−1}(y) = supremum { x : F(x) ≤ y },

for 0 < y < 1. It is simple to show that if X_1, . . . , X_N are i.i.d. with a parent distribution F(x), then the transformation F^{−1}(U_i) will lead to variables with the same distribution as X_i [157]. This is written as

F^{−1}(U_i) =d X_i,   (3.8)

where the symbol =d represents equality in distribution. Since cumulative distribution functions are monotonic, the smallest U_i will result in the smallest X_i, the largest U_i will result in the largest X_i, and so on. It follows that

F^{−1}(U_(r)) =d X_(r).   (3.9)
The density function of U_(r) follows from (3.1) as

f_(r)(u) = ( N! / ((r − 1)! (N − r)!) ) u^{r−1} (1 − u)^{N−r},  0 ≤ u ≤ 1.   (3.10)

Integrating the above, we can obtain the distribution function

F_(r)(u) = ( N! / ((r − 1)! (N − r)!) ) ∫_0^u t^{r−1} (1 − t)^{N−r} dt.

Using the relationship F^{−1}(U_(r)) =d X_(r), we obtain from the above the general expression

F_(r)(x) = ( N! / ((r − 1)! (N − r)!) ) ∫_0^{F(x)} t^{r−1} (1 − t)^{N−r} dt,

which is an incomplete beta function valid for any parent distribution F(x) of the i.i.d. samples X_i [16]. The statistical analysis of order statistics in this section has assumed that the input samples are i.i.d. As one can expect, if the i.i.d. condition is relaxed to the case of dependent variates, the distribution functions of the order statistics are no longer straightforward to
compute. Procedures to obtain these are found in [58].

Recursive Relations for Order Statistics Distributions

Distributions of order statistics can also be computed recursively, as in Boncelet (1987) [36]. No assumptions are made about the random variables: they can be discrete, continuous, mixed, i.i.d. or not. Let X_(r):N denote the rth order statistic out of N random variables. For first-order distributions let −∞ = t_0 < t_1 < t_2 = +∞ and, for second-order distributions, let −∞ = t_0 < t_1 < t_2 < t_3 = +∞ and let r_1 ≤ r_2. Then, for events of order statistics,

Pr{X_(r):N+1 ≤ t_1} = Pr{X_(r):N ≤ t_1, X_{N+1} > t_1} + Pr{X_(r−1):N ≤ t_1, X_{N+1} ≤ t_1}.   (3.11)

In the first-order case, (3.11) states that there are two ways the rth order statistic out of N + 1 random variables can be less than or equal to t_1: one, the (N + 1)st sample is larger than t_1 and the rth order statistic out of N is less than or equal to t_1; and two, the (N + 1)st sample is less than or equal to t_1 and the (r − 1)st order statistic out of N is less than or equal to t_1. In the second-order case, the event in question is similarly decomposed into three events. Notice that the events on the right-hand side are disjoint, since the events on X_{N+1} partition the real line into nonoverlapping segments. A direct consequence of this is a recursive formula for calculating distributions for independent X_i:

F_(r):N+1(t) = [1 − F_{N+1}(t)] F_(r):N(t) + F_{N+1}(t) F_(r−1):N(t),

with the convention F_(0):N(t) ≡ 1.
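The recursion for independent samples can be sketched as follows (function name illustrative); for i.i.d. samples it reproduces the binomial sum of the previous section, and it applies unchanged to non-identically distributed samples:

```python
from math import comb

def order_stat_cdf(probs, r):
    """CDF of the r-th order statistic of independent (not necessarily
    identically distributed) samples via the recursion behind (3.11):
    probs[i] = F_i(t) at a fixed threshold t; F_(0):n is taken as 1."""
    F = [1.0] + [0.0] * len(probs)                 # F[k] = F_(k):0
    for p in probs:                                # add one sample at a time
        new = [1.0] + [0.0] * len(probs)
        for k in range(1, len(F)):
            new[k] = (1 - p) * F[k] + p * F[k - 1]
        F = new
    return F[r]

# i.i.d. check against the binomial sum (3.5): N = 6, F(t) = 0.4
direct = lambda r: sum(comb(6, i) * 0.4 ** i * 0.6 ** (6 - i)
                       for i in range(r, 7))
print(order_stat_cdf([0.4] * 6, 2), direct(2))
# Heterogeneous samples: the maximum gives Pr{all X_i <= t} = prod F_i(t)
print(order_stat_cdf([0.2, 0.5, 0.9], 3))
```

The recursion costs O(N²) operations for all ranks at one threshold, and avoids any distributional assumptions beyond independence.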
3.2 MOMENTS OF ORDER STATISTICS

The Nth-order density function provides a complete characterization of a set of N ordered samples. These distributions, however, can be difficult to obtain. Moments of order statistics, on the other hand, can be easily estimated and are often sufficient to characterize the data. The moments of order statistics are defined in the same fashion as moments of arbitrary random variables. Here we always assume that the sample size is N. The mean, or expected value, of the rth order statistic is denoted as μ_(r) and is found as

μ_(r) = E(X_(r)) = ( N! / ((r − 1)! (N − r)!) ) ∫_{−∞}^{∞} x F(x)^{r−1} [1 − F(x)]^{N−r} f_x(x) dx.

The pth raw moment of the rth order statistic can also be defined similarly from (3.9) and (3.10) as

μ_(r)^{(p)} = E(X_(r)^p) = ( N! / ((r − 1)! (N − r)!) ) ∫_0^1 [F^{−1}(u)]^p u^{r−1} (1 − u)^{N−r} du,

for 1 ≤ r ≤ N. Expectation of order-statistic products, or order-statistic correlation, can also be defined, for 1 ≤ r ≤ s ≤ N, as
E(X_(r) X_(s)) = [B(r, s − r, N − s + 1)]^{−1} ∫_0^1 ∫_0^v F^{−1}(u) F^{−1}(v) u^{r−1} (v − u)^{s−r−1} (1 − v)^{N−s} du dv,   (3.17)

where

B(a, b, c) = (a − 1)! (b − 1)! (c − 1)! / (a + b + c − 1)!.

Note that (3.17) does not allude to a time-
shift correlation, but to the correlation of two different order-statistic variates taken from the same sample set. The statistical characteristics of the order-statistics X(l),X ( z ) ,. . . ,X(N)
are not homogeneous, since
for T # s, as expected since the expected value of X(')should be less than the expected value of X(T+l).In general, the expectation of products of order statistics are not symmetric
q x ( r ) x ( r + s 1) # E ( X ( r ) X ( r - s ) ) .
This symmetry only holds in very special cases. One such case arises when the parent distribution is symmetric and r = (N+1)/2, such that X_{(r)} is the median. The covariance of X_{(r)} and X_{(s)} is written as

$$\mathrm{cov}\left[X_{(r)}, X_{(s)}\right] = E\left\{\left(X_{(r)} - \mu_{(r)}\right)\left(X_{(s)} - \mu_{(s)}\right)\right\}.$$
Tukey (1958) [187] derived the nonnegative property for the covariance of order statistics: cov[X_{(r)}, X_{(s)}] ≥ 0.
3.2.1 Order Statistics From Uniform Distributions

To illustrate the concepts presented above, consider N samples of a standard uniform distribution with density function f_U(u) = 1 and distribution function F_U(u) = u for 0 ≤ u ≤ 1. Letting U_{(r)} be the rth smallest sample, or order statistic, the density function of U_{(r)} is obtained by substituting the corresponding values in (3.1), resulting in

$$f_{(r)}(u) = \frac{N!}{(r-1)!\,(N-r)!}\, u^{r-1} (1-u)^{N-r}, \qquad (3.21)$$

also in the interval 0 ≤ u ≤ 1. The distribution function follows immediately as

$$F_{(r)}(u) = \frac{N!}{(r-1)!\,(N-r)!} \int_0^u t^{r-1} (1-t)^{N-r}\, dt,$$

or alternatively, using (3.5), as

$$F_{(r)}(u) = \sum_{i=r}^{N} \binom{N}{i} u^i \left[1 - u\right]^{N-i}.$$
The mode of the density function can be found at (r − 1)/(N − 1). The kth moment of U_{(r)} is found from the above as

$$\mu_{(r)}^{(k)} = B(r+k,\, N-r+1)\,/\,B(r,\, N-r+1), \qquad (3.25)$$

where we make use of the complete beta function

$$B(p, q) = \int_0^1 t^{p-1} (1-t)^{q-1}\, dt$$

for p, q > 0. Simplifying (3.25) leads to the kth moment

$$\mu_{(r)}^{(k)} = \frac{N!\,(r+k-1)!}{(N+k)!\,(r-1)!}.$$

In particular, the first moment of the rth-order statistic is

$$\mu_{(r)} = \frac{r}{N+1}.$$
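As a numerical sanity check, the simplified moment expression μ_{(r)}^{(k)} = N!(r+k−1)! / ((N+k)!(r−1)!) can be compared with a direct integration of the density (3.21). A short Python sketch (function names are illustrative):

```python
from math import factorial

def moment_formula(k, r, N):
    """k-th raw moment of the r-th order statistic of N standard
    uniform samples, from the simplified closed form."""
    return factorial(N) * factorial(r + k - 1) / (factorial(N + k) * factorial(r - 1))

def moment_by_quadrature(k, r, N, steps=100_000):
    """Same moment by midpoint-rule integration of u^k against the
    density f_(r)(u) of (3.21)."""
    c = factorial(N) / (factorial(r - 1) * factorial(N - r))
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) * h
        total += (u ** k) * c * u ** (r - 1) * (1 - u) ** (N - r) * h
    return total

N = 11
assert abs(moment_formula(1, 6, N) - 6 / (N + 1)) < 1e-12   # first moment is r/(N+1)
assert abs(moment_formula(2, 3, N) - moment_by_quadrature(2, 3, N)) < 1e-6
```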
To gain an intuitive understanding of the distribution of order statistics, it is helpful to plot f_{(r)}(u) in (3.21) for various values of r. For N = 11, Figure 3.2 depicts the density functions of the 2nd-, 3rd-, 6th- (median), 9th-, and 10th-order statistics of the samples. With the exception of the median, all other order statistics exhibit asymmetric density functions. Other characteristics of these density functions, such as their mode and shape, can be readily observed and interpreted in an intuitive fashion.
Figure 3.2 Density functions of X_{(2)}, X_{(3)}, X_{(6)} (median), X_{(9)}, and X_{(10)} for a set of eleven uniformly distributed samples.

Next consider the joint density function of U_{(r)} and U_{(s)} (1 ≤ r < s ≤ N). From (3.4) we obtain

$$f_{(r,s)}(u, v) = \frac{N!}{(r-1)!\,(s-r-1)!\,(N-s)!}\, u^{r-1} (v-u)^{s-r-1} (1-v)^{N-s}. \qquad (3.28)$$
Again there are two equivalent expressions for the joint cumulative distribution function; the first is obtained by integrating (3.28) and the second from Eq. (3.6):

$$F_{(r,s)}(u, v) = \frac{N!}{(r-1)!\,(s-r-1)!\,(N-s)!} \int_0^u \int_t^v t^{r-1} (w-t)^{s-r-1} (1-w)^{N-s}\, dw\, dt$$

for 0 ≤ u < v ≤ 1. The joint density function allows the computation of the (k_r, k_s)th product moment of (U_{(r)}, U_{(s)}), which, after some simplifications, is found as
In particular, for k_r = k_s = 1, the joint moment becomes

$$\mu_{(r,s)} = \frac{r\,(s+1)}{(N+1)(N+2)}.$$
As with their marginal densities, an intuitive understanding of bivariate density functions of order statistics can be gained by plotting f_{(r,s)}(u, v). Figure 3.3 depicts the bivariate density function described in (3.28) for the 2nd- and 6th- (median) order statistics of a set of eleven uniformly distributed samples. Note how the marginal densities are satisfied as the bivariate density is integrated over each variable. Several characteristics of the bivariate density, such as the constraint that only regions where u < v have mass, can be appreciated in the plot.
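The joint moment μ_{(r,s)} = r(s+1)/((N+1)(N+2)) lends itself to a quick Monte Carlo check: sort batches of uniform samples and average the product of the chosen order statistics. A sketch (trial counts and the seed are arbitrary choices):

```python
import random

def joint_moment_mc(r, s, N, trials=100_000, seed=1):
    """Monte Carlo estimate of E[U_(r) U_(s)] for N standard uniform
    samples (1-based ranks, r < s)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        u = sorted(rng.random() for _ in range(N))
        acc += u[r - 1] * u[s - 1]
    return acc / trials

N, r, s = 11, 2, 6
exact = r * (s + 1) / ((N + 1) * (N + 2))   # the closed form above
assert abs(joint_moment_mc(r, s, N) - exact) < 5e-3
```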
3.2.2 Recurrence Relations

Order-statistic moments can be difficult to obtain for observations of general random variables. In such cases, these moments must be evaluated by numerical procedures. Moments of order statistics have been given considerable importance in the statistical literature and have been numerically tabulated extensively for several distributions [58, 96]. Order-statistic moments satisfy a number of recurrence relations and identities that can reduce the number of direct computations. Many of these relations express higher-order moments in terms of lower-order moments, thus simplifying the evaluation of higher-order moments. Since the recurrence relations between moments often involve sample sets of lower orders, it is convenient to introduce the notation X_{(i):N} to represent the ith-order statistic taken from a set of N samples. Similarly, μ_{(i):N} represents the expected value of X_{(i):N}. Many recursive relations for moments of order statistics are derived from the identities

$$\sum_{i=1}^{N} X_{(i)}^k = \sum_{i=1}^{N} X_i^k \qquad (3.31)$$
Figure 3.3 Bivariate density function of X_{(6)} (median) and X_{(2)} for a set of eleven uniformly distributed samples.

for k ≥ 1, and

$$\sum_{i=1}^{N} \sum_{\substack{j=1 \\ j \neq i}}^{N} X_{(i)}^{k_i} X_{(j)}^{k_j} = \sum_{i=1}^{N} \sum_{\substack{j=1 \\ j \neq i}}^{N} X_i^{k_i} X_j^{k_j} \qquad (3.32)$$

for k_i, k_j ≥ 1, which follows from the principle that the sum of a set of samples raised to the kth power is unchanged by the order in which they are summed. Taking expectations of (3.31) leads to

$$\sum_{i=1}^{N} \mu_{(i):N}^{(k)} = N\, E\left(X_1^k\right)$$

for N ≥ 2 and k ≥ 1. Similarly, from (3.32) the following is obtained:

$$\sum_{i=1}^{N} \sum_{\substack{j=1 \\ j \neq i}}^{N} E\left(X_{(i)}^{k_i} X_{(j)}^{k_j}\right) = N(N-1)\, E\left(X_1^{k_i} X_2^{k_j}\right)$$

for k_i, k_j ≥ 1. These identities are simple and can be used to check the accuracy of computation of moments of order statistics. Some other useful recurrence relations are presented in the following properties.
PROPERTY 3.1 For 1 ≤ i ≤ N − 1 and k ≥ 1,

$$i\, \mu_{(i+1):N}^{(k)} + (N-i)\, \mu_{(i):N}^{(k)} = N\, \mu_{(i):N-1}^{(k)}.$$
This property can be obtained from equation (3.16) as follows. Since

$$i\, \mu_{(i+1):N}^{(k)} = \frac{N!}{(i-1)!\,(N-i-1)!} \int_{-\infty}^{\infty} x^k F(x)^{i} \left[1-F(x)\right]^{N-i-1} f(x)\, dx$$

and

$$(N-i)\, \mu_{(i):N}^{(k)} = \frac{N!}{(i-1)!\,(N-i-1)!} \int_{-\infty}^{\infty} x^k F(x)^{i-1} \left[1-F(x)\right]^{N-i} f(x)\, dx,$$

adding the two and using F(x) + [1 − F(x)] = 1 gives

$$\frac{N!}{(i-1)!\,(N-i-1)!} \int_{-\infty}^{\infty} x^k F(x)^{i-1} \left[1-F(x)\right]^{N-i-1} f(x)\, dx = N\, \mu_{(i):N-1}^{(k)}.$$
Property 3.1 describes a relation known as the triangle rule [16], which allows one to compute the kth moment of a single order statistic in a sample of size N if these moments in samples of size less than N are already available. By repeated use of the same recurrence relation, the kth moments of the remaining N − 1 order statistics can be subsequently obtained. Hence, one could start with μ_{(1):N}^{(k)} or μ_{(N):N}^{(k)} and recursively find the moments of the smaller- or larger-order statistics. A different recursion, published by Srikantan [180], can also be used to recursively compute single moments of order statistics by expressing the kth moment of the ith-order statistic in a sample of size N in terms of the kth moments of the largest order statistics in samples of size N and less.
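Because the uniform order-statistic moments of Section 3.2.1 are available in closed form, the triangle rule of Property 3.1 can be verified numerically in a few lines (a sketch; the function name is illustrative):

```python
from math import factorial

def mu(k, i, N):
    """k-th moment of the i-th of N standard uniform order statistics:
    N! (i+k-1)! / ((N+k)! (i-1)!)."""
    return factorial(N) * factorial(i + k - 1) / (factorial(N + k) * factorial(i - 1))

# triangle rule: i*mu(k, i+1, N) + (N-i)*mu(k, i, N) = N*mu(k, i, N-1)
N, k = 11, 3
for i in range(1, N):
    lhs = i * mu(k, i + 1, N) + (N - i) * mu(k, i, N)
    rhs = N * mu(k, i, N - 1)
    assert abs(lhs - rhs) < 1e-12
```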
PROPERTY 3.2 For 1 ≤ i ≤ N − 1 and k ≥ 1.
The proof of this property is left as an exercise.
3.3 ORDER STATISTICS CONTAINING OUTLIERS

Order statistics have the characteristic that they allow us to discriminate against outlier contamination. Hence, when properly designed, statistical estimates using ordered statistics can ignore clearly inappropriate samples. In the context of robustness, it is useful to obtain the distribution functions and moments of order statistics arising from a sample containing outliers. Here, the case where the contamination consists of a single outlier is considered. These results can be easily generalized to higher
Figure 3.4 (a) Triangle recursion for single moments; (b) recurrence relation from moments of maxima of lower orders.

orders of contamination. The importance of a systematic study of order statistics
from an outlier model has been demonstrated in several extensive studies [3, 59]. First, the distributions of order statistics obtained from a sample of size N when an unidentified single outlier contaminates the sample are derived. Let the sample set of size N consist of N − 1 i.i.d. variates X_i, i = 1, ..., N − 1, and the contaminant variable Y, which is also independent of the other samples in the sample set. Let F(x) and G(x) be the continuous parent distributions of X_i and Y, respectively. Furthermore, let

$$Z_{(1):N} \le Z_{(2):N} \le \cdots \le Z_{(N):N}$$

be the order statistics obtained by arranging the N independent observations in increasing order of magnitude. The distribution functions of these order statistics are now obtained. The distribution of the maximum, denoted as H_{(N):N}(z), is

$$H_{(N):N}(z) = \Pr\{\text{all of } X_1, \ldots, X_{N-1}, \text{ and } Y \le z\} = F(z)^{N-1}\, G(z).$$

The distribution of the ith-order statistic, for 1 ≤ i < N, can be obtained as follows:

$$\begin{aligned} H_{(i):N}(z) &= \Pr\{\text{at least } i \text{ of } X_1, X_2, \ldots, X_{N-1}, Y \le z\} \\ &= \Pr\{\text{exactly } i-1 \text{ of } X_1, X_2, \ldots, X_{N-1} \le z \text{ and } Y \le z\} + \Pr\{\text{at least } i \text{ of } X_1, X_2, \ldots, X_{N-1} \le z\} \\ &= \binom{N-1}{i-1} F(z)^{i-1} \left[1 - F(z)\right]^{N-i} G(z) + F_{(i):N-1}(z), \end{aligned}$$
where F_{(i):N-1}(x) is the distribution of the ith-order statistic in a sample of size N − 1 drawn from a parent distribution F(x). The density function of Z_{(i):N} can be obtained by differentiating the above or by direct derivation, which is left as an exercise:

$$\begin{aligned} h_{(i):N}(z) &= \frac{(N-1)!}{(i-2)!\,(N-i)!}\, F(z)^{i-2} \left[1-F(z)\right]^{N-i} G(z)\, f(z) \\ &\quad + \frac{(N-1)!}{(i-1)!\,(N-i)!}\, F(z)^{i-1} \left[1-F(z)\right]^{N-i} g(z) \\ &\quad + \frac{(N-1)!}{(i-1)!\,(N-i-1)!}\, F(z)^{i-1} \left[1-F(z)\right]^{N-i-1} \left[1-G(z)\right] f(z), \end{aligned}$$
where the first term drops out if i = 1, and the last term if i = N. The effect of contamination on order statistics is illustrated in Figure 3.5, which depicts the densities of Z_{(2)}, Z_{(6)} (median), and Z_{(10)} for a sample set of size 11 of zero-mean, double-exponential random variables. The dotted curves are the densities where no contamination exists. In the contaminated case, one of the random variables is modified such that its mean is shifted to 20. The effect of the contamination on the second-order statistic is negligible, and the density of the median is only slightly affected, as expected; the effect on the 10th-order statistic, on the other hand, is severe.
Figure 3.5 Density functions of Z_{(2)}, Z_{(6)} (median), and Z_{(10)} with (solid) and without (dotted) contamination.
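The experiment behind Figure 3.5 is easy to reproduce by simulation. The sketch below (sample counts, the seed, and the function name are arbitrary choices) draws Laplacian variates as differences of exponentials and compares the averages of selected order statistics with and without the shifted contaminant:

```python
import random

def order_stat_means(shift, N=11, ranks=(2, 6, 10), trials=20_000, seed=7):
    """Average value of the given order statistics (1-based ranks) when
    N - 1 samples are standard Laplacian and one extra sample is
    Laplacian with mean `shift` (shift = 0 is the uncontaminated case)."""
    rng = random.Random(seed)
    lap = lambda: rng.expovariate(1.0) - rng.expovariate(1.0)  # standard Laplace
    sums = [0.0] * len(ranks)
    for _ in range(trials):
        z = sorted([lap() for _ in range(N - 1)] + [shift + lap()])
        for j, r in enumerate(ranks):
            sums[j] += z[r - 1]
    return [s / trials for s in sums]

clean = order_stat_means(0.0)    # dotted curves of Figure 3.5
dirty = order_stat_means(20.0)   # one variable with mean shifted to 20
bias = [abs(d - c) for c, d in zip(clean, dirty)]
# Z_(2) and the median barely move; Z_(10) shifts substantially
assert bias[0] < 0.3 and bias[1] < 0.3
assert bias[2] > 0.5 and bias[2] > 3 * bias[0]
```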
The discussion of order statistics would not be complete if the statistical relationships between the order statistics and the nonordered samples were not described. To begin, it is useful to describe the statistics of ranks. Sorting the elements X_1, ..., X_N defines a set of N keys r_i, for i = 1, ..., N, where the rank key r_i identifies the location of X_i among the sorted set of samples X_{(1)}, ..., X_{(N)}. If the input elements to the sorter are i.i.d., each sample X_i is equally likely to be ranked first, second, or any arbitrary rank. Hence

$$\Pr\{r_i = r\} = \frac{1}{N} \qquad \text{for } r = 1, 2, \ldots, N.$$

The expected value of each rank key is then E{r_i} = (N + 1)/2. Similarly, the bivariate distribution of the two keys r_i and r_j is given by

$$\Pr\{r_i = r,\; r_j = s\} = \frac{1}{N(N-1)} \qquad \text{for } r \neq s = 1, 2, \ldots, N.$$
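Both rank distributions can be confirmed by exhaustive enumeration: for i.i.d. continuous samples every assignment of ranks to positions is equally likely, so it suffices to count permutations. For example:

```python
from itertools import permutations
from fractions import Fraction

N = 4
# every ordering of N distinct i.i.d. samples is equally likely, so we can
# enumerate the N! rank assignments directly
perms = list(permutations(range(1, N + 1)))

# marginal: Pr{r_i = r} = 1/N for every position i and rank r
for i in range(N):
    for r in range(1, N + 1):
        count = sum(1 for p in perms if p[i] == r)
        assert Fraction(count, len(perms)) == Fraction(1, N)

# bivariate: Pr{r_i = r, r_j = s} = 1/(N(N-1)) for i != j and r != s
i, j = 0, 2
for r in range(1, N + 1):
    for s in range(1, N + 1):
        if r == s:
            continue
        count = sum(1 for p in perms if p[i] == r and p[j] == s)
        assert Fraction(count, len(perms)) == Fraction(1, N * (N - 1))
```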
The joint distribution function of the rth order statistic X_{(r)} and the ith input sample is derived next. Again, let the sample set X_1, X_2, ..., X_N be i.i.d. with a parent distribution F(x). Since the observation samples are i.i.d., the joint distribution of X_{(r)} and X_i is valid for any arbitrary value of i. The joint distribution function of X_i and X_{(r)} is found first for the case X_i ≤ X_{(r)}. Since x ≤ z, given that X_i ≤ x, the second term on the right-hand side of the above equation is simply the probability that at least r − 1 of the remaining N − 1 samples are less than or equal to z. For the case X_i > X_{(r)}, an analogous decomposition holds, and these probabilities can be shown to hold for z < x. The cross moments of X_i and X_{(r)} can be found through these expressions, but an easier alternative method has been described in [154], as stated in the next property.
PROPERTY 3.3 The cross moment of the rth order statistic and the nonordered sample X_i for an N i.i.d. sample set satisfies the relation

$$E\left(X_{(r)} X_i\right) = \frac{1}{N} \sum_{j=1}^{N} E\left(X_{(r)} X_{(j)}\right). \qquad (3.36)$$

This property follows from the relation

$$\sum_{j=1}^{N} X_{(j)} = \sum_{j=1}^{N} X_j. \qquad (3.37)$$

Substituting the above into the right-hand side of (3.36) leads to

$$\frac{1}{N} \sum_{j=1}^{N} E\left(X_{(r)} X_{(j)}\right) = \frac{1}{N}\, E\left(X_{(r)} \sum_{j=1}^{N} X_j\right) = \frac{1}{N} \sum_{j=1}^{N} E\left(X_{(r)} X_j\right).$$

Since all the input samples are i.i.d., E(X_{(r)} X_j) is the same for every j, and the property follows directly.
Problems

3.1 Let X_1, ..., X_N be i.i.d. variates, X_i having the geometric density function f(x) = q^x p, with q = 1 − p, for 0 < p < 1 and integer x ≥ 0. Show that X_{(1)} is distributed geometrically.
3.2 For a random sample of size N from a continuous distribution whose density function is symmetric about x = μ:

(a) Show that f_{(r)}(x) and f_{(N−r+1)}(x) are mirror images of each other in x = μ; that is,

$$f_{(r)}(\mu + x) = f_{(N-r+1)}(\mu - x).$$

(b) Generalize (a) to joint distributions of order statistics.
3.3 Let X_1, X_2, X_3 be independent and identically distributed observations taken from the density function f(x) = 2x for 0 < x < 1, and 0 elsewhere.

(a) Show that the median of the distribution is 1/√2.

(b) What is the probability that the smallest sample in the set exceeds the median of the distribution?

3.4 Given the N marginal density functions f_{(i)}(x), 1 ≤ i ≤ N, of a set of i.i.d. variables, show that the average probability density function is identical to the parent density function f(x). That is, show

$$f(x) = \frac{1}{N} \sum_{i=1}^{N} f_{(i)}(x).$$
3.5 Let X_1, X_2, ..., X_N be N i.i.d. samples with a Bernoulli parent density function such that Pr{X_i = 1} = p and Pr{X_i = 0} = 1 − p, with 0 < p < 1.

(a) Find Pr{X_{(r)} = 1} and Pr{X_{(r)} = 0}.

(b) Derive the bivariate distribution function of X_{(r)} and X_{(j)}.

(c) Find the moments μ_{(r)} and μ_{(r,s)}.

3.6 Show that in odd-sized random samples from i.i.d. continuous distributions, the expected value of the sample median equals the median of the parent distribution.
3.7 Show that the distribution function of the midrange m = ½(X_{(1)} + X_{(N)}) of N i.i.d. continuous variates is

$$F(m) = N \int_{-\infty}^{m} \left[F_X(2m - x) - F_X(x)\right]^{N-1} f_X(x)\, dx.$$

3.8 For the geometric distribution with Pr(X_i = x) = p q^x for x ≥ 0, where q = 1 − p, show that for 1 ≤ i ≤ N
3.9 Consider a set of 3 samples {X_1, X_2, X_3}. While the sample X_3 is independent and uniformly distributed in the interval [0, 1], the other two samples are mutually dependent with a joint density function

$$f(X_1, X_2) = \tfrac{1}{2}\, \delta(X_1 - 1,\, X_2 - 1) + \tfrac{1}{2}\, \delta(X_1 - 1,\, X_2),$$

where δ(·, ·) is a two-dimensional Dirac delta function.

(a) Find the distribution function of X_{(3)}.

(b) Find the distribution function of the median.

(c) Is the distribution of X_{(1)} symmetric to that of X_{(3)}? Explain.
3.10 Prove the relation in Property 3.2, starting from the definition of μ_{(i):N}^{(k)} and simplifying, where (n)_m denotes the product n(n − 1) ⋯ (n − m + 1).
3.11 Consider a sequence X_1, X_2, ... of independent and identically distributed random variables with a continuous parent distribution F(x). A sample X_k is called outstanding if X_k > max(X_1, X_2, ..., X_{k−1}) (by definition, X_1 is outstanding). Prove that

$$\Pr\{X_k > \max(X_1, X_2, \ldots, X_{k-1})\} = \frac{1}{k}.$$
Statistical Foundations of Filtering

Filtering and parameter estimation are intimately related, due to the fact that information is carried, or can be inserted, into one or more parameters of a signal at hand. In AM and FM signals, for example, the information resides in the envelope and instantaneous frequency of the modulated signals, respectively. In general, information can be carried in a number of signal parameters, including but not limited to the mean, variance, phase, and of course frequency. The problem then is to determine the value of the information parameter from a set of observations in some optimal fashion. If one could directly observe the value of the parameter, there would be no difficulty. In practice, however, the observation contains noise, and in this case, a
statistical procedure to estimate the value of the parameter is needed. Consider a simple example to illustrate the formulation and concepts behind parameter estimation. Suppose that a constant signal β is transmitted through a channel that adds Gaussian noise Z_i. For the sake of accuracy, several independent observations X_i are measured, from which the value of β can be inferred. A suitable model for this problem is of the form

$$X_i = \beta + Z_i, \qquad i = 1, 2, \ldots, N.$$

Thus, given the sample set X_1, X_2, ..., X_N, the goal is to derive a rule for processing the observation samples that will yield a good estimate of β. It should be emphasized that the parameter β, in this formulation, is unknown but fixed; there is no randomness associated with the parameter itself. Moreover, since the samples in this example deviate about the parameter β, the estimate seeks to determine the value of the location parameter. Estimates of this kind are known as location estimates. As
it will become clear later on, the location estimation problem is key in the formulation of the optimal filtering problem. Several methods of estimating β are possible for the example at hand. The sample mean X̄, given by

$$\bar{X} = \frac{1}{N} \sum_{i=1}^{N} X_i,$$

is a natural choice. An alternative would be the sample median, in which we order the observation samples and then select the one in the middle. We might also use a trimmed mean, where the largest and smallest samples are first discarded and the remaining N − 2 samples are averaged. All of these choices are valid estimates of location. Which of these estimators, if any, is best will depend on the criterion that is selected. In this chapter, several types of location estimates are discussed. After a short introduction to the properties of estimators, the method of maximum likelihood estimation is presented with criteria for the "goodness" of an estimate. The class of M-estimators is discussed next, generalizing the concepts behind maximum likelihood estimation by introducing the concept of robust estimation. The application of location estimators to the smoothing of signals is introduced at the end of the chapter.
For any application at hand, as in our example, there can be a number of possible estimators from which one can choose. Of course, one estimator may be adequate for some applications but not for others. Describing how good an estimator is, and under which circumstances, is important. Since estimators are in essence procedures that operate on observations that are random variables, the estimators themselves are random variables. The estimates, as for any random variable, can be described by a probability density function, denoted as f_β̂(y|β), where y is a possible value for the estimate. Since this density function can change for different estimation rules, the densities alone provide a cumbersome description. Instead, we can resort to the statistical properties of the estimates as a means to quantify their characteristics. The statistical properties can, in turn, be used for purposes of comparison among various estimation alternatives.
Unbiased Estimators A typical probability density f_β̂(y|β) associated with an estimate is given in Figure 4.1, where the actual value of the parameter β is shown. It would be desirable for the estimate to be relatively close to the actual value of β. It follows that a good estimator will have its density function clustered as tightly as possible about β. If the density is not clustered, or if it is clustered about some other point, the estimator is poorer. Since the mean and variance of the density are good measures of where and how clustered the density function is, a good estimator is one for which the mean of β̂ is close to β and for which the variance of β̂ is small.

Figure 4.1 Probability density function associated with an unbiased location estimator.

In some cases, it is possible to design estimators for which the mean of β̂ is always equal to the true value of β. When this desirable property holds for all values of β, the estimator is referred to as unbiased. Thus, the N-sample estimate of β, denoted as β̂_N, is said to be unbiased if

$$E\{\hat{\beta}_N\} = \beta.$$

In addition, the variance of the estimate determines its precision. If an unbiased estimate has low variance, then it will provide a more reliable estimate than other unbiased estimates with inherently larger variances. The sample mean in the previous example is an unbiased estimate, since E{β̂_N} = β, with a variance that follows as

$$\mathrm{var}\{\hat{\beta}_N\} = \frac{\sigma_z^2}{N},$$
where σ_z² is the channel noise variance. Clearly, the precision of the estimate improves as the number of observations increases.

Efficient Estimators The mean and variance of an estimate are indicators of quality. If we restrict our attention to only those estimators that are unbiased, we in effect reduce the measure of quality to one dimension, where we can define the best estimator in this class as the one that attains the minimum variance. Although at first this may seem of limited use, since we would have to search among all unbiased estimators to determine which has the lowest variance, it turns out that a lower bound on the variance of any unbiased estimator exists. Thus, if a given estimator is found to have a variance equal to that of the bound, the best estimator has been identified. The bound is credited to Cramér and Rao [56]. Let f(X; β) be the density function of the observations X given the value of β. For a scalar real parameter, if β̂ is an unbiased estimate of β, its variance is bounded by
$$\mathrm{var}\{\hat{\beta}\} \ge \left[ E\left\{ \left( \frac{\partial \ln f(\mathbf{X};\beta)}{\partial \beta} \right)^{\!2} \right\} \right]^{-1}, \qquad (4.2)$$

provided that the partial derivative of the log-likelihood function exists and is absolutely integrable. A second form of the Cramér–Rao bound can be written as

$$\mathrm{var}\{\hat{\beta}\} \ge \left[ -E\left\{ \frac{\partial^2 \ln f(\mathbf{X};\beta)}{\partial \beta^2} \right\} \right]^{-1}, \qquad (4.3)$$

which is valid if the second partial derivative of the log-likelihood exists and is absolutely integrable. Proofs of these bounds can be found in [32, 126]. Although there is no guarantee that an unbiased estimate exists whose variance satisfies the Cramér–Rao bound with equality, if one is found, we are certain that it is the best estimator in the sense of minimum variance, and it is referred to as an efficient estimator. Efficiency can also be used as a relative measure between two estimators: an estimate is said to be efficient with respect to another estimate if it has a lower variance. If this relative efficiency is coupled with the order of an estimate, the following concept emerges: if β̂_N is unbiased and efficient with respect to the alternative for all N, then β̂_N is said to be
Having a set of observation samples, a number of approaches can be taken to derive an estimate. Among these, the method of maximum likelihood (ML) is the most popular approach, since it allows the construction of estimators even for uncommonly challenging problems. ML estimation is based on a relatively simple concept: different distributions generate different data samples, and any given data sample is more likely to have come from some population than from others [99]. Conceptually, a set of observations, X_1, X_2, ..., X_N, are postulated to be values taken on by random variables assumed to follow the joint distribution f(X_1, X_2, ..., X_N; β), where β is a parameter of the distribution. The parameter β is assumed unknown but fixed, and in parameter estimation one tries to specify the best procedure to estimate the value of the parameter β from a given set of measured data. In the method of maximum likelihood, the best estimate of β is the value for which the function f(X_1, X_2, ..., X_N; β) is at its maximum:

$$\hat{\beta}_{ML} = \arg\max_{\beta} f(X_1, X_2, \ldots, X_N; \beta),$$

where the parameter is variable while the observation samples X_1, X_2, ..., X_N are fixed. The density function, when viewed as a function of β for fixed values of the observations, is known as the likelihood function. The philosophy of maximum likelihood estimation is elegant and simple. Maximum likelihood estimates are also very powerful due to the notable property they enjoy that relates them to the Cramér–Rao bound: it can be shown that if an efficient estimate exists, the maximum likelihood estimate is efficient [32]. Thanks to this property, maximum likelihood estimation has evolved into one of the most popular methods of estimation. In maximum likelihood location estimates, the parameter of interest is the location. Assuming independence in this model, each of the samples in the set follows some distribution

$$\Pr(X_i \le x) = F(x - \beta),$$

where F(·) corresponds to a distribution that is symmetric about 0.
Location Estimation in Gaussian Noise Assume that the observation samples X_1, X_2, ..., X_N are i.i.d. Gaussian with a constant but unknown mean β. The maximum-likelihood estimate of location is the value β̂ that maximizes the likelihood function

$$f(X_1, \ldots, X_N; \beta) = \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left( -\frac{1}{2\sigma^2} \sum_{i=1}^{N} (X_i - \beta)^2 \right). \qquad (4.6)$$

The likelihood function in (4.6) can be maximized by minimizing the argument of the exponential. Thus, the maximum-likelihood estimate of location is the value that minimizes the least-squares sum

$$\hat{\beta}_{ML} = \arg\min_{\beta} \sum_{i=1}^{N} (X_i - \beta)^2.$$

The value that minimizes the sum, found through differentiation, is the sample mean

$$\hat{\beta}_{ML} = \frac{1}{N} \sum_{i=1}^{N} X_i.$$

Note that the sample mean is unbiased in the assumed model, since E{β̂_ML} = (1/N) Σ E{X_i} = β. Furthermore, as a maximum-likelihood estimate, it is efficient, its variance in (4.1) reaching the Cramér–Rao bound.
Location Estimation in Generalized Gaussian Noise Now suppose that the observed data include samples that clearly deviate from the central data cluster. The large deviations contradict a Gaussian model. The alternative is to model the deviations with a more appropriate distribution that is more flexible in capturing the characteristics of the data. One approach is to adopt the generalized Gaussian distribution. The density used to construct the maximum-likelihood estimate of location in this case is

$$f(x) = C \exp\left( -\left| \frac{x - \beta}{\alpha} \right|^{k} \right),$$

where C and α are normalizing constants and k is the fixed parameter that models the dispersion of the data. Maximizing the likelihood function is equivalent to minimizing the argument of the exponential, leading to the following estimate of location:

$$\hat{\beta}_{ML} = \arg\min_{\beta} \sum_{i=1}^{N} |X_i - \beta|^{k}. \qquad (4.12)$$
Some intuition can be gained by plotting the cost function in (4.12) for various values of k. Figure 4.2 depicts the different cost-function characteristics obtained for k = 2, 1, and 0.5. When the dispersion parameter is given the value 2, the model reduces to the Gaussian assumption, the cost function is quadratic, and the estimator is, as expected, equal to the sample mean. For k < 1, it can be shown that the cost function exhibits several local minima. Furthermore, the estimate is of selection type, as its value will be that of one of the samples X_1, X_2, ..., X_N. These characteristics of the cost function are shown in Figure 4.2. When the dispersion parameter is given the value 1, the model is Laplacian, the cost function is piecewise linear and continuous, and the optimal estimator minimizes the sum of absolute deviations

$$\hat{\beta}_{ML} = \arg\min_{\beta} \sum_{i=1}^{N} |X_i - \beta|. \qquad (4.13)$$

Although not immediately apparent, the solution to the above is the sample median, as shown next.
Figure 4.2 Cost functions for the observation samples X_1 = −3, X_2 = 10, X_3 = 1, X_4 = 1, X_5 = 6, for k = 0.5, 1, and 2.
Define the cost function being minimized in (4.13) as L_1(β). For values of β in the interval −∞ < β ≤ X_{(1)}, L_1(β) simplifies to

$$L_1(\beta) = \sum_{i=1}^{N} X_{(i)} - N\beta, \qquad (4.14)$$

a direct consequence of the fact that X_{(1)} ≥ β in this interval. For values of β in the range X_{(j)} < β ≤ X_{(j+1)}, L_1(β) can be written as

$$L_1(\beta) = \sum_{i=j+1}^{N} X_{(i)} - \sum_{i=1}^{j} X_{(i)} + (2j - N)\beta \qquad (4.15)$$

for j = 1, 2, ..., N − 1. Similarly, for X_{(N)} < β < ∞,

$$L_1(\beta) = N\beta - \sum_{i=1}^{N} X_{(i)}. \qquad (4.16)$$

Letting X_{(0)} = −∞ and X_{(N+1)} = ∞, and defining $\sum_{i=m}^{n} X_{(i)} = 0$ if m > n, we can combine (4.14)–(4.16) into the compactly written cost function

$$L_1(\beta) = \sum_{i=j+1}^{N} X_{(i)} - \sum_{i=1}^{j} X_{(i)} + (2j - N)\beta \qquad (4.17)$$

for β ∈ (X_{(j)}, X_{(j+1)}], j = 0, 1, ..., N. When expressed as in (4.17), L_1(β) is clearly piecewise linear and continuous. It starts with slope −N for −∞ < β ≤ X_{(1)}, and as each X_{(j)} is crossed, the slope increases by 2. At the extreme right, the slope ends at N for X_{(N)} < β < ∞. For N odd, this implies that there is an integer m such that the slopes over the intervals (X_{(m−1)}, X_{(m)}] and (X_{(m)}, X_{(m+1)}] are negative and positive, respectively. From (4.17), these two conditions are satisfied if both

$$2(m - 1) - N < 0 \quad \text{and} \quad 2m - N > 0$$

hold. Both constraints are met when m = (N + 1)/2. For N even, (4.17) implies that there is an integer m such that the slope over the interval (X_{(m)}, X_{(m+1)}] is zero. This condition is satisfied in (4.17) if

$$2m - N = 0,$$

which is possible for m = N/2. Thus, the maximum-likelihood estimate of location under the Laplacian model is the sample median

$$\hat{\beta}_{ML} = \mathrm{MEDIAN}(X_1, X_2, \ldots, X_N).$$

In the case of N even, the output of the median can be any point in the interval shown above; the convention is to take the mean of the extremes:

$$\hat{\beta}_{ML} = \frac{X_{(N/2)} + X_{(N/2+1)}}{2}.$$
Location Estimation in Stable Noise The formulation of maximum likelihood estimation requires knowledge of the model's closed-form density function. Among the class of symmetric stable densities, only the Gaussian (α = 2) and Cauchy (α = 1) distributions enjoy closed-form expressions. Thus, to formulate the non-Gaussian maximum likelihood estimation problem in a stable-distribution framework, it is logical to start with the only non-Gaussian distribution for which we have a closed-form expression, namely the Cauchy distribution. Although at first this approach may seem too narrow to be effective over the broad class of stable processes, maximum-likelihood estimates under the Cauchy model can be made tunable, acquiring remarkable efficiency over the entire spectrum of stable distributions. Given a set of i.i.d. samples X_1, X_2, ..., X_N obeying the Cauchy distribution with scaling factor K,

$$f(x) = \frac{K}{\pi \left[ K^2 + (x - \beta)^2 \right]}, \qquad (4.19)$$

the location parameter β is to be estimated from the data samples as the value that maximizes the likelihood function.
This is equivalent to minimizing

$$G_K(\beta) = \prod_{i=1}^{N} \left[ K^2 + (X_i - \beta)^2 \right]. \qquad (4.21)$$

Thus, given K > 0, the ML location estimate, known as the sample myriad, is given by [82]

$$\hat{\beta}_{ML} = \arg\min_{\beta} \prod_{i=1}^{N} \left[ K^2 + (X_i - \beta)^2 \right] = \mathrm{MYRIAD}\{K;\; X_1, X_2, \ldots, X_N\}.$$
Note that, unlike the sample mean or median, the definition of the sample myriad involves the free parameter K. For reasons that will become apparent shortly, we will refer to K as the linearity parameter of the myriad. The behavior of the myriad estimator is markedly dependent on the value of its linearity parameter K. Some intuition can be gained by plotting the cost function in (4.23) for various values of K. Figure 4.3 depicts the different cost-function characteristics obtained for K = 20, 2, and 0.2 for a sample set of size 5. Although the definition of the sample myriad is straightforward, it is not intuitive at first. The following interpretations provide additional insight.
LEAST LOGARITHMIC DEVIATION The sample myriad minimizes G_K(β) in (4.21), which consists of a product of terms. Since the logarithm is a strictly monotonic function, the sample myriad also minimizes log G_K(β), and can thus be equivalently written as

Figure 4.3 Myriad cost functions for the observation samples X_1 = −3, X_2 = 10, X_3 = 1, X_4 = 1, X_5 = 6, for K = 20, 2, and 0.2.

$$\mathrm{MYRIAD}\{K;\; X_1, X_2, \ldots, X_N\} = \arg\min_{\beta} \sum_{i=1}^{N} \log\left[ K^2 + (X_i - \beta)^2 \right]. \qquad (4.23)$$

Upon observation of the above, if an observation in the set of input samples has a large magnitude, such that |X_i − β| ≫ K, the cost associated with this sample is approximately log(X_i − β)², the log of the square deviation. Thus, much as the sample mean and sample median respectively minimize the sum of square and absolute deviations, the sample myriad (approximately) minimizes the sum of logarithmic square deviations, referred to as the LLS criterion, in analogy to the least squares (LS) and least absolute deviation (LAD) criteria. Figure 4.4 illustrates the cost incurred by each sample as it deviates from the location parameter β. The cost of the sample mean (LS) is quadratic, severely penalizing large deviations. The sample median (LAD) assigns a cost that is linearly proportional to the deviation. The family of cost functions for the sample myriad assigns a penalty proportional to the logarithm of the deviation, which leads to a much milder penalization of large deviations than that imposed by the LAD and LS cost functions. The myriad cost-function structure thus withdraws importance from clearly inappropriate samples.
GEOMETRICAL INTERPRETATION A second interpretation of the sample myriad that adds additional insight lies in its geometrical properties. First, the observations samples X I , X z , . . . , X N are
placed along the real line. Next, a vertical bar that runs horizontally through the real line is added as depicted in Figure 4.5. The length of the vertical bar is equal to the linearity parameter K
. In this arrangement, each of the terms
Figure 4.4 Cost functions of the mean (LS), the median (LAD), and the myriad (LLS)
Figure 4.5 ( a ) The sample myriad, b, minimizes the product of distances from point A to all samples. Any other value, such as 2 = p’, produces a higher product of distances; (b)the myriad as K is
[K² + (Xi − β)²]^{1/2}

in (4.23) (that is, the square root of each term) represents the distance from point A, at the end of the vertical bar, to the sample point Xi. The sample myriad, β̂_K, indicates the position of the bar for which the product of distances from point A to the samples X1, X2, . . . , XN is minimum. Any other value, such as x = β′, produces a higher product of distances. If the value of K is reduced, as shown in Figure 4.5b, the sample myriad will favor samples that are clustered together. The sample myriad has a mode-like behavior for small values of K. The term "myriad" was coined as a result of this characteristic of the estimator.
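As a numerical illustration, the cost in (4.23) can be minimized by brute force over a dense grid. The sketch below is not from the book; the helper `sample_myriad` and its grid search are illustrative assumptions (practical implementations use fixed-point iterations instead):

```python
import math

def sample_myriad(x, k, grid_points=20001):
    # Brute-force minimization of Q(beta) = sum_i log(k^2 + (x_i - beta)^2)
    # over a dense grid spanning the sample range (illustrative only).
    lo, hi = min(x), max(x)
    best_beta, best_cost = lo, float("inf")
    for j in range(grid_points):
        beta = lo + (hi - lo) * j / (grid_points - 1)
        cost = sum(math.log(k * k + (xi - beta) ** 2) for xi in x)
        if cost < best_cost:
            best_beta, best_cost = beta, cost
    return best_beta

x = [-3, 10, 1, -1, 6]  # the samples of Figure 4.3
for k in (20.0, 2.0, 0.2):
    # Large K: close to the sample mean (2.6); small K: mode-like,
    # locking onto the most clustered samples (here, near -1).
    print(k, round(sample_myriad(x, k), 3))
```

Running the sketch shows the linearity property directly: the estimate slides from mean-like behavior at K = 20 toward a mode-like value near −1 at K = 0.2.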
The maximum-likelihood estimates derived so far have assumed that the form of the distribution is known. In practice, we can seldom be certain of such distributional assumptions and two types of
questions arise:
(1) How sensitive are optimal estimators to the precise nature of the assumed probability model?
(2) Is it possible to construct robust estimators that perform well under deviations from the assumed model?
Sensitivity of Estimators

To answer the first question, consider an observed data set Z1, Z2, . . . , ZN, and let us consider the various location estimators previously derived, namely, the mean, median, and myriad. In addition, we also consider two simple M-estimators: the α-trimmed mean, defined as

T_N(α) = 1/(N − 2α) Σ_{i=α+1}^{N−α} X_(i),  (4.25)

for α = 0, 1, . . . , ⌊N/2⌋, where X_(i) denotes the ith order statistic, and the Windsorized mean, in which the α most extreme samples on each side are replaced by the nearest retained order statistic before averaging:

W_N(α) = 1/N [(α + 1) X_(α+1) + Σ_{i=α+2}^{N−α−1} X_(i) + (α + 1) X_(N−α)].  (4.26)

The median is a special case of the trimmed mean where α = ⌊N/2⌋. The effect of data contamination on these estimators is then tested. In the first set of experiments, a sample set of size 10 including one outlier is considered. The nine i.i.d. samples are distributed as N(β, 1) and the outlier is distributed as N(β + λ, 1). Table 4.1, adapted from David [58], depicts the bias of the estimators for eight different values of λ. This table clearly indicates that the mean is highly affected by the outlier. Trimming improves the robustness of the estimate. Clearly the median performs best, although it is still biased. The expected values of the biases shown in Table 4.1 are not sufficient to compare the various estimates; the variances of the different estimators of β are also needed. These have also been tabulated in [58] and are shown in Table 4.2. This table shows that the Windsorized mean performs better than the trimmed mean when λ is small. It also shows that, although the bias of the median is smaller, its variance is larger than that of the trimmed and Windsorized means. The mean is also shown to perform poorly in the MSE sense, except when there is no contamination. Another useful test is to consider the contamination sample having the same mean as the other N − 1 samples, but with a much larger variance. Hence, Table 4.3 tabulates the variance of the various estimates of β for N = 10.
Table 4.1 Bias of estimators of β for N = 10, when a single observation is from N(β + λ, 1) and the others from N(β, 1).

λ     X̄10      T10(1)    T10(2)    Med10     W10(1)    W10(2)
0     0.0       0.0       0.0       0.0       0.0       0.0
0.5   0.05000   0.04912   0.04869   0.04932   0.04938   0.04889
1.0   0.10000   0.09325   0.09023   0.08768   0.09506   0.09156
1.5   0.15000   0.12870   0.12041   0.11381   0.13368   0.12389
2.0   0.20000   0.15400   0.13904   0.12795   0.16298   0.14497
3.0   0.30000   0.17871   0.15311   0.13642   0.19407   0.16217
4.0   0.40000   0.18470   0.15521   0.13723   0.20239   0.16504
∞     ∞         0.18563   0.15538   0.13726   0.20377   0.16530
Table 4.2 Mean squared error of various estimators of β for N = 10, when a single observation is from N(β + λ, 1) and the others from N(β, 1).

λ     X̄10      T10(1)    T10(2)    Med10     W10(1)    W10(2)
0     0.10000   0.10534   0.11331   0.13833   0.10437   0.11133
0.5   0.10250   0.10791   0.11603   0.14161   0.10693   0.11402
1.0   0.11000   0.11471   0.12297   0.14964   0.11403   0.12106
1.5   0.12250   0.12387   0.13132   0.15852   0.12405   0.12995
2.0   0.14000   0.13285   0.13848   0.16524   0.13469   0.13805
3.0   0.19000   0.14475   0.14580   0.17072   0.15039   0.14713
4.0   0.26000   0.14865   0.14730   0.17146   0.15627   0.14926
∞     ∞         0.14942   0.14745   0.17150   0.15755   0.14950
Table 4.3 shows that the mean is a better estimator than the median as long as the variance of the outlier is not large. The trimmed mean, however, outperforms the median regardless of the variance of the outlier. The Windsorized mean performs comparably to the trimmed mean. These tables illustrate that by trimming the observation sample set, we can effectively increase the robustness of the location estimates.
M-Estimation

M-estimation aims at answering the second question raised at the beginning of this section: Is it possible to construct estimates of location that perform adequately under deviations from distributional assumptions? According to the theory of M-estimation this is not only possible, but a well-defined set of design guidelines can be followed. A brief summary of M-estimation is provided below. The interested reader can further explore the theory and applications of M-estimation in [91, 105].
Table 4.3 Variance of various estimators of β for N = 10, where a single observation is from N(β, σ²) and the others from N(β, 1).

σ     X̄10      T10(1)    T10(2)    Med10     W10(1)    W10(2)
0.5   0.09250   0.09491   0.09953   0.11728   0.09571   0.09972
1     0.10000   0.10534   0.11331   0.13833   0.10437   0.11133
2     0.13000   0.12133   0.12773   0.15373   0.12215   0.12664
3     0.18000   0.12955   0.13389   0.15953   0.13221   0.13365
4     0.25000   0.13417   0.13717   0.16249   0.13801   0.13745
∞     ∞         0.14942   0.14745   0.17150   0.15754   0.14950
Given a set of samples X1, X2, . . . , XN, an M-estimator of location is defined as the parameter β̂ minimizing a sum of the form

G_N(β) = Σ_{i=1}^{N} ρ(Xi − β),  (4.27)

where ρ is referred to as a cost function. The behavior of the M-estimate is determined by the shape of ρ. When ρ(x) = x², for example, the associated M-estimator minimizes the sum of square deviations, which corresponds to the sample mean. For ρ(x) = |x|, on the other hand, the M-estimator is equivalent to the sample median. In general, if ρ(x) = −log f(x), where f is a density function, the M-estimate corresponds to the maximum likelihood estimator associated with f. Accordingly, the cost function associated with the sample myriad is proportional to
ρ(x) = log[K² + x²].  (4.28)
The flexibility associated with shaping ρ(x) has been the key to the success of M-estimates. Some insight into the operation of M-estimates is gained through the definition of the influence function. The influence function roughly measures the effect of contaminated samples on the estimates and is defined as

ψ(x) = dρ(x)/dx,  (4.29)

provided the derivative exists. Denoting the sample deviation Xi − β as Ui, the influence functions for the sample mean and median are proportional to ψMEAN(Ui) = Ui and ψMEDIAN(Ui) = sign(Ui), respectively. Since the influence function of the mean is unbounded, a gross error in the observations can lead to severe distortion in the estimate. On the other hand, a similar gross error has a limited effect on the median estimate. The influence function of the sample myriad is
Figure 4.6 Influence functions of the mean, median and myriad
ψMYRIAD(Ui) = Ui / (K² + Ui²).  (4.30)
As shown in Figure 4.6, the myriad's influence function is redescending, reaching its maximum (minimum) at |Ui| = K. Thus, the further an observation sample deviates beyond K, the less it is considered in the estimate. Intuitively, the myriad must be more resistant to outliers than the median, while the mean is linearly sensitive to them.
Problems

4.1 Given N independent and identically distributed samples obeying the Poisson distribution

P(Xi = x) = (λ^x / x!) e^{−λ},  (4.31)

where x can take on nonnegative integer values and λ is a positive parameter to be estimated:

(a) Find the mean and variance of the random variables Xi.
(b) Derive the maximum-likelihood estimate (MLE) of λ based on a set of N observations.
(c) Is the ML estimate unbiased?
(d) Find the Cramér-Rao bound for the variance of an unbiased estimate.
(e) Find the variance of the ML estimate. Is the ML estimate efficient?
4.2 Consider N independent and identically distributed samples from a Gaussian distribution with zero mean and variance σ². Find the maximum likelihood estimate of σ² (unknown deterministic parameter). Is the estimate unbiased? Is the estimate consistent? What can you say about the ML estimate in relation to the Cramér-Rao bound?
4.3 Let X be a uniform random variable on [θ, θ + 1], where the real-valued parameter θ is constant but unknown, and let T(X) = ⌊X⌋, the greatest integer less than or equal to X. Is T(X) an unbiased estimate of θ? Hint: consider two cases, θ an integer and θ not an integer.
4.4 A random variable X has the uniform density

f(x) = 1/u  for 0 ≤ x ≤ u,

and zero elsewhere.

(a) For independent samples of the above random variable, determine the likelihood function f(X1, X2, . . . , XN; u) for N = 1 and N = 2 and sketch it. Find the maximum-likelihood estimate of the parameter u for these two cases. Find the ML estimate of the parameter u for an arbitrary number of observations N.
(b) Are the ML estimates in (a) unbiased?
(c) Is the estimate asymptotically unbiased as N → ∞?
4.5 Let the zero-mean random variables X and Y obey the jointly Gaussian distribution, where ρ = E[XY]/(σ1σ2) is the correlation coefficient and E[XY] is the correlation parameter. Given a set of observation pairs (X1, Y1), (X2, Y2), . . . , (Xn, Yn) drawn from the joint random variables X and Y, find the maximum likelihood estimate of the correlation parameter E[XY] or of the correlation coefficient ρ.
4.6 Consider a set of N independent and identically distributed observations Xi obeying the Rayleigh density function

f(x) = (x/σ²) e^{−x²/(2σ²)}, x ≥ 0.  (4.34)

(a) Find the mean and variance of the Xi variables.
(b) If we assume that the parameter σ² is unknown but constant, derive the maximum-likelihood estimate of σ² obtained from the N observation samples. Is the estimate unbiased?

4.7 Find the maximum-likelihood estimate of θ (unknown constant parameter) from a single observation of the variable X, where

X = ln θ + N,

and N is a noise term whose density function is unimodal with f_N(0) > f_N(a) for all a ≠ 0.
4.8 Consider the data set

X(n) = A S(n) + W(n),  for n = 0, 1, . . . , N − 1,

where S(n) is known, W(n) is white Gaussian noise with known variance σ², and A is an unknown constant parameter.
(a) Find the maximum-likelihood estimate of A.
(b) Is the MLE unbiased?
(c) Find the variance of the MLE.

4.9 Consider N i.i.d. observations X = {X1, . . . , XN} drawn from a parent distribution F(x) = Pr(X ≤ x). Let F̂(x) be the estimate of F(x), where

F̂(x) = (number of Xi's ≤ x) / N = (1/N) Σ_{i=1}^{N} U(x − Xi),

where U(x) = 1 if x ≥ 0, and zero otherwise.

(a) Is this estimate unbiased?
(b) Prove that this estimate is the maximum-likelihood estimate. That is, let z = Σ_{i=1}^{N} U(x − Xi) and θ = F(x), and find P(z | θ).
Part II
Signal Processing with Order Statistics
Median and Weighted Median Smoothers

5.1 Running Median Smoothers
The running median was first suggested as a nonlinear smoother for time-series data by Tukey in 1974 [189], and it was largely popularized in signal processing by Gallagher and Wise's article in 1981 [78]. To define the running median smoother, let {X(·)} be a discrete-time sequence. The running median passes a window over the sequence {X(·)} that selects, at each instant n, an odd number of consecutive samples to comprise the observation vector X(n). The observation window is centered at n, resulting in

X(n) = [X(n − NL), . . . , X(n), . . . , X(n + NR)]^T,  (5.1)

where NL and NR may range in value over the nonnegative integers and N = NL + NR + 1 is the window size. In most cases, the window is symmetric about X(n) and NL = NR = N1. The median smoother operating on the input sequence {X(·)} produces the output sequence {Y(·)}, defined at time index n as

Y(n) = MEDIAN[X(n − N1), . . . , X(n), . . . , X(n + N1)]
     = MEDIAN[X1(n), . . . , XN(n)],  (5.2)

where Xi(n) = X(n − N1 − 1 + i) for i = 1, 2, . . . , N. That is, the samples in the observation window are sorted and the middle, or median, value is taken as the output. If X(1), X(2), . . . , X(N) are the sorted samples in the observation window, the median smoother outputs
Figure 5.1 The operation of the window width 5 median smoother; □: appended points.

Y(n) = X_((N+1)/2)                 if N is odd,
     = [X_(N/2) + X_(N/2+1)] / 2   otherwise.  (5.3)
The input sequence {X(·)} may be either finite or infinite in extent. For the finite case, the samples of {X(·)} can be indexed as X(1), X(2), . . . , X(L), where L is the length of the sequence. Because of the symmetric nature of the observation window, the window extends beyond the finite extent of the input sequence at both the beginning and end. When the window is centered at the first or last point in the signal, half of the window is empty. These end effects are generally accounted for by appending NL samples at the beginning and NR samples at the end of {X(·)}. Although the appended samples can be arbitrarily chosen, typically they are selected so that the points appended at the beginning of the sequence have the same value as the first signal point, and the points appended at the end of the sequence all have the value of the last signal point. To illustrate the appending of input sequences and the median smoother operation, consider the input signal {X(·)} of Figure 5.1. In this example, {X(·)} consists of 20 observations from a 6-level process, {X : X(n) ∈ {0, 1, . . . , 5}, n = 1, 2, . . . , 20}. The figure shows the input sequence and the resulting output sequence for a median smoother of window size 5. Note that to account for edge effects, two samples have been appended to both the beginning and end of the sequence. The median smoother output at the window location shown in the figure is

MEDIAN[X(7), X(8), X(9), X(10), X(11)] = MEDIAN[1, 1, 4, 3, 3] = 3.
Running medians can be extended to a recursive mode by replacing the "causal" input samples in the median smoother by previously derived output samples. The output of the recursive median smoother is given by

Y(n) = MEDIAN[Y(n − NL), Y(n − NL + 1), . . . , Y(n − 1), X(n), . . . , X(n + NR)].  (5.4)

In recursive median smoothing, the center sample in the observation window is modified before the window is moved to the next position. In this manner, the output at each window location replaces the old input value at the center of the window. With the same number of operations, recursive median smoothers have better noise attenuation capabilities than their nonrecursive counterparts [5, 8]. Alternatively, recursive median smoothers require smaller window lengths to attain a desired level of noise attenuation. Consequently, for the same level of noise attenuation, recursive median smoothers often yield less signal distortion.

The median operation is nonlinear. As such, the running median does not possess the superposition property, and traditional impulse response analysis is not strictly applicable. The impulse response of a median smoother is, in fact, zero for all time. Consequently, alternative methods for analyzing and characterizing running medians must be employed. Broadly speaking, two types of analysis have been applied to the characterization of median smoothers: statistical and deterministic. Statistical properties examine the performance of the median smoother, through such measures as optimality and output variance, for the case of white noise time sequences. Conversely, deterministic properties examine the smoother output characteristics for specific types of commonly occurring deterministic time sequences.
Statistical Properties
The statistical properties of the running median can be examined through the derivation of output distributions and statistical conditions on the optimality of median estimates. This analysis
generally assumes that the input to the running median is a constant signal with additive white noise. The assumption that the noise is additive and white is quite natural, and made similarly in the
analysis of linear filters. The assumption that the underlying signal is a constant is certainly convenient, but more importantly, often valid. This is especially true for the types of signals median
filters are most frequently applied to, such as images. Signals such as images are characterized by regions of constant value separated by sharp transitions, or edges. Thus, the statistical analysis
of a constant region is valid for large portions of these commonly used signals. By calculating the output distribution of the median filter over a constant region, the noise smoothing capabilities
of the median can be measured through statistics such as the filter output variance. The calculation of statistics such as the output mean and variance from the expressions in (3.15) and (3.16) is
often quite difficult. Insight into the smoothing
Table 5.1 Asymptotic output variances for the window size N mean and running median for white input samples with uniform, Gaussian, and Laplacian distributions.

Input sample probability density function                          Mean    Median
Uniform:   f(t) = 1/(2√3 σ) for −√3 σ ≤ t < √3 σ, 0 otherwise      σ²/N    3σ²/(N + 2)
Gaussian:  f(t) = (1/√(2πσ²)) exp[−(t − μ)²/(2σ²)]                 σ²/N    πσ²/(2N)
Laplacian: f(t) = (1/(√2 σ)) exp[−√2 |t − μ|/σ]                    σ²/N    σ²/(2N)
characteristics of the median filter can, however, be gained by examining the asymptotic behavior (N → ∞) of these statistics, where, under some general assumptions, results can be derived. For the case of white noise input samples, the asymptotic mean, μ_med, and variance, σ²_med, of the running median output are [126]

μ_med = t_0.5,  (5.5)

σ²_med = 1 / (4 N f²(t_0.5)),  (5.6)

where t_0.5 is the median of the input distribution and f its density. Thus, the median smoother produces a consistent (lim_{N→∞} σ²_med = 0) and unbiased estimate of the median of the input distribution. Note that the output variance is not proportional to the input variance, but rather to 1/f²(t_0.5). For heavy-tailed noise, 1/f²(t_0.5) is not related to the input variance; thus, the output variance of the median in this case is not proportional to the input variance. This is not true for the sample mean, whose output variance grows directly with the input variance, which further explains the more robust behavior of the median. The variances for the sample mean and running median output are given in Table 5.1 for the uniform, Gaussian, and Laplacian input distribution cases. The results hold for all N in the uniform case and are asymptotic for the Gaussian and Laplacian cases. Note that the median performs about 3 dB better than the sample mean in the Laplacian case and about 2 dB worse in the Gaussian case.

Recursive median smoothers, as expected, are more efficient than their nonrecursive counterparts in attenuating noise, due to the fact that half of the data points in the window of the recursive median have already been "cleaned." Consider the simplest scenario where the recursive median smoother is applied to an i.i.d. time
Table 5.2 Relative efficiency of recursive and nonrecursive medians.

N            3      5      7      9      11     13     15
σ²_nr/σ²_r   1.09   1.39   1.83   2.40   3.04   3.73   4.43
series {X(n)} described by the cumulative distribution function F(x). It has been shown that the cumulative distribution function of the output Y(n) of the recursive median filter with window size N is given in (5.7) [5, 8], where N1 = (N + 1)/2. The output distribution in (5.7) can be used to measure the relative efficiency between the recursive and nonrecursive (standard) medians. For a window of size N and for uniformly distributed noise, the ratio σ²_nr/σ²_r of the nonrecursive variance estimate to the recursive variance estimate is given in Table 5.2, where the higher efficiency of the recursive median smoother is readily seen. To further illustrate the improved noise attenuation capability of recursive medians, consider an i.i.d. input sequence {X(n)} consisting of a constant signal, C,
embedded in additive white noise Z(n). Without loss of generality, assume C = 0 and that the noise is symmetrically distributed. Figure 5.2a shows 1000 samples of the sequence {X(n)}, where the underlying distribution is double exponential (heavy-tailed). Figures 5.2b,c show the noisy sequence after the application of a nonrecursive and a recursive median smoother, respectively, both of window size 7. The improved noise attenuation provided by recursion is apparent in Figures 5.2b,c.

A phenomenon that occurs with median smoothers in impulsive noise environments is that if several impulsive noise samples are clustered together within the window, the impulses may not be removed from the signal. This phenomenon can be observed in Figures 5.2b,c. To quantify such events, Mallows (1980) [137] introduced the concept of breakdown probability as the probability of an impulse occurring at the output of the estimator, when the probability of impulses at the input is given. In essence, the breakdown probability is a measure that indicates the robustness of a particular estimator. To derive the breakdown probability of median smoothers, let us
first arbitrarily select a threshold t, such that if a noise sample exceeds this level, the sample is regarded as an impulse. Let the symmetric distribution function of the noise be F(·); then the probability of a noise sample being an impulse (positive or negative) is 2F(−t). For the recursive median filter, half of the breakdown probability is given by (5.7) with y = −t. The breakdown probability of nonrecursive median smoothers is found through order statistics as

P_B = 2 Σ_{i=N1}^{N} (N choose i) [F(−t)]^i [1 − F(−t)]^{N−i},  (5.8)

where N1 = (N + 1)/2. In Figure 5.2, the threshold |t| is set to 1; thus, the probability of an impulse occurring at the input is 0.24. The breakdown probability for the nonrecursive median filter in Figure 5.2b is 0.011. For the recursive median filter, this probability is 0.002. Thus, on average, for every impulse occurring at the output of the recursive median smoother in this example, there will be 5.5 impulses at the output of the nonrecursive median smoother. Tables 5.3 and 5.4 show the breakdown probabilities for recursive and nonrecursive median smoothers for different values of input impulse probability, 2F(−t), and for different window sizes. The better noise suppression characteristics of the recursive median smoothers can be seen in Figure 5.2, and in a more quantitative way in Tables 5.3 and 5.4.

Table 5.3 Breakdown probabilities for the nonrecursive median smoother.
2F(−t)   N = 3    N = 5    N = 7     N = 9      N = 11     N = 13
0.1      0.0145   0.0023   0.0003    0.00006    0.00001    0.000003
0.2      0.0560   0.0171   0.0054    0.0017     0.00059    0.0001
0.3      0.1215   0.0532   0.0242    0.0112     0.0053     0.0025
0.4      0.2080   0.1158   0.0667    0.0391     0.0233     0.0140
0.5      0.3125   0.2070   0.1411    0.0978     0.0686     0.0486
0.6      0.4320   0.3261   0.2520    0.1976     0.1564     0.1247
0.7      0.5635   0.4703   0.3997    0.3434     0.2974     0.2589
Median smoothers are primarily used to remove undesired disturbances in data; thus their statistical characterization, in terms of output distributions, provides the required information about the median smoothers' noise attenuation power. Unfortunately, the general output distribution can seldom be put in manageable form. Unlike linear smoothers, however, median smoothers have well-defined deterministic properties that effectively complement their set of statistical properties. In particular, root signals (also referred to as invariant or fixed points) play an important role in revealing the deterministic behavior of median smoothers, and in this respect the set of root signals resembles the pass band characteristics of linear frequency-selective filters.
Figure 5.2 Impulse threshold |t| = 1: (a) Laplacian noisy sequence, (b) median smoothed sequence, and (c) recursive median smoothed sequence.
Table 5.4 Breakdown probabilities for the recursive median smoother.

2F(−t)   N = 3       N = 5      N = 7       N = 9        N = 11        N = 13
0.1      0.0102      0.0007     0.00005     0.000003     0.00000002    0.00000001
0.2      0.0417      0.0066     0.0009      0.0001       0.00001       0.000001
0.3      0.0954      0.0239     0.0052      0.0010       0.0002        0.00004
0.4      0.1714      0.0604     0.0184      0.0052       0.0014        0.0003
0.5      0.2692      0.1253     0.0501      0.0187       0.0067        0.0022
0.6      0.3873      0.2285     0.1162      0.0552       0.0253        0.0113
0.7      0.5233      0.3782     0.2387      0.1417       0.0812        0.0455
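Under the model described above — an impulse of either polarity occurs independently with probability F(−t) per side, and the window size N median breaks down when at least N1 = (N + 1)/2 like-signed impulses fall in the window — the nonrecursive breakdown probability is a scaled binomial tail. The sketch below is illustrative, not code from the book, and reproduces entries of Table 5.3:

```python
from math import comb

def breakdown_prob(n, p_impulse):
    # Impulses of each polarity occur with probability q = p_impulse / 2;
    # the window size n median breaks down when at least n1 = (n + 1) / 2
    # like-signed impulses land in the window (a binomial tail, doubled
    # to account for both polarities).
    q = p_impulse / 2.0
    n1 = (n + 1) // 2
    tail = sum(comb(n, i) * q**i * (1.0 - q)**(n - i) for i in range(n1, n + 1))
    return 2.0 * tail

# Reproduce a few entries of Table 5.3:
print(round(breakdown_prob(3, 0.1), 4))  # 0.0145
print(round(breakdown_prob(5, 0.2), 4))  # 0.0171
print(round(breakdown_prob(7, 0.3), 4))  # 0.0242
```

Larger windows drive the tail probability down rapidly, which is exactly the column-wise decay visible in Table 5.3.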
Root Signals (Fixed Points)
Statistical properties give considerable insight into the performance of running medians. Running medians cannot, however, be sufficiently characterized through statistical properties alone. For instance, an important question not answered by the statistical properties is what type of signal, if any, is passed through a running median unaltered. Linear smoothers, when applied repeatedly to a signal, will increasingly smooth it. With the exception of some contrived examples, the fixed points of linear smoothers are only those belonging to constant-valued sequences. On the other hand, Gallagher and Wise (1981) [78] showed that running medians have nontrivial fixed-point sequences, referred to as root signals for reasons that will become clear shortly. The concept of root signals is important to the understanding of running medians and their effect on general signal structures. In noise smoothing, for instance, the goal is to attain maximum noise attenuation while preserving the desired signal features. An ideal situation would arise if the smoother could be tailored so that the desired signal features were invariant to the smoothing operation and only the noise would be affected. Since the median operation is nonlinear and lacks the superposition property, this idealized case is of course not possible. Nonetheless, when a signal consists of constant areas and step changes between these areas, a similar effect is achieved. Noise will be attenuated, but the signal features will remain intact. This concept is used extensively in image smoothing, where the median smoother is designed such that certain image patterns, such as lines and edges, are root signals and thus not affected by the smoothing operation [7, 147]. The definition
of a root signal is quite simple: a signal is a running median root if the signal is invariant under the median smoothing operation. For simplicity, assume that the window is symmetric about X(n), with NL = NR = N1. Thus, a signal {X(·)} is a root of the window size N = 2N1 + 1 median smoother if

Y(n) = X(n)  (5.9)

for all n. As an example, consider the signal shown in Figure 5.3. This signal is smoothed by three different window size running medians (N1 = 1, 2, and 3). Note that for the window size three
case (N1 = 1), the output is a root. That is, further smoothing of this signal with the window size three running median does not alter the signal. Notice, however, that if this same signal is smoothed with a larger window running median, the signal will be modified. Thus, the second signal (from the top) in Figure 5.3 is in the pass band, or a root, of an N1 = 1 running median but outside
the pass band, or not a root, of the N1 = 2 and N1 = 3 smoothers. The goal of root analysis is to relate the smoothing of desired signals corrupted by noise to root and nonroot signals. If it can be
shown that certain types of desired signals are in the running median root set, while noise is outside the root set, then median smoothing of a time series will preserve desired structures while
altering the noise. Such a result does in fact hold and will be made clear through the following definitions and properties. First note that, as the example above illustrates, whether or not a signal
is a running median root depends on the window size of the smoother in question. Clearly, all signals are roots of the window size one running median (identity). To investigate this dependence on
window size, running median root signals can be characterized in terms of local signal structures, where the local signal structures are related to the window size. Such a local structure based
analysis serves two purposes. First, it defines signal structures that, when properly combined, form the running median root set. Second, by relating the local structures to the window size, the
effect of window size on roots is made clear. The local structure analysis of running median roots relies on the following definitions [78].
Constant Neighborhood: A region of at least N1 + 1 consecutive identically valued points.

An Edge: A monotonic region between two constant neighborhoods of different value. The connecting monotonic region cannot contain any constant neighborhoods.

An Impulse: A constant neighborhood followed by at least one, but no more than N1 points, that are then followed by another constant neighborhood having the same value as the first constant neighborhood. The two boundary points of these at most N1 points do not have the same value as the two constant neighborhoods.

An Oscillation: A sequence of points that is not part of a constant neighborhood, an edge, or an impulse.

These definitions may now be used to develop a description of those signals that do and those that do not pass through a running median without being perturbed. In particular, Gallagher and Wise [78] developed a number of properties which characterize these signal sets for the case of finite-length sequences. First, any impulse will be eliminated upon median smoothing. Second, a finite-length signal is a running median root if it consists of constant neighborhoods and edges only. Thus,
Figure 5.3 Effects of window size on a median smoothed signal: the input signal x(n) and the output signals for windows of size 3, 5, and 7 (□: appended points).
if a desired signal is constructed solely of constant neighborhoods and edges, then it will not be altered by the median smoothing operation. Conversely, if observation noise consists of impulses (as
defined above), it will be removed by the median smoothing operation. These running median root properties are made exact by the following.
LOMO Sequence: A sequence {X(·)} is said to be locally monotonic of length m, denoted LOMO(m), if the subsequence X(n), X(n + 1), . . . , X(n + m − 1) is monotonic for all n ≥ 1.

Root Signals: Given a length L sequence to be median smoothed with a length N = 2N1 + 1 window, a necessary and sufficient condition for the signal to be invariant (a root) under median smoothing is that the extended (beginning and end appended) signal be LOMO(N1 + 2).
Thus, the set of root signals (invariant to smoothing) of a size N running median consists solely of those signals that are formed of constant neighborhoods and edges. Note that by the definition of LOMO(m), a change of trend implies that the sequence must stay constant for at least m − 1 points. It follows that for a running median root signal to contain both increasing and decreasing regions, these regions must be separated by a constant neighborhood of at least N1 + 1 identically valued samples. It is also clear from the definition of LOMO(·) that a LOMO(m1) sequence is also LOMO(m2) for any two positive integers m1 ≥ m2. This implies that the roots for decreasing window size running medians are nested; that is, every root of a window size M smoother is also a root of a window size N median smoother for all N < M. This is formalized by:

Root Signal Set: Let S denote a set of finite length sequences and let R_N1 be the root set of the window size N = 2N1 + 1 running median operating on S. Then the root sets are nested such that · · · ⊆ R_{N1+1} ⊆ R_{N1} ⊆ · · · ⊆ R_1 ⊆ R_0 = S.

In addition to the above description of the root signal set for running medians, it can be shown that any signal of finite length is mapped to a root signal by
repeated median smoothing. This property of median filters is very significant and is called the root convergenceproperty. It can be shown that the first and last points to change value on a median
smoothing operation remain invariant upon additional running median passes, where repeated smoother passes consist of using the output of the prior smoothing pass for the input of an identical
smoother on the current pass. This fact, in turn, indicates that any L long nonroot signal (oscillations and impulses) will become a root structure after a maximum of ( L - 2)/2 successive
smoothings. This simple bound was improved in [194] where it was shown that at most (5.10) passes of the median smoother are required to reach a root. This bound is conservative in practice since in
most cases root signals are obtained with much fewer smoothing passes.
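The root convergence property can be illustrated with a short sketch (not from the text): a window-3 running median is applied repeatedly until the output no longer changes. Appending the first and last samples at the window's edges is an assumption made here for endpoint handling.

```python
# Sketch: repeatedly apply a running median until the signal is invariant,
# i.e., until a root signal is reached. Endpoint samples are appended at the
# edges, a common convention assumed here.

def running_median(x, N=3):
    """One pass of a window-size-N running median (N odd)."""
    k = N // 2
    padded = [x[0]] * k + list(x) + [x[-1]] * k   # append endpoint samples
    return [sorted(padded[i:i + N])[k] for i in range(len(x))]

def median_root(x, N=3):
    """Pass the signal through the smoother until it is a root."""
    passes = 0
    y = list(x)
    while True:
        z = running_median(y, N)
        if z == y:
            return y, passes
        y, passes = z, passes + 1

# An impulse is removed and the constant neighborhoods survive unchanged:
root, n_passes = median_root([0, 0, 5, 0, 0, 3, 3, 3, 0, 0])
```

Here the single impulse (value 5) disappears on the first pass, after which the signal, made of constant neighborhoods and edges, is a root of the window-3 smoother.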
Figure 5.4 Root signals obtained by running medians of size 3 (one filter pass), 5 (two filter passes), and 7 (two filter passes).
The running median root properties are illustrated through an example in Figure 5.4. This figure shows an original signal and the resultant root signals after multiple passes of window size 3, 5, and 7 running medians. Note that while it takes only a single pass of the window size 3 running median to obtain a root, it takes two passes for the window size 5 and 7 median smoothers. Clearly, the locally monotonic structure requirements of the root signals are satisfied in Figure 5.4. For the window size 3 case, the input sequence becomes LOMO(3) after a single pass of the smoother. Thus, this sequence is in the root set of the window size 3 running median, but not a root of the window size N > 3 running median, since it is not LOMO(N1 + 2) for N1 > 1 (N > 3).

Recursive median smoothers also possess the root convergence property [5, 150]. In fact, they produce root signals after a single filter pass. For a given window size, recursive and nonrecursive median filters have the same set of root signals. A given input signal, however, may be mapped to distinct root signals by the two filters [5, 150]. Figure 5.5 illustrates this concept, where a signal is mapped to different root signals by the recursive and nonrecursive median smoothers. In this case, both roots are attained in a single smoother pass.

The deterministic and statistical properties form a powerful set of tools for describing the median smoothing operation and performance. Together, they show that
Figure 5.5 A signal and its recursive and non-recursive running median roots for a window of size 3 (◦: appended points).
the median is an optimal estimator of location for Laplacian noise and that common signal structures, for example, constant neighborhoods and edges in images, are in its pass-band (root set).
Moreover, impulses are removed by the smoothing operation, and repeated passes of the running median always result in the signal converging to a root, where a root consists of a well-defined set of structures related to the smoother's window size. Further properties of root signals can be found in Arce and Gallagher (1982) [9], Bovik (1987) [37], Wendt et al. (1986) [194], and Wendt (1990) [193]. Multiscale root signal analysis was developed by Bangham (1993) [25].
MAX-MIN Representation of Medians

The median has an interesting and useful representation where only minima and maxima operations are used; see Fitch (1987) [71]. This representation is useful in the software or hardware implementation of medians but, more important, it is also useful in the analysis of median operations. In addition, the max-min representation of medians provides a link between rank-order and morphological operators, as shown in Maragos and Schafer (1987) [140]. Given the N samples X1, X2, . . . , XN, and defining m = (N + 1)/2, the median of the sample set is given by

X_(m) = min [ max(X1, . . . , Xm), . . . , max(X_j1, X_j2, . . . , X_jm), . . . , max(X_{N−m+1}, . . . , XN) ],   (5.11)

where j1, j2, . . . , jm index all N!/((N − m)! m!) combinations of the N samples taken m at a time. The median of 3 samples, for instance, has the following max-min representation
MEDIAN(X1, X2, X3) = min [ max(X1, X2), max(X1, X3), max(X2, X3) ].   (5.12)

The max-min representation follows by reordering the input samples into the corresponding order statistics X_(1), X_(2), . . . , X_(N) and indexing the resultant samples in all the possible group combinations of size m. The maximum of the first subgroup X_(1), X_(2), . . . , X_(m) is clearly X_(m). The maximum of each of the other subgroups will be no smaller than X_(m), since these subgroups each include one of the elements in X_(m+1), X_(m+2), . . . , X_(N). Hence, the minimum of all these maxima will be the mth order statistic X_(m), that is, the median.
EXAMPLE 5.1 Consider the vector X = [1, 3, 2, 5, 5]. To calculate the median using the max-min representation we have:

MEDIAN(1, 3, 2, 5, 5) = min [ max(1, 3, 2), max(1, 3, 5), max(1, 3, 5), max(1, 2, 5), max(1, 2, 5), max(1, 5, 5), max(3, 2, 5), max(3, 2, 5), max(3, 5, 5), max(2, 5, 5) ]
                      = min(3, 5, 5, 5, 5, 5, 5, 5, 5, 5) = 3.
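The max-min representation in (5.11) can be sketched directly, with the standard-library `itertools.combinations` enumerating the C(N, m) subgroups; `median_maxmin` is an illustrative name, not from the text.

```python
# Sketch of the max-min representation (5.11): the median of N samples
# (N odd) is the minimum, over all m-element subgroups with m = (N + 1)/2,
# of each subgroup's maximum.
from itertools import combinations

def median_maxmin(samples):
    N = len(samples)            # N assumed odd
    m = (N + 1) // 2
    # minimum over the C(N, m) maxima of all m-element subgroups
    return min(max(group) for group in combinations(samples, m))

# Example 5.1: X = [1, 3, 2, 5, 5]
print(median_maxmin([1, 3, 2, 5, 5]))   # 3, the sample median
```

The subgroup {1, 3, 2} attains the minimum maximum, 3, in agreement with Example 5.1.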
5.2 WEIGHTED MEDIAN SMOOTHERS

Although the median is a robust estimator that possesses many optimality properties, the performance of running medians is limited by the fact that it is temporally blind. That is, all observation samples are treated equally regardless of their location within the observation window. This limitation is a direct result of the i.i.d. assumption made in the development of the median. A much richer class of smoothers is obtained if this assumption is relaxed to the case of independent, but not identically distributed, samples.
Statistical Foundations

Although time-series samples, in general, exhibit temporal correlation, the independent but not identically distributed model can be used to synthesize the mutual correlation. This is possible by observing that the estimate Y(n) can rely more on the sample X(n) than on the other samples of the series that are further away in time. In this case, X(n) is more reliable than X(n − 1) or X(n + 1), which in turn are more reliable than X(n − 2) or X(n + 2), and so on. By assigning different variances (reliabilities) to the independent but not identically distributed location estimation model, the temporal correlation used in time-series smoothing is captured. Thus, weighted median smoothers incorporate the reliability of the samples and temporal order information by weighting samples prior to rank smoothing. The WM smoothing operation can be schematically described as in Figure 5.6.

Figure 5.6 The weighted median smoothing operation.

Consider again the generalized Gaussian distribution where the observation samples have a common location parameter β, but where each Xi has a (possibly) unique scale parameter σi. Incorporating the unique scale parameters into the ML criterion for the generalized distribution, equation (4.9), shows that, in this case, the ML estimate of location is given by the value of β minimizing

G_p(β) = Σ_{i=1}^{N} (1/σi^p) |Xi − β|^p.   (5.13)

In the special case of the standard Gaussian distribution (p = 2), the ML estimate reduces to the normalized weighted average

β̂ = ( Σ_{i=1}^{N} Wi Xi ) / ( Σ_{i=1}^{N} Wi ),   (5.14)

where Wi = 1/σi² > 0. In the case of the heavier-tailed Laplacian distribution (p = 1), the ML estimate is realized by minimizing the sum of weighted absolute deviations

G_1(β) = Σ_{i=1}^{N} Wi |Xi − β|,   (5.15)

where again Wi = 1/σi > 0. Note that G_1(β) is piecewise linear and convex for Wi ≥ 0. The value β minimizing (5.15) is thus guaranteed to be one of the samples X1, X2, . . . , XN. This is the weighted median (WM), originally introduced over a hundred years ago by Edgeworth [66]. The running weighted median output is defined as

Y(n) = MEDIAN [ W1 ◊ X1(n), W2 ◊ X2(n), . . . , WN ◊ XN(n) ],   (5.16)

where Wi > 0 and ◊ is the replication operator defined as Wi ◊ Xi = Xi, . . . , Xi (Wi times). Weighted median smoothers were introduced in the signal processing literature by Brownrigg (1984) [41] and have since received considerable attention. Note that the formulation in (5.16) requires that the weights take on nonnegative values, which is consistent with the statistical interpretation of the weighted median where the weights have an inverse relationship to the variances of the respective observation samples. A simplified representation of a weighted median smoother, specified by the set of N weights, is the list of the weights separated by commas within angle brackets [202]; thus the median smoother defined in (5.16) has the representation ⟨W1, W2, . . . , WN⟩.
Weighted Median Computation

As an example, consider the window size 5 WM smoother defined by the symmetric weight vector W = ⟨1, 2, 3, 2, 1⟩. For the observation X(n) = [12, 6, 4, 1, 9], the weighted median smoother output is found as

Y(n) = MEDIAN [ 1 ◊ 12, 2 ◊ 6, 3 ◊ 4, 2 ◊ 1, 1 ◊ 9 ]
     = MEDIAN [ 12, 6, 6, 4, 4, 4, 1, 1, 9 ]
     = MEDIAN [ 1, 1, 4, 4, 4, 6, 6, 9, 12 ] = 4,   (5.17)

where the median value, 4, is underlined in equation (5.17). The large weighting on the center input sample results in this sample being taken as the output. As a comparison, the standard median output for the given input is Y(n) = 6.

In general, the WM can be computed without replicating the sample data according to the corresponding weights, since replication increases the computational complexity. A more efficient method to find the WM is shown next; it is not only attractive from a computational perspective, but it also admits positive real-valued weights:

(1) Calculate the threshold W0 = (1/2) Σ_{i=1}^{N} Wi;
(2) Sort the samples in the observation vector X(n);
(3) Sum the concomitant weights of the sorted samples, beginning with the maximum sample and continuing down in order;
(4) The output is the sample whose weight causes the sum to become ≥ W0.
The validity of this method can be supported as follows. By definition, the output of the WM smoother is the value of β minimizing (5.15). Suppose initially that β ≥ X_(N). Then (5.15) can be rewritten as

G_1(β) = Σ_{i=1}^{N} W_[i] (β − X_(i)) = ( Σ_{i=1}^{N} W_[i] ) β − Σ_{i=1}^{N} W_[i] X_(i),   (5.18)

which is the equation of a straight line with slope m_N = Σ_{i=1}^{N} W_[i] ≥ 0. Next suppose that X_(N−1) ≤ β < X_(N). (5.15) is now equal to

G_1(β) = ( Σ_{i=1}^{N−1} W_[i] − W_[N] ) β − Σ_{i=1}^{N−1} W_[i] X_(i) + W_[N] X_(N).   (5.19)

This time the slope of the line is m_{N−1} = Σ_{i=1}^{N−1} W_[i] − W_[N] ≤ m_N, since all the weights are positive. If this procedure is repeated for values of β in intervals lying between the order statistics, the slope of the lines in each interval decreases, and so does the value of the cost function (5.15), until the slope reaches a negative value; beyond that point the value of the cost function increases. The minimum is thus reached where this change of sign in the slope occurs. Suppose the minimum (i.e., the weighted median) is the Mth order statistic X_(M). The slopes of the cost function in the intervals after and before X_(M) are given by

m_M = Σ_{i=1}^{M} W_[i] − Σ_{i=M+1}^{N} W_[i] ≥ 0,   (5.20)

m_{M−1} = Σ_{i=1}^{M−1} W_[i] − Σ_{i=M}^{N} W_[i] ≤ 0.   (5.21)

¹Represent the input samples and their corresponding weights as pairs of the form (Xi, Wi). If the pairs are ordered by their X variates, then the value of W associated with X_(m), denoted by W_[m], is referred to as the concomitant of the mth order statistic [58].
From (5.20), we have

Σ_{i=1}^{M} W_[i] ≥ Σ_{i=M+1}^{N} W_[i],

which implies

Σ_{i=M+1}^{N} W_[i] ≤ (1/2) Σ_{i=1}^{N} W_[i] = W0.

Similarly, from (5.21),

Σ_{i=1}^{M−1} W_[i] ≤ Σ_{i=M}^{N} W_[i],

which implies

Σ_{i=M}^{N} W_[i] ≥ (1/2) Σ_{i=1}^{N} W_[i] = W0.

That is, if the concomitant weights of the order statistics are added one by one beginning with the last, the concomitant weight associated with the weighted median, W_[M], will be the first to make the sum greater than or equal to the threshold W0.
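The four-step procedure above can be sketched as follows; `weighted_median` is an illustrative name, and the routine assumes, as in the text, positive (possibly real-valued) weights.

```python
# Sketch of the four-step WM computation: sort the samples, then accumulate
# the concomitant weights from the largest sample downward until the running
# sum meets the threshold W0 = (1/2) * sum(W).

def weighted_median(samples, weights):
    threshold = 0.5 * sum(weights)             # step (1): W0
    pairs = sorted(zip(samples, weights))      # step (2): sort by sample value
    total = 0.0
    for value, weight in reversed(pairs):      # step (3): start from the maximum
        total += weight                        # partial sum of concomitant weights
        if total >= threshold:                 # step (4): first sum >= W0
            return value

# Example 5.2: W = <0.1, 0.1, 0.2, 0.2, 0.1>, X(n) = [12, 6, 4, 1, 9]
print(weighted_median([12, 6, 4, 1, 9], [0.1, 0.1, 0.2, 0.2, 0.1]))  # 4
```

With the integer weights ⟨1, 2, 3, 2, 1⟩ of (5.17) the same routine also returns 4, matching the replication-based computation.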
EXAMPLE 5.2 (COMPUTATION OF THE WEIGHTED MEDIAN)

To illustrate the WM smoother operation for positive real-valued weights, consider the WM smoother defined by W = ⟨0.1, 0.1, 0.2, 0.2, 0.1⟩. The output for this smoother operating on X(n) = [12, 6, 4, 1, 9] is found as follows. Summing the weights gives the threshold W0 = (1/2) Σ_{i=1}^{5} Wi = 0.35. The observation samples, sorted observation samples, their corresponding weights, and the partial sums of weights (from each ordered sample to the maximum) are:

observation samples:         12,   6,   4,   1,   9
corresponding weights:       0.1, 0.1, 0.2, 0.2, 0.1

sorted observation samples:   1,   4,   6,   9,  12
corresponding weights:       0.2, 0.2, 0.1, 0.1, 0.1
partial weight sums:         0.7, 0.5, 0.3, 0.2, 0.1

Thus, the output is 4 since, starting from the right (maximum sample) and summing the weights, the threshold W0 = 0.35 is not reached until the weight associated with 4 is added; its partial sum, 0.5, is the first to meet or exceed the threshold.

An interesting characteristic of WM smoothers is that the nature of a WM smoother is not modified if its
weights are multiplied by a positive constant. Thus, the same filter characteristics can be synthesized by different sets of weights. Although the WM smoother admits real-valued weights, it turns out
that any WM smoother based on real-valued weights has an equivalent integer-valued weight representation [202]. Consequently, there are only a finite number of WM smoothers for a given window size.
The number of WM smoothers, however, grows rapidly with window size [201]. Weighted median smoothers can also operate in a recursive mode. The output of a recursive WM smoother is given by
Y(n) = MEDIAN [ W_{−N1} ◊ Y(n − N1), . . . , W_{−1} ◊ Y(n − 1), W_0 ◊ X(n), . . . , W_{N1} ◊ X(n + N1) ],   (5.25)

where the weights Wi are, as before, constrained to be positive-valued. Recursive WM smoothers offer advantages over WM smoothers in the same way that recursive medians have advantages over their nonrecursive counterparts. In fact, recursive WM smoothers can synthesize nonrecursive WM smoothers of much longer window sizes. As with nonrecursive weighted medians, when convenient we use a simplified representation of recursive weighted median smoothers where the weights are listed separated by commas within a double set of angle brackets [202]; thus the recursive median smoother defined in (5.25) has the representation ⟨⟨W_{−N1}, W_{−N1+1}, . . . , W_{N1}⟩⟩.

Using repeated substitution, it is possible to express recursive WM smoothers in terms of a nonrecursive WM series expansion [202]. For instance, the recursive three-point smoother can be represented as

Y(n) = MEDIAN [ Y(n − 1), X(n), X(n + 1) ]
     = MEDIAN [ MEDIAN [ Y(n − 2), X(n − 1), X(n) ], X(n), X(n + 1) ].   (5.26)

An approximation is found by truncating the recursion above by using X(n − 2) instead of Y(n − 2). This leads to

Y(n) = MEDIAN [ MEDIAN [ X(n − 2), X(n − 1), X(n) ], X(n), X(n + 1) ].   (5.27)

Using the max-min representation of the median above, it can be shown after some simplifications that the resultant max-min representation is that of a 4-point median. A recursive median smoother, its Pth-order series expansion approximation, and a nonrecursive median smoother of size N_L + N_R + 1 are represented by

⟨⟨W_{−N_R}, . . . , W_0, . . . , W_{N_R}⟩⟩,   (5.28)

⟨⟨W_{−N_R}, . . . , W_0, . . . , W_{N_R}⟩⟩_P,   (5.29)

⟨W_{−N_L}, . . . , W_0, . . . , W_{N_R}⟩,   (5.30)

respectively, where the order P of the series expansion approximation refers to truncation of the series after P substitutions. With this notation, the second-order series expansion approximation of the 3-point recursive median is the 4-point median of (5.27), and the fourth-order approximation spans a still longer window [202], illustrating that recursive WM smoothers can synthesize nonrecursive WM smoothers with more than twice their window size.
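The contrast between the nonrecursive 3-point median and the recursive form (5.26), both with unit weights, can be sketched as below; appending the endpoint samples at the window edges is an assumption made here for edge handling.

```python
# Sketch: nonrecursive versus recursive 3-point running medians. In the
# recursive form, past outputs replace past inputs, and a root is reached
# in a single pass.

def med3(a, b, c):
    return sorted((a, b, c))[1]

def nonrecursive_median3(x):
    y = []
    for n in range(len(x)):
        left = x[n - 1] if n > 0 else x[0]
        right = x[n + 1] if n < len(x) - 1 else x[-1]
        y.append(med3(left, x[n], right))
    return y

def recursive_median3(x):
    # Y(n) = MEDIAN[ Y(n-1), X(n), X(n+1) ]
    y = []
    for n in range(len(x)):
        left = y[n - 1] if n > 0 else x[0]
        right = x[n + 1] if n < len(x) - 1 else x[-1]
        y.append(med3(left, x[n], right))
    return y

x = [0, 9, 0, 0, 4, 4, 0, 0]
yr = recursive_median3(x)
# One pass of the recursive smoother already yields a root:
assert recursive_median3(yr) == yr
```

On this input both smoothers remove the impulse, and the recursive output is invariant under a second pass, consistent with the single-pass root convergence of recursive medians noted earlier.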
EXAMPLE 5.3 (IMAGE ZOOMING)

Zooming is an important task used in many imaging applications. When zooming, pixels are inserted into the image in order to expand the size of the image, and the major task is the interpolation of the new pixels from the surrounding original pixels. Consider the zooming of an image by a factor that is a power of two. General zooming with noninteger factors is also possible with simple modifications of the method described next. To double the size of an image in both dimensions, first an empty array is constructed with twice the number of rows and columns as the original (Figure 5.7a), and the original pixels are placed into alternating rows and columns (the "00" pixels in Figure 5.7a). To interpolate the remaining pixels, the method known as polyphase interpolation is used. In this method, each new pixel with four original pixels at its four corners (the "11" pixels in Figure 5.7b) is interpolated first, by using the weighted median of the four nearest original pixels as the value for that pixel. Since all original pixels are equally trustworthy and the same distance from the pixel being interpolated, a weight of 1 is used for the four nearest original pixels. The resulting array is shown in Figure 5.7c. The remaining pixels are determined by taking a weighted median of the four closest pixels. Thus each of the "01" pixels in
Figure 5.7 The steps of polyphase interpolation.
Figure 5.7c is interpolated using two original pixels to the left and right and two previously interpolated pixels above and below. Similarly, the "10" pixels are interpolated with original pixels above and below and interpolated ("11") pixels to the right and left. Since the "11" pixels were interpolated, they are less reliable than the original pixels and should be given lower weights in determining the "01" and "10" pixels. Therefore the "11" pixels are given weights of 0.5 in the median used to determine the "01" and "10" pixels, while the "00" original pixels have weights of 1 associated with them. The weight of 0.5 is used because it implies that when both "11" pixels have values that are not between the two "00" pixel values, one of the "00" pixels or their average will be used. Thus "11" pixels differing from the "00" pixels do not greatly affect the result of the weighted median. Only when the "11" pixels lie between the two "00" pixels will they have a direct effect on the interpolation. The choice of 0.5 for the weight is arbitrary, since any weight greater than 0 and less than 1 will produce the same result. When implementing the polyphase method, the "01" and "10" pixels must be treated differently due to the fact that the orientation of the two closest original pixels is different for the two types of pixels. Figure 5.7d shows the final result of doubling the size of the original array.

To illustrate the process, consider an expansion of the grayscale image represented by an array of pixels, the pixel in the ith row and jth column having brightness a_{i,j}.
The array a_{i,j} is interpolated into the array x^{pq}_{i,j}, with p and q taking values 0 or 1, indicating in the same way as above the type of interpolation required for each position of the 2 × 2 output block. The pixels are interpolated as follows:

x^{00}_{i,j} = a_{i,j}

x^{11}_{i,j} = MEDIAN [ a_{i,j}, a_{i+1,j}, a_{i,j+1}, a_{i+1,j+1} ]

x^{01}_{i,j} = MEDIAN [ a_{i,j}, a_{i,j+1}, 0.5 ◊ x^{11}_{i−1,j}, 0.5 ◊ x^{11}_{i,j} ]

x^{10}_{i,j} = MEDIAN [ a_{i,j}, a_{i+1,j}, 0.5 ◊ x^{11}_{i,j−1}, 0.5 ◊ x^{11}_{i,j} ]
An example of median interpolation compared with bilinear interpolation is given in Figure 5.8. The zooming factor is 4, obtained by two consecutive interpolations, each doubling the size of the input. Bilinear interpolation uses the average of the nearest two original pixels to interpolate the "01" and "10" pixels in Figure 5.7b and the average of the nearest four original pixels for the "11" pixels. The edge-preserving advantage of the weighted median interpolation is readily seen in this figure.
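A minimal sketch (illustrative, not the book's implementation) of the interpolation formulas above: fractional weights are realized by doubling all replication counts, and medians of even-sized sample pools are taken as the average of the two central order statistics, which reproduces the "one of the '00' pixels or their average" behavior described. Border handling is simplified here to a single "11" neighbor.

```python
# Sketch of one polyphase WM interpolation step for doubling an image:
# "11" pixels from the four original corners (weight 1 each); "01" pixels
# from two originals (weight 1) and interpolated "11" pixels (weight 0.5).
from statistics import median

def wmedian(samples, weights):
    # realize weights by replication; scale by 2 so a weight of 0.5
    # becomes a single copy of the sample
    pool = []
    for s, w in zip(samples, weights):
        pool += [s] * int(round(2 * w))
    return median(pool)   # even-sized pools: average of the two central values

a = [[10, 20],
     [30, 80]]
# "11" pixel interior to the four originals:
x11 = wmedian([a[0][0], a[0][1], a[1][0], a[1][1]], [1, 1, 1, 1])
# "01" pixel between a[0][0] and a[0][1]; only one "11" neighbor is used
# here because the other lies outside this tiny array:
x01 = wmedian([a[0][0], a[0][1], x11], [1, 1, 0.5])
print(x11, x01)
```

Note how the half-weighted x11 value only matters when it falls between the two flanking originals; here it does not override them, and the "01" output is one of the original pixel values.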
5.2.1 The Center-Weighted Median Smoother

The weighting mechanism of WM smoothers allows for great flexibility in emphasizing or deemphasizing specific input samples. In most applications, not all samples are equally important. Due to the symmetric nature of the observation window, the sample most correlated with the desired estimate is, in general, the center observation sample. This observation led Ko and Lee (1991) [115] to define the center-weighted median (CWM) smoother, a relatively simple subset of WM smoothers that has proven useful in many applications. The CWM smoother is realized by allowing only the center observation sample to be weighted. Thus, the output of the CWM smoother is given by

Y(n) = MEDIAN [ X1, . . . , X_{c−1}, W_c ◊ X_c, X_{c+1}, . . . , XN ],
Figure 5.8 Zooming by 4: original is at the top with the area of interest outlined in white. On the lower left is the bilinear interpolation of the area, and on the lower right the weighted median interpolation.
where W_c is an odd positive integer and c = (N + 1)/2 = N1 + 1 is the index of the center sample. When W_c = 1, the operator is a median smoother, and for W_c ≥ N, the CWM reduces to an identity operation.

The effect of varying the center sample weight is perhaps best seen by means of an example. Consider a segment of recorded speech. The voiced waveform "a" is shown at the top of Figure 5.9. This speech signal is taken as the input of a CWM smoother of size 9. The outputs of the CWM, as the weight parameter W_c is progressively increased through 1, 3, 5, and 7, are shown in the figure. Clearly, as W_c is increased, less smoothing occurs. This response of the CWM smoother is explained by relating the weight W_c and the CWM smoother output to select order statistics (OS). The CWM smoother has an intuitive interpretation. It turns out that the output of a CWM smoother is equivalent to computing

Y(n) = MEDIAN [ X_(k), X_c, X_(N+1−k) ],

where k = (N + 2 − W_c)/2 for 1 ≤ W_c ≤ N, and k = 1 for W_c > N. Since X(n) is the center sample in the observation window, that is, X_c = X(n), the output of the smoother is identical to the input as long as X(n) lies in the interval [X_(k), X_(N+1−k)]. If the center input sample is greater than X_(N+1−k), the smoother outputs X_(N+1−k), guarding against a high-rank (large) aberrant data point being taken as the output. Similarly, the smoother's output is X_(k) if the sample X(n) is smaller than this order statistic. This implementation of the CWM filter is also known as the LUM filter, as described by Hardie and Boncelet (1993) [94].

This CWM smoother performance characteristic is illustrated in Figures 5.10 and 5.11. Figure 5.10 shows how the input sample is left unaltered if it is between the trimming statistics X_(k) and X_(N+1−k), and mapped to one of these statistics if it is outside this range. Figure 5.11 shows an example of the CWM smoother operating on a Laplacian sequence. Along with the input and output, the trimming statistics are shown as upper and lower bounds on the filtered signal. It is easily seen how increasing k will tighten the range in which the input is passed directly to the output.
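The order-statistic interpretation above amounts to clipping the center sample to the interval [X_(k), X_(N+1−k)]; a sketch, with names chosen here for illustration:

```python
# Sketch of the CWM order-statistic interpretation: the CWM output equals
# the center sample clipped to [X_(k), X_(N+1-k)], with k = (N + 2 - Wc)/2
# for 1 <= Wc <= N and k = 1 for Wc > N.

def cwm(window, Wc):
    N = len(window)                          # N odd, Wc odd
    k = max(1, (N + 2 - Wc) // 2)            # k = 1 when Wc >= N
    s = sorted(window)
    low, high = s[k - 1], s[N - k]           # X_(k) and X_(N+1-k)
    center = window[N // 2]
    return min(max(center, low), high)       # clip the center sample

# Window size 5, Wc = 3  ->  k = 2: an impulse at the center is trimmed
# to the 4th order statistic:
print(cwm([12, 6, 100, 1, 9], 3))
```

For W_c = 1 the clipping interval collapses to the median, and for W_c ≥ N it spans the whole sample range (identity), matching the limiting cases noted above.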
Application of CWM Smoother to Image Cleaning

Median smoothers are widely used in image processing to clean images corrupted by noise. Median filters are particularly effective at removing outliers. Often referred to as "salt and pepper" noise, outliers are often present due to bit errors in transmission, or are introduced during the signal acquisition stage. Impulsive noise in images can also occur as a result of damage to analog film. Although a weighted median smoother can be designed to "best" remove the noise, CWM smoothers often provide similar results at much lower complexity; see Ko and Lee (1991) [115] and Sun et al. (1994) [184]. By simply tuning the center weight, a user can obtain the desired level of smoothing. Of course, as the center weight is decreased to attain the desired level of impulse suppression, the output image will suffer increased distortion, particularly around
Figure 5.9 Effects of increasing the center weight of a CWM smoother of size N = 9 operating on the voiced speech "a". The CWM smoother output is shown for W_c = 1, 3, 5, and 7. Note that for W_c = 1 the CWM reduces to median smoothing.
Figure 5.10 The center-weighted median smoothing operation. The output is mapped to the order statistic X_(k) (X_(N+1−k)) if the center sample is less (greater) than X_(k) (X_(N+1−k)), and to the center sample otherwise.
Figure 5.11 An example of the CWM smoother operating on a Laplacian distributed sequence with unit variance. Shown are the input and output sequences as well as the trimming statistics X_(k) and X_(N+1−k). The window size is 25 and k = 7.
the image's fine details. Nonetheless, CWM smoothers can be highly effective in removing "salt and pepper" noise while preserving the fine image details. Figures 5.12a and 5.12b depict a noise-free image and the corresponding image with "salt and pepper" noise. Each pixel in the image has a 10 percent probability of being contaminated with an impulse. The impulses occur randomly and were generated by MATLAB's imnoise function. Figures 5.12c and 5.12d depict the noisy image processed with a 5 × 5 window CWM smoother with center weights 15 and 5, respectively. The impulse-rejection and detail-preservation tradeoff in CWM smoothing is illustrated in these figures. Another commonly used measure of the quality of an image is the peak signal-to-noise ratio (PSNR), defined as

PSNR = 10 log10 ( Max² / MSE ),   (5.37)

where MSE is the mean squared error of the image and Max is the maximum pixel value (255 for 8-bit images). The PSNR values of the pictures shown are included in the captions for illustrative purposes.

At the extreme, for W_c = 1, the CWM smoother reduces to the median smoother, which is effective at removing impulsive noise and preserving edges. It is, however, unable to preserve the
image's fine details. Figure 5.13 shows enlarged sections of the noise-free image (left-top), the noisy image (right-top), and the noisy image after the median smoother has been applied (left-bottom). Severe blurring is introduced by the median smoother and is readily apparent in Figure 5.13. As a reference, the output of a running mean of the same size is also shown in Figure 5.13 (right-bottom). The image is severely degraded as each impulse is smeared to neighboring pixels by the averaging operation.

Figures 5.12 and 5.13 show that CWM smoothers are effective at removing impulsive noise. If increased detail preservation is sought and the center weight is increased, CWM smoothers begin to break down and impulses appear at the output. One simple way to ameliorate this limitation is to employ a recursive mode of operation. In essence, past inputs are replaced by previous outputs as described in (5.25), with the only difference that only the center sample is weighted; all the other samples in the window are weighted by one. Figure 5.14 shows enlarged sections of the nonrecursive CWM filter output (right-top) and of the corresponding recursive CWM smoother output (left-bottom), both with the same center weight (W_c = 15). This figure illustrates the increased noise attenuation provided by recursion without the loss of image resolution.

Both recursive and nonrecursive CWM smoothers can produce outputs with disturbing artifacts, particularly when the center weights are increased to improve the detail-preservation characteristics of the smoothers. The artifacts are most apparent around the image's edges and details. Edges at the output appear jagged, and impulsive noise can break through next to the image detail features. The distinct response of the CWM smoother in different regions of the image arises because images are nonstationary in nature. Abrupt changes in the image's local mean and texture carry most of the visual information content. CWM smoothers process the entire image with fixed weights and are inherently limited in this sense by their static nature. Although some improvement is attained by introducing recursion or by using more weights in a properly designed WM smoother structure, these approaches are also static and do not properly address the nonstationary nature of images. A simple generalization of WM smoothers that overcomes these limitations is presented next.

5.2.2 Permutation-Weighted Median Smoothers
The principle behind the CWM smoother lies in the ability to emphasize, or deemphasize, the center sample of the window by tuning the center weight while keeping the weight values of all other samples at unity. In essence, the value given to the center weight indicates the reliability of the center sample. This concept can be further developed by adapting the value of the center weight in response to the rank of the center sample among the samples in the window. If the center sample does not
Figure 5.12 Impulse noise cleaning with a 5 × 5 CWM smoother: (a) original "portrait" image, (b) image with salt and pepper noise (PSNR = 14.66dB), (c) CWM smoother with W_c = 15 (PSNR = 32.90dB), (d) CWM smoother with W_c = 5 (PSNR = 35.26dB).

Figure 5.13 (Enlarged) Noise-free image, image with salt and pepper noise (PSNR = 14.66dB), 5 × 5 median smoother output (PSNR = 33.34dB), and 5 × 5 mean smoother output (PSNR = 23.52dB).
contain an impulse (high reliability), it is desirable to make the center weight large so that no smoothing takes place (identity filter). On the other hand, if an impulse is present in the center of the window (low reliability), no emphasis should be given to the center sample (impulse), and the center weight should be given the smallest possible value, W_c = 1, reducing the CWM smoother structure to a simple median. Notably, this adaptation of the center weight can be easily achieved by considering the center sample's rank among all pixels in the window [10, 93]. More precisely, denoting the rank of the center sample of the window at a given location n as R_c(n), the simplest permutation WM smoother is defined by the following modification of the CWM smoothing operation:

W_c(n) = W_c,  if T_L ≤ R_c(n) ≤ T_U;
W_c(n) = 1,    otherwise,   (5.38)

where N is the window size and 1 ≤ T_L ≤ T_U ≤ N are two adjustable threshold parameters that determine the degree of smoothing. Note that the weight in (5.38) is data-adaptive and may change with n. The smaller (larger) the threshold parameter T_L (T_U) is set to, the better the detail preservation. Generally, T_L and T_U are set symmetrically around the median. If the underlying noise distribution is not symmetric about the origin, a nonsymmetric assignment of the thresholds would be appropriate.

Figure 5.14 (right-bottom) shows the output of the permutation CWM filter in (5.38) when the "salt and pepper" degraded portrait image is input. The parameters were given the values T_L = 6 and T_U = 20. The improvement achieved by switching W_c between just two different values is significant. The impulses are deleted without exception, the details are preserved, and the jagged artifacts typical of CWM smoothers are not present in the output.

The data-adaptive structure of the smoother in (5.38) can be extended so that the center weight is not only switched between two possible values, but can take on N different values:

W_c(n) = W_c(j),  if R_c(n) = j,  j ∈ {1, 2, . . . , N}.   (5.39)
Thus, the weight assigned to X_c is drawn from the center weight set {W_c(1), W_c(2), . . . , W_c(N)}, and the permutation center-weighted median smoother is represented by ⟨W1, . . . , W_{c−1}, W_c(j), W_{c+1}, . . . , WN⟩, with W_c(j) taking on one of N possible values. With an increased number of weights, the smoother in (5.39) can perform better. However, the design of the weights is no longer trivial, and optimization algorithms are needed [10, 93]. A further generalization of (5.39) is feasible, where weights are given to all samples in the window, but where the value of each weight is data-dependent and determined by the rank of the corresponding sample. In this case, the output of the permutation WM smoother is found as

Y(n) = MEDIAN [ W_{1(R_1)} ◊ X1, W_{2(R_2)} ◊ X2, . . . , W_{N(R_N)} ◊ XN ],   (5.40)

where W_{i(R_i)} is the weight assigned to Xi when its rank among the samples in the window is R_i.
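The rank-switched center weight of (5.38) can be sketched as follows; this is an illustrative implementation, with the weighted median realized by replication of the center sample.

```python
# Sketch of the simplest permutation WM smoother (5.38): the center weight
# is Wc when the center sample's rank lies in [TL, TU] and 1 otherwise, so
# a center sample that looks like an impulse is handled by a plain median.

def permutation_cwm(window, Wc, TL, TU):
    N = len(window)                              # N odd
    center = window[N // 2]
    rank = sorted(window).index(center) + 1      # rank of center sample, 1..N
    weight = Wc if TL <= rank <= TU else 1       # data-adaptive center weight
    # weighted median by replication of the center sample
    pool = sorted(window[:N // 2] + [center] * weight + window[N // 2 + 1:])
    return pool[len(pool) // 2]

# An impulse at the center has extreme rank, falls outside [TL, TU],
# and is removed by the resulting plain median:
print(permutation_cwm([3, 4, 255, 5, 6], Wc=5, TL=2, TU=4))
```

When the center sample's rank is interior (no impulse), the large center weight makes the smoother behave as an identity on that sample, exactly the switching behavior described above.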
5.3 THRESHOLD DECOMPOSITION REPRESENTATION

Threshold decomposition (TD) is a powerful theoretical tool used in the analysis of weighted median filters and smoothers. The use of this signal decomposition method allows the study of weighted median smoothers by merely studying their behavior on binary signals. Introduced by Fitch et al. (1984) [72], threshold decomposition was originally formulated to admit signals having only a finite number of positive-valued quantization levels. Threshold decomposition admitting integer-valued signals taking on positive or negative values is formulated as follows. Consider an integer-valued set of samples X1, X2, . . . , XN forming the vector X = [X1, X2, . . . , XN]^T, where Xi ∈ {−M, . . . , 0, . . . , M}. The threshold decomposition of X amounts to decomposing this vector into 2M binary vectors x^{−M+1}, . . . , x^0, . . . , x^M, where the ith element of x^m is defined by

x_i^m = T^m(Xi) = 1,  if Xi ≥ m;  −1,  if Xi < m,   (5.41)

where T^m(·) is referred to as the thresholding operator. Using the sign function, the above can be written as x_i^m = sgn(Xi − m^−), where m^− represents a real number approaching the integer m from the left. Although defined for integer-valued signals, the thresholding operation in (5.41) can be extended to noninteger signals with a finite number of quantization levels. The threshold decomposition of the vector
X = [0, 0, 2, -2, 1, 1, 0, -1, -1]^T

with M = 2, for instance, leads to the 4 binary vectors

x^2    = [-1, -1, 1, -1, -1, -1, -1, -1, -1]^T
x^1    = [-1, -1, 1, -1,  1,  1, -1, -1, -1]^T
x^0    = [ 1,  1, 1, -1,  1,  1,  1, -1, -1]^T
x^{-1} = [ 1,  1, 1, -1,  1,  1,  1,  1,  1]^T.
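This decomposition is easy to verify numerically. A minimal Python sketch follows; the helper names `threshold`, `decompose`, and `reconstruct` are mine, not from the text, and `reconstruct` uses the half-sum inversion property discussed next:

```python
# Sketch of threshold decomposition and its inverse for integer-valued
# signals X_i in {-M, ..., M}; helper names are illustrative only.

def threshold(X, m):
    """Binary vector x^m: 1 where X_i >= m, else -1 (eq. 5.41)."""
    return [1 if x >= m else -1 for x in X]

def decompose(X, M):
    """Return the 2M binary vectors x^{-M+1}, ..., x^M keyed by level."""
    return {m: threshold(X, m) for m in range(-M + 1, M + 1)}

def reconstruct(levels):
    """Invert the decomposition: X_i = (1/2) * sum over m of x_i^m."""
    vectors = list(levels.values())
    n = len(vectors[0])
    return [sum(v[i] for v in vectors) // 2 for i in range(n)]

X = [0, 0, 2, -2, 1, 1, 0, -1, -1]
levels = decompose(X, 2)
assert levels[2] == [-1, -1, 1, -1, -1, -1, -1, -1, -1]
assert reconstruct(levels) == X
```

The integer division in `reconstruct` is exact because the sum of the 2M binary values for a sample X_i is always 2X_i.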
Threshold decomposition has several important properties. First, threshold decomposition is reversible. Given a set of thresholded signals, each of the samples in X can be exactly reconstructed as

X_i = (1/2) Σ_{m=-M+1}^{M} x_i^m.   (5.43)

Thus, an integer-valued discrete-time signal has a unique threshold signal representation, and vice versa:

X_i  ⟷  {x_i^{-M+1}, . . . , x_i^M},   (5.44)

where ⟷ denotes the one-to-one mapping provided by the threshold decomposition operation.

Figure 5.14 (Enlarged) Original image, CWM smoother output (PSNR = 32.90 dB), recursive CWM smoother output (PSNR = 32.11 dB), and permutation CWM smoother output (PSNR = 35.01 dB). Window size is 5 × 5.

A second property of importance is the partial ordering obeyed by the threshold decomposed variables. For all thresholding levels m > ℓ, it can be shown that x_i^m ≤ x_i^ℓ. In particular, if x_i^m = 1 then x_i^ℓ = 1 for all ℓ < m. Similarly, if x_i^ℓ = -1 then x_i^m = -1 for all m > ℓ. The partial order relationships among
samples across the various thresholded levels emerge naturally in thresholding and are referred to as the stacking constraints [195]. Threshold decomposition is of particular importance in median smoothing since they are commutable operations. That is, applying a median smoother to a (2M + 1)-valued signal is equivalent to decomposing the signal into 2M binary thresholded signals, processing each binary signal separately with the corresponding median smoother, and then adding the binary outputs together and scaling by one half to obtain the integer-valued output. Thus, the median of a set of samples X_1, X_2, . . . , X_N is related to the set of medians of the thresholded signals as [72]

MEDIAN(X_1, . . . , X_N) = (1/2) Σ_{m=-M+1}^{M} MEDIAN(x_1^m, . . . , x_N^m).   (5.45)
The threshold decomposition property of median smoothers follows from the following argument. Let X̃ be the sample median of the set of samples X_1, . . . , X_N, with N an odd integer. By definition of the sample median, out of the N samples in the set there are at least (N+1)/2 samples having values equal to or smaller than X̃, and at least (N+1)/2 having values equal to or greater than X̃. From (5.41), for a given threshold value m in the range -M+1 ≤ m ≤ X̃, at least (N+1)/2 of the thresholded binary samples x_i^m have value 1. Similarly, for a given threshold value m in the range X̃ < m ≤ M, at least (N+1)/2 of the thresholded samples x_i^m have value -1. Hence,

MEDIAN(x_1^m, . . . , x_N^m) = {  1,  for -M+1 ≤ m ≤ X̃;
                                 -1,  for X̃ < m ≤ M.     (5.46)

Consequently,

(1/2) Σ_{m=-M+1}^{M} MEDIAN(x_1^m, . . . , x_N^m) = (1/2) [ (X̃ + M) - (M - X̃) ] = X̃.   (5.47)

The reverse is also true. That is, half the sum of a set of filtered binary samples, which satisfy the stacking constraints, produces the median value of the samples
synthesized by the unfiltered set of thresholded binary samples. The threshold decomposition property in (5.45) is thus verified.

To illustrate the concepts behind threshold decomposition, consider the sample set X = [2, -1, 2, 0, 1]^T. With M = 2, the set of 2M binary vectors leads to the array of binary vectors

x^2    = [ 1, -1, 1, -1, -1]^T
x^1    = [ 1, -1, 1, -1,  1]^T
x^0    = [ 1, -1, 1,  1,  1]^T
x^{-1} = [ 1,  1, 1,  1,  1]^T.    (5.48)

Applying the binary median operation to each binary vector gives

MEDIAN(x^2) = -1,  MEDIAN(x^1) = 1,  MEDIAN(x^0) = 1,  MEDIAN(x^{-1}) = 1.   (5.49)
Summing the outputs of the binary medians and scaling by 1/2 leads to the desired multivalued median, MEDIAN(2, -1, 2, 0, 1) = 1. Since X_i ⟷ {x_i^m} and MEDIAN(X_i|_{i=1}^N) ⟷ {MEDIAN(x_i^m|_{i=1}^N)}, the relationship in (5.45) establishes a weak superposition property satisfied by the nonlinear median operator. This property is important because the effects of median smoothing on binary signals are much easier to analyze than those on multilevel signals. In fact, the median operation on binary samples reduces to a simple Boolean operation. The median of three binary samples x_1, x_2, and x_3, for example, is equivalent to x_1 x_2 + x_2 x_3 + x_1 x_3, where the + (OR) and x_i x_j (AND) "Boolean" operators in the {-1, 1} domain are defined as

x_i + x_j = max(x_i, x_j),    x_i x_j = min(x_i, x_j).   (5.50)

Note that the operations in (5.50) are also valid for the standard Boolean operations in the {0, 1} domain.
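These {-1, 1}-domain Boolean operations can be checked exhaustively. The sketch below (function names are mine) verifies that the positive Boolean function x1x2 + x2x3 + x1x3, with OR = max and AND = min, reproduces the median on all eight binary triples:

```python
# Verify that the 3-sample binary median equals the PBF
# x1x2 + x2x3 + x1x3 under the {-1,1} algebra OR = max, AND = min.
from itertools import product
from statistics import median

def AND(*args):
    return min(args)

def OR(*args):
    return max(args)

for x1, x2, x3 in product([-1, 1], repeat=3):
    pbf = OR(AND(x1, x2), AND(x2, x3), AND(x1, x3))
    assert pbf == median([x1, x2, x3])
```

The same check written in the {0, 1} domain (with the usual Boolean AND/OR) passes as well, as the text notes.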
Stack Smoothers
The framework of threshold decomposition and Boolean operations has led to the general class of nonlinear smoothers referred to as stack smoothers, introduced by Wendt et al. (1986) [195]. The output
of a stack smoother is the result of a sum of a
stack of binary operations acting on thresholded versions of the samples spanned by the smoother's running window. The stack smoother output is defined by
S_f(X_1, . . . , X_N) = (1/2) Σ_{m=-M+1}^{M} f(x_1^m, . . . , x_N^m),   (5.51)
where x_i^m, i = 1, . . . , N, are the thresholded input samples x_i^m = T^m(X_i) defined in (5.41), and where f(·) is a positive Boolean function (PBF) which, by definition, contains only uncomplemented input variables. More precisely, if two binary vectors u ∈ {-1, 1}^N and v ∈ {-1, 1}^N stack, that is, u_i ≥ v_i for all i ∈ {1, . . . , N}, then it is said that u ≥ v. For a PBF f(·) it holds that

f(u) ≥ f(v)   if u ≥ v.   (5.52)
The property in (5.52) is called the stacking property. A complete proof of (5.52) can be found in [145]. From (5.51), it can be seen that stack smoothers are completely characterized by their
independent operation on a set of binary vectors. The importance of this property lies in the fact that the analysis and manipulation of the nonlinear operations behind stack smoothers is simpler
with binary vectors than with real-valued vectors. The computation of a binary median of size N (odd), for instance, reduces to the sign of the sum of the N binary inputs. Given an input vector X and
its set of thresholded binary vectors x^{-M+1}, . . . , x^0, . . . , x^M, it follows from the definition of threshold decomposition that the set of thresholded binary vectors satisfy the partial ordering

x^i ≤ x^j   if i ≥ j.   (5.53)

Consequently, the stack smoothing of the thresholded binary vectors by the PBF f(·) also satisfies the partial ordering

f(x^i) ≤ f(x^j)   if i ≥ j.   (5.54)
The relation above leads to an interesting interpretation of the stack smoother operation, since stack smoothers operate independently on each of the thresholded input vectors. The stack smoother decides whether the desired signal is less than a given level j or not based on the noisy observations provided by the binary vectors at the various thresholds. The stacking property in (5.54) ensures that the decisions on different levels are consistent. Thus, if the smoother at a given time location decides that the signal is less than j, then the smoother outputs at levels j + 1 and greater must draw the same conclusion. The estimation consistency is illustrated in Figure 5.15, where a 5-level sequence is median smoothed via threshold decomposition. As defined in (5.51), stack smoother input signals are assumed to be quantized to a finite number of signal levels. For practical and analytical reasons it is desirable to extend their definition to a class of smoothers admitting real-valued input signals.
Figure 5.15 Threshold decomposition for integer-valued signals.

DEFINITION 5.1 (CONTINUOUS STACK SMOOTHERS) Given a set of N real-valued samples X = (X_1, X_2, . . . , X_N), the output of a stack smoother defined by a PBF f(·) is given by

S_f(X) = max{ ℓ ∈ R : f(T^ℓ(X_1), . . . , T^ℓ(X_N)) = 1 },   (5.55)
where the thresholding function T^ℓ(·) is defined in (5.41). Although very general, Definition 5.1 is somewhat cumbersome. The following property yields an important link between the continuous stack smoother S_f(·) and the corresponding PBF f(·).
PROPERTY 5.1 (MAX-MIN REPRESENTATION OF STACK SMOOTHERS) Let X = (X_1, X_2, . . . , X_N) be a real-valued input vector to a stack smoother S_f(·) defined by the positive Boolean function f(x_1, x_2, . . . , x_N). Then, the PBF with the sum-of-products expression

f(x_1, . . . , x_N) = Σ_{i=1}^{K} Π_{j∈P_i} x_j,   (5.56)

where the P_i are subsets of {1, 2, . . . , N}, has the stack filter representation

S_f(X) = max{ min{X_j : j ∈ P_1}, min{X_j : j ∈ P_2}, . . . , min{X_j : j ∈ P_K} }.   (5.57)
Thus, given a positive Boolean function f(x_1, . . . , x_N) which characterizes a stack smoother, it is possible to find the equivalent smoother in the integer domain by replacing the binary AND and OR Boolean functions acting on the x_i's with min and max operations acting on the real-valued X_i samples. A more intuitive class of smoothers is obtained, however, if the positive Boolean functions are further restricted, as described in the study of Yli-Harja et al. (1991) [202]. When self-duality² and separability are imposed, for instance, the equivalent integer-domain stack smoothers reduce to the well-known class of weighted median smoothers with positive weights. For example, if the Boolean function in the stack smoother representation is selected as f(x_1, x_2, x_3, x_4) = x_1 x_3 x_4 + x_2 x_4 + x_2 x_3 + x_1 x_2, the equivalent WM smoother takes on the positive weights (W_1, W_2, W_3, W_4) = (1, 2, 1, 1). The procedure for obtaining the weights W_i from the PBF is described below.
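The three equivalent views of this smoother — threshold decomposition with the PBF, the max-min representation, and the weighted median with weights (1, 2, 1, 1) — can be cross-checked numerically. A sketch (helper names are mine; integer-valued inputs are assumed so the threshold decomposition sum is finite):

```python
# Cross-check: PBF via threshold decomposition (5.51), the max-min
# form (5.57), and the weighted median with weights (1, 2, 1, 1).
import random
from statistics import median

def f(x1, x2, x3, x4):          # PBF with OR = max, AND = min
    return max(min(x1, x3, x4), min(x2, x4), min(x2, x3), min(x1, x2))

def stack_maxmin(X):            # max-min representation (5.57)
    X1, X2, X3, X4 = X
    return max(min(X1, X3, X4), min(X2, X4), min(X2, X3), min(X1, X2))

def weighted_median(X, W):      # integer weights: replicate, then median
    reps = [x for x, w in zip(X, W) for _ in range(w)]
    return median(reps)

random.seed(1)
M = 5                           # integer inputs confined to {-M, ..., M}
for _ in range(200):
    X = [random.randint(-M, M) for _ in range(4)]
    td = sum(f(*[1 if x >= m else -1 for x in X])
             for m in range(-M + 1, M + 1)) / 2
    assert td == stack_maxmin(X) == weighted_median(X, [1, 2, 1, 1])
```

The equivalence with weights (1, 2, 1, 1) holds because the prime implicants of the weighted median's PBF (weight sums reaching the majority threshold 3 out of 5) are exactly the four products in f.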
Statistical Properties of Stack Smoothers

Stack smoothers are primarily used for noise attenuation; thus, it is natural to resort to their output statistics for their characterization. Consider a set of N real-valued input samples X = [X_1, X_2, . . . , X_N]^T applied to a stack smoother S_f(·) defined by a positive Boolean function f(x_1, x_2, . . . , x_N). From Definition 5.1, if we select a real-valued threshold ℓ such that

f(x_1^ℓ, x_2^ℓ, . . . , x_N^ℓ) = 1,   (5.58)

where x_i^ℓ, i = 1, . . . , N, are the thresholded input variables, then it follows from the stacking property that the output of the stack smoother must satisfy

S_f(X) ≥ ℓ.   (5.59)
Likewise, the event

f(x_1^ℓ, x_2^ℓ, . . . , x_N^ℓ) = -1   (5.60)

indicates that the stack smoother output satisfies

S_f(X) < ℓ.   (5.61)

The deterministic partial ordering described in (5.58)-(5.61) was used by Arce (1986) [5] and Wendt et al. (1986) [194] to determine the output distribution of a stack smoother as

F_S(ℓ) = Pr{S_f(X) ≤ ℓ}   (5.62)
       = Pr{f(x_1^ℓ, x_2^ℓ, . . . , x_N^ℓ) = -1},   (5.63)
where it is assumed that the distribution functions of the input random variables are continuous. Likewise, the relation

Pr{S_f(X) > ℓ} = Pr{f(x_1^ℓ, x_2^ℓ, . . . , x_N^ℓ) = 1}   (5.64)
²Self-duality refers to the Boolean function structure in which complementing the input variables leads to an output that is the complement of the original output.
follows from the stacking property of stack smoothers. The probabilistic relations in (5.63) and (5.64) are referred to as the statistical threshold decomposition property of stack smoothers, which provides the key to determining their output distribution [8, 5, 195]. Note that (5.63) provides a general statistical description of the output where no statistical assumptions have been made about the input random variables. The computation of F_S(ℓ), however, can be difficult to obtain in closed form except for cases where some assumptions are placed on the statistics of the input samples. F_S(ℓ) for general input random variables can still be computed, but numerical methods may be required. If the observation samples X_1, X_2, . . . , X_N are assumed independent, for example, with X_i obeying the distribution function F_i(x) for i = 1, . . . , N, the binary thresholded samples x_i^ℓ are distributed as

Pr{x_i^ℓ = -1} = F_i(ℓ),   (5.65)
Pr{x_i^ℓ = 1} = 1 - F_i(ℓ).   (5.66)
The distribution function for the stack smoother output S_f(X) is then found as

F_S(ℓ) = Pr{f(x_1^ℓ, x_2^ℓ, . . . , x_N^ℓ) = -1}
       = Σ_{x ∈ f^{-1}(-1)} Π_{i=1}^{N} F_i(ℓ)^{(1-x_i)/2} (1 - F_i(ℓ))^{(1+x_i)/2},   (5.67)

where f^{-1}(-1) is the pre-image of -1, that is, f^{-1}(-1) = {x : f(x) = -1}, and the binary-valued variables in the exponents are to be understood as real -1s and 1s. In essence, the expression in (5.67) adds the probabilities of all possible binary inputs (events) that lead the PBF f(·) to output a -1. Since these events are disjoint, the expression in (5.67) simply adds the probability of each event.
If the input samples are not only independent from each other but identically distributed as well, the output distribution in (5.67) can be further simplified. In this case, denote the common distribution function of the input variables as F(x), and let w_H(x) be the Hamming weight of the vector [x_1 + 1, x_2 + 1, . . . , x_N + 1]/2, that is, the number of 1s in x. The distribution function F_S(ℓ) of the output of a stack filter is then

F_S(ℓ) = Σ_{i=0}^{N} A_i (1 - F(ℓ))^i F(ℓ)^{N-i},   (5.68)

where the numbers A_i are defined by

A_i = |{x : f(x) = -1, w_H(x) = i}|,   (5.69)

where |Ω| denotes the cardinality of the set Ω. The expression in (5.68) is simply a reordering of the terms in (5.67), where the product operation in (5.67) is absorbed in the coefficients A_i. This is only possible if all the samples share a common distribution function F(·).
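For the 3-point median these coefficients are easy to enumerate. The sketch below (variable names are mine) computes the A_i by brute force and checks that (5.68) reproduces the classical median-of-3 output distribution 3F² - 2F³:

```python
# Enumerate the coefficients A_i (5.69) for the median-of-3 PBF and check
# the i.i.d. output distribution (5.68) against 3F^2 - 2F^3.
from itertools import product
from math import comb, isclose
from statistics import median

N = 3
A = [0] * (N + 1)
for x in product([-1, 1], repeat=N):
    if median(x) == -1:
        A[sum(xi == 1 for xi in x)] += 1   # Hamming weight = number of 1s

assert A == [1, 3, 0, 0]                   # A_0 = 1, A_N = 0
assert all(A[i] <= comb(N, i) for i in range(N + 1))

for F in [0.1, 0.25, 0.5, 0.9]:            # F = F(l), any value in [0, 1]
    Fs = sum(A[i] * (1 - F) ** i * F ** (N - i) for i in range(N + 1))
    assert isclose(Fs, 3 * F ** 2 - 2 * F ** 3)
```

The same enumeration applies to any PBF: replace `median(x)` by the function's value on the binary vector x.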
This is only possible if all the samples share a common distribution function F ( . ) . In order to guarantee that the stack smoother is not defined by the trivial PBF f ( z )= -1 for all z, or f ( z
) = 1 for all 2,it is required that
Ao= 1 and AN = O .
Since Ai is the cardinality of a set of binary vectors of length N with hamming weight i, it follows that
The coefficients A_i embody important deterministic information about the underlying PBF defining a particular stack smoother. A number of properties for the coefficients A_i have been derived, some of which we list here without proof [119, 198]. Because of the stacking constraints of PBFs, the coefficients A_i satisfy the ordering

A_{i+1} ≤ ((N - i)/(i + 1)) A_i,   (5.72)

for i = 0, 1, . . . , N-1. Since (N - i)/(i + 1) ≤ 1 when i ≥ (N-1)/2, the above implies that

A_{i+1} ≤ A_i,   for i ≥ (N-1)/2.   (5.73)

For self-dual PBFs the A_i satisfy

A_i + A_{N-i} = C(N, i),   i = 0, 1, . . . , N.   (5.74)
Moments of Stack Smoothers

In principle, the statistical properties of the stack smoother output are embodied in the probability distribution function. In some cases, it may be simpler or more advantageous to characterize the stack smoother output through expectations or moments. The second-order central moment, for instance, is often used to measure the noise attenuation capability of a smoother: it measures the spread of the output variable about its expected value. A criterion for stack smoother design may take advantage of this fact and try to minimize the second central moment of the output [119]. The pth-order moment of the output of a stack smoother can be expressed as

E[S_f^p(X)] = Σ_{i=0}^{N-1} A_i M(F, p, N, i),

with

M(F, p, N, i) = ∫_{-∞}^{∞} x^p (d/dx)[ (1 - F(x))^i F(x)^{N-i} ] dx,

for i = 0, . . . , N-1. M(·) is thus a function of the input distribution F(·), the window size N, and the index i. The central moment about the mean of the output is then

σ_S² = Σ_{i=0}^{N-1} A_i M(F, 2, N, i) - ( Σ_{i=0}^{N-1} A_i M(F, 1, N, i) )².
As can be seen in the above equations, the function M(F, p, N, i) plays a key role in determining the output moments. Notably, these functions have a number of properties that can facilitate their computation. The following property provides a recursive tool for computation.

PROPERTY 5.2 The numbers M(F, p, N, i) satisfy

M(F, p, N, i) = Σ_{j=0}^{i} C(i, j) (-1)^{i-j} M(F, p, N-j, 0),   (5.75)

for 0 ≤ i ≤ N. Property 5.2 follows from the identity
(1 - F(x))^i = Σ_{j=0}^{i} C(i, j) (-1)^{i-j} F(x)^{i-j}.   (5.76)

Using (5.76) in the definition of M(F, p, N, i) leads to

M(F, p, N, i) = Σ_{j=0}^{i} C(i, j) (-1)^{i-j} ∫_{-∞}^{∞} x^p (d/dx)[ F(x)^{N-j} ] dx.   (5.77)

The integral in (5.77) is simply M(F, p, N-j, 0), which proves Property 5.2. The recursion in Property 5.2 allows the computation of all M(F, p, N, i) numbers provided that the numbers M(F, p, j, 0) are known. Notice, however, that M(F, p, j, 0) are by definition the pth moments of the largest order statistic (the maximum) of j i.i.d. variates with a parent distribution F(·). These have been extensively studied in statistics and have been tabulated for various distributions, including the Gaussian and uniform distributions. If the input distribution is symmetric with respect to its mean, a number of additional properties for M(F, p, N, i) emerge. Using this set of properties for M(F, p, N, i), it was shown in [119, 198] that among all stack smoothers of a fixed window size, the standard median smoother attains the smallest output variance.
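The recursion can be sanity-checked numerically. The sketch below assumes a uniform parent distribution on [0, 1] and takes M(F, p, N, i) to be the integral of x^p against d[(1 - F(x))^i F(x)^{N-i}], an interpretation consistent with the expansion of the output distribution; for the uniform case M(F, p, k, 0) = k/(k + p) in closed form:

```python
# Numerically verify Property 5.2 for the uniform distribution F(x) = x
# on [0, 1]; M(F, p, k, 0) = k/(k+p) follows from integrating x^p d(x^k).
from math import comb, isclose

def M_numeric(p, N, i, steps=50000):
    """Midpoint-rule integral of x^p * d/dx[(1-x)^i * x^(N-i)] on [0,1]."""
    h = 1.0 / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        d = (N - i) * (1 - x) ** i * x ** (N - i - 1)
        if i > 0:
            d -= i * (1 - x) ** (i - 1) * x ** (N - i)
        total += x ** p * d * h
    return total

def M_recursion(p, N, i):
    """Property 5.2 right-hand side using the closed form for M(F,p,k,0)."""
    return sum(comb(i, j) * (-1) ** (i - j) * (N - j) / (N - j + p)
               for j in range(i + 1))

N, p = 5, 2
for i in range(N):
    assert isclose(M_numeric(p, N, i), M_recursion(p, N, i), abs_tol=1e-6)
```

As a spot check, M(F, 2, 5, 1) works out to 2/3 - 5/7 = -1/21 by either route.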
Not surprisingly, the standard median also minimizes the breakdown probability [119]. Thus, the median smoother is optimal in the class of stack smoothers for noise cleaning. In more demanding tasks, where the underlying desired signals are not constant, the standard median is no longer optimal and more elaborate stack smoother design procedures are needed.
Optimization of Stack Smoothers

Having the framework of stack smoothers, it is necessary to develop tools for their design and optimization under some error criterion. This problem has been extensively studied, and consequently a number of stack smoother optimization algorithms exist today. A number of error criteria have been proposed, including the traditional mean square error (MSE) and mean absolute error (MAE). See Coyle and Lin (1988) [55], Coyle (1988) [54], Lin and Kim (1994) [131], Lin et al. [132], Yin et al. (1993) [199], and Tabus et al. (1996) [186]. Other less common approaches have also been considered. These include optimization under an associative memory criterion, a shape preservation criterion, and optimization under a set of constraints. They can be found in Gabbouj and Coyle (1990) [76], Kuosmanen et al. (1995) [120], Yu and Coyle (1992) [203], and Yang et al. (1995) [198]. Optimization under a set of constraints takes on many forms. Structural, rank-selection, and breakdown-point constraints can be imposed, leading to a number of optimization algorithms as in Kuosmanen and Astola [119], Gabbouj and Coyle (1990) [76], and Yin et al. (1993) [199]. A review of these approaches is given in Astola and Kuosmanen (1999) [22]. The performance of the various algorithms depends on the optimality criterion, which should be carefully chosen. Here we present two approaches for the optimization of stack smoothers: (a) design under structural constraints, and (b) the adaptive optimization approach to be covered in Chapter 6.

Stack Smoothers Under Structural Constraints

Zeng (1994) [205] introduced a simple stack smoother design procedure that can preserve certain structural features of an image, such as straight lines or corners, while taking advantage of the optimal noise reduction properties of the median smoother. The structural constraints consist of a list of different structures to be preserved, deleted, or modified. Since
stack smoothers obey threshold decomposition, the structural constraints only need to be considered in the context of binary signals. That is, they can be specified by a set of binary vectors and the corresponding stack smoother outputs. Zeng's algorithm leads to stack smoothers that compare favorably with other stack smoothers such as the center weighted median but, with their closed-form Boolean function representations, are generally easier to derive. To examine the structural constraints that one may desire in the design of a stack smoother, consider the following two-dimensional example. Assume that a binary image of interest includes a horizontal line of width one. An unweighted median filter of size 3 × 3 or greater applied to the image will annihilate the line. Let x be the 3 × 3 observation vector
x = [ x_1  x_2  x_3
      x_4  x_5  x_6
      x_7  x_8  x_9 ],

where x_5 is the center point of the window. To preserve the desired feature (horizontal lines), define the Boolean function

g_H^{(3x3)}(x) = x_4 x_5 x_6 + (x_4 + x_5 + x_6) m^{(3x3)}(x),
where m ( 3 x 3 ) (is. )the Boolean function for the unweighted median smoother of size 3 x 3. We find that if 2 4 , 5 5 , 2 6 are all 1 then g g x 3 ) will be 1 no matter the values of the other
points3. Also, if there was a line of -1s instead of 1s then the term (24 2 5 x ~ )as, well as the product 5425x6, would be -1, and thus g Z x 3 ) ( . ) = -1. Hence, g g x 3 ’ ( . ) preserves
straight horizontal lines of either -1s or 1s through the center point of the binary image. If there was no such line, then z4x526 = -1 and 24 25 5 6 = 1, so that 9 g x 3 ) ( . will ) return the
value of the median m ( 3 x 3 ) ( . ) , thus employing the noise reduction property of the latter smoother. The above concept can be extended in a straightforwardmanner to create smoothen that will
In general, let the observation window be x = x_1, x_2, . . . , x_{N²} and denote subsets

X_i = {x_{i1}, x_{i2}, . . . , x_{ik}},   i = 1, . . . , M,

that include the center point of the smoother. Further, let P_i be the product (AND) of the points in X_i and Z_i be the sum (OR) of those points. Then, to create a smoother that will return a 1 (or -1) when any subset X_i is made up entirely of 1s (or -1s), and return the result of the median otherwise, we let

g(x) = Σ_{i=1}^{M} P_i + ( Π_{i=1}^{M} Z_i ) m(x).
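Zeng's construction for the single-subset (horizontal-line) case can be sketched as follows; the function names are mine, and the window is flattened row-major so that x[3], x[4], x[5] hold the middle row:

```python
# Sketch of Zeng's design g(x) = x4x5x6 + (x4 + x5 + x6) m(x) in the
# {-1,1} Boolean algebra, where AND = min and OR = max.
from statistics import median

def g_horizontal(x):
    """x is the 3x3 window flattened row-major; x[4] is the center x5."""
    m = median(x)                    # unweighted 3x3 binary median
    line = min(x[3], x[4], x[5])     # AND of the middle row x4, x5, x6
    gate = max(x[3], x[4], x[5])     # OR of the middle row
    return max(line, min(gate, m))

# A one-pixel-wide horizontal line of 1s through the center is preserved...
assert g_horizontal([-1, -1, -1, 1, 1, 1, -1, -1, -1]) == 1
# ...as is a line of -1s...
assert g_horizontal([1, 1, 1, -1, -1, -1, 1, 1, 1]) == -1
# ...and otherwise the output falls back to the median.
x = [1, -1, 1, -1, 1, -1, 1, -1, 1]
assert g_horizontal(x) == median(x)
```

The general multi-subset smoother follows the same pattern: take the max over the subset products and gate the median with the min over the subset sums.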
For example, given the 5 × 5 mask

x = [ x_1   x_2   x_3   x_4   x_5
      x_6   x_7   x_8   x_9   x_10
      x_11  x_12  x_13  x_14  x_15
      x_16  x_17  x_18  x_19  x_20
      x_21  x_22  x_23  x_24  x_25 ],

we can preserve horizontal, vertical, and diagonal lines of length 5 by using g_HVD^{(5x5)}(x), built as above from the center row {x_11, . . . , x_15}, the center column {x_3, x_8, x_13, x_18, x_23}, and the two main diagonals {x_1, x_7, x_13, x_19, x_25} and {x_5, x_9, x_13, x_17, x_21}.
Certain center weighted smoothers will also preserve straight lines. For instance, the 5 × 5 center-weighted median smoother with center weight 17 will preserve the same features as g_HVD^{(5x5)}. However, there will be some loss in noise attenuation. In contrast to Zeng's design method, the design of center-weighted medians, and in general of weighted median smoothers, calls for more elaborate optimization procedures. The design advantages of Zeng's approach are further illustrated in the following example. Suppose one wishes to increase the window mask to improve noise attenuation while fixing the length of the feature to be preserved. For instance, the design for a 5 × 5 mask that preserves lines of length 3 takes the length-3 segments through the center as the subsets,

g_HVD^{(5x5)}(x) = x_11 x_12 x_13 + x_12 x_13 x_14 + x_13 x_14 x_15 + · · · + ( Π_i Z_i ) m^{(5x5)}(x),

with the analogous vertical and diagonal segments included among the products.
This derivation is much easier than that of an equivalent weighted median smoother. Figure 5.16 illustrates the principles behind Zeng's algorithm. Figure 5.16a shows a 460 × 480 test image consisting of several horizontal, vertical, and diagonal lines. Figure 5.16b shows the image with additive 10% salt-and-pepper noise. An unweighted and a center-weighted median smoother of size 3 × 3 were applied to the noisy image. The median smoother annihilated all image features, as expected. The CWM smoother output with center weight 5 is shown in Figure 5.16c. The output of Zeng's stack smoother is shown in Figure 5.16d. We see that Zeng's stack smoother performs as expected, preserving structural features while possessing high noise attenuation. The center-weighted median smoother preserved details, but at
Figure 5.16 (a) Test pattern with line features, (b) noisy pattern, (c) output of 3 × 3 CWM, (d) Zeng's stack smoother preserving line features.
the cost of some reduction in noise attenuation. The noise attenuation of the CWM can be improved by lowering the center weight, although the feature preservation characteristics of the CWM then degrade.
5.4 WEIGHTED MEDIANS IN LEAST ABSOLUTE DEVIATION (LAD) REGRESSION

Linear regression is widely used today in science and engineering applications. It attempts to relate two or more sets of data which are presumed to be linearly related, that is, sets that statistically differ by a scale factor and a possible shift. In microarray data normalization, for example, the goal is to recognize a certain contributing gene set by assaying the expression levels of thousands of genes from different arrays [51]. The underlying assumption is that one or more reference genes have constant expression levels across batches. Thus, variation between arrays must be taken into account before any meaningful biological comparison can be made. Microarray data normalization is often accomplished through a linear regression between two arrays of gene expressions. Once the reference gene set is recognized and the linear regression parameters are obtained, normalization of two arrays can be carried out simply by scaling and shifting one of the arrays using the obtained regression parameters. Comparison of two arrays can then be executed easily.
Comparison of two arrays can then be executed easily. Historically, linear regression has long been dominated by Least Squares (LS) techniques, mostly because of their elegant theoretical foundation
and ease of implementation. The assumption in this method is that the model has normally distributed errors. In many applications, such as in microarray data normalization, heavier-thanGaussian
tailed distributions may be encountered, where outliers in the measurements may easily ruin the estimates [35]. To address this problem, robust regression methods have been developed so as to
mitigate the influence of outliers. Among all the approaches to robust regression, the Least Absolute Deviations (LAD) method, or norm,LI, is considered conceptually the simplest, since it does not
require a tuning mechanism like most other robust regression procedures. As a result, LAD regression has drawn significant attention in statistics, finance, engineering, and other applied sciences as
detailed in a series of studies on L1-norm methods [60,61,62,63]. LAD regression is based on the assumption that the model has Laplacian distributed errors. Unlike the LS approach though, LAD
regression has no closed-form solution, hence numerical and iterative algorithms must be resorted to. Surprisingly to many, the LAD regression method first suggested by Boscovich (1757) and studied
by Laplace (1793) predated the LS technique originally developed by Legendre (1805) and Gauss (1823) [35, 601. It was not until nearly a century later that Edgeworth (1887) [66] proposed a general
numerical method to solve the unconstrained LAD problem, where the weighted median was introduced as the basic operation in each iteration. Edgeworth’s method, however, suffers from cycling when data
has degeneracies [97]. A breakthrough came in the 1950’s when Harris (1950) [95] brought in the notion that linear programming techniques could be used to solve the LAD regression, and Charnes et al.
(1955) [44] actually utilized the simplex method to minimize the LAD objective function. Many simplex-like methods blossomed thereafter, among which Barrodale and Roberts (1973) [29] and Armstrong,
Frome, and Kung (1979) [ 151 are the most representative. Other efficient approaches include the active set method by Bloomfield and Steiger (1980) [34], the direct decent algorithm by Wesolowsky
(1981) [196], and the interior point method proposed by Zhang (1993) [206]. More historical background on LAD estimates can be found in [60].
Foundation and Cost Functions

The simple LAD regression problem is formulated as follows. Consider N observation pairs (X_i, Y_i) modeled in a linear fashion

Y_i = a X_i + b + U_i,   i = 1, 2, . . . , N,   (5.78)

where a is the unknown slope of the fitting line, b the intercept, and the U_i are unobservable errors drawn from a random variable U obeying a zero-mean Laplacian distribution f(U) = (1/(2λ)) e^{-|U|/λ} with variance σ² = 2λ². The Least Absolute Deviation regression is found by choosing the pair of parameters a and b that minimizes the objective function

F_1(a, b) = Σ_{i=1}^{N} |Y_i - a X_i - b|,   (5.79)
which has long been known to be continuous and convex [35]. Moreover, the cost surface is of a polyhedron shape, and its edge lines are characterized by the sample pairs (X_i, Y_i). As a comparison, the objective function of the Least Squares regression is the well-known

F_2(a, b) = Σ_{i=1}^{N} (Y_i - a X_i - b)²,   (5.80)

which has the closed-form solution

a* = Σ_{i=1}^{N} (X_i - X̄)(Y_i - Ȳ) / Σ_{i=1}^{N} (X_i - X̄)²,
b* = Ȳ - a* X̄.   (5.81)
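The closed-form LS solution, and its sensitivity to a single outlier, can be sketched as follows (the data are made up for illustration):

```python
# Closed-form Least Squares fit (5.80)-(5.81) and the effect of one outlier.

def ls_fit(X, Y):
    n = len(X)
    xbar, ybar = sum(X) / n, sum(Y) / n
    a = (sum((x - xbar) * (y - ybar) for x, y in zip(X, Y))
         / sum((x - xbar) ** 2 for x in X))
    b = ybar - a * xbar
    return a, b

X = [0.0, 1.0, 2.0, 3.0, 4.0]
Y = [0.1, 1.1, 1.9, 3.2, 4.0]          # roughly Y = X
a, b = ls_fit(X, Y)
assert abs(a - 1.0) < 0.1              # slope near 1 on clean data

Y_out = Y[:-1] + [40.0]                # replace one sample by an outlier
a_out, _ = ls_fit(X, Y_out)
assert a_out > 3                       # slope grossly inflated by one point
```

This is exactly the behavior illustrated in Figure 5.17, which motivates the LAD alternative.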
Since the LAD objective function F_1(a, b) is the only one of concern throughout this section, we will drop the subscript 1 hereafter. Figure 5.17 depicts the different line-fitting characteristics of LAD and LS. The LS fitting line is greatly offset by the single outlier in the samples; the sign of the slope can even be reversed given a large enough outlier, as shown in Figure 5.17c. If the value of a is fixed at first, say a = a_0, the objective function (5.79) becomes a one-parameter function of b,

F(b) = Σ_{i=1}^{N} |Y_i - a_0 X_i - b|.   (5.82)

Assuming a Laplacian distribution for the errors U_i, the above cost function reduces to a maximum likelihood estimator of location for b. That is, we observe the sequence of random samples {Y_i - a_0 X_i}, and the goal is to estimate the fixed but unknown location parameter b. Thus the parameter b* in this case can be obtained by

b* = MED( Y_i - a_0 X_i |_{i=1}^{N} ).   (5.83)
Figure 5.17 LAD and LS linear regression results as the value of an outlier sample is increased.
If, on the other hand, we fix b = b_0, the objective function reduces to

F(a) = Σ_{i=1}^{N} |Y_i - b_0 - a X_i| = Σ_{i=1}^{N} |X_i| | (Y_i - b_0)/X_i - a |.   (5.84)

Again, if the error random variable U_i obeys a Laplacian distribution, the observed samples {(Y_i - b_0)/X_i} are also Laplacian distributed, but with the difference that each sample in this set has a different variance. The reason is obvious, since for each known X_i and zero-mean U_i, U_i/X_i remains a zero-mean Laplacian with variance scaled by 1/X_i². Thus the parameter a* minimizing the cost function (5.84) can still be seen as the ML estimator of location for a, and can be calculated as the weighted median

a* = MED( |X_i| ◊ (Y_i - b_0)/X_i |_{i=1}^{N} ),   (5.85)

where ◊ is the replication operator. For a positive integer |X_i|, |X_i| ◊ U_i means that U_i is replicated |X_i| times.
A simple and intuitive way of solving the LAD regression problem is through the following iterative algorithm:

(1) Set k = 0. Find an initial value a_0 for a, such as the Least Squares (LS) solution.

(2) Set k ← k + 1 and obtain a new estimate of b for the fixed a_{k-1}:

b_k = MED( Y_i - a_{k-1} X_i |_{i=1}^{N} ).   (5.86)

(3) Obtain a new estimate of a for the fixed b_k using

a_k = MED( |X_i| ◊ (Y_i - b_k)/X_i |_{i=1}^{N} ).   (5.87)

(4) Once a_k and b_k do not deviate from a_{k-1} and b_{k-1} within a tolerance range, end the iteration. Otherwise, go back to step (2).

Since the median and weighted median operations are both ML location estimators under the least absolute criterion, the cost function is nonincreasing throughout the iterative procedure, that is,

F(a_{k-1}, b_{k-1}) ≥ F(a_{k-1}, b_k) ≥ F(a_k, b_k).   (5.88)

The algorithm thus converges iteratively. Since the objective function F(a, b) is continuous and convex, one may expect that the algorithm converges to the global
Figure 5.18 Illustration of the sample space and the parameter space in the simple linear regression problem. The circles in the upper plot represent the samples; the dot in the lower plot represents the global minimum.
minimum. However, careful inspection reveals that there are cases where the algorithm does not reach the global minimum. To see this, it is important to describe the relationship between the sample space and the parameter space. As shown in Figure 5.18, the two spaces are dual to each other. In the sample space, each sample pair (X_i, Y_i) represents a point on the plane. The solution to the problem (5.78), namely (a*, b*), is represented as a line with slope a* and intercept b*. If this line goes through some sample pair (X_i, Y_i), then the equation Y_i = a* X_i + b* is satisfied. On the other hand, in the parameter space, (a*, b*) is a point on the plane, and (-X_i, Y_i) represents a line with slope -X_i and intercept Y_i. When b* = (-X_i) a* + Y_i holds, it can be inferred that the point (a*, b*) is on the line defined by (-X_i, Y_i). As can be seen in Figure 5.18, the line going through (X_1, Y_1) and (X_5, Y_5) in the sample space has a slope a* and an intercept b*, but in the parameter space it is represented as a point which is the intersection of two lines with slopes -X_1 and -X_5, respectively. The sample set used to generate Figure 5.18 is, in (X_i, Y_i) form, [(-1.4, -0.4), (0.6, 8.3), (1.2, 0.5), (-0.7, -0.9), (0.8, 2.6)].
Figure 5.19 The cost surface of the LAD regression problem. The dot at an intersection on the a-b plane represents the global minimum. To better illustrate the inner topology of the function, the half of the surface facing the viewer is cut away.
The structure of the objective function F(a, b) is well defined as a polyhedron sitting on top of the a-b plane, as seen in Figure 5.19. The projections of the polyhedron edges onto the plane are exactly the lines defined by the sample pairs (X_i, Y_i), which is why the term "edge line" is used. In other words, every sample pair (X_i, Y_i) has a corresponding edge line in the parameter space. Moreover, the projections of the polyhedron corners are those locations on the a-b plane where two or more of the edge lines intersect. Most importantly, the minimum of this convex, linearly segmented error surface occurs at one of these corners.

To describe the dynamics of this simple iterative method, consider Step 2 in the procedure, where a new estimate b_k is calculated by a median operation based on a fixed, previously obtained a_{k−1}. Since the median is of selection type, its output is always one of the inputs. Without loss of generality, assume b_k = Y_j − a_{k−1}X_j, which means that the newly estimated parameter pair (a_{k−1}, b_k) is on the edge line defined by (−X_j) and Y_j. Thus, the geometrical interpretation of Step 2 can be derived as follows: draw a vertical line at a = a_{k−1} in the parameter space and mark all the intersections of this line with the N edge lines.⁴ The intersection on the edge line defined by (−X_j) and Y_j is vertically the median of all of them, so its b-coordinate is accepted as b_k, the new update for b. A similar interpretation can be made for Step 3, except that the chosen intersection is a weighted median output, and there may be some edge lines parallel to the a-axis.

The drawback of this algorithm is that the convergence dynamics depend on the geometry of the edge lines in the parameter space. As can be seen in Figure 5.20a, the iteration can be carried on between edge lines in an inefficient zigzag manner, needing infinitely many steps to converge to the global minimum. Moreover, as illustrated in Figure 5.20b, it is possible that the vertical optimization and the horizontal optimization on the edge lines both give the same results in each iteration, so that the algorithm gets stuck in a nonoptimal solution. The sample set used for Figure 5.20a is [(−0.1, −3.2), (−0.9, −2.2), (0.4, 5.7), (−2.4, −2.1), (−0.4, −1.0)], with initial values a = 5 and b = 6. The sample set used for Figure 5.20b is [(0.3, −1.0), (−0.4, −0.1), (−2.0, −2.9), (−0.9, −2.4), (−1.1, 2.2)], with initial values a = −1 and b = 3.5.
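Steps 2 and 3 of the procedure can be sketched as the following alternating scheme (a minimal sketch; all names are ours). Each coordinate update is an exact one-dimensional minimization of F(a, b) = Σ|Y_i − aX_i − b| by a plain or weighted median, so the cost never increases, although, as just discussed, the iteration may zigzag or stall short of the global minimum.

```python
def simple_lad(x, y, a0, b0, iters=30):
    """Alternating coordinate descent for F(a, b) = sum |yi - a*xi - b|.
    Each 1-D subproblem is solved exactly by a (weighted) median."""
    def wmedian(vals, wts):
        # Lower weighted median: smallest value whose cumulative weight
        # reaches half of the total weight.
        pairs = sorted(zip(vals, wts))
        half, acc = sum(wts) / 2.0, 0.0
        for v, w in pairs:
            acc += w
            if acc >= half:
                return v

    a, b = a0, b0
    for _ in range(iters):
        # Step 2: for fixed a, b is the (unweighted) median of Yi - a*Xi.
        b = wmedian([yi - a * xi for xi, yi in zip(x, y)], [1.0] * len(x))
        # Step 3: for fixed b, a is the weighted median of (Yi - b)/Xi
        # with weights |Xi| (samples with Xi = 0 only add a constant).
        pts = [(xi, yi) for xi, yi in zip(x, y) if xi != 0]
        a = wmedian([(yi - b) / xi for xi, yi in pts],
                    [abs(xi) for xi, _ in pts])
    return a, b
```

Running this on the zigzag sample set of Figure 5.20a from the initial point (5, 6) strictly decreases the cost at the very first update, even though convergence to the global minimum is slow.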
5.4.2 LAD Regression with Weighted Medians
To overcome these limitations, the iterative algorithm must be modified to exploit the fact that the optimal solution is at an intersection of edge lines. If the search is directed along the edge lines, a more accurate and more efficient algorithm can be formulated. The approach described here is based on a coordinate transformation. The basic idea is as follows: in the parameter space, if the coordinates are transformed so that the edge line containing the previous estimate (a_{k−1}, b_{k−1}) is parallel to the a′-axis at height b′_{k−1}, then the horizontal optimization based upon b′_{k−1} is essentially an optimization along this edge line. The resultant (a′_k, b′_k) will be one of the intersections that this line has with all the other edge lines, thus avoiding possible zigzag dynamics during the iterations. Transforming the obtained parameter pair back to the original coordinates yields (a_k, b_k). This is illustrated in Figure 5.21. The only requirement of this method is that the shape of the cost surface be preserved under the transformation, so that the same optimization result is achieved. Notice that if an edge line is to be horizontal, its slope (−X_j) has to be 0. We will show shortly that a simple shift in the sample space satisfies this requirement. The resulting algorithm for LAD regression is as follows.

(1) Set k = 0. Initialize b to b_0 using the LS solution

b_0 = Ȳ − X̄ · [Σ_{i=1}^N (X_i − X̄)(Y_i − Ȳ)] / [Σ_{i=1}^N (X_i − X̄)²].
⁴Since all meaningful samples are finite, no edge lines will be parallel to the b-axis; hence there must be N intersections.
Figure 5.20 The parameters' trajectories during the iterations. Vertical dashed lines represent b updates; horizontal dash-dotted lines represent a updates. (a) Zigzag case; (b) nonoptimal case. The marked dots represent the global minima. For better illustration, the initial values of a and b are not set from the LS solution.
Calculate a_0 by a weighted median,

a_0 = MEDIAN( |X_i| ◇ (Y_i − b_0)/X_i ;  i = 1, ..., N ),

and keep the index j which satisfies a_0 = (Y_j − b_0)/X_j. In the parameter space, (a_0, b_0) is on the edge line with slope (−X_j) and intercept Y_j.
Figure 5.21 Illustration of one iteration. The previous estimate (a_{k−1}, b_{k−1}) is mapped into the transformed coordinates as (a′_{k−1}, b′_{k−1}); (a′_k, b′_k) is obtained through ML estimation in the transformed coordinates; the new estimate (a_k, b_k) is formed by mapping (a′_k, b′_k) back into the original coordinates. The sample set is [(1.6, 2.8), (−1.4, −3.8), (1.2, 3.5), (−4.3, −4.7), (−1.8, −2.2)].
(2) Set k = k + 1. In the sample space, shift the coordinates to the right by X_j, so that the newly formed Y′-axis goes through the original (X_j, Y_j). The transformations in the sample space are

X′_i = X_i − X_j ,   Y′_i = Y_i ,   (5.88)

and the transformations in the parameter space are

a′_{k−1} = a_{k−1} ,   b′_{k−1} = b_{k−1} + a_{k−1}X_j .   (5.89)

The shifted sample space (X′, Y′) corresponds to a new parameter space (a′, b′), where (−X′_j, Y′_j) represents a horizontal line.
(3) Perform a weighted median to get a new estimate of a′:

a′_k = MEDIAN( |X′_i| ◇ (Y′_i − b′_{k−1})/X′_i ;  i ≠ j ).   (5.90)

Keep the new index t which gives a′_k = (Y′_t − b′_{k−1})/X′_t.

(4) Transform back to the original coordinates:

a_k = a′_k ,   b_k = b′_{k−1} − a′_k X_j .
(5) Set j = t. If a_k is identical to a_{k−1} within the tolerance, end the program. Otherwise, go back to Step 2.
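The five steps above can be sketched as follows, under the stated assumptions (all X_i nonzero and pairwise distinct). The variable names and the convergence test on a alone are our reading of the procedure, not the book's code.

```python
def la_lad(x, y, tol=1e-9, max_iter=100):
    """Edge-line search for min_{a,b} sum |yi - a*xi - b|: shift the sample
    space by Xj so the current edge line becomes horizontal, then re-solve
    for the slope by a weighted median along that line."""
    n = len(x)

    def wmedian(vals, wts):
        # Lower weighted median via the half-total-weight rule.
        pairs = sorted(zip(vals, wts))
        half, acc = sum(wts) / 2.0, 0.0
        for v, w in pairs:
            acc += w
            if acc >= half:
                return v

    # Step 1: initialize b from the least-squares fit, then get a0 as the
    # weighted median of the per-sample slopes (Yi - b0)/Xi, weights |Xi|.
    xbar, ybar = sum(x) / n, sum(y) / n
    a_ls = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
            / sum((xi - xbar) ** 2 for xi in x))
    b = ybar - a_ls * xbar
    a = wmedian([(yi - b) / xi for xi, yi in zip(x, y)],
                [abs(xi) for xi in x])
    j = min(range(n), key=lambda i: abs((y[i] - b) / x[i] - a))

    for _ in range(max_iter):
        xs = [xi - x[j] for xi in x]      # Step 2: shift, edge line j flattens
        bp = b + a * x[j]                 # b' = b + a*Xj  (eq. 5.89)
        cand = [(i, (y[i] - bp) / xs[i]) for i in range(n) if xs[i] != 0]
        a_new = wmedian([c for _, c in cand],
                        [abs(xs[i]) for i, _ in cand])   # Step 3 (eq. 5.90)
        t = next(i for i, c in cand if c == a_new)
        b_new = bp - a_new * x[j]         # Step 4: map back
        if abs(a_new - a) <= tol:         # Step 5: convergence on a
            return a_new, b_new
        a, b, j = a_new, b_new, t
    return a, b
```

On the five-point sample set of Figure 5.21 this reaches the global LAD minimum in two iterations, which can be checked against a brute-force sweep of all edge-line intersections.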
It is simple to verify, using the relations in (5.88) and (5.89), that the transformed cost function is the same as the original one. For fixed b_k,

F′(a′) = Σ_{i=1}^N |Y′_i − a′X′_i − b′_k|
       = Σ_{i=1}^N |Y_i − a(X_i − X_j) − (aX_j + b_k)|
       = Σ_{i=1}^N |Y_i − aX_i − b_k| = F(a).
This relationship guarantees that the new update in each iteration is correct.
The computational complexity in Li and Arce's algorithm resides in the weighted median operation used at each iteration. Essentially, it is a sorting problem with complexity proportional to the order of N log N, where N is the sample size. In this particular application, a speed-up can be achieved by not carrying out a full sorting operation every time. In [196], a shortcut is used to circumvent the time-consuming full-sorting procedure: in essence, the previous estimate can be considered close enough to the true value, so fine-tuning can be executed around this point.

Two criteria are often used to compare LAD algorithms: speed of convergence and complexity. Most of the efficient algorithms, in terms of convergence speed (except for Wesolowsky's and its variations), are derived from Linear Programming (LP) perspectives, such as simplex and interior point. Take Barrodale and Roberts' (BR) algorithm (1973), for example.⁵ Its basic idea is to apply row and column operations

⁵The BR algorithm can be considered as the basic form of the other two best simplex-type algorithms, namely Bloomfield and Steiger's (1983) (BS) and Armstrong, Frome, and Kung's (1979) (AFK), according to [60].
Figure 5.22 Comparison of Wesolowsky's and Li and Arce's algorithms. (a) The iterations of the parameters and the costs; (b) the convergence of the algorithms in the parameter space. The two algorithms choose the same LS solution as the initial point. The marked dot in (b) represents the global minimum. Notice that not all the edge lines are plotted.
on a constructed (N + K) × (K + 1) matrix A. The initial value of A is given in (5.93),
where Y is an N × 1 vector of observations of the dependent variable and X is an N × K matrix of the independent variables. For the simple regression case, K = 2. BR-like algorithms usually consist of two phases: Phase I forms a set of independent edge-direction vectors, and Phase II updates the variable basis until it converges. In general, BR-like algorithms are slightly faster than other algorithms with simpler structures. Their computational complexity, however, is significantly higher. The complicated variable definitions and logical branches used in BR-like algorithms require considerable effort in hardware implementations, making them less attractive in such cases. Focusing on efficient algorithms that have a simple structure for ease of implementation, Wesolowsky's direct descent algorithm stands out. The algorithm is summarized below.
Step 1. Set k = 0. Choose the initial values a_0, b_0. Choose j so that |Y_j − a_0X_j − b_0| is a minimum.

Step 2. Set k = k + 1. Use the weighted median structure to get the update for b:

b_k = MEDIAN( |1 − X_i/X_j| ◇ (Y_i − Y_jX_i/X_j)/(1 − X_i/X_j) ;  i ≠ j ),

recording the index i at which the term (Y_i − Y_jX_i/X_j)/(1 − X_i/X_j) is the weighted median output.

Step 3. (a) If b_k − b_{k−1} = 0: if k ≥ 3, go to Step 4; if not, set j = i and go to Step 2. (b) If b_k − b_{k−1} ≠ 0: set j = i and go to Step 2.

Step 4. Let b* = b_k and a* = (Y_j − b*)/X_j.

The major difference between Wesolowsky's algorithm and that of Li and Arce (LA) is that the weighted median operations in the former are used for intercept
b updates, while in the LA algorithm they are used for slope a updates. Also, as depicted in Figure 5.22b, the first iterations of the two algorithms are different: the LA algorithm picks the first a update horizontally, whereas Wesolowsky's algorithm chooses a nearby intersection based on the criterion described in Step 1. Since the realization of the weighted median in both algorithms can benefit from the partial sorting scheme stated above, the computational complexity of both methods is comparable. Li and Arce's algorithm, however, is slightly more efficient, reaching convergence in fewer iterations, as depicted in Figure 5.23. It can be observed in Figure 5.23 that for large sample sets, Li and Arce's LAD regression method requires about 5% fewer iterations, and about 15% fewer for small sample sets.
Problems

5.1 Let X_1, X_2, ..., X_N be N i.i.d. Cauchy observations with zero median, and let k = 1.

(a) Show that E(X_{(r)}^k) < ∞ if and only if 2 ≤ r ≤ N − 1.

(b) Show that the median of N Cauchy i.i.d. samples has a finite second-order central moment only if N ≥ 5.
5.2 Prove that the superposition property and the impulse response analysis do not apply to the running median.
Figure 5.23 Comparison of the number of iterations of Wesolowsky's and the LA algorithms. The sizes of the sample sets are chosen as [20, 50, 200, 1000, 5000], each having 1000 averaging runs.
5.3 (a) Prove that the rth-order statistic of N samples can be found by the maximum of a set of minima:

X_{(r)} = MAX of the (N choose N−r+1) minima  min{ X_{j_1}, X_{j_2}, ..., X_{j_{N−r+1}} }   (5.95)

where (j_1, j_2, ..., j_{N−r+1}) index all possible combinations of N − r + 1 samples in a set of N samples.
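A numerical sanity check of (5.95), and of its dual min-of-maxima form from part (b), can be written as follows (a check, not a proof; the helper names are ours):

```python
from itertools import combinations

def order_stat_maxmin(xs, r):
    """X_(r) as the maximum over the minima of all (N - r + 1)-element
    subsets, as in (5.95)."""
    return max(min(c) for c in combinations(xs, len(xs) - r + 1))

def order_stat_minmax(xs, r):
    """Dual form for part (b): X_(r) as the minimum over the maxima of all
    r-element subsets."""
    return min(max(c) for c in combinations(xs, r))

xs = [7, 3, 9, 1, 5]
for r in range(1, 6):
    assert order_stat_maxmin(xs, r) == sorted(xs)[r - 1]
    assert order_stat_minmax(xs, r) == sorted(xs)[r - 1]
```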
(b) Derive an expression for the rth-order statistic of N samples in terms of the minimum of a set of maxima.

5.4 Explain the problems encountered with the WM estimation if some of the weights in (5.16) are allowed to be negative.
5.5 Prove that the procedure described in (5.24) minimizes the sum of weighted deviations required to compute the WM in (5.15).
5.6 Recall that for a median filter of window size N = 2N_1 + 1, a necessary and sufficient condition for a signal to be invariant (a root) under median smoothing is that the extended (beginning- and end-appended) signal be LOMO(N_1 + 2), where a sequence {X(·)} is said to be locally monotonic of length m, denoted LOMO(m), if the subsequence X(n), X(n+1), ..., X(n+m−1) is monotonic for all n ≥ 1. Give the necessary and sufficient (local monotonicity) conditions for a signal to be invariant under center-weighted median smoothing with the center weight equal to 3, for N_1 = 1, 2, 3, ....
5.7 Prove that the center-weighted median in (5.35) can be obtained by:
Weighted Median Filters
Weighted median smoothers admit only positive weights. This is a limitation, as WM smoothers are, in essence, limited to low-pass type filtering characteristics. Although WM smoothers have some analogies with linear FIR filters, they are equivalent to the normalized weighted average with non-negative weights, a severely constrained subset of linear FIR filters. A number of engineering applications require band-pass or high-pass frequency filtering characteristics. Equalization, deconvolution, prediction, beamforming, and system identification are example applications where filters having band-pass or high-pass characteristics are of fundamental importance. Linear FIR equalizers admitting only positive filter weights, for instance, would lead to inadequate results. Thus, it is not surprising that weighted median smoothers admitting only positive weights lead to inadequate results in some applications. This chapter focuses on weighted median filters admitting positive as well as negative weights. WM filters thus overcome the limitations of WM smoothers. As would be expected, weighted median filters reduce to weighted median smoothers whenever the filter coefficients in the WM filter structure are constrained to be positive.
Weighted median smoothers emerge as the solution to the maximum likelihood estimation of location for a set of independently distributed Laplacian samples with unequal variances. For Gaussian-distributed samples, the equivalent estimate is the
normalized weighted average with non-negative weights. In order to formulate the general weighted median filter structure, it is logical to ask how linear FIR filters arise within the location estimation problem. The answer provides the key to the formulation of the general WM filter. To this end, consider N samples X_1, X_2, ..., X_N obeying the multivariate Gaussian distribution

f(X) = (2π)^{−N/2} [det(R)]^{−1/2} exp[ −(1/2)(X − eβ)ᵀ R⁻¹ (X − eβ) ],   (6.1)

where X = [X_1, X_2, ..., X_N]ᵀ is the observation vector, e = [1, 1, ..., 1]ᵀ, β is the location parameter, R is the covariance matrix, and det(R) is the determinant of R. The maximum likelihood estimate of the location parameter β results in

β̂ = (eᵀ R⁻¹ X)/(eᵀ R⁻¹ e) = Wᵀ X,   (6.2)

where eᵀ R⁻¹ e > 0, due to the positive definite nature of the covariance matrix, and where the elements of the vector eᵀ R⁻¹ can take on positive as well as negative values. Thus, (6.2) takes the structure of
a linear FIR filter whose weights W_i may take on negative values depending on the mutual correlation of the observation samples. The extension of the above to the case of Laplacian-distributed samples, and in general to other non-Gaussian distributions, unfortunately becomes too cumbersome. The multivariate Laplacian distribution, and in general all non-Gaussian multivariate distributions, do not lead to simple ML location estimates. The complexity of these solutions has hindered the development of nonlinear filters having attributes comparable to those of linear FIR filters. Notably, however, a simple approach was discovered that can overcome these limitations [6]. In this approach, a generalization of the sample mean leads to the class of linear FIR filters. This generalization is, in turn, applied to the sample median, forming the class of weighted median filters that admits both positive and negative weights. The extension turns out to be not only natural, leading to a significantly richer filter class, but simple as well. The sample mean MEAN(X_1, X_2, ..., X_N) can be generalized to the class of linear FIR filters as
β̂ = MEAN(W_1 · X_1, W_2 · X_2, ..., W_N · X_N),   (6.3)

where W_i ∈ R. In order to apply the analogy to the median filter structure, (6.3) must be written as

β̂ = MEAN(|W_1| · sgn(W_1)X_1, |W_2| · sgn(W_2)X_2, ..., |W_N| · sgn(W_N)X_N),   (6.4)

where the sign of each weight affects the corresponding input sample and the weighting is constrained to be non-negative. By analogy, the class of weighted median filters admitting real-valued weights emerges as defined next.
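The ML solution (6.2) can be checked numerically: for a positive-definite covariance R, the weights W = R⁻¹e/(eᵀR⁻¹e) sum to one, may be negative for correlated samples, and reduce to the uniform weights of the sample mean when R = I. The random covariance below is our own construction for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 5
A = rng.standard_normal((N, N))
R = A @ A.T + N * np.eye(N)        # a symmetric positive-definite covariance
e = np.ones(N)

# ML location weights from (6.2): W = R^{-1} e / (e^T R^{-1} e).
Rinv_e = np.linalg.solve(R, e)
W = Rinv_e / (e @ Rinv_e)

X = rng.standard_normal(N)
beta_hat = W @ X                   # the linear FIR structure of (6.2)

assert np.isclose(W.sum(), 1.0)    # weights always sum to one (unbiasedness)
# With R = I the estimator collapses to the sample mean.
Wi = np.linalg.solve(np.eye(N), e)
assert np.allclose(Wi / (e @ Wi), 1.0 / N)
```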
DEFINITION 6.1 (WEIGHTED MEDIAN FILTERS) Given a set of N real-valued weights (W_1, W_2, ..., W_N) and the observation vector X = [X_1, X_2, ..., X_N]ᵀ, the weighted median filter output is defined as

β̂ = MEDIAN(|W_1| ◇ sgn(W_1)X_1, ..., |W_N| ◇ sgn(W_N)X_N),   (6.5)

with W_i ∈ R for i = 1, 2, ..., N, and where ◇ is the replication operator. Note that the weight signs are uncoupled from the weight magnitude values and are merged with the observation samples. The weight magnitudes play the equivalent role of positive weights in the framework of weighted median smoothers.
EXAMPLE 6.1 (WEIGHTED MEDIAN FILTER COMPUTATION) Consider first the case where the weights are integer-valued and add up to an odd integer. Let the window size be 5, defined by the symmetric weight vector W = (1, −2, 3, −2, 1). For the observation vector X(n) = [2, −6, 9, 1, 12], the weighted median filter output is found as

Y(n) = MEDIAN[ 1◇2, −2◇(−6), 3◇9, −2◇1, 1◇12 ]
     = MEDIAN[ 1◇2, 2◇6, 3◇9, 2◇(−1), 1◇12 ]
     = MEDIAN[ 2, 6, 6, 9, 9, 9, −1, −1, 12 ]
     = MEDIAN[ −1, −1, 2, 6, 6, 9, 9, 9, 12 ]
     = 6,   (6.6)

where the median filter output value is underlined in equation (6.6).

Note that the output in the example above is a signed sample whose value is not equal to that of any of the input samples. Note also that, as a result of the negative weights, the computation of the weighted median filter is not shift invariant. Consider a shift of 2 on the samples of X such that X′_i = X_i + 2. The weighted median filtering of X′ = [4, −4, 11, 3, 14] with the weight vector W = (1, −2, 3, −2, 1) leads to the output Y′(n) = 4, which does not equal the previous output of 6 in (6.6) plus the shift.
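A direct, brute-force implementation of Definition 6.1 for integer weights (our helper; it simply replicates each signed sample |W_i| times) reproduces the computations of Examples 6.1 and 6.2:

```python
def wm_filter_replicate(weights, x):
    """Integer-weight WM filter by replication: each signed sample
    sgn(Wi)*Xi is repeated |Wi| times, then the median of the expanded
    list is taken (for an even total weight, the two middle samples are
    averaged)."""
    expanded = []
    for w, xi in zip(weights, x):
        s = xi if w >= 0 else -xi       # sign of the weight moves to the sample
        expanded.extend([s] * abs(w))
    expanded.sort()
    n = len(expanded)
    if n % 2:                           # odd total weight: selection output
        return expanded[n // 2]
    return 0.5 * (expanded[n // 2 - 1] + expanded[n // 2])

# Example 6.1: odd total weight, output is a signed input sample.
assert wm_filter_replicate([1, -2, 3, -2, 1], [2, -6, 9, 1, 12]) == 6
# Shift by 2: the output 4 is not 6 + 2, so the filter is not shift invariant.
assert wm_filter_replicate([1, -2, 3, -2, 1], [4, -4, 11, 3, 14]) == 4
```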
EXAMPLE 6.2 Consider the case where the WM filter weights add up to an even integer, with W = (1, −2, 2, −2, 1). Furthermore, assume the observation vector consists of a set of constant-valued samples X(n) = [5, 5, 5, 5, 5]. The weighted median filter output in this case is found as

Y(n) = MEDIAN[ 1◇5, −2◇5, 2◇5, −2◇5, 1◇5 ]
     = MEDIAN[ 1◇5, 2◇(−5), 2◇5, 2◇(−5), 1◇5 ]
     = MEDIAN[ 5, −5, −5, 5, 5, −5, −5, 5 ]
     = MEDIAN[ −5, −5, −5, −5, 5, 5, 5, 5 ]
     = 0,

where the median filter output is the average of the two middle (underlined) samples.
Note that in order for the WM filter to have band-pass or high-pass frequency characteristics, where constant signals are annihilated, the weights must add up to an even number so that averaging of the middle-ranked samples occurs. When the WM filter weights add up to an odd number, the output is one of the signed input samples, and consequently the filter is unable to suppress constant-valued signals.
In general, the WM filter output can be computed without replicating the sample data according to the corresponding weights, as replication increases the computational complexity. A more efficient method to find the WM output is shown next; it is not only attractive from a computational perspective but also admits real-valued weights. The weighted median filter output for noninteger weights can be determined as follows:

(1) Calculate the threshold T_0 = (1/2) Σ_{i=1}^N |W_i|.

(2) Sort the signed observation samples sgn(W_i)X_i.

(3) Sum the magnitudes of the weights corresponding to the sorted "signed" samples, beginning with the maximum and continuing down in order.

(4) The output is the signed sample whose weight magnitude causes the sum to become ≥ T_0. For band- and high-pass characteristics, the output is the average between the signed sample whose weight magnitude causes the sum to become ≥ T_0 and the next smaller signed sample.
EXAMPLE 6.3 Consider the window-size-5 WM filter defined by the real-valued weights (W_1, W_2, W_3, W_4, W_5) = (0.1, 0.2, 0.3, −0.2, 0.1). The output of this filter operating on the observation set [X_1, X_2, X_3, X_4, X_5] = [−2, 2, −1, 3, 6] is found as follows. Summing the weight magnitudes gives the threshold T_0 = (1/2) Σ_{i=1}^5 |W_i| = 0.45. The observation samples, their weights, the sorted signed samples, the corresponding weight magnitudes, and the partial sums of weights (from each ordered sample up to the maximum) are:

observation samples                 −2,   2,  −1,    3,   6
corresponding weights              0.1, 0.2, 0.3, −0.2, 0.1
sorted signed observation samples   −3,  −2,  −1,    2,   6
corresponding weight magnitudes    0.2, 0.1, 0.3,  0.2, 0.1
partial weight sums                0.9, 0.7, 0.6,  0.3, 0.1

Thus, the output is −1, since when starting from the right (maximum sample) and summing the weights, the threshold T_0 = 0.45 is not reached until the weight associated with −1 is added; the partial sum 0.6 is the first to meet or exceed the threshold. To guarantee high- or band-pass characteristics, the WM filter output would instead be computed as the average between −1 and −2, leading to −1.5 as the output value.

Although the four-step procedure described above to compute the weighted median filter is straightforward, the weighted median filter computation can be expressed more succinctly as follows. Let the signed samples sgn(W_i)X_i and their corresponding absolute-valued weights be denoted as S_i and |W_i|, respectively. The sorted signed samples are then denoted as S_(i), where S_(1) ≤ S_(2) ≤ ... ≤ S_(N). The absolute-valued weight corresponding to the kth sorted signed sample is denoted as |W_[k]|, that is, the absolute value of the concomitant of the kth order statistic. In the previous example, the weight associated with the fourth-order statistic S_(4) is, for instance, |W_[4]| = |W_2| = 0.2. With this notation, the selection weighted median filter output can be written as

β̂ = S_(k*) ,  with  k* = max{ k : Σ_{j=k}^N |W_[j]| ≥ T_0 }.

The weighted median filter output when the average of the middle-ranked samples is used can be written as

β̂ = ( S_(k*) + S_(k*−1) ) / 2.
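The four-step threshold procedure can be sketched as follows (the helper name is ours; the average=True branch implements the band/high-pass variant and assumes the threshold is not first reached at the very smallest sorted sample):

```python
def wm_filter(weights, x, average=False):
    """WM filter via the threshold rule: sort the signed samples, then sum
    the weight magnitudes from the largest sample down until the running
    sum reaches T0 = 0.5 * sum(|Wi|)."""
    signed = [(xi if w >= 0 else -xi, abs(w)) for w, xi in zip(weights, x)]
    signed.sort()                               # ascending signed samples
    t0 = 0.5 * sum(aw for _, aw in signed)
    acc = 0.0
    for k in range(len(signed) - 1, -1, -1):    # from the maximum down
        acc += signed[k][1]
        if acc >= t0:
            if average:                         # band/high-pass variant
                return 0.5 * (signed[k][0] + signed[k - 1][0])
            return signed[k][0]

w = [0.1, 0.2, 0.3, -0.2, 0.1]
x = [-2, 2, -1, 3, 6]
assert wm_filter(w, x) == -1                    # Example 6.3, selection output
assert wm_filter(w, x, average=True) == -1.5    # averaged output
```

For integer weights the threshold rule agrees with the replication-based computation; for instance, the weights and samples of Example 6.1 again yield 6.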
Cost Function Interpretation. The effect that negative weights have on the weighted median operation is similar to the effect that negative weights have on linear FIR filter outputs. This is illustrated by the effect that negative weights have on the cost function minimized by the WM filter. To this end, it is simple to show that the weighted mean and the weighted median operations, shown in (6.4) and (6.5) respectively, minimize

G_2(β) = Σ_{i=1}^N |W_i| (sgn(W_i)X_i − β)²   and   G_1(β) = Σ_{i=1}^N |W_i| |sgn(W_i)X_i − β|.

While G_2(β) is a convex continuous function, G_1(β) is a convex but piecewise-linear function whose minimum is guaranteed to be one of the signed input samples (i.e., sgn(W_i)X_i). As an example, consider the observation vector [X_1, X_2, X_3, X_4, X_5] = [−2, 2, −1, 3, 6] applied to a WM filter with two sets of weights. The first set is (W_1, W_2, W_3, W_4, W_5) = (0.1, 0.2, 0.3, 0.2, 0.1), where all the coefficients are positive; the second set is (0.1, 0.2, 0.3, −0.2, 0.1), where W_4 has been changed, with respect to the first set, from 0.2 to −0.2. Recall that the linear FIR and WM filter outputs respectively minimize the cost functions G_2(β) and G_1(β). Figure 6.1a shows the cost function G_2(β), as a function of β, corresponding to the linear FIR filter for the two sets of filter weights. Notice that by changing the sign of W_4, we are effectively moving X_4 to its new location sgn(W_4)X_4 = −3. This, in turn, pulls the minimum of the cost function towards the relocated sample sgn(W_4)X_4. Negatively weighting X_4 in G_1(β) has a similar effect, as shown in Figure 6.1b. In this case, the minimum is pulled towards the new location of sgn(W_4)X_4. The minimum, however, occurs at one of the signed samples sgn(W_i)X_i.
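This picture can be verified numerically (our code; a brute-force grid search rather than a closed-form minimization): for the second weight set, the minimum of the piecewise-linear cost G1 over a fine grid of β values falls at the signed sample −1, the WM filter output, while the quadratic cost G2 is minimized near the normalized weighted mean of the signed samples.

```python
w = [0.1, 0.2, 0.3, -0.2, 0.1]
x = [-2, 2, -1, 3, 6]
signed = [xi if wi >= 0 else -xi for wi, xi in zip(w, x)]

def g1(beta):
    """Piecewise-linear WM cost: sum |Wi| * |sgn(Wi)Xi - beta|."""
    return sum(abs(wi) * abs(s - beta) for wi, s in zip(w, signed))

def g2(beta):
    """Quadratic weighted-mean cost: sum |Wi| * (sgn(Wi)Xi - beta)^2."""
    return sum(abs(wi) * (s - beta) ** 2 for wi, s in zip(w, signed))

grid = [k / 100.0 for k in range(-400, 701)]    # beta in [-4, 7], step 0.01

# G1's minimum lands on a signed sample: here -1, the WM filter output.
assert abs(min(grid, key=g1) - (-1.0)) < 1e-9

# G2's minimum is (up to the grid step) the normalized weighted mean.
wmean = sum(abs(wi) * s for wi, s in zip(w, signed)) / sum(abs(wi) for wi in w)
assert abs(min(grid, key=g2) - wmean) < 0.01
```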
Figure 6.1 Effects of negative weighting on the cost functions G_2(β) and G_1(β). The input samples are [X_1, X_2, X_3, X_4, X_5] = [−2, 2, −1, 3, 6], which are filtered by the two sets of weights (0.1, 0.2, 0.3, 0.2, 0.1) and (0.1, 0.2, 0.3, −0.2, 0.1), respectively.
To illustrate the characteristics of WM filters and their advantages over WM smoothers, several examples are described next. The first example shows that WM filters, like their linear FIR filter
counterparts, can be designed to have frequency selection characteristics.
EXAMPLE 6.4 (BAND-PASS FILTERING) Since linear FIR filters output the mean of a set of weighted samples, the median of an equivalently weighted sample set ought to provide a similar output. Notably, this is the case even when the same set of weights as those designed for a linear FIR filter is used in the weighted median filter structure. This approach to assigning the WM filter weights, however, is not optimal and will lead to undesirable artifacts in the output. Nonetheless, the frequency response characteristics of the attained weighted median filter follow those of the equivalent linear FIR filter but, more importantly, are significantly more robust in the processing of signals embedded in noise. Figure 6.2a depicts a linearly swept-frequency cosine signal spanning instantaneous frequencies ranging from 0 to 400 Hz. Figure 6.2b shows the chirp signal filtered by a 120-tap linear FIR filter designed by MATLAB's fir1 function with pass band 0.075 ≤ ω ≤ 0.125 (normalized frequency with Nyquist = 1). Figure 6.2c shows the best WM smoother output when the coefficients are constrained to positive values only; the positive coefficients are found by the method described in [156]. The WM smoother clearly fails to delete the low-frequency components, and it also introduces artifacts at higher frequencies. Figure 6.2d depicts the WM filter output where real-valued weights are allowed. The 120 median filter weights are given values identical to those of the linear FIR filter weights. Although the weight values for the weighted median filter are suboptimal, Figure 6.2d shows the significant attenuation obtained in the low-frequency components. The high-frequency terms are cancelled almost completely as well. The small-amplitude artifacts exhibited at low frequencies arise from the fact that the WM filter output is constrained to be equal to the average of only two of the signed input samples. Methods to optimally design the weighted median filter weights will be described shortly.
EXAMPLE 6.5 (BAND-PASS FILTERING IN NOISE) Consider the case where the observed signals are noisy. Figure 6.3a depicts the chirp test signal with added α-stable noise. The parameter α = 1.4 was used, simulating noise with impulsive characteristics. Figure 6.3a is truncated so that the same scale is used in all plots. Figure 6.3b shows the noisy chirp signal filtered by the 120-tap linear FIR filter. The output is affected severely by the noise components; ringing artifacts emerge with each impulse fed into the filter. Figure 6.3c shows the WM filter output when the coefficients are constrained to positive values only. In this case, the noise does not deteriorate the response significantly, but the response is not
Figure 6.2 Frequency-selective filter outputs: (a) chirp test signal, (b) linear FIR filter output, (c) weighted median smoother output, (d) weighted median filter output with suboptimal real-valued weights.
satisfactory, due to the low-pass characteristics of the WM smoother. Figure 6.3d depicts the output of the WM filter with real-valued weights, which shows a considerable improvement.
EXAMPLE 6.6 (IMAGE SHARPENING) In principle, image sharpening consists in adding to the original image a signal proportional to a high-pass filtered version of the original image. Figure 6.4 illustrates this procedure, often referred to as unsharp masking [107], on a one-dimensional signal. As shown in Figure 6.4, the original image is first filtered by a high-pass filter that extracts the high-frequency components, and then a scaled version of the high-pass filter output is added to the original image, producing a sharpened image of the original. Note that the homogeneous regions of the signal, where the signal is constant, remain unchanged. The sharpening operation can be represented by
Figure 6.3 Frequency-selective filter outputs in noise: (a) chirp test signal in stable noise, (b) linear FIR filter output, (c) weighted median smoother output, (d) weighted median filter output with suboptimal real-valued weights.
Y(m, n) = X(m, n) + λ F(X(m, n)),   (6.11)

where X(m, n) is the original pixel value at the coordinates (m, n), F(·) is the high-pass filter, λ is a tuning parameter greater than or equal to zero, and Y(m, n) is the sharpened pixel at the coordinates (m, n). The value taken by λ depends on the degree of sharpness desired. Increasing λ yields a more sharpened image. If background noise is present, however, increasing λ will rapidly amplify the noise. The key to an effective sharpening process lies in the choice of the high-pass filtering operation. Traditionally, linear filters have been used to implement the high-pass filter; however, linear techniques can lead to rapid performance degradation if the input image is corrupted with noise. A tradeoff between noise attenuation and edge highlighting can be obtained if a weighted median filter with appropriate weights is used. To illustrate this, consider a WM filter applied to a grayscale image where the following filter mask is used:
Figure 6.4 Image sharpening by high-frequency emphasis.
        ( −1  −1  −1 )
    W = ( −1   8  −1 )   (6.12)
        ( −1  −1  −1 )
Because of the weight coefficients in (6.12), for each position of the moving window, the output is proportional to the difference between the center pixel and the smallest pixel around the center pixel. Thus, the filter output takes relatively large values for prominent edges in an image and small values in regions that are fairly smooth, being zero only in regions of constant gray level. Although this filter can effectively extract the edges contained in an image, the effect that this filtering operation has on negative-slope edges is different from that obtained for positive-slope edges.¹ Since the filter output is proportional to the difference between the center pixel and the smallest pixel around the center, for negative-slope edges the center pixel takes small values, producing small values at the filter output. Moreover, the filter output is zero if the smallest pixel around the center pixel and the center pixel have the same value. This implies that negative-slope edges are not extracted in the same way as positive-slope edges. To overcome this limitation, the basic image sharpening structure shown in Figure 6.4 must be modified such that positive-slope edges as well as negative-slope edges are highlighted in the same proportion. A simple way to accomplish this is to: (a) extract the positive-slope edges by filtering the original image with the filter mask described above; (b) extract the negative-slope edges by first preprocessing the original image so that the negative-slope edges become positive-slope edges, and then filtering the preprocessed image with the filter described above; (c) combine appropriately the original image, the filtered version of the original image, and the filtered version of the preprocessed image to form the sharpened image.
¹A change from a gray level to a lower gray level is referred to as a negative-slope edge, whereas a change from a gray level to a higher gray level is referred to as a positive-slope edge.
In this way, both positive-slope and negative-slope edges are equally highlighted. This procedure is illustrated in Figure 6.5, where the top branch extracts the positive-slope edges and the middle branch extracts the negative-slope edges. In order to understand the effects of edge sharpening, a row of a test image is plotted in Figure 6.6, together with a row of the sharpened image when only the positive-slope edges are highlighted (Figure 6.6a), only the negative-slope edges are highlighted (Figure 6.6b), and both positive-slope and negative-slope edges are jointly highlighted (Figure 6.6c).
Figure 6.5 Image sharpening based on the weighted median filter.
In Figure 6.5, λ_1 and λ_2 are tuning parameters that control the amount of sharpness desired in the positive-slope direction and in the negative-slope direction, respectively. The values of λ_1 and λ_2 are generally selected to be equal. The output of the prefiltering operation is defined as

X′(m, n) = M − X(m, n),

with M equal to the maximum pixel value of the original image. This prefiltering operation can be thought of as a flipping and shifting of the values of the original image such that the negative-slope edges are converted to positive-slope edges. Since the original image and the prefiltered image are filtered by the same WM filter, the positive-slope edges and negative-slope edges are sharpened in the same way.

In Figure 6.7, the performance of WM filter image sharpening is compared with that of traditional image sharpening based on linear FIR filters. For the linear sharpener, the method shown in Figure 6.4 was used. The parameter λ was set to 1 for the clean image and to 0.75 for the noisy image. For the WM sharpener, the method of Figure 6.5 was used with λ_1 = λ_2 = 2 for the clean image, and λ_1 = λ_2 = 1.5 for the noisy image. The filter mask given by (6.12) was used in both the linear and the median image sharpening. Sharpening with WM filters does not suffer from noise amplification to the extent that sharpening with FIR filters does.
Figure 6.6 Original row of a test image (solid line) and row sharpened (dotted line) with (a) only positive-slope edges, (b) only negative-slope edges, and (c) both positive-slope and
negative-slope edges.
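The linear unsharp masking scheme of Figure 6.4 (output = input + λ · high-pass) can be sketched in one dimension as follows; the (-1, 2, -1) kernel and the sample row are illustrative choices for this sketch, not the book's mask (6.12):

```python
def linear_unsharp(row, lam=1.0):
    """Linear unsharp masking on a 1-D row: add a scaled high-pass
    component (here a (-1, 2, -1) second-difference kernel) back to the
    original signal. Only interior samples are returned."""
    sharpened = []
    for i in range(1, len(row) - 1):
        highpass = 2 * row[i] - row[i - 1] - row[i + 1]  # detail signal
        sharpened.append(row[i] + lam * highpass)
    return sharpened
```

On an ideal step edge the output overshoots on both sides of the transition, which is what visually sharpens it; the same mechanism amplifies background noise, motivating the WM alternative of Figure 6.5.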
EXAMPLE 6.7 (EDGE DETECTION WITH WM FILTERS) The most common approach used for edge detection is illustrated in Figure 6.8. A high-pass filter is applied to the image to obtain the amount of change present in the image at every pixel. The output of the filter is thresholded to determine those pixels that have a high enough rate of change to be considered lying on an edge; that is, all pixels
with filter output greater than some value T are taken as edge pixels. The value of T is a tunable parameter that can be adjusted to give the best visual results. High thresholds lose some of the
real edges, while low values result in many false edges, thus a tradeoff needs to be made to get the best results. Other techniques such as edge thinning can be applied to further pinpoint the
location of the edges in an image. The most common linear filter used for the initial high-pass filtering is the Sobel operator, which uses the following 3 × 3 masks:

-1 -2 -1        -1  0  1
 0  0  0        -2  0  2
 1  2  1        -1  0  1
These two masks are convolved with the image separately to measure the strength of horizontal edges and vertical edges, respectively, present at each pixel. Thus, if the amount to which a horizontal edge is present at the pixel in the ith row and jth column is represented as E^h_{i,j}, and if the vertical edge indicator is E^v_{i,j}, then these values are obtained by applying the respective masks to the 3 × 3 neighborhood of X_{i,j}.
Figure 6.7 (a: top-left) Original image sharpened with (b: top-right) the FIR-sharpener, and (c: middle-left) with the WM-sharpener. (d: middle-right) Image with added Gaussian noise sharpened with
(e: bottom-left) the FIR-sharpener, and (f: bottom-right) the WM-sharpener.
Figure 6.8 The process of edge detection
The two strengths are combined to find the total amount to which any edge exists at the pixel: E^{total}_{i,j} = sqrt((E^h_{i,j})^2 + (E^v_{i,j})^2). This value is then compared to the threshold T to determine the existence of an edge. Instead of using linear high-pass filters, weighted median filters can be used. To apply
weighted medians to the high-pass filtering, the weights from the Sobel masks can be used. The Sobel linear high-pass filters take a weighted difference between the pixels on either side of Xi,j. On
the other hand, if the same weights are used in a weighted median filter, the value returned is the difference between the lowest-valued pixels on either side of Xi,j. If the pixel values are then
flipped about some middle value, the difference between the highest pixels on either side can also be obtained. The flipping is found as in (6.13), yielding the "flipped" sample X̄_{i,j} = M − X_{i,j}, where
M is the maximum pixel value in the image. The lower of the two differences across the pixel can then be used as the indicator of the presence of an edge. If there is a true edge present, then both
differences should be high in magnitude, while if noise causes one of the differences to be too high, the other difference is not necessarily affected. Thus the horizontal and vertical edge
indicators are:
E^h_{i,j} = min( MEDIAN[ -1 ∘ X_{i-1,j-1}, -2 ∘ X_{i-1,j}, -1 ∘ X_{i-1,j+1}, 1 ∘ X_{i+1,j-1}, 2 ∘ X_{i+1,j}, 1 ∘ X_{i+1,j+1} ],
                 MEDIAN[ -1 ∘ X̄_{i-1,j-1}, -2 ∘ X̄_{i-1,j}, -1 ∘ X̄_{i-1,j+1}, 1 ∘ X̄_{i+1,j-1}, 2 ∘ X̄_{i+1,j}, 1 ∘ X̄_{i+1,j+1} ] ),

with E^v_{i,j} defined analogously using the vertical mask. The strength of horizontal and vertical edges E^{h,v}_{i,j} is determined in the same way as the linear case: E^{h,v}_{i,j} = sqrt((E^h_{i,j})^2 + (E^v_{i,j})^2). Another addition to the weighted median method is necessary in order to detect diagonal edges. Horizontal and vertical indicators are not sufficient to register diagonal edges, so the following two masks must also be used:
These masks can be applied to the image just as the Sobel masks above. Thus the strengths of the two types of diagonal edges are E^{d1}_{i,j} for diagonal edges going from the bottom left of the image to the top right (using the mask on the left above) and E^{d2}_{i,j} for diagonal edges from top left to bottom right (the mask on the right), and the values are given by the minimum of the two weighted medians obtained by applying the corresponding mask to the original samples and to the flipped samples X̄. A diagonal edge strength is determined in the same way as the horizontal and vertical edge strength above: E^{d1,d2}_{i,j} = sqrt((E^{d1}_{i,j})^2 + (E^{d2}_{i,j})^2). The indicator of all edges in any direction is the maximum of the two strengths E^{h,v}_{i,j} and E^{d1,d2}_{i,j}: E^{total}_{i,j} = max(E^{h,v}_{i,j}, E^{d1,d2}_{i,j}). As in the linear case, this value is compared to the threshold T to determine whether a pixel lies on an edge. Figure 6.9 shows the results of calculating E^{total}_{i,j} for an image. The results of the
median edge detection are similar to the results of using the Sobel linear operator. Similar approaches to edge
detection with generalized median filters have been proposed in [18, 17], where a new differential filter is implemented via negative weights. The generalization in [18, 17] is referred to as RONDO:
rank-order based nonlinear differential operator.
Figure 6.9 ( a ) Original image, (b) Edge detector using linear method, and ( c ) median method.
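A minimal sketch of the horizontal indicator E^h described above: the sign of each weight is applied to its sample and its magnitude is used as the median weight; the 0-255 pixel range and the tie-handling (output at the sample whose weight makes the running sum reach T_0) follow the four-step rule of Section 5.2 and are assumptions of this sketch:

```python
def wm(samples, weights):
    """Weighted median with real-valued weights: the sign of each weight is
    applied to its sample, the magnitude acts as the (positive) median weight."""
    T0 = sum(abs(w) for w in weights) / 2.0
    pairs = sorted(((1 if w >= 0 else -1) * x, abs(w))
                   for x, w in zip(samples, weights))
    acc = 0.0
    for y, w in reversed(pairs):     # accumulate weights from the largest sample down
        acc += w
        if acc >= T0:
            return y

def horizontal_edge_strength(window, M=255):
    """E^h for a 3x3 window (list of three rows): the lower of the two WM
    differences, computed on the original and on the flipped (M - x) samples."""
    w = [-1, -2, -1, 1, 2, 1]
    top, bottom = window[0], window[2]
    d_orig = wm(top + bottom, w)
    d_flip = wm([M - x for x in top] + [M - x for x in bottom], w)
    return min(d_orig, d_flip)
```

On an ideal 100-level horizontal step the indicator returns 100; the min over the original and flipped medians is what guards against a single noisy pixel inflating the difference.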
Permutation-Weighted Median Filters
Permutation WM filters closely resemble permutation WM smoothers with the exception that the weights are not only data dependent but can also take on negative values [10].
DEFINITION 6.2 (PERMUTATION WM FILTERS) Let (W_1(R_1), W_2(R_2), . . . , W_N(R_N)) be rank-order dependent weights assigned to the input observation samples. The output of the permutation WM filter is found as

β = MEDIAN( |W_1(R_1)| ∘ sgn(W_1(R_1)) X_1, . . . , |W_N(R_N)| ∘ sgn(W_N(R_N)) X_N ).    (6.14)

Note that the signs of the weights are decoupled from the replication operator and applied to the data sample. The weight assigned to X_i is drawn from the weight set {W_i(1), W_i(2), . . . , W_i(N)}. Having N weights per sample, a total of N² weights need to be stored for the computation of (6.14). In general, an optimization algorithm is needed to design the set of weights, although in some
cases only a few rank-order dependent weights are required and their design is simple. Permutation WM filters can provide significant improvement in performance at the higher cost of memory cells. To
illustrate the versatility of permutation WM filters, consider again the image sharpening example. Recall that linear high-pass filters are inadequate in unsharp masking whenever background noise is
present. Although WM high-pass filters ameliorate the problem, the goal is to improve their performance by allowing the WM filter weights to take on rank-dependent values. The unsharp WM filter structure shown in Figure 6.5 is used, with the exception that permutation WM filters are now used to synthesize the high-pass filter operation. The weights used for the WM high-pass filter in (6.12) were proportional to

-1 -1 -1
-1  8 -1
-1 -1 -1

The weight mask for the permutation WM high-pass filter is

W_1(R_1) W_2(R_2) W_3(R_3)
W_4(R_4) W_5(R_5) W_6(R_6)
W_7(R_7) W_8(R_8) W_9(R_9)    (6.16)
where W_i(R_i) = -1 for i ≠ 5, with the following exceptions. The value of the center weight is given according to

W_c(R_c) = 8,   for R_c = 2, 3, . . . , 8,
W_c(R_c) = -1,  otherwise.    (6.17)
That is, the value of the center weight is 8 if the center sample is not the smallest or largest in the observation window. If it happens to be the smallest or largest, its reliability is low and the
weighting strategy must be altered such that the center
weight is set to -1, and the weight of 8 is given to the sample that is closest in rank to the center sample leading to
W_[8] = 8,   if X_c = X_(9),    (6.18)
W_[8] = -1,  otherwise,    (6.19)

W_[2] = 8,   if X_c = X_(1),    (6.20)
W_[2] = -1,  otherwise,
where W_[i] is the concomitant weight of the ith order statistic of the input vector. This weighting strategy can be extended to the case where the L smallest and L largest samples in the window are considered unreliable, and the weighting strategy applied in (6.18) and (6.20) now applies to the weights W_[L+1] and W_[N-L]. Figure 6.10 illustrates the image sharpening performance when
permutation WM filters are used. The "Saturn image" with added Gaussian background noise is shown in Figure 6.10(a). Figures 6.10(b-f) show this image sharpened with (b) the LUM sharpener², (c) a linear FIR filter sharpener, (d) the WM filter sharpener, (e) the permutation WM filter sharpener with L = 1, and (f) the permutation WM filter sharpener with L = 2. The λ parameters were given a value of 1.5 for all weighted median-type sharpeners, and λ was set to 1 for the linear sharpener. The linear sharpener introduces background noise amplification. The LUM sharpener does not amplify the background noise; however, it introduces severe edge distortion artifacts. The WM filter sharpener ameliorates the noise amplification and does not introduce edge artifacts. The permutation WM filter sharpeners perform best, with higher robustness attributes as L increases.
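The rank-dependent weight assignment of (6.17)-(6.20) can be sketched for a 3 × 3 window (flattened row-wise, center at index 4); tie-breaking among equal-valued samples is an arbitrary assumption of this sketch:

```python
def permutation_weights(window):
    """Return the nine permutation-WM weights for a flattened 3x3 window.
    The center weight is 8 when the center sample is not an extreme of the
    window; otherwise the weight 8 moves to the sample closest in rank to
    the center, and the center weight falls back to -1."""
    order = sorted(range(9), key=lambda k: window[k])  # indices sorted by value
    w = [-1] * 9
    center_rank = order.index(4) + 1                   # rank R_c of the center, 1..9
    if 2 <= center_rank <= 8:
        w[4] = 8                                       # (6.17): reliable center
    elif center_rank == 9:
        w[order[7]] = 8                                # (6.18): weight 8 goes to X_(8)
    else:
        w[order[1]] = 8                                # (6.20): weight 8 goes to X_(2)
    return w
```

The selected weight vector would then drive the WM filter evaluation of Definition 6.2; storing one such rule per rank is what the N² weight memory of (6.14) pays for.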
A classical approach to filter design is to modify the filter weights so as to attain a desired spectral profile. Usually the specifications include the type of filter required, that is, low-pass,
high-pass, band-pass or band-stop, and a set of cutoff frequencies and attenuation. There are a number of design strategies for the design of linear filters. See for instance Proakis and Manolakis
(1996) [166] and Mitra (2001) [144]. These techniques, however, cannot be applied to the design of weighted medians since they lack an impulse response characterization. This section defines the
concept of frequency response for weighted median filters and develops a closed form solution for their spectral design.
²The LUM sharpener algorithm will be described in a later chapter.
Figure 6.10 (a: top left) Image with background noise sharpened with (b: top right) the LUM sharpener, (c: middle left) the FIR sharpener, (d: middle right) the WM sharpener, (e: bottom left) the
permutation WM sharpener with L = 1, (f: bottom right) the permutation WM sharpener with L = 2.
Median Smoothers and Sample Selection Probabilities
Spectral analysis of nonlinear smoothers has been carried out based on the theory developed by Mallows (1980) [137]. This theory allows us to analyze some characteristics of nonlinear filters based
on the characteristics of a corresponding linear filter. In particular, we are interested in designing a nonlinear filter based on some frequency-response requirements. In order to do that, the
spectrum of a nonlinear smoother is defined as the spectral response of the corresponding linear filter. Mallows focused on the analysis of the smoothing of a non-Gaussian sequence X by a nonlinear function S and how this process can be approximated by a well-defined linear smoothing function, as stated in the following theorem:
THEOREM 6.1 (MALLOWS [137]) Given a nonlinear smoothing function S operating on a random sequence X = Y + Z, where Y is a zero mean Gaussian sequence and Z is independent of Y, we have that if S is stationary, location invariant, centered (i.e., S(0) = 0), depends on a finite number of values of X, and Var(S(X)) < ∞, there exists a unique linear function S_L such that the MSE function

E{(S(X) - S_L(X))²}    (6.21)

is minimized. The function S_L is the closest linear function to the nonlinear smoothing function S, or its linear part. In particular, median smoothers have all the characteristics required for
this theorem and, in consequence, they can be approximated by a linear function. Median smoothers are also of selection type and, referring again to Mallows' theory, there is an important corollary of the previous theorem that applies to selection type smoothers, whose output is identical to one of their input samples:
COROLLARY 6.1 [137] If S is a selection type smoother, the coefficients of S_L are the sample selection probabilities of the smoother. The sample selection probabilities are defined next for a WM smoother described by the weight vector W = (W_1, W_2, . . . , W_N) and a vector of independent and identically distributed samples X = (X_1, X_2, . . . , X_N).
DEFINITION 6.3 The Sample Selection Probabilities (SSPs) of a WM smoother W are the set of numbers p_j defined by:

p_j = P( X_j = MEDIAN[ W_1 ∘ X_1, W_2 ∘ X_2, . . . , W_N ∘ X_N ] ).

Thus, p_j is the probability that the output of a weighted median filter is equal to the jth input sample. Mallows' results provide a link between the linear and nonlinear domains that allows the
approximation of a WM smoother by its linear part. The linear part also
provides an approximation of the frequency behavior of the smoother. Thus, in order to obtain a WM smoother with certain frequency characteristics, a linear filter with such characteristics should be
designed. This linear filter can be approximated by a WM filter with the required frequency characteristics.
SSPs for Weighted Median Smoothers
In order to find the linear smoother closest in the mean square error sense to a given weighted median smoother, a method to calculate the SSPs of the WM smoother is needed. Some examples of algorithms to calculate the SSPs of a WM smoother can be found in Prasad and Lee (1994) [165] and Shmulevich and Arce (2001) [175]. The calculation is carried out here based on the computation of the weighted median. Suppose that the WM filter described by the weight vector W = (W_1, W_2, . . . , W_N) is applied to the set of independent and identically distributed samples X = (X_1, X_2, . . . , X_N); then the output is calculated through the steps of Section 5.2, which are repeated here for convenience:
(1) Calculate the threshold T_0 = (1/2) Σ_{i=1}^N W_i;
(2) Sort the samples in the observation vector X;
(3) Sum the concomitant weights of the sorted samples beginning with the maximum sample and continuing down in order;
(4) The output β is the first sample whose weight causes the sum to become ≥ T_0.
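The four steps translate directly into code; the sample values in the usage note are an illustrative choice:

```python
def weighted_median(samples, weights):
    """WM smoother with positive weights, following the four steps above."""
    T0 = sum(weights) / 2.0                   # (1) threshold T_0
    pairs = sorted(zip(samples, weights))     # (2) sort, carrying concomitant weights
    acc = 0.0
    for x, w in reversed(pairs):              # (3) sum weights from the largest down
        acc += w
        if acc >= T0:                         # (4) first sample pushing the sum to T_0
            return x
```

With W = (1, 3, 4, 1) and X = (10, 2, 7, 5): T_0 = 4.5, the sorted samples are 2, 5, 7, 10 with concomitant weights 3, 1, 4, 1, and the running sum from the top is 1, then 5 ≥ T_0, so β = 7.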
The objective is to find a general closed form expression for the probability that the jth sample is chosen as the output of the WM filter, that is, to find the value p_j = P(β = X_j). The jth sample in the input vector can be ranked in N different, equally likely positions in its order statistics, since the samples are independent and identically distributed. For all i this probability is

P(X_(i) = X_j) = 1/N.

Because of the different weight values applied to the input samples, each sample has a different probability of being the output of the median depending on where it lies in the set of ordered input samples. The final value of p_j is found as the sum of the probabilities of the sample X_j being the median for each one of the order statistics:

p_j = Σ_{i=1}^N P(β = X_(i) | X_(i) = X_j) P(X_(i) = X_j) = (1/N) Σ_{i=1}^N K_ij / C(N-1, i-1).    (6.24)
The result in (6.24) can be explained as follows. After the sample X_j has been ranked in the ith order statistic, there are N − 1 samples left to occupy the remaining N − 1 order statistics: i − 1 before X_j = X_(i) and N − i after it. The total number of nonordered ways to distribute the remaining samples among the remaining order statistics is then equal to the number of ways in which we can distribute the set of N − 1 samples into two subsets of i − 1 and N − i samples, leading to the denominator C(N−1, i−1) in (6.24). The order of the samples in each one of these subsets is not important since, as will be shown shortly, only the sum of the associated weights is relevant. The term K_ij represents how many of these orderings result in the output of the median being the sample X_j while it is ranked in the ith order statistic, that is, the number of times that β = X_j = X_(i). K_ij is found as the number of subsets of N − i elements of the vector W satisfying:
Σ_{m=i}^N W_[m] ≥ T_0,    (6.25)

Σ_{m=i+1}^N W_[m] < T_0,    (6.26)

where T_0 = (1/2) Σ_{m=1}^N W_[m] and where W_[m] is the concomitant weight associated with the mth order statistic of the input vector. Conditions (6.25) and (6.26) are necessary and sufficient for X_(i) to be the weighted median of the sample set. This was shown in Section 5.2, where it is stated that, in order to find the weighted median of a sample set, the samples are first ordered and then the concomitant weights of the ordered samples are added one by one beginning with the maximum sample and continuing down in order. The median of the set will be the value of the sample whose weight causes the sum to become greater than or equal to the threshold T_0. Conditions (6.25) and (6.26) can be rewritten in a more compact way as:

T_0 − W_j ≤ Σ_{m=i+1}^N W_[m] < T_0,    (6.27)
where W_[i] has been replaced by W_j, since it is assumed that the jth sample of the vector is the ith order statistic. In order to count the number of sets satisfying (6.27), a product of two step functions is used as follows: when the value A = Σ_{m=i+1}^N W_[m] satisfies T_0 − W_j ≤ A < T_0, the function

u(A − (T_0 − W_j)) u(T_0^- − A)    (6.28)

will be equal to one. On the other hand, (6.28) will be equal to zero if A does not satisfy the inequalities. Here T_0^- represents a value approaching T_0 from the left on the real line, and u is the unit step function defined as u(x) = 1 if x ≥ 0, and 0 otherwise. Letting T_1 = T_0 − W_j and adding the
function in (6.28) over all the possible subsets of i − 1 elements of W excluding W_j, the result is:

K_ij = Σ u(A − T_1) u(T_0^- − A),    (6.29)

where A = W_{m_1} + W_{m_2} + . . . + W_{m_s} and s = N − i. The SSP vector is given by P(W) = [p_1, p_2, . . . , p_N], where p_j is defined as:

p_j = (1/N) Σ_{i=1}^N K_ij / C(N−1, i−1).    (6.30)
This function calculates the sample selection probabilities of any WM smoother, that is, it leads to the linear smoother closest to a given WM smoother in the mean square error sense.
EXAMPLE 6.8 (SSPs FOR A FOUR-TAP SMOOTHER) Given W = (1, 3, 4, 1), find the sample selection probability of the third sample. T_1 and T_0 are found as:

T_0 = (1/2)(1 + 3 + 4 + 1) = 4.5,   T_1 = T_0 − W_3 = 0.5.    (6.31)

Equation (6.30) reduces to

p_3 = (1/4) Σ_{i=1}^4 K_i3 (i − 1)! (4 − i)! / 3!.

For i = 1, the weights of the three samples above give A = 1 + 3 + 1 = 5, thus:

u(A − T_1) u(T_0^- − A) = u(5 − 0.5) u(4.5^- − 5) = 0,

hence K_13 = 0. For i = 2, there are three possibilities for the ordering of the weights (the first weight can be either one of W_1, W_2, or W_4) and, in consequence, three different values for A:

A_1 = 1 + 1 = 2,   u(A_1 − T_1) u(T_0^- − A_1) = u(2 − 0.5) u(4.5^- − 2) = 1,
A_2 = 1 + 3 = 4,   u(A_2 − T_1) u(T_0^- − A_2) = u(4 − 0.5) u(4.5^- − 4) = 1,
A_3 = 3 + 1 = 4,   u(A_3 − T_1) u(T_0^- − A_3) = u(4 − 0.5) u(4.5^- − 4) = 1,

hence K_23 = 3. Following the same procedure, the values of the remaining K_i3 are found to be K_33 = 3 and K_43 = 0. Therefore, the sample selection probability results in:

p_3 = (1/4) ( 0 (0! 3!)/3! + 3 (1! 2!)/3! + 3 (2! 1!)/3! + 0 (3! 0!)/3! ) = 1/2.

The full vector of SSPs is constructed as: P(W) = [1/6, 1/6, 1/2, 1/6].
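The SSPs can also be checked by brute force: for i.i.d. samples every ordering of the inputs is equally likely, so averaging the WM selection over all N! rank assignments reproduces (6.30). This exhaustive sketch confirms the vector of Example 6.8:

```python
from fractions import Fraction
from itertools import permutations

def ssp_by_enumeration(weights):
    """Exact SSPs of a WM smoother: run the four-step WM rule once per
    ordering of the (i.i.d.) inputs and count which index is selected."""
    N = len(weights)
    T0 = Fraction(sum(weights), 2)
    counts = [0] * N
    for order in permutations(range(N)):   # order[k] = index of the (k+1)th smallest
        acc = 0
        for k in range(N - 1, -1, -1):     # concomitant weights, largest sample down
            acc += weights[order[k]]
            if acc >= T0:
                counts[order[k]] += 1
                break
    total = sum(counts)                    # exactly one selection per ordering
    return [Fraction(c, total) for c in counts]
```

For W = (1, 3, 4, 1) this returns [1/6, 1/6, 1/2, 1/6], matching P(W) above; the cost is N!, so it is only a check on (6.30), not a substitute for it.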
6.2.3 Synthesis of WM Smoothers

So far, this section has focused on the analysis of WM smoothers and the synthesis of linear smoothers having similar characteristics to a given WM. On the other hand,
its final purpose is to present a spectral design method for WM smoothers. The approach is to find the closest WM filter to an FIR filter that has been carefully designed to attain a desired set of
spectral characteristics. To attain this, the function obtained in (6.30) should be inverted; however, this nonlinear function is not invertible. Before studying other alternatives to solve this
problem, certain properties of weighted median smoothers should be taken into account. It has been demonstrated in Muroga (1971) [145] and Yli-Harja et al. (1991) [202] that weighted median smoothers of a given window size can be divided into a finite number of classes. Each one of the smoothers in a class produces the same output when fed with the same set of input samples. It has also
been shown that each class contains at least one integer-valued weighted median smoother such that the sum of its coefficients is odd. Among these smoothers, the one with the minimum sum of
components is called the representative of the class. Table 6.1 shows the representatives of the different classes of weighted median smoothers available for window sizes from one to five. Weighted
medians obtained as the permutation of the ones shown in Table 6.1 are also representatives of other classes. Additionally, a representative of a class can be padded with zeros to form a
representative of another class with larger window size.
Table 6.1 Median weight vectors and their corresponding SSPs for window sizes 1 to 5
For example, for window size three, we can construct four different weighted median vectors: (1, 1, 1) and the three permutations of (1, 0, 0). It is also known that each weighted median filter has a corresponding equivalent self-dual linearly separable positive Boolean function (PBF) [145] and vice versa. This means that the number of different weighted medians of size N is the same as the number of self-dual linearly separable PBFs of N variables. Equivalent WM vectors will correspond to the same PBF and they will also have the same vector of SSPs. To illustrate the consequences of these properties, the case for smoothers of length three will be studied. Here the number of weighted median smoothers to be analyzed is reduced to include only normalized smoothers. These smoothers are included in the two-dimensional simplex W_1 + W_2 + W_3 = 1. According to (6.27) and Table 6.1, there are four different classes of weighted medians for this window size. They will occupy regions in the simplex that are limited by lines of the form: W_i + W_j = 1/2 = T_0, where i, j ∈ {1, 2, 3}, i ≠ j. Figure 6.11a shows the simplex with the four regions corresponding to the four classes of weighted medians and the representative of each class. The weighted median closest to a given linear smoother in the mean square error sense is found by minimizing the mean square error cost function
J(W) = ||P(W) − h||² = Σ_{j=1}^N (p_j(W) − h_j)²,    (6.36)

where h is a normalized linear smoother. Since the number of SSP vectors P(W) for a given window size is finite, a valid option to solve this problem is to list all its possible values and find among them the one that minimizes the error measure
Figure 6.11 Illustrative example showing the mapping between the WM class regions and the linear smoother regions: (a) simplex containing the weighted median vectors for window size three. The simplex is divided into four regions, and the representative of each region is also indicated; (b) correspondence between linear smoothers and SSP vectors of window size three. The SSP vectors are represented by '*'.
J(W). This will lead to a division of the space of linear smoothers of window size N into regions, one for each SSP vector. Each point in the space is associated with the SSP vector that is closest to it in Euclidean distance to form the regions. This situation can be viewed as a quantization of the space of normalized linear smoothers where all the points in a quantization region are mapped to the (only) SSP included in that region. Figure 6.11b shows the case for window size three. All vectors in the same WM class region are mapped into the linear domain as a single point, the corresponding SSP vector. Since all WM in a class are equivalent, the associated linear smoother is the same for all of them. Therefore, there is a unique solution to the problem of finding the
linear smoother closest in the MSE sense to a given WM. On the other hand, the reverse problem, finding the WM smoother closest to a given linear smoother, has an infinite number of solutions. Since
the linear smoother domain is quantized, a given vector h in a quantization region will be associated with the SSP vector contained in the region. This vector is mapped into the WM domain as a class
of weighted medians instead of as a single WM smoother. Any set of weights in that class will result in the same value of the distance measure J(W) and, in consequence any of them can be chosen as
the closest WM to the
linear smoother represented by h. That is, the mapping in this case is established between a quantization region in the linear domain and a class region in the WM domain in such a way that any point
in the latter can be associated with a given vector in the former. Figure 6.11 illustrates the mapping between quantization regions in the linear domain and class regions in the WM domain for window
size three. The procedure to transform a linear smoother into its associated weighted median reduces to finding the region in the linear space where it belongs, finding the corresponding SSP vector
and then finding a corresponding WM vector. This is possible only if all the valid weighted median vectors and their corresponding SSPs for a certain window size are known. The problem of finding all
the different weighted median vectors of size N has been subject to intensive research. However, a general closed-form solution that allows the generation of the list of PBFs, SSP vectors, or
weighted median vectors has yet to be found. Partial solutions for the problem have been found for window sizes up to nine by Muroga (1971) [145]. Even if such a general form existed, the number of
possibilities grows rapidly with the window size and the problem becomes cumbersome. For example, the number of different weighted medians grows from 2470 for window size eight to 175,428 for window
size nine, according to Muroga (1971) [145]. There is no certainty about the number of vectors for window size ten and up. Having all the possible sets of median weights for a certain window size
will assure that the right solution of the problem can be found. As it was indicated before, this option becomes unmanageable for large window sizes. This does not disqualify the method for smoothers
with small window size, but a faster, easier alternative is necessary to handle larger lengths. In the following section, an optimization algorithm for the function J(W) is presented.
General Iterative Solution
The optimization process of the cost function in (6.36) is carried out with a gradient-based algorithm and a series of approximations derived by Hoyos et al. (2003) [103]. The recursive equation for each of the median weights is:

W_l(n + 1) = W_l(n) − μ ∇_l J(W(n)).    (6.37)

The first step is to find the gradient of (6.36),

∇J(W) = ( ∇_1 J(W), ∇_2 J(W), . . . , ∇_N J(W) ),    (6.38)

where each of the terms in (6.38) is given by:

∇_l J(W) = (∂/∂W_l) ||P(W) − h||² = 2 Σ_{j=1}^N (p_j(W) − h_j) (∂p_j(W)/∂W_l).    (6.39)

The derivative of p_j(W) is:

∂p_j(W)/∂W_l = (1/N) Σ_{i=1}^N (1/C(N−1, i−1)) (∂K_ij/∂W_l).    (6.40)

The term K_ij given in (6.29) is not differentiable because of the discontinuities of the step functions. To overcome this, u(x) is approximated by a smooth differentiable function: u(x) ≈ (1/2)(tanh(x) + 1). The derivative on the right-hand side of (6.40) can be computed as:
∂K_ij/∂W_l = Σ ∂B/∂W_l,    (6.41)

where B = (tanh(A − T_1) + 1)(tanh(T_0^- − A) + 1) and

∂B/∂W_l = C_1(W_l) sech²(A − T_1)(tanh(T_0^- − A) + 1) − C_2(W_l)(tanh(A − T_1) + 1) sech²(T_0^- − A).    (6.42)
The coefficients C_1(W_l) and C_2(W_l) above are defined by:

C_1(W_l) = 1/2 if l = j or if there exists i such that m_i = l, and C_1(W_l) = −1/2 otherwise;
C_2(W_l) = 1/2 if there exists i such that m_i = l, and C_2(W_l) = −1/2 otherwise.    (6.43)

That is, the coefficient C_1(W_l) will be equal to 1/2 if the term of the sum in (6.39) whose derivative is being found is the lth, or when W_l is one of the weights included in the sum A in (6.29); otherwise, C_1(W_l) = −1/2. On the other hand, C_2(W_l) will be equal to 1/2 if W_l is one of the weights included in the sum A in (6.29), and −1/2 otherwise. The iterative algorithm shown above approximates linear smoothers by
weighted median smoothers. The cost function in (6.36) is stepwise and, in consequence, it has an infinite number of local minima. Based on our simulation results, a smooth approximation is obtained
when replacing the step functions with hyperbolic tangents, which allows the implementation of a steepest descent algorithm to minimize it. However, no formal proof of the uniqueness of the minimum
of the approximated cost function has been found, and this remains an open mathematical problem. Experimental results show that the steepest descent algorithm converges to the global minimum of this function. Figure 6.12a illustrates the cost function with respect to the optimization of W_4 of a WM filter of size six while the other weights remain constant. Both the original cost function (solid line) and the one obtained after using the hyperbolic tangent approximation (dashed line) are shown. Figures 6.12b and 6.12c show the contours of the same cost function with respect to two of the weights for the original and the approximated cost function, respectively. Notice the staircase shape of the original cost function. It is also noticeable that the minimum of the approximation falls in the region where the original function reaches its minimum. In this case, the approximation is convex and in consequence it lacks local minima that could disrupt the performance of the iterative algorithm. This procedure is generalized next to the design of weighted median filters admitting real-valued weights.
Spectral Design of Weighted Median Filters Admitting Real-Valued Weights
Weighted median filters admitting real-valued weights are obtained by properly modifying the input samples according to the sign of the associated weights and then using the magnitude of the weights
for the calculation of a weighted median smoother. It was stated in Theorem 6.1 that a nonlinear function needs to satisfy certain properties in order to be best approximated in the mean squared error sense by a linear filter. Unfortunately, the real-valued medians do not satisfy the location invariance property. However, Mallows' results can be extended to cover medians like (6.5) in the case of an independent, zero mean, Gaussian input sequence.
THEOREM 6.2 If the input series is Gaussian, independent, and zero centered, the coefficients of the linear part of the weighted median defined in (6.5) are given by h_i = sgn(W_i) p_i, where the p_i are the SSPs of the WM smoother |W_i|.
Figure 6.12 Cost functions in (6.36) with input FIR filter h = [−0.0078, 0.0645, 0.4433, 0.4433, 0.0645, −0.0078]: (a) cost function with respect to one weight for both the original (solid line) and the approximated cost function (dashed line); (b) contours with respect to two weights for the original cost function; (c) contours with respect to the same weights for the approximated cost function.
To show this theorem, define Y_i = sgn(W_i) X_i. In this case, the Y_i will have the same distribution as the X_i. In consequence:

E{ (MEDIAN(W_i ∘ X_i) − Σ_i h_i X_i)² } = E{ (MEDIAN(|W_i| ∘ Y_i) − Σ_i q_i Y_i)² },    (6.45)

where q_i = h_i / sgn(W_i). From Theorem 6.1, (6.45) is minimized when the q_i equal the SSPs of the smoother |W_i|, say p_i. In consequence: q_i = h_i / sgn(W_i) = p_i → h_i = sgn(W_i) p_i.
According to this theorem, the proposed algorithm can be used to design WM filters using the following procedure [103, 175]:
(1) Given the desired impulse response, design the linear FIR filter h = (h_1, h_2, . . . , h_N) using one of the traditional design tools for linear filters.
(2) Decouple the signs of the coefficients to form the vectors |h| = (|h_1|, |h_2|, . . . , |h_N|) and sgn(h) = (sgn(h_1), sgn(h_2), . . . , sgn(h_N)).
(3) After normalizing the vector |h|, use the algorithm in Section 6.2.4 to find the closest WM smoother to it, say W' = (W'_1, W'_2, . . . , W'_N).
(4) The WM filter weights are given by W = (sgn(h_1)W'_1, sgn(h_2)W'_2, . . . , sgn(h_N)W'_N).
EXAMPLE 6.9 Design 11-tap (a) low-pass, (b) band-pass, (c) high-pass, and (d) band-stop WM filters with the cutoff frequencies shown in Table 6.2.

Table 6.2 Characteristics of the WM filters to be designed

Filter       Cutoff frequencies
Low-pass     0.25
Band-pass    0.35-0.65
High-pass    0.75
Band-stop    0.35-0.65
Initially, 11-tap linear filters with the required spectral characteristics were designed using MATLAB's function fir1. These filters were used as a reference for the design of the weighted median filters. The spectra of the WM and linear filters were approximated using the Welch method [192]. The results are shown in Figure 6.13. The median and linear weights are shown in Table 6.3.
The plots show that WM filters are able to attain arbitrary frequency responses. The characteristics of the WM filters and the linear filters are very similar in the pass band, whereas the major
difference is the lower attenuation provided by the WM in the stop band.
The spectral design of weighted median filters is only one approach to the design of this class of filters. Much like linear filters can be optimized in a statistical sense
Table 6.3 Weights of the median filters designed using the algorithm in Section 6.2.5 and the linear filters used as reference.

Low-pass
  Linear: -0.0039  0.0000  0.0321  0.1167  0.2207  0.2687  0.2207  0.1167  0.0321  0.0000 -0.0039
  Median: -0.0223  0.0211  0.0472  0.1094  0.1898  0.2205  0.1898  0.1094  0.0472  0.0211 -0.0223

Band-pass
  Linear: -0.0000  0.0362  0.0000 -0.2502  0.0000  0.4273  0.0000 -0.2502  0.0000  0.0362  0.0000
  Median: -0.0092  0.0384  0.0092 -0.2311  0.0092  0.4056  0.0092 -0.2311  0.0092  0.0384 -0.0092

High-pass
  Linear:  0.0039 -0.0000 -0.0321  0.1167 -0.2207  0.2687 -0.2207  0.1167 -0.0321 -0.0000  0.0039
  Median:  0.0223 -0.0211 -0.0472  0.1094 -0.1898  0.2205 -0.1898  0.1094 -0.0472 -0.0211  0.0223

Band-stop
  Linear:  0.0000 -0.0254  0.0000  0.1756 -0.0000  0.6996 -0.0000  0.1756  0.0000 -0.0254  0.0000
  Median:  0.0261 -0.0468  0.0261  0.1610 -0.0261  0.4278 -0.0261  0.1610  0.0261 -0.0468  0.0261
using Wiener filter theory, weighted median filters enjoy an equivalent theory of optimization. The theory described below emerged from the concepts developed in Coyle and Lin (1988) [55], Lin et al. (1990) [132], Kim and Lin (1994) [131], Yin and Neuvo (1994) [200], and Arce (1998) [6]. In order to develop the various optimization algorithms, threshold decomposition is first extended to admit real-valued inputs. The generalized form of threshold decomposition plays a critical role in the optimization of WM filters.
6.3.1 Threshold Decomposition for Real-Valued Signals

Thus far, threshold decomposition has been defined for input sequences with a finite-size input alphabet. In order to use the properties of threshold decomposition for the optimization of WM filters, this framework must first be generalized to admit real-valued input signals. This decomposition, in turn, can be used to analyze weighted median filters having real-valued weights. Consider the set of real-valued samples X_1, X_2, ..., X_N, and define a weighted median filter by the corresponding real-valued weights W_1, W_2, ..., W_N. Decompose each sample X_i as

    x_i^q = sgn(X_i - q),   (6.46)

where -∞ < q < ∞, and
Figure 6.13 Approximated frequency response of WM filters designed with Mallows' iterative algorithm: (a) low-pass, (b) high-pass, (c) band-pass, (d) band-stop.
    sgn(X_i - q) =   1,   if X_i >= q;
                    -1,   if X_i < q.   (6.47)
Thus, each sample X_i is decomposed into an infinite set of binary points taking values in {-1, 1}. Figure 6.14 depicts the decomposition of X_i as a function of q.

Figure 6.14 Decomposition of X_i into the binary x_i^q signal.
Threshold decomposition is reversible, since the original real-valued sample X_i can be perfectly reconstructed from the infinite set of thresholded signals. To show this, let X̂_i = lim_{T→∞} X̂_i^T, where

    X̂_i^T = (1/2) ∫_{-T}^{T} x_i^q dq
          = (1/2) [ ∫_{-T}^{-|X_i|} x_i^q dq + ∫_{-|X_i|}^{|X_i|} x_i^q dq + ∫_{|X_i|}^{T} x_i^q dq ].   (6.48)

Since the first and last integrals in (6.48) cancel each other, and since

    ∫_{-|X_i|}^{|X_i|} sgn(X_i - q) dq = 2X_i,   (6.49)

it follows that X̂_i = X_i. Hence, the original signal can be reconstructed from the infinite set of thresholded signals as

    X_i = (1/2) ∫_{-∞}^{∞} sgn(X_i - q) dq.   (6.50)

The sample X_i can be reconstructed from its corresponding set of decomposed signals; consequently, X_i has a unique threshold-signal representation, and vice versa:

    X_i  <-T.D.->  {x_i^q},   (6.51)

where <-T.D.-> denotes the one-to-one mapping provided by the threshold decomposition operation. Since q can take any real value, the infinite set of binary samples {x_i^q} seems redundant in representing X_i.
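The reconstruction property (6.50) can be checked numerically: integrating the binary signal sgn(X_i - q) over a finite interval [-T, T], with T larger than |X_i|, recovers X_i. A small sketch using a midpoint Riemann sum (the grid size and T are arbitrary choices of this sketch):

```python
import numpy as np

def reconstruct(x_i, T=10.0, dq=1e-4):
    """Approximate X_i = (1/2) * integral of sgn(X_i - q) dq over [-T, T]."""
    q = np.arange(-T, T, dq) + dq / 2.0     # midpoints of the integration cells
    return 0.5 * np.sum(np.sign(x_i - q)) * dq

x_rec = reconstruct(3.7)   # close to 3.7, up to the grid resolution
```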
Letting x^q = [x_1^q, ..., x_N^q]^T, some of the binary vectors {x^q} are infinitely repeated. For X_(1) < q <= X_(2), for instance, all the binary vectors {x^q} are identical. Note, however, that the threshold-signal representation can be simplified based on the fact that there are at most L + 1 different binary vectors {x^q} for each observation vector X. Using this fact, (6.51) reduces to

    x^q = [1, 1, ..., 1]^T       for -∞ < q <= X_(1),
    x^q = x^{X_(i)+}             for X_(i) < q <= X_(i+1),  i = 1, ..., L - 1,
    x^q = [-1, -1, ..., -1]^T    for X_(L) < q < +∞,   (6.52)

where X_(i)+ denotes a value on the real line approaching X_(i) from the right. The simplified representation in (6.52) will be used shortly.
Threshold decomposition in the real-valued sample domain also allows the order of the median and threshold decomposition operations to be interchanged without affecting the end result. To illustrate this concept, consider three samples X_1, X_2, X_3 and their threshold decomposition representations x_1^q, x_2^q, x_3^q shown in Figure 6.15a. The plots of x_1^q and x_2^q are slightly shifted in the vertical axis for illustrative purposes. As shown in the figure, assume that X_3 = X_(3), X_2 = X_(1), and X_1 = X_(2). Next, for each value of q, the median of the decomposed signals is defined as

    y^q = MEDIAN(x_1^q, x_2^q, x_3^q).   (6.53)

Referring to Figure 6.15a, note that for q <= X_(2) two of the three x_i^q samples have values equal to 1, and for q > X_(2) two of them have values equal to -1. Thus,

    y^q =  1   for q <= X_(2);
          -1   for q > X_(2).   (6.54)

A plot of y^q as a function of q is shown in Figure 6.15b. Reversing the decomposition using y^q in (6.50), it follows that

    Y = (1/2) ∫_{-∞}^{∞} y^q dq = (1/2) ∫_{-∞}^{∞} sgn(X_(2) - q) dq = X_(2).   (6.55)
Thus, in this example, the reconstructed output is the second order statistic, namely the median. In the general case, we consider N samples X_1, X_2, ..., X_N and their corresponding threshold decomposition representations x_1^q, x_2^q, ..., x_N^q. The median of the decomposed signals at a fixed value of q is, for N odd,

    y^q = MEDIAN(x_1^q, x_2^q, ..., x_N^q) =  1   for q <= X_((N+1)/2);
                                             -1   for q > X_((N+1)/2).   (6.56)

Reversing the threshold decomposition, Y is obtained as

    Y = (1/2) ∫_{-∞}^{∞} y^q dq = X_((N+1)/2).   (6.57)

Thus, applying the median operation to a set of samples, and applying the median operation to the threshold-decomposed set of samples and then reversing the decomposition, give exactly the same result.
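This commutation property can be verified numerically: threshold the samples, take the median of the binary signals at each q, and reverse the decomposition; the result matches MEDIAN(X_1, ..., X_N). A sketch under the same finite-interval approximation as in (6.50) (the grid parameters are arbitrary):

```python
import numpy as np

def median_via_td(samples, T=10.0, dq=1e-3):
    """Median computed through threshold decomposition (cf. (6.56)-(6.57))."""
    x = np.asarray(samples, dtype=float)
    q = np.arange(-T, T, dq) + dq / 2.0         # integration grid (midpoints)
    binary = np.sign(x[:, None] - q[None, :])   # x_i^q for every sample and q
    y_q = np.sign(binary.sum(axis=0))           # median of +/-1 values (N odd)
    return 0.5 * np.sum(y_q) * dq               # reverse the decomposition

samples = [0.9, -2.4, 3.1]
y = median_via_td(samples)   # approximately MEDIAN(0.9, -2.4, 3.1) = 0.9
```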
Figure 6.15 The decomposed signal x_(2)^q as the median of x_1^q, x_2^q, x_3^q. The reconstructed signal results in X_(2).
With this threshold decomposition, the weighted median filter operation can be implemented as

    Ŷ = MEDIAN( |W_i| ⋄ (1/2) ∫_{-∞}^{∞} sgn[sgn(W_i)X_i - q] dq |_{i=1}^{N} ).   (6.58)

The expression in (6.58) represents the median operation on a set of weighted integrals, each synthesizing a signed sample. Note that the same result is obtained if the weighted median of these functions, at each value of q, is taken first and the resultant signal is then integrated over its domain. Thus, the order of the integral and the median operator can be interchanged without affecting the result, leading to

    Ŷ = (1/2) ∫_{-∞}^{∞} MEDIAN( |W_i| ⋄ sgn[sgn(W_i)X_i - q] |_{i=1}^{N} ) dq.   (6.59)
In this representation, the signed samples play a fundamental role; thus, we define the signed observation vector S as

    S = [sgn(W_1)X_1, sgn(W_2)X_2, ..., sgn(W_N)X_N]^T.   (6.60)

The threshold-decomposed signed samples, in turn, form the vector s^q defined as

    s^q = [sgn[sgn(W_1)X_1 - q], sgn[sgn(W_2)X_2 - q], ..., sgn[sgn(W_N)X_N - q]]^T
        = [s_1^q, s_2^q, ..., s_N^q]^T.   (6.61)

Letting W_a be the vector whose elements are the weight magnitudes, W_a = (|W_1|, |W_2|, ..., |W_N|)^T, the WM filter operation can be expressed as

    Ŷ = (1/2) ∫_{-∞}^{∞} sgn(W_a^T s^q) dq.   (6.62)
The WM filter representation using threshold decomposition is compact, although the integral term may seem difficult to implement in practice. Equation (6.62), however, is used for purposes of analysis and not implementation. In addition, if desired, it can be simplified based on the fact that there are at most N + 1 different binary vectors s^q for each observation vector. Let S_(i) be the ith smallest signed sample; then the N + 1 different vectors s^q are

    s^q = [1, 1, ..., 1]^T       for -∞ < q <= S_(1),
    s^q = s^{S_(i)+}             for S_(i) < q <= S_(i+1),  i = 1, ..., N - 1,
    s^q = [-1, -1, ..., -1]^T    for S_(N) < q < ∞,   (6.63)

where S_(i)+ denotes a value on the real line approaching S_(i) from the right. Using these vectors in (6.62), we have

    Ŷ = (1/2) lim_{T→∞} [ ∫_{-T}^{S_(1)} dq + Σ_{i=1}^{N-1} sgn(W_a^T s^{S_(i)+}) ∫_{S_(i)}^{S_(i+1)} dq - ∫_{S_(N)}^{T} dq ].   (6.64)

The above equation reduces to

    Ŷ = (1/2) lim_{T→∞} [ (S_(1) + T) + Σ_{i=1}^{N-1} sgn(W_a^T s^{S_(i)+}) (S_(i+1) - S_(i)) - (T - S_(N)) ],   (6.65)

which simplifies to

    Ŷ = (S_(1) + S_(N))/2 + (1/2) Σ_{i=1}^{N-1} sgn(W_a^T s^{S_(i)+}) (S_(i+1) - S_(i)).   (6.66)
The computation of weighted median filters with this threshold decomposition architecture is efficient, requiring only N - 1 threshold logic (sign) operators; it allows the input signals to be arbitrary real-valued signals, and it admits both positive and negative filter weights. The filter representation in (6.66) also provides a useful interpretation of WM filters: the output Ŷ is computed as the sum of the midrange of the signed samples, V = (S_(1) + S_(N))/2, which provides a coarse estimate of location, and a linear combination of the spacings V_i = S_(i+1) - S_(i), for i = 1, 2, ..., N - 1. Hence

    Ŷ = V + Σ_{i=1}^{N-1} C(W_a, s^{S_(i)+}) V_i.   (6.67)

The coefficients C(·) take on the values -1/2 or 1/2, depending on the values of the observation samples and the filter weights.
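Both forms of the filter can be sketched in a few lines: the threshold form sorts the signed samples, accumulates weight magnitudes from the largest sample down, and outputs the sample at which the accumulated weight first reaches T_0 = (1/2) Σ|W_i|; the spacings form evaluates (6.66) directly. The particular weights and samples below are arbitrary illustrations, and the two computations agree on them.

```python
import numpy as np

def wm_filter_threshold(W, X):
    """WM filter output via weight accumulation over the sorted signed samples."""
    W, X = np.asarray(W, dtype=float), np.asarray(X, dtype=float)
    s = np.sign(W) * X                       # signed samples
    order = np.argsort(-s)                   # largest signed sample first
    csum = np.cumsum(np.abs(W)[order])
    k = np.searchsorted(csum, 0.5 * np.abs(W).sum())   # first sum >= T_0
    return s[order][k]

def wm_filter_spacings(W, X):
    """WM filter output via the midrange-plus-spacings form (6.66)."""
    W, X = np.asarray(W, dtype=float), np.asarray(X, dtype=float)
    s_sorted = np.sort(np.sign(W) * X)       # S_(1) <= ... <= S_(N)
    out = 0.5 * (s_sorted[0] + s_sorted[-1])             # midrange V
    for i in range(len(s_sorted) - 1):
        q = 0.5 * (s_sorted[i] + s_sorted[i + 1])        # a q in (S_(i), S_(i+1))
        s_q = np.sign(np.sign(W) * X - q)    # binary vector s^q
        out += 0.5 * np.sign(np.abs(W) @ s_q) * (s_sorted[i + 1] - s_sorted[i])
    return out

W, X = [0.1, 0.4, -0.2], [1.0, 2.0, 3.0]
y1, y2 = wm_filter_threshold(W, X), wm_filter_spacings(W, X)   # both give 2.0
```

Note that the negative weight turns the sample 3.0 into the signed sample -3.0, which is why the output is 2.0 rather than the ordinary median.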
6.3.2 The Least Mean Absolute (LMA) Algorithm

The least mean square (LMS) algorithm is perhaps one of the most widely used algorithms for the optimization of linear FIR filters in a broad range of applications [197]. In the following, a similar adaptive algorithm for optimizing WM filters is described, namely the least mean absolute (LMA) adaptive algorithm. The LMA algorithm shares many of the desirable attributes of the LMS algorithm, including simplicity and efficiency [6].

Assume that the observed process {X(n)} is statistically related to some desired process {D(n)} of interest; {X(n)} is typically a transformed or corrupted version of {D(n)}. Furthermore, it is assumed that these processes are jointly stationary. A window of width N slides across the input process, pointwise estimating the desired sequence. The vector containing the N samples in the window at time n is

    X(n) = [X(n - N_1), ..., X(n), ..., X(n + N_2)]^T
         = [X_1(n), X_2(n), ..., X_N(n)]^T,   (6.68)

with N = N_1 + N_2 + 1. The running weighted median filter output estimates the desired signal as

    D̂(n) = MEDIAN[ |W_i| ⋄ sgn(W_i) X_i(n) |_{i=1}^{N} ],   (6.69)

where both the weights W_i and the samples X_i(n) take on real values. The goal is to determine the weight values in W = (W_1, W_2, ..., W_N)^T that minimize the estimation error. Under the mean absolute error (MAE) criterion, the cost to minimize is

    J(W) = E{ |D - D̂| } = E{ | (1/2) ∫_{-∞}^{∞} [ sgn(D - q) - sgn(W_a^T s^q) ] dq | },   (6.70)
where the threshold decomposition representation of the signals was used. The absolute value and integral operators in (6.70) can be interchanged, since the integral acts on a strictly positive or a strictly negative function. This results in

    J(W) = (1/2) ∫_{-∞}^{∞} E{ | sgn(D - q) - sgn(W_a^T s^q) | } dq.   (6.71)

Furthermore, since the argument inside the absolute value operator in (6.71) can only take on values in the set {-2, 0, 2}, the absolute value operator can be replaced by a properly scaled second power. Thus

    J(W) = (1/4) ∫_{-∞}^{∞} E{ [ sgn(D - q) - sgn(W_a^T s^q) ]^2 } dq.   (6.72)

Taking the gradient of the above results in

    ∂J(W)/∂W_j = -(1/2) ∫_{-∞}^{∞} E{ e^q(n) (∂/∂W_j) sgn(W_a^T s^q) } dq,   (6.73)
where e^q(n) = sgn(D - q) - sgn(W_a^T s^q). Since the sign function is discontinuous at the origin, its derivative introduces Dirac impulse terms that are inconvenient for further analysis. To overcome this difficulty, the sign function in (6.73) is approximated by a smoother differentiable function. A simple approximation is given by the hyperbolic tangent function,

    sgn(x) ≈ tanh(x).   (6.74)

Since (d/dx) tanh(x) = sech^2(x), it follows that

    (∂/∂W_j) sgn(W_a^T s^q) ≈ sech^2(W_a^T s^q) (∂/∂W_j)(W_a^T s^q).   (6.75)

Evaluating the derivative in (6.75), after some simplifications, leads to

    (∂/∂W_j) sgn(W_a^T s^q) ≈ sech^2(W_a^T s^q) sgn(W_j) s_j^q.   (6.76)

Using (6.76) in (6.73) yields

    ∂J(W)/∂W_j ≈ -(1/2) ∫_{-∞}^{∞} E{ e^q(n) sech^2(W_a^T s^q) sgn(W_j) s_j^q } dq.   (6.77)
Using the gradient, the optimal coefficients can be found through the steepest descent recursive update (absorbing the constant factor into the step size μ),

    W_j(n+1) = W_j(n) + μ ∫_{-∞}^{∞} E{ e^q(n) sech^2(W_a^T(n) s^q(n)) sgn(W_j(n)) s_j^q(n) } dq.   (6.78)

Using the instantaneous estimate for the gradient, we can derive an adaptive optimization algorithm where

    W_j(n+1) = W_j(n) + μ ∫_{-∞}^{∞} e^q(n) sech^2(W_a^T(n) s^q(n)) sgn(W_j(n)) s_j^q(n) dq.   (6.79)

The error term e^q(n) in the first and last integration intervals, (-∞, S_(1)) and (S_(N), ∞), can be shown to be zero; thus, the adaptive algorithm reduces to

    W_j(n+1) = W_j(n) + μ Σ_{i=1}^{N-1} (S_(i+1) - S_(i)) e^{S_(i)+}(n) sech^2(W_a^T(n) s^{S_(i)+}(n)) sgn(W_j(n)) s_j^{S_(i)+}(n),   (6.80)
for j = 1, 2, ..., N. Since the MAE criterion was used in the derivation, the recursion in (6.80) is referred to as the least mean absolute (LMA) weighted median adaptive algorithm.

The contribution of most of the terms in (6.80), however, is negligible compared to that of the vector s^{S_(k)}(n), as described next. Using this fact, and following the arguments used in [147, 200], the algorithm in (6.80) can be simplified considerably, leading to a fast LMA WM adaptive algorithm. The contribution of each term in (6.80) is, to a large extent, determined by sech^2(W_a^T s^q), for q ∈ {S_(1), ..., S_(N)}. The sech^2 function achieves its maximum value when its argument satisfies W_a^T s^q = 0, and its value decreases rapidly and monotonically to zero as the argument departs from zero. Among the N - 1 vectors s^{S_(i)+}, there is one for which the inner product W_a^T s^q is closest to zero; consequently, the update term corresponding to this vector provides the biggest contribution to the update.

Among all the vectors s^{S_(i)+}, the one providing the largest update contribution can be found through the definition of the weighted median filter. Since D̂(n) is equal to one of the signed input samples, the output of the WM filter is given by

    D̂(n) = S_(k)(n),   k = max{ j : Σ_{i=j}^{N} |W_{[i]}| >= T_0 },   (6.81)

where W_{[i]} denotes the weight associated with the ith smallest signed sample S_(i), and T_0 = (1/2) Σ_{i=1}^{N} |W_i|.
The constraints on k can be rewritten as

    |W_{[k]}| + ... + |W_{[N]}| >= T_0,
    |W_{[k+1]}| + ... + |W_{[N]}| < T_0.

Replacing T_0 and cancelling common terms in the summations leads to

    Σ_{i=k}^{N} |W_{[i]}| - Σ_{i=1}^{k-1} |W_{[i]}| >= 0,
    Σ_{i=k+1}^{N} |W_{[i]}| - Σ_{i=1}^{k} |W_{[i]}| < 0.   (6.82)

The threshold decomposition of the signed input vector will be, according to (6.63) and with the components ordered by the ranks of the signed samples,

    s^{S_(k-1)+} = [-1, ..., -1,  1, 1, ..., 1]^T,
    s^{S_(k)+}   = [-1, ..., -1, -1, 1, ..., 1]^T,

where the first k - 1 components of s^{S_(k-1)+} equal -1 and the first k components of s^{S_(k)+} equal -1. Using these vectors, (6.82) can be rewritten as

    W_a^T s^{S_(k-1)+} >= 0,    W_a^T s^{S_(k)+} < 0.

This ensures that s^{S_(k)} is the vector whose inner product W_a^T s^{S_(k)} is closest to zero. Accordingly, (S_(k+1) - S_(k)) sech^2(W_a^T s^{S_(k)}) s_j^{S_(k)} is the largest contributor in (6.80). In consequence, the derivative can be approximated as

    ∂J(W)/∂W_j ≈ -(1/2) (S_(k+1) - S_(k)) e^{S_(k)}(n) sech^2(W_a^T s^{S_(k)}) sgn(W_j) s_j^{S_(k)}.

Removing scale factors and applying threshold decomposition, s_j^{S_(k)} = sgn(sgn(W_j)X_j - S_(k)), so the principal update term is proportional to

    sgn(W_j) sgn(sgn(W_j)X_j - S_(k)).
Using this as the principal contributor of the update, and since S_(k) is the output of the weighted median at time n (S_(k) = D̂(n)), the algorithm in (6.80) is simplified, leading to the following recursion, referred to as the fast LMA WM adaptive algorithm:

    W_j(n+1) = W_j(n) + μ e(n) sgn(W_j(n)) sgn( sgn(W_j(n)) X_j(n) - D̂(n) ),   (6.85)

for j = 1, 2, ..., N, with e(n) = D(n) - D̂(n). The updates in (6.85) have an intuitive explanation, described in Figure 6.16. When the output of the WM filter is smaller than the desired output, the magnitudes of the weights corresponding to the signed samples larger than the actual output are increased: the weight of such a negatively signed sample (-1)X_i is decreased (made more negative), whereas the weight of such a positively signed sample (+1)X_i is increased. Both cases lead to updated weights that push the estimate higher, toward D(n). Similarly, the weights corresponding to the signed samples smaller than the actual output are reduced: the weight of such a negatively signed sample is increased (made less negative), whereas the weight of such a positively signed sample is decreased. Figure 6.16b depicts the response of the algorithm when the WM filter output is larger than the desired output; the updates of the various samples follow similar intuitive rules.

Since the updates use only the most significant update term in (6.80), the fast algorithm is expected to require a good initial weight vector. It has been experimentally shown that a good initial weight vector is that of the median filter. Because of the nonlinear nature of the adaptive algorithm, a convergence analysis cannot be derived; in practice, however, the fast algorithm works quite well. Since a convergence analysis is not available for the fast LMA WM adaptive algorithm, exact bounds on the step size μ are not available either. A reliable guideline is to choose a step size on the order of that required for the standard LMS algorithm, and then tune it further according to the user's requirements and the response obtained with the initial choice.
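A sketch of the fast LMA recursion (6.85) follows. The WM filter output D̂(n) is computed by the usual weight-accumulation rule, and one update step is checked against a hand computation; the particular numbers are illustrative choices, not values from the text.

```python
import numpy as np

def wm_filter(W, X):
    """WM filter output: accumulate |W| over the sorted signed samples."""
    W, X = np.asarray(W, dtype=float), np.asarray(X, dtype=float)
    s = np.sign(W) * X
    order = np.argsort(-s)
    csum = np.cumsum(np.abs(W)[order])
    k = np.searchsorted(csum, 0.5 * np.abs(W).sum())
    return s[order][k]

def fast_lma_step(W, X, D, mu):
    """One update of the fast LMA WM adaptive algorithm (6.85)."""
    W, X = np.asarray(W, dtype=float), np.asarray(X, dtype=float)
    y = wm_filter(W, X)                      # current estimate D_hat(n)
    e = D - y                                # error e(n)
    W_new = W + mu * e * np.sign(W) * np.sign(np.sign(W) * X - y)
    return W_new, y

# One step with W = (1, 1, 1), window X = (0, 2, 5), desired D = 4:
# the output is the median, 2, and the weights of the samples above
# (below) the output are increased (decreased), pushing the estimate up.
W_new, y = fast_lma_step([1.0, 1.0, 1.0], [0.0, 2.0, 5.0], 4.0, mu=0.1)
```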
Figure 6.16 Weight updates when (a) D(n) > D̂(n), and (b) D(n) < D̂(n). The signed samples are denoted as either (-1)X_i or (+1)X_i.
An example of the contours of the cost function for the optimization of a WM filter is shown in Figure 6.17. The cost function is not continuous: it is composed of constant regions, represented in the figure by different levels of gray and separated by sharp transitions. The objective of the optimization algorithm is to find the region with the lowest value (displayed in the figure in black). The plot shown represents the cost as a function of two out of seven weights. The white line represents the path followed by the LMA algorithm during the optimization.
EXAMPLE 6.10 (DESIGN OF AN OPTIMAL HIGH-PASS WM FILTER)

Having the optimization framework at hand, consider next the design of a high-pass WM filter whose objective is to preserve a high-frequency tone while removing all low-frequency terms. Figure 6.18a depicts a two-tone signal with normalized frequencies of 0.04 and 0.4 Hz. Figure 6.18b shows the multitone signal filtered by a 28-tap linear FIR filter designed by MATLAB's fir1 function with a normalized cutoff frequency of 0.2 Hz. The fast adaptive LMA algorithm was used to optimize a WM filter with 28 weights. These weights, in turn, were used to filter the multitone signal, resulting in the estimate shown in Figure 6.18c. The low-frequency components have clearly been filtered out; there are, however, some minor artifacts present. Figure 6.18d depicts the WM filter output when the weight values of the linear FIR filter are used. Although the frequency content of the output signal is within the specifications, there is significant distortion in the amplitude of the signal in Figure 6.18d. Next, Yin et al.'s fast adaptive LMA algorithm was used to optimize a WM smoother with 28 (positive) weights. The filtered signal attained with the optimized weights is shown in Figure 6.18e. The weighted median smoother clearly fails to remove the low-frequency components, as expected; its output closely resembles the input signal, as that is the closest output to the desired signal it can produce.
Figure 6.17 Contours of the cost function of the optimization of a WM filter, and the weight optimization trajectory for two filter weights.

The step size used in all adaptive optimization experiments was ...
The performance of the adaptive LMA algorithm in (6.80) and of the fast adaptive LMA algorithm in (6.85) were very similar. The algorithm in (6.80), however, converges somewhat faster than the algorithm in (6.85). This is not surprising, as the fast algorithm uses the most important information available, but not all of it, for the update. Figure 6.19 shows a single-realization learning curve for the fast adaptive LMA WM filter algorithm in (6.85) and the ensemble average of 1000 realizations of the same algorithm. It can be seen that 200 iterations were needed for the fast adaptive LMA algorithm to converge. The algorithm in (6.80) required only 120 iterations; due to its computational load, however, the fast LMA algorithm would be preferred in most applications.

The mean absolute error (MAE) between the desired signal and the output of the various filters is summarized in Table 6.4, where the advantage of allowing negative weights in the median filter structure is readily seen. The performance of the LMA WM optimization and of the fast implementation are equivalent. The linear filter outperforms the median structures in the noise-free case, as expected.

Having designed the various high-pass filters in a noiseless environment, their performance on signals embedded in noise is tested next. Stable noise with α = 1.4 was
Figure 6.18 (a) Two-tone input signal and output from (b) linear FIR high-pass filter, (c) optimal WM filter, (d) WM filter using the linear FIR weight values, (e) optimal WM smoother with non-negative weights.
added to the two-tone signal. Rather than training the various filters in this noisy environment, we used the same filter coefficients as in the noise-free simulations. Figure 6.20a-d illustrates the results. The MAEs for the linear filter, WM filter, and WM smoother were computed as 0.979, 0.209, and 0.692, respectively. As expected, the outputs of the weighted median filter and smoother are not affected, whereas the output of the linear filter is severely degraded, as the linear high-pass filter amplifies the high-frequency noise. Table 6.4 summarizes the MAE values attained by the various filters.

Table 6.4 Mean Absolute Filtering Errors

    Filter                     Noise free    With stable noise
    Linear FIR                 0.012         0.979
    Optimal WM smoother        0.688         0.692
    WMF with FIR weights       0.501         0.530
    Optimal WMF (fast alg.)    0.191         0.209
    Optimal WMF                0.190         0.205
Figure 6.19 Learning characteristics of the fast LMA adaptive WM filter algorithm admitting real-valued weights; the dotted line represents a single realization, the solid line the average of 1000 realizations.
Figure 6.20 (a) Two-tone signal in stable noise (α = 1.4), (b) linear FIR filter output, (c) WM filter output, (d) WM smoother output with positive weights.
Having the framework for weighted median filters, it is natural to extend it to other, more general signal processing structures. Arce and Paredes (2000) [14] defined the class of recursive weighted median filters admitting real-valued weights. These filters are analogous to the class of infinite impulse response (IIR) linear filters. Recursive filter structures are particularly important because they can be used to model "resonances" that appear in many natural phenomena, such as in speech. In fact, in the linear filtering framework, a large number of systems can be better characterized by a pole-zero transfer function than by a transfer function containing only zeros. In addition, IIR linear filters often lead to reduced computational complexity. Much as IIR linear filters provide these advantages over linear FIR filters, recursive WM filters also exhibit superior characteristics compared with nonrecursive WM filters: recursive WM filters can synthesize nonrecursive WM filters of much larger window sizes, and, in terms of noise attenuation, recursive median smoothers have far superior characteristics compared with their nonrecursive counterparts [5, 8].

The general structure of linear IIR filters is defined by the difference equation

    Y(n) = Σ_{ℓ=1}^{N} A_ℓ Y(n - ℓ) + Σ_{k=-M_2}^{M_1} B_k X(n + k),   (6.86)
where the output is formed not only from the input, but also from previously computed outputs. The filter weights consist of two sets: the feedback coefficients {A_ℓ} and the feed-forward coefficients {B_k}. In all, N + M_1 + M_2 + 1 coefficients are needed to define the recursive difference equation in (6.86).

The generalization of (6.86) to a recursive WM filter structure is straightforward. Following the approach used for nonrecursive WM filters, the summation operation is replaced with the median operation, and the multiplicative weighting is replaced by weighting through signed replication:

    Y(n) = MEDIAN( |A_ℓ| ⋄ sgn(A_ℓ) Y(n - ℓ) |_{ℓ=1}^{N},  |B_k| ⋄ sgn(B_k) X(n + k) |_{k=-M_2}^{M_1} ).   (6.87)

A noncausal implementation is assumed from now on, where M_2 = 0 and M_1 = M, leading to the following definition:

DEFINITION 6.4 (RECURSIVE WEIGHTED MEDIAN FILTERS) Given a set of N real-valued feedback coefficients A_ℓ |_{ℓ=1}^{N} and a set of M + 1 real-valued feed-forward coefficients B_k |_{k=0}^{M}, the (M + N + 1)-coefficient recursive WM filter output is defined as

    Y(n) = MEDIAN( |A_N| ⋄ sgn(A_N) Y(n - N), ..., |A_1| ⋄ sgn(A_1) Y(n - 1),
                   |B_0| ⋄ sgn(B_0) X(n), ..., |B_M| ⋄ sgn(B_M) X(n + M) ).   (6.88)

Note that if the weights A_ℓ and B_k are constrained to be positive, (6.88) reduces to the recursive WM smoother described in Chapter 5. For short notation, recursive WM filters will be denoted with double angle brackets, with the center weight (here B_0) marked; the recursive WM filter in (6.88) is, for example, denoted by ⟨⟨A_N, ..., A_1, B_0, B_1, ..., B_M⟩⟩. The recursive WM filter output for noninteger weights can be determined as follows:
(1) Calculate the threshold T_0 = (1/2) ( Σ_{ℓ=1}^{N} |A_ℓ| + Σ_{k=0}^{M} |B_k| ).

(2) Jointly sort the signed past output samples sgn(A_ℓ)Y(n - ℓ) and the signed input observations sgn(B_k)X(n + k).

(3) Sum the magnitudes of the weights corresponding to the sorted signed samples, beginning with the maximum and continuing down in order.

(4) If 2T_0 is an even number, the output is the average of the signed sample whose weight magnitude causes the sum to become >= T_0 and the next smaller signed sample; otherwise, the output is the signed sample whose weight magnitude causes the sum to become >= T_0.
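The four steps above can be sketched directly in Python; applied to the data of Example 6.11 that follows, the sketch reproduces the output -1.5 (it is the even value of 2T_0 that triggers the two-sample average of step (4)). The argument ordering conventions are assumptions of this sketch.

```python
import numpy as np

def recursive_wm_output(A, B, past_Y, window_X):
    """One output of the recursive WM filter <<A_N, ..., A_1, B_0, ..., B_M>>.
    A = [A_1, ..., A_N]; B = [B_0, ..., B_M];
    past_Y = [Y(n-1), ..., Y(n-N)]; window_X = [X(n), ..., X(n+M)]."""
    weights = np.concatenate((A, B)).astype(float)
    samples = np.concatenate((past_Y, window_X)).astype(float)
    signed = np.sign(weights) * samples           # step 2: signed samples
    order = np.argsort(-signed)                   # sorted, largest first
    s, w = signed[order], np.abs(weights)[order]
    t0 = 0.5 * w.sum()                            # step 1: threshold T_0
    csum = np.cumsum(w)                           # step 3: partial sums
    k = np.searchsorted(csum, t0)                 # first sum >= T_0
    two_t0 = 2.0 * t0
    if np.isclose(two_t0, np.rint(two_t0)) and int(np.rint(two_t0)) % 2 == 0:
        return 0.5 * (s[k] + s[k + 1])            # step 4: average with next smaller
    return s[k]                                   # step 4: selection case

# Data of Example 6.11: <<A_2, A_1, B_0, B_1, B_2, B_3>> = <<0.2, 0.4, 0.6, -0.4, 0.2, 0.2>>
y = recursive_wm_output(A=[0.4, 0.2], B=[0.6, -0.4, 0.2, 0.2],
                        past_Y=[2.0, -2.0], window_X=[-1.0, 3.0, 6.0, 8.0])
# y is -1.5, matching the worked example
```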
EXAMPLE 6.11

Consider the window-size-6 recursive WM filter defined by the real-valued weights ⟨⟨A_2, A_1, B_0, B_1, B_2, B_3⟩⟩ = ⟨⟨0.2, 0.4, 0.6, -0.4, 0.2, 0.2⟩⟩, where the double angle brackets denote the recursive WM filter operation and B_0 = 0.6 is the center weight. The output of this filter operating on the observation set

    [Y(n-2), Y(n-1), X(n), X(n+1), X(n+2), X(n+3)]^T = [-2, 2, -1, 3, 6, 8]^T

is found as follows. Summing the absolute weights gives the threshold T_0 = (1/2)(|A_1| + |A_2| + |B_0| + |B_1| + |B_2| + |B_3|) = 1. The signed set of samples spanned by the filter's window, the sorted signed samples, their corresponding weights, and the partial sums of weights (from each ordered sample to the maximum) are:

    sample set in the window:          -2,   2,  -1,    3,   6,   8
    corresponding weights:            0.2, 0.4, 0.6, -0.4, 0.2, 0.2
    sorted signed samples:             -3,  -2,  -1,    2,   6,   8
    corresponding absolute weights:   0.4, 0.2, 0.6,  0.4, 0.2, 0.2
    partial weight sums:              2.0, 1.6, 1.4,  0.8, 0.4, 0.2

Thus, the output is Ŷ = -1.5: starting from the right (the maximum sample) and summing the weights, the threshold T_0 = 1 is not reached until the weight associated with -1 is added (the partial sum 1.4 is the first that meets or exceeds the threshold); since 2T_0 = 2 is an even number, the output is the average of -1 and the next smaller signed sample, -2.

Note that the definition of the weighted median operation with real-valued weights used here is consistent with both the definition of the median operation when the window size is an even number and the definition of the WM operation when the sum of the integer-valued weights adds up to an even number, in the sense that the filter's output is the average of two samples. The reason for using the average of two signed samples as the output of the recursive WM filter is that it allows the use of recursive WM filters with suitable weights in band-pass or high-pass applications, where the filter should output zero when a DC component is present at the input. In addition, this overcomes the limitations of a selection-type filter, whose output is constrained to be one of the input samples.

The signed samples in the window of the recursive WM filter at time n are denoted by the vector S(n) = [S_Y^T(n), S_X^T(n)]^T, where

    S_Y(n) = [sgn(A_1)Y(n-1), sgn(A_2)Y(n-2), ..., sgn(A_N)Y(n-N)]^T

is the vector containing the signed past output samples, and

    S_X(n) = [sgn(B_0)X(n), sgn(B_1)X(n+1), ..., sgn(B_M)X(n+M)]^T

denotes the vector containing the signed input observation samples used to compute the filter's output at time n. The ith order statistic of S(n) is denoted as S_(i)(n), i = 1, ..., L, where S_(1)(n) <= S_(2)(n) <= ... <= S_(L)(n), with L = N + M + 1 the window size. Note that S_(i) is the joint order statistic of the signed past output samples in S_Y and the signed input observation samples in S_X. Furthermore, we let A = [A_1, A_2, ..., A_N]^T and B = [B_0, B_1, ..., B_M]^T be the vectors containing the feedback and feed-forward filter coefficients, respectively.
Stability of Recursive WM Filters

One of the main problems in the design of linear IIR filters is stability under the bounded-input bounded-output (BIBO) criterion, which establishes certain constraints on the feedback coefficient values. In order to guarantee the BIBO stability of a linear IIR filter, the poles of its transfer function must lie within the unit circle in the complex plane [4]. Unlike linear IIR filters, recursive WM filters are guaranteed to be stable.

PROPERTY 6.1 Recursive weighted median filters, as defined in (6.88), are stable under the bounded-input bounded-output criterion, regardless of the values taken by the feedback coefficients {A_ℓ} for ℓ = 1, 2, ..., N.

The proof of this property is left as an exercise. The importance of Property 6.1 cannot be overstated, as the design of recursive WM filters is not as delicate as that of their linear counterparts.

6.4.1 Threshold Decomposition Representation of Recursive WM Filters
The threshold decomposition property states that a real-valued vector X = [X_1, X_2, ..., X_L]^T can be represented by a set of binary vectors x^q ∈ {-1, 1}^L, q ∈ (-∞, ∞), where

    x_i^q = sgn(X_i - q),   (6.89)

and sgn(·) denotes the sign function. The original vector X can be exactly reconstructed from its binary representation through the inverse process as

    X_i = (1/2) ∫_{-∞}^{∞} x_i^q dq,   (6.90)

for i = 1, ..., L. Using the threshold signal decomposition in (6.89) and (6.90), the recursive WM operation in (6.88) can be expressed as

    Y(n) = MEDIAN( |A_ℓ| ⋄ (1/2) ∫_{-∞}^{∞} sgn[sgn(A_ℓ)Y(n - ℓ) - q] dq |_{ℓ=1}^{N},
                   |B_k| ⋄ (1/2) ∫_{-∞}^{∞} sgn[sgn(B_k)X(n + k) - q] dq |_{k=0}^{M} ).   (6.91)

At this point, we resort to the weak superposition property of the nonlinear median operator described in Section 6.3.1, which states that applying a weighted median operator to a real-valued signal is equivalent to decomposing the real-valued signal using threshold decomposition, applying the median operator to each binary signal separately, and then adding the binary outputs to obtain the real-valued output. This superposition property allows the integral and median operators in the above expression to be interchanged; thus, (6.91) becomes

    Y(n) = (1/2) ∫_{-∞}^{∞} MEDIAN( |A_ℓ| ⋄ sgn[sgn(A_ℓ)Y(n - ℓ) - q] |_{ℓ=1}^{N},
                                    |B_k| ⋄ sgn[sgn(B_k)X(n + k) - q] |_{k=0}^{M} ) dq.   (6.92)
To simplify the above expression, let {s_Y^q} and {s_X^q} denote the threshold decompositions of the signed past output samples and the signed input samples, respectively; that is,

    s_Y^q(n) = [sgn[sgn(A_1)Y(n-1) - q], ..., sgn[sgn(A_N)Y(n-N) - q]]^T,
    s_X^q(n) = [sgn[sgn(B_0)X(n) - q], ..., sgn[sgn(B_M)X(n+M) - q]]^T,   (6.93)

where q ∈ (-∞, +∞). Furthermore, we let s^q(n) = [[s_Y^q(n)]^T, [s_X^q(n)]^T]^T be the threshold decomposition representation of the vector S(n) = [S_Y^T(n), S_X^T(n)]^T containing the signed samples. With this notation, and following an approach similar to that presented in Section 6.3.1, it can be shown that (6.92) reduces to

    Y(n) = (1/2) ∫_{-∞}^{∞} sgn( A_a^T s_Y^q(n) + B_a^T s_X^q(n) ) dq,   (6.94)

where A_a is the vector whose elements are the magnitudes of the feedback coefficients, A_a = [|A_1|, |A_2|, ..., |A_N|]^T, and B_a is the vector whose elements are the magnitudes of the feed-forward coefficients, B_a = [|B_0|, |B_1|, ..., |B_M|]^T. Note in (6.94) that the filter's output depends on the signed past outputs, the signed input observations, and the magnitudes of the feedback and feed-forward coefficients.

Optimal Recursive Weighted Median Filtering
The main objective here is to find the best filter coefficients such that a performance cost criterion is minimized. Consider an observed process {X(n)} that is statistically related to a desired process {D(n)}; further, assume that both processes are jointly stationary. Under the mean absolute error (MAE) criterion, the goal is to determine the weights {A_ℓ} and {B_k} so as to minimize the cost function

    J(A_1, ..., A_N, B_0, ..., B_M) = E{ |D(n) - Y(n)| },   (6.95)

where E{·} denotes statistical expectation and Y(n) is the output of the recursive WM filter given in (6.88). To form an iterative optimization algorithm, the steepest descent method is used, in which the filter coefficients are updated according to

    A_ℓ(n+1) = A_ℓ(n) - μ ∂J/∂A_ℓ,    B_k(n+1) = B_k(n) - μ ∂J/∂B_k,   (6.96)

for ℓ = 1, ..., N and k = 0, ..., M. Note that in (6.96) the gradient of the cost function (∇J) must be computed in order to update the filter weights. Due to the feedback inherent in the recursive WM filter, however, the computation of ∇J becomes intractable. To overcome this problem, the optimization framework referred to as the equation error formulation is used [176]. The equation error formulation is used in the design of linear IIR filters and is based on the fact that, ideally, the filter's output is close to the desired response. The lagged values of Y(n) in (6.88) can thus be replaced with the corresponding lagged values of D(n). Hence, the previous outputs Y(n - ℓ) are replaced with the previous desired outputs D(n - ℓ), to obtain a two-input, single-output filter that depends on the input samples X(n + k) and on delayed samples of the desired response D(n - ℓ); namely,
l?)lzl + KO
?(n) = MEDIAN(IANI osgn(AN)D(n - N ) ,. . . , IAl(osgn(AI)D(n - l),
P o l 0 sgn(Bo)X(n),. . . , IBMI0 sgn(BM)X(n + M I ) . (6.97) The approximation leads to an output Y ( n )that does not depend on delayed output samples and, therefore, the filter no longer
introduces feedback reducing the output to a nonrecursive system. This recursive decoupling optimization approach provides the key to a gradient-basedoptimization algorithm for recursive WM filters.
According to the approximate filtering structure, the cost function to be minimized is

J~(A_1, ..., A_N, B_0, ..., B_M) = E{|D(n) - Y^(n)|},    (6.98)

where Y^(n) is the nonrecursive filter output (6.97). Since D(n) and X(n) are not functions of the feedback coefficients, the derivative of J~(A_1, ..., A_N, B_0, ..., B_M) with respect to the filter weights is nonrecursive, and its computation is straightforward. The adaptive optimization algorithm uses the steepest descent method (6.96), with J(.) replaced by J~(.), and is
derived as follows. Define the vector S(n) = [S_D^T(n), S_X^T(n)]^T as that containing the signed samples in the sliding window of the two-input, single-output nonrecursive filter (6.97) at time n, where

S_D(n) = [sgn(A_1)D(n-1), sgn(A_2)D(n-2), ..., sgn(A_N)D(n-N)]^T
and S_X(n) is given by (6.93). With this notation and using threshold decomposition, (6.98) becomes

J~ = E{ (1/2) INT |sgn(D(n) - q) - sgn(A^T s_D^q(n) + B^T s_X^q(n))| dq },    (6.99)

where A = (|A_1|, ..., |A_N|)^T and B = (|B_0|, ..., |B_M|)^T,
and where {s_D^q(n)} is the corresponding threshold decomposition of the vector S_D(n). Now, let e^q(n) be the argument inside the integral operator, such that e^q(n) = sgn(D(n) - q) - sgn(A^T s_D^q(n) + B^T s_X^q(n)). Note that e^q(n) can be thought of as the threshold decomposition of the error function e(n) = D(n) - Y^(n) for a fixed n. Figure 6.21 shows e^q(n) for two different cases. Figure 6.21a shows the case where the desired filter output D(n) is less than the filter output Y^(n). Figure 6.21b shows the second case, where the desired filter output D(n) is greater than the filter output Y^(n). The case where the desired response is equal to the filter output is not shown in Figure 6.21. Note that, for a fixed n, the integral operator in (6.99) acts on a strictly negative function (Figure 6.21a) or a strictly positive function (Figure 6.21b); therefore, the absolute value and integral operators in (6.99) can be interchanged, leading to
J~(A_1, ..., A_N, B_0, B_1, ..., B_M) = (1/2) INT E{|e^q(n)|} dq,

where we have used the linearity of the expectation.
Figure 6.21 Threshold decompositions of the desired signal D(n), filter output Y^(n), and error function e(n) = D(n) - Y^(n): (a) D(n) < Y^(n), and (b) D(n) > Y^(n).
Figure 6.21 also depicts that e^q(n) can only take on values in the set {-2, 0, 2}; therefore, the absolute value operator can be replaced by a properly scaled second power operator. Thus

J~(A_1, ..., A_N, B_0, B_1, ..., B_M) = (1/4) INT E{(e^q(n))^2} dq.
Taking derivatives of the above expression with respect to the filter coefficients A_l and B_k yields, respectively, gradient expressions involving the derivative of the sgn(.) function. Since the sgn(.) function has a discontinuity at the origin, its derivative introduces a Dirac delta, which is not convenient for further analysis. To overcome this difficulty, the sgn function is approximated by the differentiable hyperbolic tangent function, sgn(z) ~ tanh(z), whose derivative is sech^2(z). Using this approximation and letting W = (A_1, ..., A_N, B_0, ..., B_M)^T, the derivation of the adaptive algorithm follows steps similar to those used in the derivation of the adaptive algorithm for nonrecursive WM filters. This leads to the following fast LMA adaptive algorithm for recursive WM filters
for l = 1, 2, ..., N and k = 0, 1, ..., M. As with the nonrecursive case, this adaptive algorithm is nonlinear and a convergence analysis cannot be derived; thus, the step size mu cannot be easily bounded. On the other hand, experimentation has shown that selecting the step size of this algorithm on the same order as that required for the standard LMS algorithm gives reliable results. Another approach is to use a variable step size mu(n), where mu(n) decreases as the training progresses.
EXAMPLE 6.12 (IMAGE DENOISING)

The original portrait image used in the simulations is corrupted with impulsive noise. Each pixel in the image has a 10 percent probability of being contaminated with an impulse. The impulses occur randomly and were generated using MATLAB's imnoise function. The noisy image is filtered by a 3 x 3 recursive center WM filter and by a 3 x 3 nonrecursive center WM filter with the same set of weights [115]. Figures 6.22a and 6.22b show their respective filter outputs with a center weight W_c = 5. Note that the recursive WM filter is more effective than its nonrecursive counterpart. A small 60 x 60 pixel area in the upper left part of the original and noisy images is used to train the recursive WM filter using the fast LMA algorithm. The same training data are used to train a nonrecursive WM filter. The initial conditions for the weights of both algorithms were the filter coefficients of the center WM filters described above. The step size used was 10^-3 for both adaptive algorithms. The optimal weights found by the adaptive algorithms are
Figure 6.22 Image denoising using 3 x 3 recursive and nonrecursive WM filters: (a) nonrecursive center WM filter (PSNR = 26.81 dB), (b) recursive center WM filter (PSNR = 28.33 dB), (c) optimal nonrecursive WM filter (PSNR = 29.91 dB), (d) optimal RWM filter (PSNR = 34.87 dB).
1.38  1.64  1.32
1.50  5.87  2.17
0.63  1.36  2.24
Table 6.5 Results for impulsive noise removal
Normalized MSE
Normalized MAE
Noisy image
Nonrecursive center WM filter
Recursive center WM filter
Optimal nonrecursive WM filter
Optimal RWM filter
for the nonrecursive WM filter and
1.24 1.52 2.34
1.95  0.78  2.46

for the RWM filter, where the underlined weight is associated with the center sample of the 3 x 3 window. The optimal filters determined by the training algorithms were used to filter the entire image. Figures 6.22d and 6.22c show the output of the optimal RWM filter and the output of the optimal nonrecursive WM filter, respectively. The normalized mean square errors and the normalized mean absolute errors produced by each of the filters are listed in Table 6.5. As can be seen by a visual comparison of the various images and by the error values, recursive WM filters outperform nonrecursive WM filters. Figures 6.23 and 6.24 repeat the denoising example, except that the image is now corrupted with stable noise (alpha = 1.2). The sets of weights from the previous example are used without further optimization. Conclusions similar to those of the "salt-and-pepper" noise example can be drawn from Figures 6.23 and 6.24.
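The 3 x 3 recursive center WM smoothing used in this example can be sketched as follows; this is an illustrative implementation of ours (raster-scan feedback and edge replication assumed), not the code used to generate the figures:

```python
import numpy as np

def center_wm_3x3(img, wc=5, recursive=True):
    # 3x3 (recursive) center weighted median smoother: the center pixel
    # is replicated wc times inside the 3x3 neighborhood.  In the
    # recursive version, already-filtered pixels (above and to the left
    # in raster order) are fed back in place of the original ones.
    src = np.pad(np.asarray(img, float), 1, mode='edge')
    out = src.copy()
    work = out if recursive else src   # feedback source
    for r in range(1, src.shape[0] - 1):
        for c in range(1, src.shape[1] - 1):
            win = work[r - 1:r + 2, c - 1:c + 2].ravel().tolist()
            win += [work[r, c]] * (wc - 1)   # replicate the center sample
            out[r, c] = np.median(win)
    return out[1:-1, 1:-1]
```

With W_c = 5 the center weight is still a minority of the total weight 13, so an isolated impulse at the center is rejected, which is the behavior seen in Figures 6.22a and 6.22b.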
EXAMPLE 6.13 (DESIGN OF A BAND-PASS RWM FILTER)

Here the LMA and fast LMA adaptive optimization algorithms are used to design a robust band-pass recursive WM filter. The performance of the designed recursive WM filter is compared with the performances of a linear FIR filter, a linear IIR filter,
Figure 6.23 Image denoising using 3 x 3 recursive and nonrecursive WM filters: (a) original, (b) image with stable noise (PSNR = 21.35 dB), (c) nonrecursive center WM filter (PSNR = 31.11 dB), (d) recursive center WM filter (PSNR = 31.41 dB).
Figure 6.24 Image denoising using 3 x 3 recursive and nonrecursive WM filters (continued): (a) original, (b) image with stable noise, (c) optimal nonrecursive WM filter (PSNR = 32.50 dB), (d) optimal RWM filter (PSNR = 33.99 dB).
and a nonrecursive WM filter, all designed for the same task. Moreover, to show the noise attenuation capability of the recursive WM filter and compare it with those of the other filters, an impulsive noisy signal is used as the test signal. The application at hand is the design of a 62-tap band-pass RWM filter with pass band 0.075 <= w <= 0.125 (normalized frequency with Nyquist = 1). White Gaussian noise with zero mean and unit variance is used as the input training signal. The desired signal is provided by the output of a large FIR filter (a 122-tap linear FIR filter) designed by MATLAB's fir1 function. The 31 feedback filter coefficients were initialized to small random numbers (on the order of 10^-3). The feedforward filter coefficients were initialized to the values output by MATLAB's fir1 with 31 taps and the same pass band of interest. A variable step size mu(n) was used in both adaptive optimizations, where mu(n) decays as mu_0 e^(-n/100) from its initial value mu_0.
A signal that spans all the frequencies of interest is used as a test signal. Figure 6.25a depicts a linear swept-frequency signal spanning instantaneous frequencies from 0 to 400 Hz, with a sampling rate of 2 kHz. Figure 6.25b shows the chirp signal filtered by the 122-tap linear FIR filter that produced the desired signal during the training stage. Figure 6.25c shows the output of a 62-tap linear FIR filter, used here for comparison purposes. The adaptive optimization algorithm described in Section 6.3 was used to optimize a 62-tap nonrecursive WM filter admitting negative weights. The signal filtered with the optimized weights is shown in Figure 6.25d. Note that the nonrecursive WM filter tracks the frequencies of interest but fails to completely attenuate the frequencies outside the desired pass band. MATLAB's yulewalk function was used to design a 62-tap linear IIR filter with pass band 0.075 <= w <= 0.125. Figure 6.25e depicts the linear IIR filter's output. Finally, Figure 6.25f shows the output of the optimal recursive WM filter determined by the LMA training algorithm. Note that the frequency components of the test signal that are not in the pass band are attenuated completely. Moreover, the RWM filter generalizes very well on signals that were not used during the training stage. The optimal RWM filter determined by the fast LMA training algorithm yields performance similar to that of the optimal RWM filter determined by the LMA training algorithm; therefore, its output is not shown. Comparing the different filtered signals in Figure 6.25, it can be seen that the recursive filtering operation outperforms its nonrecursive counterpart having the same number of coefficients.
Alternatively, to achieve a specified level of performance, a recursive WM filter generally requires fewer filter coefficients than the corresponding nonrecursive WM filter. In order to test the robustness of the different filters, the test signal is contaminated with additive alpha-stable noise, as shown in Figure 6.26a. The parameter alpha = 1.4 was used, simulating noise with impulsive characteristics. Figure 6.26a is truncated so that the same scale is used in all the plots. Figures 6.26b and 6.26d show the filter outputs of the linear FIR and the linear IIR filters, respectively. Both outputs are severely affected by the noise. On the other hand, the nonrecursive and recursive WM filters' outputs, shown in Figures 6.26c and 6.26e respectively, remain practically unaltered. Figure 6.26 clearly depicts the robust characteristics of median-based filters.
Figure 6.25 Band-pass filter design: (a) input test signal, (b) desired signal, (c) linear FIR filter output, (d) nonrecursive WM filter output, (e) linear IIR filter output, (f) RWM filter output.
To better evaluate the frequency response of the various filters, a frequency-domain analysis is performed. Due to the nonlinearity inherent in the median operation, traditional linear tools, such as transfer-function-based analysis, cannot be applied. However, if a nonlinear filter is treated as a single-input, single-output system, the magnitude of its frequency response can be experimentally obtained as follows. A single
Figure 6.26 Performance of the band-pass filter in noise: (a) chirp test signal in stable noise, (b) linear FIR filter output, (c) nonrecursive WM filter output, (d) linear IIR filter output, (e) RWM filter output.
tone sinusoidal signal sin(2*pi*f*t) is given as the input to each filter, where f spans the complete range of possible frequencies. A sufficiently large number of frequencies spanning the interval [0, 1] is chosen. For each frequency value, the mean power of each filter's output is computed. Figure 6.27a shows a plot of the normalized mean power versus frequency attained by the different filters. Upon closer examination of Figure 6.27a, it can be seen that the recursive WM filter yields the flattest response in the pass band of interest. A similar conclusion can be drawn from the time domain plots shown in Figure 6.25.
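The single-tone sweep procedure described above can be sketched directly. The code below is illustrative (the helper names are ours, and a 5-point running median stands in for the filters under test):

```python
import numpy as np

def empirical_freq_response(filt, freqs, n=512):
    # Feed a unit sinusoid at each normalized frequency (Nyquist = 1)
    # into a (possibly nonlinear) single-input, single-output filter
    # and record the mean output power.
    t = np.arange(n)
    powers = [np.mean(filt(np.sin(np.pi * f * t)) ** 2) for f in freqs]
    return np.array(powers)

def median5(x):
    # 5-point running median used here as the device under test.
    xp = np.pad(x, 2, mode='edge')
    return np.array([np.median(xp[i:i + 5]) for i in range(len(x))])
```

Plotting the returned powers against freqs reproduces the kind of diagram shown in Figure 6.27.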
In order to see the effects that impulsive noise has on the magnitude of the frequency response, a contaminated sinusoidal signal, sin(2*pi*f*t) + eta, is given as the input to each filter, where eta is alpha-stable noise with parameter alpha = 1.4. Following the same procedure described above, the mean power versus frequency diagram is obtained and shown in Figure 6.27b. As expected, the magnitudes of the frequency responses of the linear filters are highly distorted, whereas the magnitudes of the frequency responses of the median-based filters do not change significantly with noise.
Figure 6.27 Frequency response (a) to a noiseless sinusoidal signal and (b) to a noisy sinusoidal signal: (solid) RWM filter, (dash-dot) nonrecursive WM filter, (dashed) linear FIR filter, and (dashed) linear IIR filter.
6.5 MIRRORED THRESHOLD DECOMPOSITION AND STACK FILTERS

The threshold decomposition architecture provides the foundation needed for the definition of stack smoothers. The class of stack filters can be defined in a similar fashion, provided that a more general threshold decomposition architecture, referred to as mirrored threshold decomposition, is defined by Paredes and Arce (1999) [158]. Unlike stack smoothers, stack filters can be designed to have arbitrary frequency selection characteristics. Consider again the set of integer-valued samples X_1, X_2, ..., X_N forming the vector X. For simplicity, the input signals are quantized into a finite set of values with X_i in {-M, ..., -1, 0, ..., M}. Unlike threshold decomposition, mirrored threshold decomposition of X generates two sets of binary vectors, each consisting of 2M vectors. The first set consists of the 2M vectors associated with the traditional definition of threshold decomposition, x^(-M+1), x^(-M+2), ..., x^0, ..., x^M. The second set of vectors is associated with the decomposition of the mirrored vector of X, which is defined as

S = -X = [-X_1, -X_2, ..., -X_N]^T.
Since S_i takes on values symmetric about the origin from X_i, S_i is referred to as the mirror sample of X_i, or simply as the signed sample of X_i. Threshold decomposition of S leads to the second set of 2M binary vectors s^(-M+1), s^(-M+2), ..., s^0, ..., s^M. The ith element of x^m is, as before, specified by

x_i^m = T^m(X_i) = { 1 if X_i >= m; -1 if X_i < m },    (6.105)

whereas the ith element of s^m is defined by

s_i^m = T^m(S_i) = T^m(-X_i),    (6.106)

so that the thresholded mirror signal can be written as s_i^m = sgn(-X_i - m) = -sgn(X_i + m - 1). X_i and S_i are both reversible from their corresponding sets of decomposed signals; consequently, an integer-valued signal X_i has a unique mirrored threshold signal representation, and vice versa:
X_i <-> ({x_i^m}; {s_i^m}),

where <-> denotes the one-to-one mapping provided by the mirrored threshold decomposition operation. Each of the threshold-decomposed signal sets possesses the stacking constraints independently of the other set. In addition, since the vector S is the mirror of X, a partial ordering relation exists between the two sets of thresholded signals. With S_i = -X_i, the thresholded samples satisfy s_i^m = -x_i^(-m+1) for m = -M+1, ..., M. As an example, the representation of the vector X = [2, -1, 0, -2, 1, 2, 0]^T in the binary domain of mirrored threshold decomposition is
x^2 = [1, -1, -1, -1, -1, 1, -1]^T      s^2 = [-1, -1, -1, 1, -1, -1, -1]^T
x^1 = [1, -1, -1, -1, 1, 1, -1]^T       s^1 = [-1, 1, -1, 1, -1, -1, -1]^T
x^0 = [1, -1, 1, -1, 1, 1, 1]^T         s^0 = [-1, 1, 1, 1, -1, -1, 1]^T
x^(-1) = [1, 1, 1, -1, 1, 1, 1]^T       s^(-1) = [-1, 1, 1, 1, 1, -1, 1]^T
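The example is easy to verify numerically. The following sketch (our own helper, following (6.105) and (6.106)) generates both sets of binary vectors and can be checked against the lists above, including the mirror identity s_i^m = -x_i^(-m+1):

```python
import numpy as np

def mirrored_td(x, M):
    # Mirrored threshold decomposition of an integer vector with values
    # in {-M, ..., M}: thresholds m = -M+1, ..., M are applied to X and
    # to its mirror S = -X, using T^m(v) = 1 if v >= m, else -1.
    x = np.asarray(x)
    levels = range(-M + 1, M + 1)
    xd = {m: np.where(x >= m, 1, -1) for m in levels}
    sd = {m: np.where(-x >= m, 1, -1) for m in levels}
    return xd, sd
```

Running it on X = [2, -1, 0, -2, 1, 2, 0] with M = 2 reproduces the eight vectors listed above.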
6.5.1 Stack Filters
Much as traditional threshold decomposition leads to the definition of stack smoothers, mirrored threshold decomposition leads to the definition of a richer class of nonlinear filters referred to as stack filters. The output of a stack filter is the result of a sum of a stack of binary operations acting on thresholded versions of the input samples and their corresponding mirrored samples. The stack filter output is defined by

S_f(X_1, ..., X_N) = (1/2) SUM_{m=-M+1}^{M} f(x_1^m, ..., x_N^m; s_1^m, ..., s_N^m),    (6.107)
where x_i^m and s_i^m, i = 1, ..., N, are the thresholded samples defined in (6.105) and (6.106), and where f(.) is a 2N-variable positive Boolean function (PBF) that, by definition, contains only uncomplemented input variables. Given an input vector X, its mirrored vector S, and their sets of thresholded binary vectors x^(-M+1), ..., x^0, ..., x^M; s^(-M+1), ..., s^0, ..., s^M, it follows from the definition of threshold decomposition that the set of thresholded binary vectors satisfies the partial ordering

[x^i; s^i] <= [x^j; s^j] if i >= j.    (6.108)

Thus, the x_k^i in {-1, 1} and s_k^i in {-1, 1} stack; that is, x_k^i <= x_k^j and s_k^i <= s_k^j if i >= j, for all k in {1, ..., N}. Consequently, the stack filtering of the thresholded binary vectors by the PBF f(.) also satisfies the partial ordering

f(x^i; s^i) <= f(x^j; s^j) if i >= j.    (6.109)

The stacking property in (6.109) ensures that the decisions on different levels are consistent. Thus, if the filter at a given time location decides that the signal is less than j, then the filter outputs at levels j + 1 and greater must draw the same conclusion. As defined in (6.107), stack filter input signals are assumed to be quantized to a finite number of signal levels. Following an approach similar to that taken with stack
smoothers, the class of stack filters admitting real-valued input signals is defined next.
DEFINITION 6.5 (CONTINUOUS STACK FILTERS) Given a set of N real-valued samples X = (X_1, X_2, ..., X_N), the output of a stack filter defined by a PBF f(.) is given by

S_f(X) = max{ l in R : f(T^l(X_1), ..., T^l(X_N); T^l(-X_1), ..., T^l(-X_N)) = 1 },    (6.110)

where the thresholding function T^l(.) is defined in (6.105). The link between the continuous stack filter S_f(.) and the corresponding PBF f(.) is given by the following property.
PROPERTY 6.2 (MAX-MIN REPRESENTATION OF STACK FILTERS) Let X = (X_1, X_2, ..., X_N) and S = (-X_1, -X_2, ..., -X_N) be a real-valued vector and its corresponding mirrored vector that are input to a stack filter S_f(.) defined by the positive Boolean function f(x_1, ..., x_N; s_1, ..., s_N). The PBF with the sum-of-products expression

f(x_1, ..., x_N; s_1, ..., s_N) = OR_{i=1}^{K} ( AND_{j in P_i} x_j AND_{k in Q_i} s_k ),    (6.111)

where P_i and Q_i are subsets of {0, 1, 2, ..., N}, has the stack filter representation

S_f(X) = max{ min{X_j, S_k : j in P_1, k in Q_1}, ..., min{X_j, S_k : j in P_K, k in Q_K} }    (6.112)

with X_0 = S_0 = infinity, and with P_i and Q_i not both containing the 0th element at once. Thus, given a positive Boolean function f(x_1, ..., x_N; s_1, ..., s_N) that characterizes a stack filter, it is possible to find the equivalent filter in the real domain by replacing the binary AND and OR Boolean functions acting on the x_i's and s_i's with min and max operations acting on the real-valued X_i and S_i, respectively.
Integer Domain Filters of Linearly Separable Positive Boolean Functions

In general, stack filters can be implemented by max-min networks in the integer domain. Although simple in concept, max-min networks lack an intuitive interpretation. However, if the PBFs in the stack filter representation are further constrained, a number of more appealing filter structures emerge. These filter structures are more intuitive to understand and, in many ways, they are similar to linear FIR filters. Yli-Harja et al. [202] describe the various types of stack smoothers attained when the PBFs are constrained to be linearly separable. Weighted order statistic smoothers and weighted median smoothers are, for instance, obtained if the PBFs are restricted to be linearly separable and self-dual linearly separable, respectively. A Boolean function f(x) is said to be linearly separable if and only if it can be expressed as
f(x_1, ..., x_N) = 1 if SUM_{i=1}^{N} W_i x_i >= T, and 0 otherwise,    (6.113)

where the x_i are binary variables, and the weights W_i and threshold T are nonnegative real numbers [174]. A self-dual linearly separable Boolean function is defined by further restricting (6.113) as

f(x_1, ..., x_N) = 1 if SUM_{i=1}^{N} W_i x_i >= (1/2) SUM_{i=1}^{N} W_i, and 0 otherwise.    (6.114)

A Boolean function f(x) is said to be self-dual if and only if f(x_1, x_2, ..., x_N) = 1 implies f(x~_1, x~_2, ..., x~_N) = 0, and f(x_1, x_2, ..., x_N) = 0 implies f(x~_1, x~_2, ..., x~_N) = 1, where x~ denotes the Boolean complement of x [145].
Within the mirrored threshold decomposition representation, a similar strategy can be taken, where the separable Boolean functions are progressively constrained, leading to a series of stack filter structures that can be easily implemented in the integer domain. In particular, weighted order statistic filters, weighted median filters, and order statistic filters emerge by appropriately selecting the PBF structure in the binary domain. Figure 6.28 depicts the relationship among subclasses of stack filters and stack smoothers. As this figure shows, stack filters are much richer than stack smoothers. The class of WOS filters, for example, contains all WOS and OS smoothers, whereas even the simplest OS filter is not contained in the entire class of stack smoothers.
Figure 6.28 Relationship among subclasses of stack filters and stack smoothers.
Stack Filter Representation of Weighted Median Filters

WM filters are generated if the positive Boolean function that defines the stack filter in (6.107) is constrained to be self-dual and linearly separable. In the binary domain of mirrored threshold decomposition, weighted median filters are defined in terms of the thresholded vectors x^m = [x_1^m, ..., x_N^m]^T and the corresponding thresholded mirror vectors s^m = [s_1^m, ..., s_N^m]^T as

f(x^m; s^m) = sgn( SUM_{i=1}^{N} W_i x_i^m + SUM_{i=1}^{N} |H_i| s_i^m - T_0 ),    (6.115)

where W = (W_1, W_2, ..., W_N)^T and |H| = (|H_1|, |H_2|, ..., |H_N|)^T are 2N positive-valued weights that uniquely characterize the WM filter. The constant T_0 is 0 or 1 if the weights are real-valued or integer-valued adding up to an odd integer, respectively. |.| represents the absolute value operator, and is used in the definition of binary-domain WM filters for reasons that will become clear shortly. The role that the W_i's and |H_i|'s play in WM filtering is very important, as described next. Since the threshold logic gate sgn(.) in (6.115) is self-dual and linearly separable, and since x_i^m and s_i^m respectively represent X_i and its mirror sample S_i = -X_i, the integer-domain representation of (6.115) is given by [145, 202]

Y = MEDIAN(W_1 <> X_1, |H_1| <> S_1, ..., W_N <> X_N, |H_N| <> S_N),    (6.116)
where W_i >= 0 and |H_i| >= 0. At this point it is convenient to associate the sign of the mirror sample S_i with the corresponding weight H_i as

Y = MEDIAN(W_1 <> X_1, H_1 <> X_1, ..., W_N <> X_N, H_N <> X_N),    (6.117)
leading to the following definition.
DEFINITION 6.6 (DOUBLE-WEIGHTED MEDIAN FILTER) Given the N-long observation vector X = [X_1, X_2, ..., X_N]^T, the set of 2N real-valued weights ((W_1, H_1), (W_2, H_2), ..., (W_N, H_N)) defines the double-weighted median filter output as

Y = MEDIAN((W_1, H_1) <> X_1, ..., (W_N, H_N) <> X_N)    (6.118)

with W_i >= 0 and H_i <= 0, where the equivalence H_i <> X_i = |H_i| <> sgn(H_i)X_i is used.
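Definition 6.6 translates directly into a computation: negate each sample for its negative weight, then take an ordinary weighted median over the 2N signed samples. A sketch (our own rendering, using the running-weight-sum weighted median):

```python
import numpy as np

def double_weighted_median(x, W, H):
    # Double-weighted median of Definition 6.6: each sample X_i enters
    # once weighted by W_i >= 0 and once negated, weighted by |H_i|
    # (H_i <= 0), via the equivalence H_i <> X_i = |H_i| <> sgn(H_i) X_i.
    x = np.asarray(x, float)
    samples = np.concatenate([x, -x])          # sgn(H_i) = -1 merged in
    weights = np.concatenate([np.asarray(W, float),
                              np.abs(np.asarray(H, float))])
    order = np.argsort(samples)
    s, w = samples[order], weights[order]
    cum = np.cumsum(w)
    return s[np.searchsorted(cum, 0.5 * cum[-1])]
```

For the Figure 6.29 data [-7, -2, 1, 5, 8] with weights (3, 2, 2, -3, 1), read here as W = (3, 2, 2, 0, 1) and H = (0, 0, 0, -3, 0), the output is -5.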
Thus, weighting in the WM filter structure is equivalent to uncoupling the weight sign from its magnitude, merging the sign with the observation sample, and replicating the signed sample according to the magnitude of the weight. Notice that each sample X_i in (6.117) is weighted twice: once positively by W_i and once negatively by H_i. In (6.118), the double weight (W_i, H_i) is defined to represent the positive and negative weighting of X_i. As expected, should one of the weights in each pair be constrained to be zero, the double-weighted median filter reduces to the N-weight median filter structure in Definition 6.1. The general form of the weighted median structure contains 2N weights, N positive and N negative. Double weighting emerges through the analysis of mirrored threshold decomposition and the stack filter representation. In some applications, the simpler weighted median filter structure, where a single real-valued weight is associated with each observation sample, may be preferred, in much the same way linear FIR filters use only N filter weights. At first, the WM filter structure in (6.118) seems redundant. After all, linear FIR filters require only a set of N weights, albeit real-valued.
The reason for this is the associative property of the sample mean. As shown in [6], the linear filter structure analogous to (6.117) is

MEAN(W_1 . X_1, |H_1| . S_1, ..., W_N . X_N, |H_N| . S_N)    (6.119)
  = MEAN((W_1 + H_1) . X_1, ..., (W_N + H_N) . X_N),    (6.120)

where W_i >= 0 and H_i <= 0 collapse to a single real-valued weight in (6.120). For the sample median, however,

MEDIAN((W_i, H_i) <> X_i |_{i=1}^{N}) != MEDIAN((W_i + H_i) <> X_i |_{i=1}^{N}),
thus the weight pair (W_i, H_i) is needed in general. Weighted median filters have an alternate interpretation. Extending the concepts in the cost-function representation of WM filters, it can be shown that the WM filter output in (6.118) is the value beta minimizing the cost function
G(beta) = SUM_{i=1}^{N} ( W_i |X_i - beta| + |H_i| |X_i + beta| ),    (6.122)
where beta can only be one of the samples X_i or -X_i, since (6.122) is piecewise linear and convex. Figure 6.29 depicts the effects of double weighting in WM filtering, where the absence of one element of the double weight (W_i, H_i) distorts the shape of the cost function G(beta) and can lead to a different global minimum.
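Since G(beta) is piecewise linear and convex, its minimizer can be found by direct search over the 2N candidates {X_i, -X_i}. The sketch below evaluates (6.122) this way (our own code; the weight set (3, 2, 2, -3, 1) of Figure 6.29 is read as W = (3, 2, 2, 0, 1), H = (0, 0, 0, -3, 0)):

```python
import numpy as np

def G(beta, x, W, H):
    # Cost function (6.122): sum of W_i|X_i - beta| + |H_i||X_i + beta|.
    x, W, H = (np.asarray(a, float) for a in (x, W, H))
    return np.sum(W * np.abs(x - beta) + np.abs(H) * np.abs(x + beta))

def wm_by_cost(x, W, H):
    # The minimizer of the piecewise linear, convex G lies among the
    # candidates {X_i} and {-X_i}; search them directly.
    cands = np.concatenate([np.asarray(x, float), -np.asarray(x, float)])
    return cands[np.argmin([G(b, x, W, H) for b in cands])]
```

The same search gives both curves of Figure 6.29 when the weight on X_4 is changed from the single weight -3 to the pair (2, -3).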
Stack Filter Representation of Recursive WM Filters

The WM filtering characteristics can be significantly enriched if the previous outputs are taken into account to compute future outputs. Recursive WM filters, taking advantage of prior outputs, exhibit significant advantages over their nonrecursive counterparts, particularly if negative as well as positive weights are used. Recursive WM filters can be thought of as the analog of linear IIR filters, with improved robustness and stability characteristics. Given an input observation X_k = [X_{k-L}, ..., X_k, ..., X_{k+L}]^T, the recursive counterpart of (6.115) is obtained by replacing the leftmost L samples of the input
Figure 6.29 An observation vector [X_1, X_2, X_3, X_4, X_5] = [-7, -2, 1, 5, 8] filtered by two sets of weights: (3, 2, 2, -3, 1) (solid line) and (3, 2, 2, (2, -3), 1) (dashed line), respectively. Double weighting of X_4 shows the distinct cost functions and minima attained.
vector X_k with the previous L output samples Y_{k-L}, ..., Y_{k-1}, leading to the definition:
DEFINITION 6.7 (RECURSIVE DOUBLE-WEIGHTED MEDIAN FILTER) Given the input observation vector containing past output samples X = [Y_{k-L}, ..., Y_{k-1}, X_k, ..., X_{k+L}]^T, the set of positive weights W_R = (W_{R_{-L}}, ..., W_{R_0}, ..., W_{R_L})^T together with the set of negative weights H_R = (H_{R_{-L}}, ..., H_{R_0}, ..., H_{R_L})^T define the output of the recursive double-weighted median filter as

Y_k = MEDIAN((W_{R_{-L}}, H_{R_{-L}}) <> Y_{k-L}, ..., (W_{R_0}, H_{R_0}) <> X_k, ...,
             (W_{R_L}, H_{R_L}) <> X_{k+L})    (6.123)

for i in [-L, ..., L], where the equivalence H_{R_i} <> X_{k+i} = |H_{R_i}| <> sgn(H_{R_i}) X_{k+i} is used.
Clearly, Y_k in (6.123) is a function of previous outputs as well as of the input signal. Recursive WM filters have a number of desirable attributes. Unlike linear IIR filters, recursive WM filters are always stable under the bounded-input bounded-output (BIBO) criterion, regardless of the values taken by the filter coefficients. Recursive WM filters can be used to synthesize a nonrecursive WM filter of much larger window size. To date, there is no known method of computing the recursive WM filter equivalent to a nonrecursive one. However, a method in the binary domain can be used to find nonrecursive WM filter approximations of a recursive WM filter [158]. For instance, the recursive 3-point WM filter given by

Y_k = MEDIAN((1, 0) <> Y_{k-1}, (1, 0) <> X_k, (0, -1) <> X_{k+1})    (6.124)

has the following first-order approximation

Y_k^1 = MEDIAN((1, 0) <> X_{k-1}, (1, 0) <> X_k, (0, -1) <> X_{k+1}),    (6.125)

which, in turn, can be used to find the second-order approximation

Y_k^2 = MEDIAN((1, 0) <> X_{k-2}, (1, 0) <> X_{k-1}, (2, -1) <> X_k, (0, -2) <> X_{k+1}).    (6.126)

Note in (6.126) that the sample X_k is weighted both negatively and positively. This occurs naturally as a consequence of mirrored threshold decomposition. In a similar manner, the third-order approximation leads to the nonrecursive WM filter

Y_k^3 = MEDIAN((1, 0) <> X_{k-2}, (2, -1) <> X_{k-1}, (4, -2) <> X_k, (0, -4) <> X_{k+1}).    (6.127)
In order to illustrate the effectiveness of the various nonrecursive WM approximations of the recursive WM filter, white Gaussian noise is input to the recursive WM filter and to the various nonrecursive approximations. The results are shown in Figure 6.30. Note that the approximation improves with the order, as expected. Figure 6.30c shows that the output of the nonrecursive WM filter of length 5 is very close to the output of an RWM filter of length 3. This corroborates that recursive WM filters can synthesize a nonrecursive WM filter of much larger window size. Notice also in expressions (6.126) and (6.127) that the nonrecursive realizations of the recursive 3-point WM filter given by (6.123) require the use of weight pairs for some of the input samples. Indeed, binary representations having both x_i and s_i as part of the positive Boolean function will inevitably lead to a weight pair (W_i, H_i) on X_i. In order to illustrate the importance of the double-weighting operation on the filter output, the same input signal used with the previous nonrecursive approximations is next fed into the nonrecursive WM filter given by (6.127), but with the positive weight related to X_{k-1} set to zero; that is, (W_{k-1}, H_{k-1}) has been changed from (2, -1) to (0, -1). The output of this filtering operation and the output of the recursive 3-point WM filter are shown in Figure 6.30d. Comparing Figures 6.30c and 6.30d, the strong influence of double weighting on the filter output is easily seen. Some interesting variants of the recursive 3-point WM filters and their corresponding approximate nonrecursive WM filters are presented in Table 6.6.
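The recursive filter (6.124) and its nonrecursive approximations can be compared numerically along the lines of Figure 6.30. The sketch below is our own code (zero initial conditions assumed for the recursion; sample endpoints are left untouched):

```python
import numpy as np

def dwm(pairs):
    # Weighted median of (sample, (W, H)) pairs: W weights the sample,
    # |H| weights its negated mirror.
    s, w = [], []
    for xi, (Wi, Hi) in pairs:
        s += [xi, -xi]
        w += [Wi, abs(Hi)]
    order = np.argsort(s)
    s = np.array(s, float)[order]
    w = np.array(w, float)[order]
    cum = np.cumsum(w)
    return s[np.searchsorted(cum, 0.5 * cum[-1])]

def recursive_3pt(x):
    # Recursive WM filter (6.124).
    y = np.zeros(len(x))
    for k in range(1, len(x) - 1):
        y[k] = dwm([(y[k - 1], (1, 0)), (x[k], (1, 0)), (x[k + 1], (0, -1))])
    return y

def first_order_approx(x):
    # Nonrecursive first-order approximation (6.125).
    y = np.zeros(len(x))
    for k in range(1, len(x) - 1):
        y[k] = dwm([(x[k - 1], (1, 0)), (x[k], (1, 0)), (x[k + 1], (0, -1))])
    return y
```

Replacing the weight lists with those of (6.126) and (6.127) reproduces the higher-order approximations compared in Figure 6.30.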
Figure 6.30 Output of the recursive 3-point WM filter ((1, 1, -1)) (solid line) and its nonrecursive approximations (dashed line): (a) first order, (b) second order, (c) third order, (d) third-order approximation when W_{k-1} is set to 0.
Sorting and ordering a set of complex-valued samples is not uniquely defined, as with the sorting of any multivariate sample set; see Barnett (1976) [27]. The complex-valued median, however, is well defined within a statistical estimation framework. The complex sample mean and sample median are two well-known maximum likelihood (ML) estimators of location derived from sets of independent and identically distributed (i.i.d.) samples obeying the complex Gaussian and complex Laplacian distributions, respectively. Thus, if X_i, i = 1, ..., N, are i.i.d. complex Gaussian distributed samples with constant but unknown complex mean beta, the ML estimate of
Table 6.6 Recursive 3-point WM filters and their approximate nonrecursive counterparts. The underlined weight is related to the center sample of the window. For short notation, only the nonzero weights are listed.

Recursive 3-pt WM filter
1st-order approx.
2nd-order approx.
3rd-order approx.
location is the value beta^ that maximizes the likelihood function. This is equivalent to minimizing the sum of squares

beta^ = arg min_beta SUM_{i=1}^{N} |X_i - beta|^2.    (6.128)

Letting each sample X_i be represented by its real and imaginary components, X_i = X_{Ri} + jX_{Ii}, the minimization in (6.128) can be carried out marginally, without losing optimality, by minimizing the real and imaginary parts independently, giving beta^ = beta^_R + j beta^_I, where(3)

beta^_R = (1/N) SUM_{i=1}^{N} X_{Ri},    beta^_I = (1/N) SUM_{i=1}^{N} X_{Ii}.    (6.129)

(3) The subindices R and I represent the real and imaginary parts, respectively.
When the set of i.i.d. complex samples obeys the Laplacian distribution, it can be shown that the maximum likelihood estimate of location is the complex-valued estimate that minimizes the sum of absolute deviations

beta^ = arg min_beta SUM_{i=1}^{N} |X_i - beta|.    (6.131)

Unlike (6.128), the minimization in (6.131) cannot be computed marginally in the real and imaginary components and, in general, it does not have a closed-form solution, requiring a two-dimensional search over the complex space for the parameter beta^. The suboptimal approach introduced by Astola et al. (1990) [19], referred to as the vector median, consists in assuming that the beta^ that satisfies (6.131) is one of the input samples X_i. Thus, Astola's vector median outputs the input vector that minimizes the sum of Euclidean distances between the candidate vector and all the other vectors. Astola also suggested the marginal complex median, a fast but suboptimal approximation obtained by treating the real and imaginary parts as independent of each other, allowing the complex-valued optimization to be broken up into two real-valued optimizations, leading to beta^ = beta^_R + j beta^_I, where beta^_R = MEDIAN(X_{R1}, X_{R2}, ..., X_{RN}) and beta^_I = MEDIAN(X_{I1}, X_{I2}, ..., X_{IN}). When the complex samples are
independent but not identically distributed, the ML estimate of location can be generalized. In particular, letting X_1, X_2, ..., X_N be independent complex Gaussian variables with the same location parameter but distinct variances σ_1², ..., σ_N², the location estimate becomes

$$\hat{\beta} = \frac{\sum_{i=1}^{N} W_i X_i}{\sum_{i=1}^{N} W_i}, \qquad (6.132)$$

with W_i = 1/σ_i², a positive real-valued number. Likewise, under the Laplacian model, the maximum likelihood estimate of location minimizes the sum of weighted absolute deviations

$$\hat{\beta} = \arg\min_{\beta} \sum_{i=1}^{N} W_i\, |X_i - \beta|. \qquad (6.133)$$

Once again, there is no closed-form solution to (6.133) in the complex plane, and a two-dimensional search must be used. Astola's approximations for the identically distributed case can be used to solve (6.133), but in this case the effect of the weights W_i must be taken into account. These approaches, however, lead to severely constrained structures, as only positive-valued weighting is allowed;
the attained complex medians are smoother operations, where neither negative nor complex weights are admitted. To overcome these limitations, the concept of phase coupling, consisting in decoupling the phase of the complex-valued weight and merging it into the associated complex-valued input sample, is used. This approach is an extension of the weighted median filter admitting negative weights described in Section 6.1, where the negative sign of the weight is uncoupled from its magnitude and merged with the input sample to create a set of signed input samples that constitute the output candidates. The phase coupling concept is used to define the phase-coupled complex WM filter, which, unlike the real-valued weighted median, does not have a closed-form solution, thus requiring a search in the complex plane. To avoid the high computational cost of the search algorithm, a suboptimal implementation called the marginal phase-coupled complex WM was introduced in Hoyos et al. (2003) [104]. This definition leads to a set of complex weighted median filter structures that fully exploit the power of complex weighting and still keep the advantages inherited from univariate medians.

The simplest approach to attain complex WM filtering is to perform marginal operations where the real components of the weights W_{R_i} affect the real parts of the samples X_{R_i}, and the imaginary components of the weights W_{I_i} affect the imaginary parts of the samples X_{I_i}. This approach, referred to as the marginal complex WM filter, outputs

$$\hat{\beta} = \mathrm{MEDIAN}\!\left(|W_{R_i}| \diamond \mathrm{sgn}(W_{R_i}) X_{R_i}\,\big|_{i=1}^{N}\right) + j\,\mathrm{MEDIAN}\!\left(|W_{I_i}| \diamond \mathrm{sgn}(W_{I_i}) X_{I_i}\,\big|_{i=1}^{N}\right), \qquad (6.134)$$
where the real and imaginary components are decoupled. The definition in (6.134) assumes that the real and imaginary components of the input samples are independent. On the other hand, if the real and imaginary domains are correlated, better performance is attained by mutually coupling the real and imaginary components of the signal and weights. This is shown in Section 6.6.1.

In the context of filtering, weights are used to emphasize or deemphasize the input samples based on the temporal and ordinal correlations, or any other information contained in the signal. Consider the weighted mean operation with complex-valued weights W_i = |W_i| e^{jθ_i},

$$\hat{\beta} = \sum_{i=1}^{N} W_i^{*} X_i = \sum_{i=1}^{N} |W_i|\, e^{-j\theta_i} X_i = \sum_{i=1}^{N} |W_i| \left( e^{-j\theta_i} X_i \right). \qquad (6.135)$$

The simple manipulation used in (6.135) reveals that the weights have two roles in the complex weighted mean operation: first, their phases are coupled into the samples, changing them into a new group of phased samples; then, the magnitudes of the weights are applied. The process of decoupling the phase from the weight and merging it into the associated input sample is called phase coupling. The definition of the phase-coupled complex WM filter follows by analogy.
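The two suboptimal complex medians discussed above — the marginal complex median and Astola's vector median — can be sketched in a few lines (function names are illustrative, not from the text):

```python
import numpy as np

def marginal_complex_median(x):
    """Marginal complex median: real and imaginary parts handled independently."""
    x = np.asarray(x, dtype=complex)
    return np.median(x.real) + 1j * np.median(x.imag)

def vector_median(x):
    """Astola's vector median: the input sample minimizing the sum of
    distances to all samples (a restricted search for (6.131))."""
    x = np.asarray(x, dtype=complex)
    costs = np.abs(x[:, None] - x[None, :]).sum(axis=1)  # sum of |X_j - X_i| per candidate
    return x[np.argmin(costs)]

samples = np.array([1 + 1j, 1.2 + 0.9j, 0.9 + 1.1j, 10 - 8j])   # one outlier
print(marginal_complex_median(samples))   # robust estimate, need not be an input sample
print(vector_median(samples))             # one of the input samples
```

Both estimators ignore the outlier at 10 − 8j; the vector median is selection type, while the marginal median can produce a value not present in the input set.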
6.6.1 Phase-Coupled Complex WM Filter

Given the complex-valued samples X_1, X_2, ..., X_N and the complex-valued weights W_i = |W_i| e^{jθ_i}, i = 1, ..., N, the output of the phase-coupled complex WM is defined as

$$\hat{\beta} = \arg\min_{\beta \in \mathbb{C}} \sum_{i=1}^{N} |W_i| \left| e^{-j\theta_i} X_i - \beta \right|. \qquad (6.136)$$

This definition of the complex weighted median delivers a rich class of complex median filtering structures. The solution to (6.136), however, suffers from computational complexity, as the cost function must be searched for its minimum. Any of the already mentioned suboptimal approximations, such as assuming that the output is one of the phase-coupled input samples, or splitting the problem into real and imaginary parts, arise as effective ways to reduce the complexity. The following definition, from Hoyos et al. (2003) [104], provides an efficient and fast complex-valued WM filter.
6.6.2 Marginal Phase-Coupled Complex WM Filter

DEFINITION 6.8 (MARGINAL PHASE-COUPLED COMPLEX WM FILTER) Given a complex-valued observation vector X = [X_1, X_2, ..., X_N]^T and a set of complex-valued weights W = (W_1, W_2, ..., W_N), the marginal phase-coupled complex WM filter output is defined as

$$\hat{\beta} = \hat{\beta}_R + j\hat{\beta}_I = \mathrm{MEDIAN}\!\left( |W_i| \diamond \mathrm{Re}\{ e^{-j\theta_i} X_i \}\,\big|_{i=1}^{N} \right) + j\,\mathrm{MEDIAN}\!\left( |W_i| \diamond \mathrm{Im}\{ e^{-j\theta_i} X_i \}\,\big|_{i=1}^{N} \right), \qquad (6.137)$$

where ⋄ is the replication operator, and Re{·} and Im{·} denote the real and imaginary parts, respectively. Thus, β̂_R is the weighted median of the real parts of the phase-coupled samples and β̂_I is the
weighted median of the imaginary components of the phase-coupled samples.

To help understand this definition better, a simple example is given in Figure 6.31. Three complex-valued samples X_1, X_2, X_3 and three complex-valued weights on the unit circle W_1, W_2, W_3 are arbitrarily chosen. The phase-coupled samples P_1, P_2, P_3 (where P_i = e^{-jθ_i} X_i) are plotted to show the effect of phase coupling. The weights are not directly shown in the figure, but their phases θ_1, θ_2, θ_3 are shown as the angles between the original and altered samples. In addition, since the marginal phase-coupled complex WM filter outputs one of the real parts and one of the imaginary parts of the phase-coupled samples, the filter does not necessarily select one of the phase-coupled inputs, which gives it more flexibility than the selection phase-coupled complex WM filter.

Because of the nonlinear nature of the median operations, direct optimization of the complex weighted median filter is not viable. To overcome this situation, threshold decomposition must be extended to the complex domain, and then used
e’r -----+T
0 x 3
Figure 6.37 Marginal phase-coupled CWM illustration, “0” : original samples, “0” : phasecoupled samples, “A” : marginal median output, “0” : marginal phase-coupled median output
to derive an adaptive algorithm for the marginal phase coupled complex weighted median filter in the minimum mean square error sense.
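Definition 6.8 translates almost directly into code. The sketch below (function names are illustrative; the positive-weight median uses the usual cumulative-weight rule, to which the replication operator reduces) phase-couples each sample with its weight's phase and then takes magnitude-weighted medians of the real and imaginary parts:

```python
import numpy as np

def weighted_median(values, weights):
    """Positive-weight median: smallest value whose cumulative weight
    reaches half of the total weight."""
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

def marginal_phase_coupled_cwm(x, w):
    """Definition 6.8: couple each weight's phase into its sample, then take
    |W_i|-weighted medians of the real and imaginary parts separately."""
    x, w = np.asarray(x, dtype=complex), np.asarray(w, dtype=complex)
    p = np.exp(-1j * np.angle(w)) * x      # phase-coupled samples P_i
    mag = np.abs(w)
    return weighted_median(p.real, mag) + 1j * weighted_median(p.imag, mag)

x = np.array([1 + 2j, 3 + 0j, 2 + 1j])
print(marginal_phase_coupled_cwm(x, np.ones(3)))       # plain marginal median
print(marginal_phase_coupled_cwm(x, 1j * np.ones(3)))  # weights rotated by 90 degrees
```

With unit real weights the filter reduces to the marginal complex median; a common phase rotation of all weights simply rotates the output, as expected from (6.137).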
6.6.3 Complex Threshold Decomposition
For any real-valued signal X, its real threshold decomposition representation is given in equation (6.50), repeated here for convenience:

$$X = \frac{1}{2}\int_{-\infty}^{\infty} x^{q}\, dq, \qquad (6.138)$$

where −∞ < q < ∞, and

$$x^{q} = \mathrm{sgn}(X - q). \qquad (6.139)$$

Thus, given the samples {X_i |_{i=1}^{N}} and the real-valued weights {W_i |_{i=1}^{N}}, the weighted median filter can be expressed as

$$Y = \mathrm{MEDIAN}\!\left(|W_i| \diamond S_i\,\big|_{i=1}^{N}\right) = \frac{1}{2}\int_{-\infty}^{\infty} \mathrm{MEDIAN}\!\left(|W_i| \diamond S_i^{q}\,\big|_{i=1}^{N}\right) dq, \qquad (6.140)$$

where S_i = sgn(W_i) X_i, S = [S_1, S_2, ..., S_N]^T, S_i^q = sgn(S_i − q), and S^q = [S_1^q, S_2^q, ..., S_N^q]^T. Since the samples of the median filter in (6.140) are either 1 or −1, this median operation can be efficiently calculated as sgn(W_a^T S^q), where the elements of the new vector W_a are given by W_{a_i} = |W_i|. Equation (6.140) can then be written as

$$Y = \frac{1}{2}\int_{-\infty}^{\infty} \mathrm{sgn}(\mathbf{W}_a^T \mathbf{S}^{q})\, dq. \qquad (6.141)$$
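The equivalence between the direct weighted median (6.140) and its threshold decomposition form (6.141) can be checked numerically. In the sketch below (illustrative names; a truncated integration range stands in for (−∞, ∞), which is exact as long as all samples lie inside it):

```python
import numpy as np

def wm_direct(x, w):
    # Sign coupling S_i = sgn(W_i) X_i followed by a positive-weight median.
    s = np.sign(w) * np.asarray(x, float)
    order = np.argsort(s)
    cum = np.cumsum(np.abs(w)[order])
    return s[order][np.searchsorted(cum, 0.5 * cum[-1])]

def wm_threshold(x, w, lo=-10.0, hi=10.0, n=200001):
    # Y = (1/2) * integral of sgn(W_a^T S^q) dq over a truncated range,
    # with S^q_i = sgn(S_i - q) and W_a the vector of weight magnitudes.
    s = np.sign(w) * np.asarray(x, float)
    mag = np.abs(w)
    q = np.linspace(lo, hi, n)
    sq = np.sign(s[None, :] - q[:, None])      # thresholded (binary) samples
    return 0.5 * np.sign(sq @ mag).sum() * (q[1] - q[0])

x = np.array([0.3, -1.2, 2.0, 0.7, -0.4])
w = np.array([1.0, -0.5, 2.0, 1.0, 0.8])
print(wm_direct(x, w), wm_threshold(x, w))   # both approximately 0.7
```

The integrand is +1 below the weighted median and −1 above it, so the integral recovers the crossing point up to the grid spacing.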
Therefore, the extension of the threshold decomposition representation to the complex field can be naturally carried out as

$$X = \frac{1}{2}\int_{-\infty}^{\infty} \mathrm{sgn}(\mathrm{Re}\{X\} - q)\, dq + \frac{j}{2}\int_{-\infty}^{\infty} \mathrm{sgn}(\mathrm{Im}\{X\} - p)\, dp, \qquad (6.142)$$

where real threshold decomposition is applied to the real and imaginary parts of the complex signal X separately.
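A quick numerical check of (6.142) (a truncated range [−B, B] stands in for the infinite integrals, which is exact for |Re X|, |Im X| < B):

```python
import numpy as np

# Numerical check of the complex threshold decomposition (6.142): for a
# bounded sample, integrating the thresholded signs over [-B, B]
# recovers the real and imaginary parts of X.
X = 0.8 - 0.35j
B, n = 5.0, 400001
q = np.linspace(-B, B, n)
dq = q[1] - q[0]
re = 0.5 * np.sign(X.real - q).sum() * dq
im = 0.5 * np.sign(X.imag - q).sum() * dq
print(re + 1j * im)   # close to 0.8 - 0.35j
```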
6.6.4 Optimal Marginal Phase-Coupled Complex WM
The real and imaginary parts of the output of the marginal phase-coupled complex WM in (6.137) are two separate real median operations, and thus the complex-valued threshold decomposition in (6.142) can be directly applied. Given the complex-valued samples X_i and the complex-valued weights W_i = |W_i| e^{jθ_i}, define P_i = e^{-jθ_i} X_i as the phase-coupled input samples, with real and imaginary parts P_{R_i} = Re{P_i} and P_{I_i} = Im{P_i}. Additionally define

$$P_{R_i}^{s} = \mathrm{sgn}(P_{R_i} - s), \qquad P_{I_i}^{r} = \mathrm{sgn}(P_{I_i} - r),$$
$$\mathbf{P}_R^{s} = [P_{R_1}^{s}, P_{R_2}^{s}, \ldots, P_{R_N}^{s}]^T, \qquad \mathbf{P}_I^{r} = [P_{I_1}^{r}, P_{I_2}^{r}, \ldots, P_{I_N}^{r}]^T.$$

Similar to the real threshold decomposition representation of the WM in (6.141), the complex-valued threshold decomposition for the marginal phase-coupled complex WM can be implemented as

$$\hat{\beta} = \mathrm{MED}\!\left(|W_i| \diamond P_{R_i}\,\big|_{i=1}^{N}\right) + j\,\mathrm{MED}\!\left(|W_i| \diamond P_{I_i}\,\big|_{i=1}^{N}\right) = \frac{1}{2}\int_{-\infty}^{\infty} \mathrm{sgn}(\mathbf{W}_a^T \mathbf{P}_R^{s})\, ds + \frac{j}{2}\int_{-\infty}^{\infty} \mathrm{sgn}(\mathbf{W}_a^T \mathbf{P}_I^{r})\, dr. \qquad (6.143)$$

Assume the observed process {X(n)} and the desired process {D(n)} are jointly stationary. The filter output β̂(n) estimating the desired signal D(n) is given in (6.143). Under the Mean Square Error (MSE) criterion, the cost function to minimize is

$$J(\mathbf{W}) = E\{e_R^2 + e_I^2\} = E\left\{\left(\frac{1}{2}\int_{-\infty}^{\infty}\left(\mathrm{sgn}(D_R - s) - \mathrm{sgn}(\mathbf{W}_a^T \mathbf{P}_R^{s})\right) ds\right)^{\!2} + \left(\frac{1}{2}\int_{-\infty}^{\infty}\left(\mathrm{sgn}(D_I - r) - \mathrm{sgn}(\mathbf{W}_a^T \mathbf{P}_I^{r})\right) dr\right)^{\!2}\right\}, \qquad (6.144)$$

where D_R = Re{D(n)}, D_I = Im{D(n)}, e_R = Re{D(n) − β̂(n)}, and e_I = Im{D(n) − β̂(n)}. Utilizing the relationship between the complex gradient vector ∇J and the conjugate derivative ∂J/∂W* [99],
results in

$$\nabla J = 2\,\frac{\partial J}{\partial \mathbf{W}^{*}} = -2E\left\{ e_R\, \frac{1}{2}\int_{-\infty}^{\infty} \frac{\partial}{\partial \mathbf{W}^{*}}\mathrm{sgn}(\mathbf{W}_a^T\mathbf{P}_R^{s})\, ds + e_I\, \frac{1}{2}\int_{-\infty}^{\infty} \frac{\partial}{\partial \mathbf{W}^{*}}\mathrm{sgn}(\mathbf{W}_a^T\mathbf{P}_I^{r})\, dr \right\}. \qquad (6.145)$$

To take the derivatives needed in (6.145), the sign function is approximated by a differentiable one, to circumvent the inconvenience of having a Dirac impulse term in the subsequent analysis. The chosen substitute is the hyperbolic tangent function, sgn(x) ≈ tanh(x) = (e^x − e^{−x})/(e^x + e^{−x}), whose derivative is (d/dx) tanh(x) = sech²(x) = 4/(e^x + e^{−x})². Thus, ∂/∂W_i* sgn(W_a^T P_R^s) ≈ sech²(W_a^T P_R^s) ∂/∂W_i* (W_a^T P_R^s). Furthermore, the derivative with respect to only one weight is

$$\frac{\partial}{\partial W_i^{*}}\,\mathrm{sgn}(\mathbf{W}_a^T\mathbf{P}_R^{s}) \approx \mathrm{sech}^2(\mathbf{W}_a^T\mathbf{P}_R^{s})\,\frac{\partial}{\partial W_i^{*}}\left(|W_i|\, P_{R_i}^{s}\right). \qquad (6.146)$$

Given the relationships ∂|W_i|/∂W_i* = (1/2) e^{jθ_i} and ∂θ_i/∂W_i* = (j/2) e^{jθ_i}/|W_i|, equation (6.146) can be written as

$$\frac{\partial}{\partial W_i^{*}}\,\mathrm{sgn}(\mathbf{W}_a^T\mathbf{P}_R^{s}) \approx \frac{1}{2}\,\mathrm{sech}^2(\mathbf{W}_a^T\mathbf{P}_R^{s})\, e^{j\theta_i}\left(P_{R_i}^{s} + \mathrm{sech}^2(P_{R_i}-s)\, j P_{I_i}\right),$$

and similarly

$$\frac{\partial}{\partial W_i^{*}}\,\mathrm{sgn}(\mathbf{W}_a^T\mathbf{P}_I^{r}) \approx \frac{1}{2}\,\mathrm{sech}^2(\mathbf{W}_a^T\mathbf{P}_I^{r})\, e^{j\theta_i}\left(P_{I_i}^{r} - \mathrm{sech}^2(P_{I_i}-r)\, j P_{R_i}\right).$$
Integrating both sides over s,

$$\frac{1}{2}\int_{-\infty}^{\infty}\frac{\partial}{\partial W_i^{*}}\mathrm{sgn}(\mathbf{W}_a^T\mathbf{P}_R^{s})\, ds \approx \frac{e^{j\theta_i}}{4}\int_{-\infty}^{\infty}\mathrm{sech}^2(\mathbf{W}_a^T\mathbf{P}_R^{s})\,P_{R_i}^{s}\, ds + \frac{j e^{j\theta_i} P_{I_i}}{4}\int_{-\infty}^{\infty}\mathrm{sech}^2(\mathbf{W}_a^T\mathbf{P}_R^{s})\,\mathrm{sech}^2(P_{R_i}-s)\, ds. \qquad (6.147)$$

The second integral in (6.147) can be expanded, recalling that ∫ sech²(x) dx = ∫ d tanh(x) = tanh(x), as

$$\int_{-\infty}^{\infty}\mathrm{sech}^2(\mathbf{W}_a^T\mathbf{P}_R^{s})\,\mathrm{sech}^2(P_{R_i}-s)\, ds = -\int_{-\infty}^{\infty}\mathrm{sech}^2(\mathbf{W}_a^T\mathbf{P}_R^{s})\, d\,\mathrm{tanh}(P_{R_i}-s), \qquad (6.148)$$

which integration by parts turns into a sum of terms evaluated at the discontinuities of the thresholded samples (6.149). At this point tanh(·) can be replaced again with sgn(·). As a result, all terms involving sgn(P_{R_i} − s) in the previous equation are zero, except the one at the discontinuity s = P_{R_i}, where the jump of sgn(P_{R_i} − s) is 2. On the other hand, when P_{R_i} = β̂_R the product W_a^T P_R^s is approximately zero; in this case sech²(W_a^T P_R^s) ≈ 1, and since this is the largest contributor to the sum in (6.149), all the other terms can be omitted. These approximations result in

$$\frac{1}{2}\int_{-\infty}^{\infty}\frac{\partial}{\partial W_i^{*}}\mathrm{sgn}(\mathbf{W}_a^T\mathbf{P}_R^{s})\, ds \approx \frac{e^{j\theta_i}}{2}\left[\mathrm{sgn}(P_{R_i}-\hat{\beta}_R) + j P_{I_i}\,\mathbf{1}(P_{R_i}=\hat{\beta}_R)\right], \qquad (6.150)$$

with the analogous expression for the imaginary part, leading to the following weight update equation:

$$W_i(n+1) = W_i(n) + \mu\, e^{j\theta_i}\left\{ e_R\left[\mathrm{sgn}(P_{R_i}-\hat{\beta}_R) + j P_{I_i}\,\mathbf{1}(P_{R_i}=\hat{\beta}_R)\right] + e_I\left[\mathrm{sgn}(P_{I_i}-\hat{\beta}_I) - j P_{R_i}\,\mathbf{1}(P_{I_i}=\hat{\beta}_I)\right]\right\}. \qquad (6.151)$$
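One step of the resulting adaptive algorithm can be sketched as follows, assuming the sign-plus-indicator form of the update obtained from the approximations above (constants are absorbed into the step size μ; names are illustrative):

```python
import numpy as np

def weighted_median(values, weights):
    """Positive-weight median: smallest value whose cumulative weight
    reaches half of the total weight."""
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

def mpccwm(x, w):
    """Marginal phase-coupled complex WM (6.137); also returns the
    phase-coupled samples P_i needed by the update."""
    p = np.exp(-1j * np.angle(w)) * x
    mag = np.abs(w)
    beta = weighted_median(p.real, mag) + 1j * weighted_median(p.imag, mag)
    return beta, p

def lms_update(w, x, desired, mu=0.01):
    """One step of the adaptive algorithm: sign terms for samples away from
    the output, indicator-weighted terms for the selected samples."""
    beta, p = mpccwm(x, w)
    e_r, e_i = (desired - beta).real, (desired - beta).imag
    sel_r = (p.real == beta.real)          # indicator 1(P_Ri = beta_R)
    sel_i = (p.imag == beta.imag)          # indicator 1(P_Ii = beta_I)
    grad = np.exp(1j * np.angle(w)) * (
        e_r * (np.sign(p.real - beta.real) + 1j * p.imag * sel_r)
        + e_i * (np.sign(p.imag - beta.imag) - 1j * p.real * sel_i))
    return w + mu * grad, beta
```

When the error is zero the gradient term vanishes and the weights are left unchanged, as expected of a stationary point.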
EXAMPLE 6.14 (LINE ENHANCEMENT)

Adaptive line enhancement consists of an adaptive filter driven with a delayed version of the input signal, using the noisy signal itself as the reference. The goal is to exploit the correlation of the signal, and the lack of correlation of the noise, between the received signal and its shifted version in order to filter out the noise. The algorithm also tunes the weights to correct the phase shift introduced between the filter input and the reference signal. A basic block diagram of a line enhancer implemented with the complex WM filter is shown in Figure 6.32.

In the first experiment, the input of an 11-tap line enhancer is a complex exponential contaminated with α-stable noise with dispersion γ = 0.2 and α running from 1.3 to 2 (Gaussian noise), to show different levels of noise impulsiveness. The weights of the marginal phase-coupled complex WM filter are designed using the previously developed LMS algorithm. In addition, LMS algorithms are implemented to design a marginal complex weighted median and a linear complex-valued filter. The same noisy signal is filtered using these three schemes to compare the results obtained
Figure 6.32 Block diagram of a line enhancer implemented with the complex WM filter.
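The structure can be illustrated with a fixed (non-adaptive) marginal complex median standing in for the adaptive complex WM filter; even this crude sketch, with Cauchy noise playing the role of α-stable noise with α = 1, reduces the error of the noisy observation (all parameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = np.arange(500)
clean = np.exp(1j * 0.05 * np.pi * n)            # slowly rotating exponential
noise = 0.3 * (rng.standard_cauchy(n.size) + 1j * rng.standard_cauchy(n.size))
noisy = clean + noise                            # impulsive noise (alpha = 1)

taps = 11
half = taps // 2
out = np.array([np.median(noisy[k:k + taps].real) + 1j * np.median(noisy[k:k + taps].imag)
                for k in range(n.size - taps + 1)])
ref = clean[half:half + out.size]                # align to the window centers

mse_noisy = np.mean(np.abs(noisy[half:half + out.size] - ref) ** 2)
mse_out = np.mean(np.abs(out - ref) ** 2)
print(mse_out < mse_noisy)
```

A linear average of the same windows would be dragged far off course by the Cauchy impulses, which is the point of the comparison in this example.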
with each one of them. To analyze the convergence properties of the algorithms, the learning curves, calculated as the average MSE over 1000 realizations of the experiment, are plotted. Figure 6.33 shows the results for two values of α: (a) α = 1.3 and (b) α = 1.7; the LMS algorithm for the linear filter diverges for α < 2. On the other hand, the robustness of the marginal phase-coupled complex WM is clearly seen. For the values of α shown, the plot of the MSE remains almost unaltered; that is, the impulsiveness of the noise does not have a major effect on the performance of the algorithm for α < 2. Table 6.7 summarizes the average of 2000 values of the MSE after convergence of the LMS algorithm for the complex filters. These results show the reliability of the complex WM filters in α-stable environments. For this particular application and noise conditions, the marginal phase-coupled outperforms the marginal complex WM filter. Unlike linear filters, the step size has a small effect on the floor error of the complex WM filter, as illustrated in Figure 6.34, where the learning curves of the LMS algorithm for μ = 0.1 and μ = 0.001 are shown. The plot shows how a higher value of the step size improves the convergence rate of the algorithm without harming the robustness of the filter or significantly modifying the value of the floor error.

Table 6.7 LMS average MSE for line enhancement (μ = 0.001, γ = 0.2)
Filter Noisy signal Linear filter Marginal complex WM Marginal phase coupled complex WM
a = 1.5
0.3621 0.1455
0.3728 0.1011
0.3975 0.1047
0.8258 0.0804 0.4297 0.1162
Figure 6.33 Learning curves of the LMS algorithm of a linear filter, marginal complex WM, and marginal phase-coupled complex WM (μ = 0.001) for line enhancement in α-stable noise with dispersion γ = 0.2 (ensemble average of 1000 realizations): (a) α = 1.3, (b) α = 1.7.
For illustrative purposes, the real part and the phase of the filter outputs are shown in Figure 6.35 and Figure 6.36, respectively. The plots show 2000 samples of the filter output taken after the LMS algorithm has converged. As can be seen, the linear filter is not successful at filtering the impulsive noise, while the complex WM filters are able to recover the original shape of the signal, the output of the marginal phase-coupled complex WM being the one that best resembles the original signal.
EXAMPLE 6.15

In this example, the complex weighted median filter is designed to approximate the frequency response of a complex linear filter. To this end, the system shown in Figure 6.37 is used.
WEIGHTED MEDIAN FILTERS

Figure 6.34 Learning curves of the LMS algorithm of the marginal phase-coupled complex WM (ensemble average of 1000 realizations) with μ = 0.1 and μ = 0.001 for line enhancement in α-stable noise (γ = 0.2).

Figure 6.35 Real part of the output of the filters for α = 1.7, γ = 0.2 and μ = 0.1.
Figure 6.36 Phase of the output of the filters for α = 1.7, γ = 0.2 and μ = 0.1: (a) original signal, (b) noisy signal, (c) linear filter, (d) marginal complex WM.
Figure 6.37 Block diagram of the frequency response design experiment: a Gaussian noise generator drives both the linear filter and the complex WM filter.
The Gaussian noise generator provides a complex Gaussian sequence that is fed both to a complex linear filter and to the complex weighted median filter being tuned. The difference between the outputs of the two filters is the error used in an LMS algorithm that calculates the optimum weights for the complex weighted median filter. After convergence, the complex weighted median filter should be a close approximation of the original linear filter in the mean square error sense. Figure 6.38 shows the ensemble-average learning curves for 1000 realizations of the experiment. The learning curve of a linear filter calculated in the same way has also been included. For this experiment the complex linear filter

h = [−0.0123 − 0.0123j, −0.0420 − 0.1293j, −0.0108 + 0.0687j, −0.0541 + 0.0744j, 0.2693 − 0.1372j, 0.5998, 0.2693 + 0.1372j, −0.0541 − 0.0744j, −0.0108 − 0.0687j, −0.0420 + 0.1293j, −0.0123 + 0.0123j]^T

was used. This is a complex low-pass filter with normalized cutoff frequencies ω₁ = −0.4 and ω₂ = 0.7. The designed complex WM filters have the same number of taps (11).
Figure 6.38 Learning curves of the LMS algorithm of the marginal phase-coupled complex WM, the marginal complex WM, and a linear filter with μ = 0.01 for the frequency response design problem.
As expected, the MSE for the linear filter reaches a minimum of zero. In addition, the floor error for both complex WM filters is similar. The frequency responses of the complex weighted median filters, as well as that of the original linear filter, were calculated as follows: 10,000 samples of complex Gaussian noise were fed to the filters and the spectra of the outputs were calculated using the Welch method [192]; the experiment was repeated 50 times to get an ensemble average. The results are shown in Figure 6.39.

Figure 6.39 Approximated frequency response of the complex WM filters for the frequency response design problem: marginal complex WM, marginal phase-coupled complex WM, and linear filter.
The filters have approximately the same pass-band gain, and even though the rejection in the stop band is not as good as that obtained with the linear filter, the levels reached by the complex WM filters are acceptable.

All the previous merit figures applied to this experiment had been designed for linear filters in a Gaussian environment. The real strength of the weighted median filter comes out when the classical Gaussian model is abandoned and heavy-tailed random processes are included. In order to show the power of the filters designed with the adaptive LMS algorithm, the sum of two complex exponentials of magnitude one and two different normalized frequencies, one in the pass band (0.2) and one in the stop band (−0.74) of the filters, is contaminated with α-stable noise with α values 1, 1.3, 1.7, 2 and γ = 0.1. If a linear filter were used to filter the clean signal, the output would be a complex exponential of normalized frequency 0.2. This signal is used as a reference to calculate the MSE of the outputs of the complex WM filters and a linear filter in the presence of the noise. Table 6.8 shows the average over 100 realizations of the filtering of 200 samples of the noisy signal for each case. As can be seen, for this application the marginal phase-coupled complex WM obtains the best results. On the other hand, the linear filter is unable to remove the impulsive noise, as shown by the high values of the MSE of its output. In the Gaussian case the linear filter shows its superiority.

Table 6.8 Average MSE of the output of the complex WM filters and the linear filter in the presence of α-stable noise
Linear filter
Marginal complex WM
Marginal phase coupled complex WM
An example of the real part of the original signal, the noisy signal, and the output of the filters is shown in Figure 6.40. The plots show the presence of only one sinusoid in the outputs, still with some artifacts from the noise remaining after filtering. The other exponential has been eliminated from the signal, showing the frequency selection capabilities of the complex WM filters.
6.6.5 Spectral Design of Complex-Valued Weighted Medians
Equation (6.137) shows that the complex-valued weighted median filter operation consists of properly modifying the input samples according to the associated weights and then using the magnitudes of the weights for the calculation of positive weighted medians. It was stated in Theorem 6.1 that a nonlinear function needs to satisfy certain properties in order to be best approximated, in the mean squared error sense, by a linear filter. Unfortunately, the complex-valued medians do not satisfy the location invariance property. A procedure similar to the one in Section 6.2.5 can be used to extend Mallows' results to the complex domain.

THEOREM 6.3 If the real and imaginary parts of the input series are Gaussian, independent, and zero centered, the coefficients of the linear part of the weighted median defined in (6.137) are given by h_i = e^{-jθ_i} p_i, where the p_i are the SSPs of the WM smoother with weights |W_i|.
Figure 6.40 Real part of the output of the complex WM filters for the frequency response design problem with α = 1: original signal, linear filter, marginal complex WM, and marginal phase-coupled complex WM (the real part of the ideal output is shown in dash-dot).
To show the theorem, define Y_i = e^{-jθ_i} X_i = U_i + jV_i. Then

$$E\left\{ \left| \mathrm{MEDIAN}(W_i \diamond X_i) - \sum_{i} h_i X_i \right|^2 \right\} = E\left\{ \left| \mathrm{MEDIAN}(|W_i| \diamond Y_i) - \sum_{i} \eta_i Y_i \right|^2 \right\}, \qquad (6.153)$$

where η_i = e^{jθ_i} h_i = b_i + jc_i. Again, from Mallows' theorem, (6.153) is minimized when c_i = 0 and b_i = p_i.
These characteristics permit the development of a design method for complex-valued WM filters from spectral requirements, using the algorithms described in Section 6.2, as follows:

(1) Design a linear complex-valued FIR filter h = (h_1, h_2, ..., h_N) given the impulse response and the other desired characteristics for the filter.

(2) Decouple the phases of the coefficients to form the vectors |h| = (|h_1|, |h_2|, ..., |h_N|) and θ(h) = (θ(h_1), θ(h_2), ..., θ(h_N)), where θ(h_i) represents the phase of h_i.

(3) Normalize the vector |h| and find the closest WM filter to it using the algorithm based on the theory of sample selection probabilities developed in Section 6.2, say W' = (W'_1, W'_2, ..., W'_N).

(4) The complex WM filter is given by W_i = e^{jθ(h_i)} |W'_i|.

EXAMPLE 6.16
Design 9-tap marginal phase-coupled complex weighted median filters with the characteristics indicated in Table 6.9. Figure 6.41 shows that the frequency response characteristics of the complex WM filters are very close to those of their linear counterparts. The values of the weights for the linear and median filters are shown in Table 6.10. ∎
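The four design steps can be sketched as follows. The prototype (a windowed sinc with cutoff 0.3, modulated by 0.4π) stands in for step (1), and, as a crude stand-in for the SSP-based algorithm of Section 6.2 in step (3), the normalized magnitudes are used directly as the WM smoother weights:

```python
import numpy as np

N = 9
m = np.arange(N) - (N - 1) / 2
proto = np.sinc(0.3 * m) * np.hamming(N)        # step (1): real low-pass prototype ...
h = proto * np.exp(1j * 0.4 * np.pi * m)        # ... modulated into a complex FIR filter
mags, phases = np.abs(h), np.angle(h)           # step (2): decouple magnitudes and phases
w_smoother = mags / mags.sum()                  # step (3): normalized magnitudes as weights
W = np.exp(1j * phases) * w_smoother            # step (4): re-couple the phases
print(np.round(W, 4))
```

The resulting weight vector W has the phases of the linear design and nonnegative magnitudes that sum to one, as required by step (4).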
EXAMPLE 6.17

Repeat Example 6.15 using the algorithm for the spectral design of complex-valued weighted medians developed in Section 6.6.5.

(1) The complex-valued linear filter to approximate is h = [−0.0123 − 0.0123j, −0.0420 − 0.1293j, −0.0108 + 0.0687j, −0.0541 + 0.0744j, 0.2693 − 0.1372j, 0.5998, 0.2693 + 0.1372j, −0.0541 − 0.0744j, −0.0108 − 0.0687j, −0.0420 + 0.1293j, −0.0123 + 0.0123j]^T.
Table 6.9 Characteristics of the complex weighted median filters to be designed

Cut-off frequencies: −1, −0.5, 0.3, 0.7
Cut-off frequencies: −1, −0.4, 0.2, 0.8
Figure 6.41 Approximated frequency response of the complex WM filters designed with the algorithm in Section 6.6.5: (a) low-pass, (b) high-pass, (c) band-pass, (d) band-stop (dotted: marginal phase-coupled complex WM, dashed: linear filter).
Table 6.10 Weights of the complex median filters designed using the algorithm in Section 6.6.5 and the linear filters used as reference, listing the linear and median weights for each of the four designs (low-pass, high-pass, band-pass, band-stop).
Gaussian noise and approximating the spectra of the outputs using the Welch method. The results are shown in Figure 6.42.

Figure 6.42 Approximated frequency response of the complex WM filters designed with an adaptive LMS algorithm (dotted), the algorithm for the spectral design of complex-valued weighted medians in Section 6.6.5 (solid), and the linear filter used as a reference to design them (dashed).
6.7 WEIGHTED MEDIAN FILTERS FOR MULTICHANNEL SIGNALS

The extension of the weighted median for use with multidimensional (multichannel) signals is not straightforward. Sorting multicomponent (vector) values and selecting the middle value is not well defined as in the scalar case; see Barnett (1976) [27]. In consequence, the weighted median filtering operation on a multidimensional signal can be achieved in a number of ways, among which the best known are: marginal medians of orthogonal coordinates, in Hayford (1902) [98]; the L1-norm median, from Gini and Galvani (1929) [80] and Haldane (1948) [89], which minimizes the sum of distances to all samples; the halfplane median from Tukey (1975) [190], which minimizes the maximum number of samples on a halfplane; the convex hull median from Barnett (1976) [27] and Shamos (1976) [172], which is the result of continuously "peeling" off pairs of extreme samples; the simplex median from Oja (1983) [152], which minimizes the sum of the volumes of all simplexes⁴ formed by the point and some samples; the simplicial median of Liu (1990) [133], which maximizes the number of simplexes that contain it; and the hyperplane median from Rousseeuw (1999) [170], which maximizes the hyperplane depth. For historical reviews on multivariate medians, see Aloupis (2001) [1] and Small (1990) [178]. Other approaches can be found in Hardie and Arce (1991) [92], Koivunen (1996) [116], Pitas and Tsakalides (1991) [163], and Trahanias and Venetsanopoulos (1996) [185].

A problem with many definitions of multivariate medians is that they have more conceptual meaning than practical use because of their high computational complexities. The algorithms used to compute the L1 median often involve gradient techniques or iterations that can only provide numerical solutions, as shown by Groß and Strempel (1998) [88], and even the fastest algorithm to date for the Oja median takes about O(n³ log n) time; see Aloupis (2001) [1]. Moreover, they usually have difficulties with extension to more complex weighting structures. Many definitions are also difficult to analyze. Simple and mathematically tractable structures to perform multivariate weighted median filtering are described below.
6.7.1 Marginal WM Filter
The simplest approach to WM filtering of a multidimensional signal is to process each component independently with a scalar WM filter. This operation is illustrated in Figure 6.43, where the green, blue, and red components of a color image are filtered independently and then combined to produce the filtered color image. A drawback associated with this method is that different components can be strongly correlated and, if each component is processed separately, this correlation is not exploited. The advantage of marginal processing is its computational simplicity. Marginal weighted median filters are, in general, very limited in most multichannel signal processing applications, as will be illustrated shortly.
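A minimal sketch of marginal multichannel filtering (illustrative names; an unweighted three-point median smoother per channel stands in for the scalar WM filter):

```python
import numpy as np

def marginal_median_filter(x, taps=3):
    """Apply an (unweighted) scalar median smoother to each channel
    independently; x has shape (channels, length)."""
    x = np.asarray(x, float)
    half = taps // 2
    out = np.empty((x.shape[0], x.shape[1] - 2 * half))
    for c in range(x.shape[0]):            # each channel on its own
        for k in range(out.shape[1]):
            out[c, k] = np.median(x[c, k:k + taps])
    return out

rgb = np.array([[1., 1., 9., 1., 1.],      # impulse in the first channel
                [2., 2., 2., 2., 2.],
                [0., 1., 0., 1., 0.]])
print(marginal_median_filter(rgb))         # the impulse is removed channel-wise
```

Note that nothing in this computation couples the channels, which is precisely the limitation discussed above.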
Figure 6.43 Center WM filter applied to each component independently. (Figure also appears in the Color Figure Insert)
⁴A simplex is a d-dimensional solid formed by d + 1 points in R^d.
6.7.2 Vector WM Filter
A more logical extension is found through the minimization of a weighted cost function that takes into account the multicomponent nature of the data. Here, the filtering operation processes all components jointly, such that some of the cross-correlation between components is exploited. As shown in Figure 6.44, the three components are jointly filtered by a vector WM filter, leading to a filtered color image. Vector WM filtering requires the extension of the original WM filter

Figure 6.44 Center vector WM filter applied in the 3-dimensional space. (Figure also appears in the Color Figure Insert)

definition as follows (Astola 1990 [19]). The filter input vector is denoted as X = [X_1 X_2 ... X_N], where X_i = [X_i^1 X_i^2 ... X_i^M]^T is the ith M-variate sample in the filter window. The filter output is Ŷ = [Y^1 Y^2 ... Y^M]^T. Recall that the weighted median of a set of 1-dimensional samples X_i, i = 1, ..., N, is given by

$$Y = \arg\min_{\beta} \sum_{i=1}^{N} |W_i| \, |\mathrm{sgn}(W_i) X_i - \beta|.$$

Extending this definition to a set of M-dimensional vectors X_i for i = 1, ..., N leads to

$$\hat{Y} = \arg\min_{\beta} \sum_{i=1}^{N} |W_i| \, \| S_i - \beta \|, \qquad (6.154)$$

where Ŷ = [Y^1, Y^2, ..., Y^M]^T, S_i = sgn(W_i) X_i, and ‖·‖ is the L2 norm defined as

$$\|\beta - S_i\| = \left( (\beta^1 - S_i^1)^2 + (\beta^2 - S_i^2)^2 + \cdots + (\beta^M - S_i^M)^2 \right)^{1/2}. \qquad (6.155)$$

The vector weighted median thus requires N scalar weights, with one scalar weight assigned to each input vector sample. Unlike the 1-dimensional case, Ŷ is not generally equal in value to one of the S_i. Indeed, there is no closed-form solution for Ŷ. Moreover, solving (6.154) involves a minimization problem in an M-dimensional space that can be computationally expensive. To overcome these difficulties, a suboptimal solution for (6.154) is found if Ŷ is restricted to be one of the signed samples S_i. This leads to the definition of the weighted vector median.
difficulties, a suboptimal solution for (6.154) is found if Y is restricted to be one of the signed samples $. This leads to the definition of the weighted vector median. N
That is, the vector WM filter output of {$I,. . . , S N }such that N
. . . ,*N
has the value of
?, with ? E
This definition can be implemented as follows: 0
For each signed sample gj, compute the distances to all the other signed samples (ll$j - &[I) for i = 1 , .. . , N using (6.155). Compute the sum of the weighted distances given by the right side of
(6.157). Choose as filter output the signed sample ,!?j that produces the minimum sum of the weighted distances.
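The three steps above can be sketched directly (illustrative names; unit weights for simplicity):

```python
import numpy as np

def weighted_vector_median(X, w):
    """Selection-type weighted vector median: after sign coupling, return the
    signed sample minimizing the sum of weighted L2 distances (6.157).
    X has shape (N, M); w is a length-N real weight vector."""
    X = np.asarray(X, float)
    S = np.sign(w)[:, None] * X                                # signed samples
    D = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=2)  # pairwise L2 distances
    costs = D @ np.abs(w)                                      # weighted distance sums
    return S[np.argmin(costs)]

X = np.array([[1.0, 1.0], [1.1, 0.9], [0.9, 1.2], [8.0, -7.0]])
print(weighted_vector_median(X, np.ones(4)))   # one of the inliers, not the outlier
```

The cost of the pairwise-distance computation is O(N²M), which is what makes this restricted search practical compared to the unconstrained minimization in (6.154).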
In a more general case, the same procedure can be employed to calculate the vector weighted median of a set of input samples using other distance measures. The vector median in Astola (1990) [19] uses the L_p norm, defined as ‖X‖_p = (Σ_i |X^i|^p)^{1/p}, as a distance measure, transforming (6.154) into

$$\hat{Y} = \arg\min_{\beta \in \{S_1, \ldots, S_N\}} \sum_{i=1}^{N} |W_i| \, \| S_i - \beta \|_p. \qquad (6.158)$$

Several optimization algorithms for the design of the weights have been developed. One such method, proposed by Shen and Barner (2004) [173], is summarized below as an example. By definition, the WVM filter is selection type and its output is one of the input samples, as shown in (6.158). First it is necessary to find the input sample closest to the desired output, say X_{e_min}. The output of the filter is then calculated using the current weights. If the output of the filter is X_{e_min}, the weights are considered optimal. Otherwise, the weights should be modified in order to obtain X_{e_min} as the output. The optimization process can be summarized as follows:

(1) Initialize the filter weights (W_i = 1, i = 1, ..., N).
(2) Calculate the distance of each input sample to the desired output as

e_i = ‖X_i(n) − d(n)‖, i = 1, 2, ..., N.

(3) Find the sample X_{e_min} for which the error e_i is minimum:

X_{e_min} = arg min_{X_i} e_i = arg min_{X_i} ‖X_i(n) − d(n)‖.

(4) If X_{e_min} is the current output of the WVM filter, set ΔW_i = 0. Otherwise, compute the necessary weight changes so that X_{e_min} becomes the filter output with the set of weights W_i(n) + ΔW_i. The ΔW_i are given in [173] in terms of the distances d(X_j), where X_{j0} is the current filter output.

(5) Update the filter weights:

W_i(n+1) = W_i(n) + μ ΔW_i, i = 1, 2, ..., N,
where p is the iteration step size. This algorithm is a greedy approach since it determines the weight changes based on local characteristics. Despite the existence of several optimization algorithms
like the one just shown, weighted vector medians have not spread significantly beyond image smoothing applications. The limitations of the weighted vector median run deep, and its formulation needs to be revisited from its roots. With this goal in mind, a review of the principles of parameter estimation reveals that the weighted vector median emerges from the location estimate of independent (but not identically distributed) vector-valued samples, where only the scale of each input vector sample varies. The multichannel components of each sample are, however, still considered mutually independent. In consequence, the weighted vector median in (6.158) is cross-channel blind. In the following, more general vector median filter structures are presented, capable of capturing and exploiting the spatial and cross-channel correlations embedded in the data. First, the vector location estimate of samples that are assumed to be mutually correlated across channels but independent (though not identically distributed) in time is revisited. This model leads to a multichannel median structure that is computationally simple, yet exploits cross-channel information. The structure can be adapted to admit positive and negative weights using sign coupling.
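The selection-type weighted vector median described above can be sketched in a few lines. This is an illustrative implementation (the function and variable names are my own), using the L_2 norm as the distance measure:

```python
import numpy as np

def weighted_vector_median(X, w, p=2):
    """Selection-type weighted vector median.

    X : (N, M) array of N vector-valued samples in the window.
    w : (N,) array of nonnegative weights.
    Returns the input sample minimizing the sum of weighted L_p
    distances to all samples in the window, as in (6.158).
    """
    X = np.asarray(X, dtype=float)
    # diffs[i, j] = ||X_i - X_j||_p ; costs[j] = sum_i w[i] * diffs[i, j]
    diffs = np.linalg.norm(X[:, None, :] - X[None, :, :], ord=p, axis=2)
    costs = np.asarray(w, float) @ diffs
    return X[np.argmin(costs)]

# Example: an outlying vector sample is rejected by the filter
window = [[1.0, 1.0], [1.1, 0.9], [9.0, 9.0]]
out = weighted_vector_median(window, np.ones(3))
```

Because the output is constrained to be one of the input vectors, the filter never produces a color (or vector value) absent from the window, which is the key selection-type property noted above.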
6.7.3 Weighted Multichannel Median Filtering Structures

As in the scalar case, the multivariate filtering structure is derived from the Maximum Likelihood estimation of location, this time in a multivariate signal space. Consider a set of independent but not identically distributed vector-valued samples, each obeying a joint Gaussian distribution with the same location parameter β̃:

(6.163)

where X̃_i and β̃ are M-variate column vectors, and C_i is the M × M cross-channel correlation matrix of the sample X̃_i. The Maximum Likelihood estimate of location β̃ can be derived as

(6.164)

As in the univariate case, a general multivariate filtering structure results from the maximum likelihood estimator as

(6.165)

where W_i^T = (Σ_{l=1}^N C_l^{-1})^{-1} C_i^{-1}. An example of an optimal filter design algorithm for this linear filtering structure is given by Robinson (1983) [168]. It presents one inconvenience: the overwhelming size of the weight matrix. For instance, filtering a 3-channel color image using a 5 × 5 window requires the optimization of 225 weights. Alternative filter structures requiring fewer weights are needed. The following approach, proposed by Li et al. (2004) [130], provides such an implementation.
Weighted Multichannel Median (WMM) Filter I

In most multichannel applications, the signals from sub-channels are often correlated. Further, the correlation structure between subchannels may often be stationary, or at least quasi-stationary over a period of time. In these cases, the assumption that the correlation matrices C_i differ only by a scale factor is valid, that is, C_i = γ_i C.  (6.166)

The corresponding MLE is then

(6.167)

where c is a normalization constant; the weighted sum Σ_i γ_i^{-1} C^{-1} X̃_i provides the filtering structure. Removing the normalization constant, the filtering structure can be formulated as

(6.169)

where V_i is the (time/spatial) weight applied to the ith vector sample in the observation window, and W^{ij} is the cross-channel weight exploiting the correlation between the ith and jth components of a sample. The filter thus consists of M² + N weights. In the example of an RGB image with a 5 × 5 window, the number of weights is reduced from 225 to 3² + 25 = 34. Even though it is mathematically intractable to derive a similar result from a multivariate Laplacian distribution, it is still possible to define a nonlinear multivariate filter by direct analogy, replacing the summations in (6.169) with median operators. This filter is referred to as the Weighted Multichannel Median (WMM) and is defined as follows (Li et al. (2004) [130]):

(6.170)

where the output is an M-variate vector. As stated before, there is no unique way of defining even the simplest median over vectors; in consequence, the outer median in (6.170) can have several different implementations. Due to its simplicity and ease of mathematical analysis, a suboptimal implementation of (6.170) can be used, where the outer median in (6.170) is replaced by a vector of marginal medians. Thus, the Marginal Weighted Multichannel Median (Marginal WMM) is defined as in Li et al. (2004) [130].
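A sketch of the marginal WMM computation may help fix ideas: inner weighted medians combine the channels within each sample, and an outer marginal weighted median combines the results across the window, one output channel at a time. This is an illustrative reading of the structure (the helper names are my own); the weighted median uses the sign-coupling convention, where a negative weight flips the sign of its sample:

```python
import numpy as np

def weighted_median(x, w):
    """Weighted median with sign coupling: MED(|w_i| <> sgn(w_i) x_i)."""
    s = np.sign(w) * np.asarray(x, float)   # sign-coupled samples
    a = np.abs(np.asarray(w, float))        # positive weights
    order = np.argsort(s)
    cum = np.cumsum(a[order])
    # smallest sample whose cumulative weight reaches half the total
    return s[order][np.searchsorted(cum, 0.5 * cum[-1])]

def marginal_wmm(X, V, W):
    """Marginal Weighted Multichannel Median (sketch of WMM filter I).

    X : (N, M) window of N samples with M channels.
    V : (N,)  time/spatial weights.
    W : (M, M) cross-channel weights; W[l] weighs the channels in the
        inner median producing output channel l.
    """
    N, M = X.shape
    out = np.empty(M)
    for l in range(M):
        # inner cross-channel medians Q_i^l, one per window sample
        Q = np.array([weighted_median(X[i], W[l]) for i in range(N)])
        # outer marginal median over the window
        out[l] = weighted_median(Q, V)
    return out
```

With W equal to the identity matrix, each inner median simply selects the corresponding channel, and the filter reduces to independent marginal weighted medians per channel.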
Weighted Multichannel Median (WMM) Filter II

There are applications where the stationarity assumption stated in (6.166) may not be appropriate. The need for a simple filtering structure remains, which is why a more general structure for median filtering of multivariate signals is presented, as in Li et al. (2004) [130]. In such cases, (6.166) is replaced by

(6.173)
(6.174)

In this case, the cross-channel correlation is not stationary, and the γ_i^l represent the correlation between components of different samples in the observation window. The linear filtering structure reduces to

(6.175)
(6.176)

where V_i^l is the weight reflecting the influence of the lth component of the ith sample on the lth component of the output. The weights W^{ij} have the same meaning as in the WMM filter I. Using the same analogy as in the previous case, a more general weighted multichannel median filter structure can be defined as

Ŷ = [ MEDIAN(|V_i^1| ◊ sgn(V_i^1) MEDIAN(|W^{j1}| ◊ sgn(W^{j1}) X_i^j |_{j=1}^M) |_{i=1}^N),
      ...,
      MEDIAN(|V_i^M| ◊ sgn(V_i^M) MEDIAN(|W^{jM}| ◊ sgn(W^{jM}) X_i^j |_{j=1}^M) |_{i=1}^N) ]^T.  (6.178)

This structure can be implemented directly; that is, it does not require suboptimal implementations like the previous one. The number of weights increases, but it is still significantly smaller than the number required by the complete filter in (6.165). For the image filtering example, the number of weights is M × (N + M) = 84. In the following section, optimal adaptive algorithms for the structures in (6.170) and (6.178) are derived.
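The weight-count comparison among the three structures can be verified with a line of arithmetic (M channels, N window samples):

```python
# Weight counts for a 3-channel (RGB) image with a 5x5 window: M = 3, N = 25
M, N = 3, 25
full_linear = N * M * M   # full structure of (6.165): N matrices of size M x M
wmm_i = M * M + N         # WMM filter I:  M^2 cross-channel + N time weights
wmm_ii = M * (N + M)      # WMM filter II: M*N time + M^2 cross-channel weights

print(full_linear, wmm_i, wmm_ii)  # -> 225 34 84
```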
Filter Optimization

Assume that the observed process X̃(n) is statistically related to a desired process D̃(n) of interest; the observation is typically a transformed or corrupted version of D̃(n). The filter input vector at time n is X(n) = [X̃_1(n) X̃_2(n) ... X̃_N(n)]^T, where X̃_i(n) = [X_i^1(n) X_i^2(n) ... X_i^M(n)]^T. The desired signal is D̃(n) = [D^1(n) D^2(n) ... D^M(n)]^T.
Optimization for the WMM Filter I  Assume that the time/spatial-dependent weight vector is V = [V_1 V_2 ... V_N]^T and that W is the cross-channel weight matrix. Denote Q_i^l = MED(|W^{jl}| ◊ sgn(W^{jl}) X_i^j |_{j=1}^M) for l = 1, ..., M; then the output of the marginal WMM can be defined as D̂ = [β̂^1 β̂^2 ... β̂^M]^T, where

β̂^l = MED(|V_i| ◊ sgn(V_i) Q_i^l |_{i=1}^N),  l = 1, ..., M.  (6.179)
Applying the real-valued threshold decomposition technique of Section 6.3.1, (6.179) can be rewritten in an analyzable form as

β̂^l = ½ ∫ sgn(V_a^T G^{p^l}) dp^l,  (6.180)

where V_a = [|V_1| |V_2| ... |V_N|]^T and G^{p^l} = [sgn(sgn(V_1)Q_1^l − p^l) ... sgn(sgn(V_N)Q_N^l − p^l)]^T. Similarly, by defining

W_a^l = [|W^{1l}| ... |W^{Ml}|]^T,
S_i^{q^l} = [sgn(sgn(W^{1l})X_i^1 − q^l) ... sgn(sgn(W^{Ml})X_i^M − q^l)]^T,

the inner weighted medians have the thresholded representation

(6.181)

Under the Least Mean Absolute Error (LMA) criterion, the cost function to minimize is

(6.182)
(6.183)
Substituting (6.180) into (6.183) gives

J_1(V, W) = E{ Σ_{l=1}^M | ½ ∫ sgn(D^l − p^l) − sgn(V_a^T G^{p^l}) dp^l | }.  (6.184)

Since the integrals in (6.184) act on strictly positive or strictly negative functions, the absolute value and integral operators can be interchanged, leading to

J_1(V, W) = E{ Σ_{l=1}^M ½ ∫ |sgn(D^l − p^l) − sgn(V_a^T G^{p^l})| dp^l }.  (6.185)

Due to the linearity of the expectation, summation, and integration operations, (6.185) can be rewritten as

J_1(V, W) = Σ_{l=1}^M ½ ∫ E{ |sgn(D^l − p^l) − sgn(V_a^T G^{p^l})| } dp^l.  (6.186)

Furthermore, since the absolute values inside the expectations in (6.186) can only take values in the set {0, 2}, they can be replaced by a properly scaled square operator, resulting in (6.187). Taking the derivative of (6.187) with respect to V_a results in

(d/dV_a) J_1(V, W) = − Σ_{l=1}^M ½ ∫ E{ e_{p^l} (d/dV_a) sgn(V_a^T G^{p^l}) } dp^l,  (6.188)

where e_{p^l} = sgn(D^l − p^l) − sgn(V_a^T G^{p^l}). For convenience, the non-differentiable sign function is approximated by the hyperbolic tangent, sgn(x) ≈ tanh(x) = (e^x − e^{−x})/(e^x + e^{−x}). Since its derivative is (d/dx) tanh(x) = sech²(x) = 4/(e^x + e^{−x})², it follows that

(d/dV_a) sgn(V_a^T G^{p^l}) ≈ sech²(V_a^T G^{p^l}) G^{p^l},  (6.189)

where G_i^{p^l} = sgn(sgn(V_i)Q_i^l − p^l) for i = 1, ..., N. Substituting (6.189) into (6.188) leads to the updates for the V_i.
Table 6.11  Summary of the LMA algorithm for the marginal WMM filter I.
Using the instantaneous estimate for the gradient, and applying an approximation similar to the one in Section 6.3.2, the adaptive algorithm for the time-dependent weight vector V of the marginal WMM filter follows as

V_i(n+1) = V_i(n) + μ_V sgn(V_i(n)) ẽ^T(n) G̃_i(n),  (6.193)

where G̃_i = [G_i^{β̂^1} ... G_i^{β̂^M}]^T and G_i^{β̂^l} = sgn(sgn(V_i)Q_i^l − β̂^l) for l = 1, ..., M. To derive the updates for W, it is straightforward to verify that
∂J_1/∂W^{st} involves terms of the form E{ e_{p^l} sech²(V_a^T G^{p^l}) V_a^T (∂G^{p^l}/∂W^{st}) }, whose nonzero contribution arises for l = t through the factor sgn(W^{st}) sgn(sgn(W^{st})X_i^s − q^t).  (6.194)

Notice that in (6.194) the derivative that would introduce an additional sech term is omitted, since it is insignificant compared with the other term. After some mathematical manipulations, and by arguments similar to those in Section 6.3.2, the adaptive algorithm for the cross-channel weight matrix W simplifies to

W^{st}(n+1) = W^{st}(n) + μ_W sgn(W^{st}(n)) e^t(n) (V^T(n) Λ^s(n)),  (6.195)

where Λ^s = [Λ_1^s Λ_2^s ... Λ_N^s]^T and Λ_i^s = δ(sgn(V_i)Q_i^t − β̂^t) sgn(sgn(W^{st})X_i^s − Q_i^t) for i = 1, ..., N, with δ(x) = 1 for x = 0 and δ(x) = 0 otherwise. Table 6.11 summarizes the LMA algorithm for the marginal WMM filter I.
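The V-weight recursion of (6.193) can be sketched in code; the W-weight update of (6.195) follows the same pattern. This is a simplified reading of the recursion, not the book's exact derivation: the weighted-median helper and all names are my own, and the per-channel error is taken as the plain difference between the desired and filtered outputs:

```python
import numpy as np

def weighted_median(x, w):
    """Weighted median with sign coupling: MED(|w_i| <> sgn(w_i) x_i)."""
    s = np.sign(w) * np.asarray(x, float)
    a = np.abs(np.asarray(w, float))
    order = np.argsort(s)
    cum = np.cumsum(a[order])
    return s[order][np.searchsorted(cum, 0.5 * cum[-1])]

def lma_update_V(X, D, V, W, mu_v):
    """One LMA step for the time/spatial weights of the marginal WMM I.

    Implements the form V_i(n+1) = V_i(n) + mu_v * sgn(V_i) * e^T G_i,
    with G_i^l = sgn(sgn(V_i) Q_i^l - out^l), as a simplified sketch.
    X: (N, M) window, D: (M,) desired sample, V: (N,), W: (M, M).
    """
    N, M = X.shape
    # inner cross-channel medians Q_i^l and marginal WMM output
    Q = np.array([[weighted_median(X[i], W[l]) for l in range(M)]
                  for i in range(N)])
    out = np.array([weighted_median(Q[:, l], V) for l in range(M)])
    e = D - out                                          # per-channel error
    G = np.sign(np.sign(V)[:, None] * Q - out[None, :])  # G_i^l terms
    return V + mu_v * np.sign(V) * (G @ e)
```

In training, this step is applied sample by sample over a reference segment, exactly as described in Example 6.18 below; when the filter output already matches the desired sample, the error is zero and the weights are left unchanged.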
EXAMPLE 6.18 A RGB color image contaminated with 10% correlated salt-and-pepper noise is processed by the WVM filter, and the marginal WMM filter separately. The observation window is set to 3 x 3
and 5 x 5. The optimal weights for the marginal WMM filter are obtained first by running the LMA algorithm derived above over a small part of the corrupted image. The same section of the noiseless
image is used as a reference. A similar procedure is repeated to optimize the weights of the WVM filter. The adaptation parameters are chosen in a way such that the average absolute error obtained in
the training process is close to its minimum for each filter. The resulting weights are then passed to the corresponding filters to denoise the whole image. The filter outputs are depicted in Figures
6.45 and 6.46. As a measure of the effectiveness of the filters, the mean absolute error of the output was calculated for each filter; the results are summarized in Table 6.12. Peak signal-to-noise ratio (PSNR) was also used to evaluate the fidelity of the two filtered images. The statistics in Table 6.12 show that the marginal WMM filter outperforms the WVM filter in this color image denoising simulation by a factor of 3 in terms of the mean absolute error, or 8-11 dB in terms of PSNR. Moreover, the output of the marginal WMM filter is almost salt-and-pepper noise free. In comparison, the output of the WVM filter is visually less pleasant, with many unfiltered outliers. Notice that the output of the marginal WMM filter with the 3 × 3 observation window preserves more image detail than that of the 5 × 5 realization, and has a better PSNR, though the mean absolute errors in the two cases are roughly the same.
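The two fidelity measures used in this comparison are straightforward to compute; a minimal sketch for 8-bit images:

```python
import numpy as np

def mae(ref, out):
    """Mean absolute error between a reference and a filtered image."""
    return np.mean(np.abs(ref.astype(float) - out.astype(float)))

def psnr(ref, out, peak=255.0):
    """Peak signal-to-noise ratio in dB (peak = 255 for 8-bit images)."""
    mse = np.mean((ref.astype(float) - out.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

For color images, both measures are simply averaged over all pixels and channels, which is how the entries of Table 6.12 are to be read.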
Figure 6.45  Multivariate medians for color images in salt-and-pepper noise; μ = 0.001 for the WVM, μ_V, μ_W = 0.05 for the marginal WMM. From left to right and top to bottom: noiseless image, contaminated image, WVM with 3 × 3 window, marginal WMM with 3 × 3 window. (Figure also appears in the Color Figure Insert)

Figure 6.46  Multivariate medians for color images in salt-and-pepper noise; μ = 0.001 for the WVM, μ_V, μ_W = 0.05 for the marginal WMM (continued). From left to right: WVM with 5 × 5 window, marginal WMM with 5 × 5 window. (Figure also appears in the Color Figure Insert)

Table 6.12  Average MAE and PSNR of the output images.
(Rows: noisy signal, WVM, marginal WMM; columns: MAE and PSNR (dB) for the 3 × 3 and 5 × 5 windows.)
Figure 6.47 shows the optimum weights obtained for all the filters used in this example. The noise generated for this example was cross-channel correlated. As a result, Figures 6.47(c) and (f) show that the optimum cross-channel weights for the 3 × 3 and 5 × 5 windows are very similar, since they are based on the same statistics. Figures

Figure 6.47  Optimum weights for the multivariate medians for color images in salt-and-pepper noise: (a) 5 × 5 WVM, (b) V in 5 × 5 marginal WMM I, (c) W in 5 × 5 marginal WMM I, (d) 3 × 3 WVM, (e) V in 3 × 3 marginal WMM I, (f) W in 3 × 3 marginal WMM I.

6.47(b) and (e) show that, spatially, the marginal WMM filter I tries to emphasize the center sample of the window. This is an expected result, since the noise samples are spatially independent. Finally, Figures 6.47(a) and (d) show a distribution of the spatial weights that is not as smooth as the one in Figures 6.47(b) and (e); this reflects the negative effect that the cross-channel correlation of the noise has on the WVM filter. ■

Optimization of the WMM Filter II  The optimization process for the second WMM filtering structure is very similar to the one shown above (see Li et al. (2004) [130]). Assume that the time/spatial-dependent weight matrix and the cross-channel weight matrix are:
If Q_i^t and S_i^{q^t} are defined as in the previous case, the output of the filter can be written as

D̂ = [β̂^1 β̂^2 ... β̂^M]^T,

where G^{p^l} = [sgn(sgn(V_1^l)Q_1^l − p^l) ... sgn(sgn(V_N^l)Q_N^l − p^l)]^T.

Under the Least Mean Absolute (LMA) criterion, the cost function to minimize is just like (6.187). Taking the derivative with respect to V and using approximations similar to those of the previous case results in

(d/dV_a^t) J_1(V, W) = − ½ ∫ E{ e_{p^t} sech²((V_a^t)^T G^{p^t}) sgn(V_a^t) G^{p^t} } dp^t,  (6.199)

where e_{p^t} = sgn(D^t − p^t) − sgn((V_a^t)^T G^{p^t}). Using instantaneous estimates for the expectation, the updates for V result in

V_i^t(n+1) = V_i^t(n) + μ_V e^t(n) sgn(V_i^t(n)) sgn(sgn(V_i^t(n)) Q_i^t(n) − β̂^t(n))
           = V_i^t(n) + μ_V e^t(n) sgn(V_i^t(n)) G_i^{β̂^t}.  (6.200)

On the other hand, the updates for W are given by

W^{st}(n+1) = W^{st}(n) + μ_W sgn(W^{st}(n)) e^t(n) ((V^t)^T(n) Λ^{st}(n)),  (6.202)

which is basically the same as (6.195), with the difference that V is now a matrix and Λ_i^{st} = δ(sgn(V_i^t)Q_i^t − β̂^t) sgn(sgn(W^{st})X_i^s − Q_i^t).
EXAMPLE 6.19 (ARRAY PROCESSING)

To test the effectiveness of the WMM filter II, a simple array processing problem with real-valued signals is used. The system shown in Figure 6.48 is implemented.

Figure 6.48  Array of sensors: a 3-element array with farfield sources transmitting from different directions, at frequencies including f = 0.175 (s3) and f = 0.25 (s2).

It consists of a 3-element array and 3 sources in the farfield of the array, transmitting from different directions and at different frequencies, as indicated in the figure. The goal is to separate the signals from all sources using the array in the presence of alpha-stable noise. To do so, a WVM filter, a marginal WMM filter, and a WMM filter II, all with a window size of 25, are used. The filters are optimized using the algorithms described earlier in this section, with a reference signal whose components are noiseless versions of the signals emitted by the sources. The results obtained are summarized in Figure 6.49 and Table 6.13.

Table 6.13  Average MAE of the output signals (noisy signal, WVM, marginal WMM filter I, and WMM filter II).

Figure 6.49 shows that the WMM filter II is able to extract the desired signals from the received signals at the sensors successfully. The WVM filter and the marginal WMM filter I are unable to do so. Linear filters were implemented with adaptive algorithms
Figure 6.49  Input and output signals for array processing with multivariate medians. Each column corresponds to a channel (only channels one and three are shown) and the rows represent: the reference signal, the signal received at the sensors, the output of the WVM filter, the output of the marginal WMM filter, and the output of the WMM filter II.
to optimize them for this problem, but the impulsiveness of the noise made the optimization algorithms diverge. The final weights obtained with the optimization algorithms are shown in Figures 6.50 and 6.51. Figure 6.50a shows why the WVM filter is not able to obtain a good result for this problem: the optimal weights are erratically distributed and, in consequence, the output looks nothing like the desired signal. A similar conclusion can be reached for the weights of the marginal WMM filter in Figure 6.50b. The outer weights of the WMM filter II are shown in Figures 6.50c-e, each corresponding to a different channel of the signals. The weights show a certain periodicity, with frequencies related to those of the signals to be extracted in each channel. The inner weights for the marginal WMM filter and the WMM filter II are shown in Figures 6.51a and b, respectively. The extra time correlation included in this problem completely distorts the weights W of the marginal WMM filter. The weights W of the WMM filter II, on the other hand, reflect the cross-channel correlation of the signals. ■

Figure 6.50  Optimized weights for the multivariate medians in the array processing example: (a) WVM, (b) marginal WMM filter I, V, (c) first row of V for WMM filter II, (d) second row of V, (e) third row of V.
Problems

6.1 Show that the ML estimate of location for samples observing a multivariate Gaussian distribution as in (6.1) reduces to β̂ = W^T X, as shown in (6.2).

6.2 Prove that the integral operation in (6.58) can be taken out of the median operation, leading to (6.59).

6.3 Prove that the absolute value and integral operators in (6.70) can be interchanged, leading to (6.71).

6.4 Show that (d/dW) sgn(W^T s^q) in (6.75) is equivalent to the expression in (6.76).

6.5 Prove the BIBO stability of recursive WM filters stated in Property 6.1.

6.6 Show that (d/dW_a) sgn(W_a^T P^Δ) in (6.146) reduces to (6.147).

6.7 Show (using sample selection probabilities) that a center weighted median filter with W_c ≥ N is an identity operator (i.e., the sample selection probability of the center sample is 1), where N is the number of taps, N odd.

Figure 6.51  Optimized inner weights for the multivariate medians in the array processing example: (a) marginal WMM filter I, (b) WMM filter II.

6.8 Find the closest linear filter to the weighted median filter given by the weight vector W = [1, 2, 3, 2, 1].

6.9 Show that
(a) The maximum likelihood estimator of location for the distribution in (6.163) equals (6.203).
(b) The same MLE reduces to the simpler filtering structure under the condition in (6.166).

6.10 Given a vector weighted median filter defined by the weights W_i, show that ΔW_i as defined in (6.161) is the change required in the weight W_i to make the output of the vector weighted median change from the value X̃_{j0} to the value X̃_j.
7  Linear Combination of Order Statistics

Given the ordered set X_(1), X_(2), ..., X_(N) corresponding to the N observation samples X_1, X_2, ..., X_N, an alternative approach is to work with linear combinations of the order statistics. Simple linear combinations, of the form

Ŷ = Σ_{i=1}^N W_i X_(i),  (7.1)

are known as L-statistics or L-estimates. Arnold et al. (1992) [6], David (1982) [58], and Hosking (1998) [102] describe their long history in statistics. L-statistics have a number of advantages for use in signal processing. If the random variable Z is a linear transformation of X, Z = a + γX for γ > 0, then the order statistics of X and Z satisfy Z_(i) = a + γX_(i), and the L-statistics computed from them satisfy Ŷ_Z = a Σ W_i + γ Ŷ_X. Thus, Ŷ_Z = a + γ Ŷ_X if Σ W_i = 1, a required condition for using L-statistics as location estimates. In addition, by appropriate choice of the weights W_i, it is possible to derive robust estimators whose properties are not excessively dependent on correct statistical assumptions. L-estimates are also a natural choice for censored estimation, where the most extreme order statistics are ignored. As described later in this chapter, useful generalizations of L-statistics are obtained if the weights in (7.1) are made data dependent, or if functions of order statistics are used in the linear combination. In particular, several hybrid filter classes are presented where L-filter attributes are complemented with properties of linear FIR filters.
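The L-statistic of (7.1) and its location-scale behavior can be illustrated directly; the weights below are arbitrary, chosen only to be symmetric and to sum to one:

```python
import numpy as np

def l_statistic(x, w):
    """L-statistic: a linear combination of the order statistics, eq. (7.1)."""
    return np.dot(np.sort(x), w)

x = np.array([3.0, 1.0, 4.0, 1.5, 9.0])
w = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # symmetric, sums to one

y = l_statistic(x, w)
# location-scale equivariance: since sum(w) == 1 and the scale is positive,
# l_statistic(a + g*x, w) == a + g * l_statistic(x, w)
```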
7.1 L-ESTIMATES OF LOCATION

In the location estimation problem, the observation samples are of the form X_i = β + Z_i, where β is the constant location parameter to be estimated, and Z_i is a zero-mean sequence of independent and identically distributed noise samples with variance σ². For the sake of simplicity, we assume that the noise is symmetrically distributed. Given the observations X_1, X_2, ..., X_N and the corresponding order statistics X_(i), i = 1, 2, ..., N, the goal is to design an L-estimate of the location parameter β. Lloyd (1952) [134] showed that the location parameter can be estimated more efficiently with a linear combination of ordered samples than with the classical sample mean. The corresponding mean square error is always smaller than, or equal to, that obtained with the sample mean or sample median. Simplifying Lloyd's contribution, which dealt with the simultaneous estimation of location and scale parameters, Bovik et al. (1983) [38] considered the restoration of a noisy constant signal, say β, with an L-estimate designed to minimize the mean square error. Using the fact that the unknown parameter β is constant, the simplified approach starts by relating the order statistics of the observations and the noise as X_(i) = β + Z_(i). The L-estimate of location is then

β̂ = Σ_{i=1}^N W_i X_(i),  (7.2)

where the W_i form an N-dimensional vector of real coefficients. Further, the estimate in (7.2) is required to be unbiased, E{β̂} = β, leading to

E{β̂} = β Σ_{i=1}^N W_i + Σ_{i=1}^N W_i E[Z_(i)] = β.  (7.3)

Assuming the noise samples Z_i are independent, identically distributed, and zero mean with a symmetric probability density function (f(z) = f(−z)), then E[Z_(i)] = −E[Z_(N−i+1)]. Using this fact in (7.3), the estimate is unbiased if the N weights are symmetric (W_{N−i+1} = W_i) and if Σ_{i=1}^N W_i = 1. The above can be written in vector notation as

β̂ = W^T (βe + Z_L),  (7.4)

where e is the N-long one-valued vector e = [1, 1, ..., 1]^T, and Z_L is the vector comprised of the noise order statistics

Z_L = [Z_(1), Z_(2), ..., Z_(N)]^T.  (7.5)
The mean-square estimation error can be written as

J(W) = E{(β̂ − β)²} = W^T R_L W,  (7.6)

where the unbiasedness constraint W^T e = 1 was utilized, and where the correlation matrix R_L has entries [R_L]_{ij} = E{Z_(i) Z_(j)}, the correlation moment of the ith and jth noise order statistics. The minimization of J(W), subject to the unbiasedness constraint W^T e = 1, is a quadratic optimization problem that can be solved by the method of Lagrange multipliers. The Lagrangian function of the constrained mean-square-error cost function J(W) is

F(λ, W) = W^T R_L W + λ(W^T e − 1).  (7.8)

Taking the derivative of F(λ, W) with respect to the weight vector W and setting it to zero leads to

2 R_L W + λe = 0.  (7.9)

To solve for λ, the terms above are multiplied by e^T R_L^{-1}, resulting in

λ = −2 / (e^T R_L^{-1} e),  (7.10)

which is then used in (7.9) to obtain the optimal L-estimate of location weights

W_o = R_L^{-1} e / (e^T R_L^{-1} e),  (7.11)

with the corresponding mean square error

J(W_o) = J_min = 1 / (e^T R_L^{-1} e).  (7.12)

The estimate in (7.11) is optimal only among the restricted class of L-estimates; other estimators, such as maximum likelihood estimators, may be more efficient. Since the sample mean is included in the class of L-estimates, the optimal L-estimate always performs at least as well as the sample mean. The conditions that
determine when an L-estimate will improve on the sample mean can, in fact, be determined [134]. Given that the correlation matrix E{Z_L Z_L^T} is positive definite, by the use of a Cholesky decomposition it can be represented as a product of two triangular matrices,

E{Z_L Z_L^T} = L L^T.  (7.13)

Premultiplying and postmultiplying the matrix R_L by the vectors e^T and e leads to

e^T E{Z_L Z_L^T} e = E{e^T Z_L Z_L^T e} = E{e^T Z Z^T e} = N σ²,  (7.14)

where Z = [Z_1, Z_2, ..., Z_N]^T is the vector of unordered noise samples, and where we have used the fact that the sum of the elements of Z_L is the same regardless of the order in which the elements are summed. From (7.13), it is also found that (7.14) is given by

e^T L L^T e = h^T h = Σ_{i=1}^N h_i²,  (7.15)

where h = L^T e. Similarly, letting k = L^{-1} e,

e^T (L L^T)^{-1} e = k^T k = Σ_{i=1}^N k_i².  (7.16)

Moreover, since

Σ_{i=1}^N h_i k_i = e^T L L^{-1} e = N,  (7.17)

invoking the Cauchy-Schwartz inequality,

(Σ_{i=1}^N h_i k_i)² ≤ (Σ_{i=1}^N h_i²)(Σ_{i=1}^N k_i²),  (7.18)
Table 7.1  Optimal weights W_1, ..., W_9 for the L-estimate of location, N = 9, for various noise distributions (adapted from [38]).
equations (7.15)-(7.18) lead to

N² ≤ (e^T R_L e)(e^T R_L^{-1} e).  (7.19)

Hence, the minimum L-estimate mean square error satisfies

J_min = 1/(e^T R_L^{-1} e) ≤ (e^T R_L e)/N² = σ²/N,  (7.20)

with equality if and only if h = k, corresponding to the condition

E{Z_L Z_L^T} e = e.  (7.21)
The L-filter estimate will thus perform better than the sample mean whenever the row sums (or column sums) of the correlation matrix E{Z_L Z_L^T} are not all equal to one [134]. The optimal weights of the L-estimate have markedly different characteristics depending on the parent distribution of the noise samples Z_i. Bovik et al. computed the optimal L-filter coefficients for various i.i.d. noise distributions. Table 7.1 shows the optimal coefficients for the L-location estimate when the additive noise ranges from uniformly distributed, to Gaussian, to Laplacian, for N = 9. The impulsive nature of the Laplacian noise is referred to as heavy-tailed, while the bounded shape of the uniform distribution is referred to as short-tailed. Table 7.1 illustrates how the L-filter weights change as the impulsive characteristics of the parent distribution are varied. For a Laplacian distribution, the center order statistics are emphasized, since outliers disturbing the estimation will be placed in the outer order statistics. On the other hand, the L-estimate in uniformly distributed noise reduces to the midrange, which relies on the first and last order statistics alone. Finally, as expected, in Gaussian noise all order statistics are equally weighted, reducing the estimate to the sample mean. Tables of the optimal coefficients of L-filters for some common distributions are given by Sarhan and Greenberg (1962) [33].
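Given a tabulated (or approximated) correlation matrix R_L of the noise order statistics, the optimal weights of (7.11) are a two-line computation. The matrices below are synthetic placeholders used only to exercise the formula, not order-statistic correlations of any particular distribution:

```python
import numpy as np

def optimal_l_weights(R):
    """Optimal L-estimate weights of (7.11): W = R^{-1} e / (e^T R^{-1} e)."""
    e = np.ones(R.shape[0])
    u = np.linalg.solve(R, e)   # R^{-1} e without forming the inverse
    return u / (e @ u)

R = np.array([[2.0, 1.0],
              [1.0, 2.0]])     # synthetic symmetric positive definite matrix
W = optimal_l_weights(R)        # weights sum to one by construction
```

Note that the unbiasedness constraint W^T e = 1 is enforced automatically by the normalization in the last line.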
Nearly Best L-Estimates  The disadvantage of Lloyd's and Bovik et al.'s approaches is that they require the tabulation of covariances of order statistics for every distribution and sample size for which they are used. Combinatorial formulas for their computation can lead to unfeasible complexity even for small sample sizes. Blom's (1962) [33] "unbiased nearly best linear estimate" overcomes this disadvantage by using asymptotic approximations of the covariances. A similar approach was used by Oten and Figueiredo (2003) [153], where a Taylor expansion approximation is used. Suppose the L-estimate in (7.1) is location invariant and is used to estimate a constant signal β embedded in zero-mean noise with a symmetric density function f(z) and distribution function F(z). The L-estimate in (7.1) can be rewritten as

(7.22)

The original weights W_i are related to the weights w_i by

W_i = w_i / Σ_{j=1}^N w_j.  (7.23)

The filter coefficients w_i are symmetric about the median, that is, w_i = w_{N−i+1}. Therefore, the filter can be rewritten using only half the coefficients as

(7.24)

where r = ⌈N/2⌉ and b_i = w_i, except that b_r = w_r/2 for N odd. Following a procedure similar to that used in Section 7.1, the optimal coefficients b = [b_1, b_2, ..., b_r]^T are found as

b = C^{-1} e,  (7.25)

where C = [c_ij] is the covariance matrix of the summed noise order statistics Z_(i) + Z_(N−i+1). C is related to the covariance matrix of the order statistics of the noise samples Z_(i) through

(7.26)
A procedure to calculate the covariance matrix C is developed in the following. Let U be a uniformly distributed random variable in the range (0, 1). The random variable obtained through the transformation F^{-1}(U) obeys the distribution function F(X), namely

F^{-1}(U) ≜ X,

where ≜ denotes equality in distribution and F^{-1} denotes the inverse of the distribution function. In general, if X_1, X_2, ..., X_N are i.i.d. random variables taken from the distribution F(X), and U_1, U_2, ..., U_N are i.i.d. uniform (0, 1) random variables, then

(X_(1), ..., X_(N)) ≜ (F^{-1}(U_(1)), ..., F^{-1}(U_(N))).

Applying a Taylor expansion to F^{-1}(U_(i)) around the point λ_i = E U_(i) = i/(N+1) and neglecting the higher-order terms, the representation (7.27) is obtained, where F^{-1(1)}(λ_i) denotes the derivative of F^{-1}(u) evaluated at u = λ_i. Taking expectations on both sides of (7.27) leads to the approximate first-order moment of the ith order statistic,

E X_(i) ≈ F^{-1}(λ_i) = F^{-1}(i/(N+1)).  (7.28)

Similarly, using (7.27) and (7.28) in cov(X_(i), X_(j)) = E X_(i)X_(j) − E X_(i) E X_(j) leads to (7.29). With

cov(U_(i), U_(j)) = i(N+1−j)/((N+1)²(N+2)) = λ_i(1−λ_j)/(N+2),  i ≤ j,  (7.30)

and (7.31), the covariance approximation reduces to

cov(X_(i), X_(j)) ≈ λ_i(1−λ_j) / ((N+2) f(F^{-1}(λ_i)) f(F^{-1}(λ_j))),  i ≤ j.  (7.32)

Here, it is assumed that F^{-1(1)}(u) exists at u = λ_i; consequently, F(X) and f(X) must be differentiable and continuous, respectively, at X = F^{-1}(λ_i). Empirical results have shown that the covariance approximations in (7.32) are more precise if the term N+2 in the denominator is replaced by N. Using either representation makes no difference in the end, since this term is cancelled during the normalization of the coefficients. In the following, N+2 is replaced by N for simplicity. The c_ij can now be calculated using (7.26) and (7.32) as
(7.33)

where the facts that λ_j = 1 − λ_{N−j+1} and f(F^{-1}(λ_j)) = f(F^{-1}(λ_{N−j+1})) were used. Once the matrix C has been approximated, its inverse C̄ = [c̄_ij] can be calculated as well:

c̄_ii ≈ 4N f²(λ_i)(N+1),  i = 1, ..., r−1,
c̄_rr ≈ 2N f²(λ_r)(N+1),
c̄_ij ≈ −2N f(λ_i) f(λ_j)(N+1),  |i−j| = 1,
c̄_ij ≈ 0,  |i−j| > 1,  (7.34)

where f(λ_i) is shorthand for f(F^{-1}(λ_i)).
These values can be substituted into (7.25) to calculate the coefficients b_i as the row sums

b_i = Σ_{j=1}^r c̄_ij.  (7.35)

To calculate the output of the filter, (7.24) must be evaluated. Since the coefficients b_i appear in both the numerator and the denominator of (7.24), the constant factors in (7.35) can be eliminated, leading to

b_1 = f(λ_1)[2f(λ_1) − f(λ_2)],
b_i = f(λ_i)[2f(λ_i) − f(λ_{i−1}) − f(λ_{i+1})],  i = 2, ..., r−1,
b_r = f(λ_r)[f(λ_r) − f(λ_{r−1})].  (7.36)
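These row-sum formulas can be coded and sanity-checked against Table 7.1's uniform-noise case, where the estimate should reduce to the midrange (all weight on the extreme pair). The common constants are dropped, as they cancel in the normalization of the filter coefficients; treat this as an illustrative sketch under the tridiagonal approximation of (7.34):

```python
import numpy as np

def nearly_best_b(f_vals):
    """Unnormalized coefficients b_1..b_r from density values f(lambda_i),
    taken as the row sums of the tridiagonal inverse-covariance
    approximation, with common constant factors dropped."""
    f = np.asarray(f_vals, float)
    b = np.empty(len(f))
    b[0] = f[0] * (2 * f[0] - f[1])
    b[1:-1] = f[1:-1] * (2 * f[1:-1] - f[:-2] - f[2:])
    b[-1] = f[-1] * (f[-1] - f[-2])
    return b

# Uniform noise on (0, 1): the density is constant, so only b_1 survives
# and the estimate reduces to the midrange, consistent with Table 7.1
b_uniform = nearly_best_b(np.ones(5))
```

Only density values at the r points λ_i are needed, which is precisely what makes this approach cheaper than tabulating exact order-statistic covariances.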
Having the coefficients b_1, ..., b_r, the actual filter coefficients can be calculated using (7.23). This method was used to recalculate the coefficients in Table 7.1, obtaining the values in Table 7.2.

7.2 L-SMOOTHERS

L-smoothers are obtained when L-estimates are computed at each location of a running window and, as such, they are more general than running median smoothers, as shown in Nodes and Gallagher (1982) [150], Bovik et al. (1983) [38], and Bednar and Watt
Table 7.2  Approximated weights for the L-estimate of location, N = 9 (adapted from [153]).
(1984) [30]. The application of L-smoothers to image denoising is illustrated in Pitas and Venetsanopoulos (1992) [162], and Kotropoulos and Pitas (1996) [118]. Adaptive optimization of L-smoothers is discussed in Kotropoulos and Pitas (1992) [117], Pitas and Venetsanopoulos (1991) [161], and Clarkson and Williamson (1992) [52].
DEFINITION 7.1 (RUNNING L-SMOOTHER)  Given a set of N real-valued weights W_1, W_2, ..., W_N assigned to the order statistics X_(1), X_(2), ..., X_(N) of the running window X(n) = [X_1(n), X_2(n), ..., X_N(n)], the L-smoother output is

Y(n) = Σ_{i=1}^N W_i X_(i)(n).

Note that if the weights are chosen uniformly as W_i = 1/N, the L-smoother reduces to the running mean. In fact, the running mean is the only smoother that is both linear and an L-smoother. Another example of an L-smoother with a long history is the running range Y(n) = X_(N) − X_(1), often used as an informal measure of the dispersion of the sample set [58].
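Definition 7.1 can be sketched as a running filter; edge samples are replicated here, which is one of several common boundary choices:

```python
import numpy as np

def l_smoother(x, w):
    """Running L-smoother: at each position, sort the window and take the
    weighted sum of its order statistics (Definition 7.1)."""
    N = len(w)
    x = np.asarray(x, float)
    xp = np.pad(x, N // 2, mode='edge')   # replicate edges for end effects
    y = np.empty_like(x)
    for n in range(len(x)):
        y[n] = np.dot(w, np.sort(xp[n:n + N]))
    return y
```

Uniform weights w = 1/N recover the running mean, while a single unit weight on the center order statistic recovers the running median, consistent with the remarks above.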
EXAMPLE 7.1 (AM DEMODULATION) Perhaps the simplest L-smoother is found by zero-weighting all order statistics except one; that is, W_i = 0 for i = 1, ..., r-1, r+1, ..., N and W_r = 1, leading to the rank-smoother

Y(n) = rth largest sample of [X_1(n), X_2(n), ..., X_N(n)],

with the median smoother as a special case. Notably, the nonlinearity of even this simple smoother can produce useful results [150]. The demodulation of AM signals is one such example, where the output of the rank-smoother is selected so as to track the envelope of the AM signal. Figure 7.1 depicts the AM demodulation of a 5-kHz tone on a 31-kHz carrier, sampled at 250 kHz, using an eighth ranked-order operation with a running window of size 9. Figure 7.1a shows the envelope detection when no noise is present, whereas Figure 7.1b shows the envelope detection in an impulsive noise environment. Note that while impulsive noise is very disruptive to most envelope detectors, the output of the rank-order filter is hardly perturbed by the noise.
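The demodulator of this example can be sketched as follows, using the signal parameters quoted above. The eighth order statistic of nine (the second-largest) stays close to the running maximum, yet a single positive impulse in a window cannot reach the output, which is precisely the robustness mechanism at work in Figure 7.1b.

```python
import numpy as np

fs, fc, fm = 250e3, 31e3, 5e3            # sampling, carrier, and tone frequencies
t = np.arange(0, 2e-3, 1 / fs)
envelope = 1.0 + 0.5 * np.cos(2 * np.pi * fm * t)
am = envelope * np.cos(2 * np.pi * fc * t)

def rank_smoother(x, n, r):
    """Output the r-th order statistic X_(r) (ascending) of each length-n window."""
    k = n // 2
    xp = np.pad(x, k, mode='edge')
    return np.array([np.sort(xp[i:i + n])[r - 1] for i in range(len(x))])

# Eighth order statistic of nine: a rank-smoother that tracks the envelope
# while rejecting isolated positive impulses.
detected = rank_smoother(am, n=9, r=8)
```

With these parameters a window of 9 samples spans slightly more than one carrier cycle, so the top order statistics of each window approximate the local envelope.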
Trimmed means are yet another subclass of L-smoothers that have received significant attention in statistics; see Hosking (1998) [102]. As described in Chapter 3, trimmed means are useful in situations where the signal under analysis may contain observations that are discordant with the rest of the signal. Because outlying values are mapped to the most extreme order statistics, trimmed smoothers give zero weight to these order statistics, leading to

Y(n) = 1/(N - 2r) sum_{i=r+1}^{N-r} X_(i)(n),

where r = ⌊αN⌋. Thus, the largest and smallest r observations, each representing a fraction α of the entire sample, are ignored when calculating the running mean. The appropriate choice of α, the amount of trimming, depends on the degree of robustness required: larger amounts of trimming provide higher protection against heavier-tailed noise. Crow and Siddiqui (1967) [57] suggest a value of α = 1/5 when possible distributions range from the Normal to the Laplacian, and values of α between 1/4 and 1/3 when noise distributions range from the Normal to the Cauchy.

As described by Bednar and Watt (1984) [30], trimmed means provide a connection between average smoothing and median smoothing. Consider a segment of the voiced waveform "a", shown at the bottom of Figure 7.2. This speech signal is applied at the input of several trimmed mean filters of size 9. The outputs of the trimmed means as the trimming parameter r is varied from zero to four are also shown in Figure 7.2, for r = 0, 1, 2, 3, 4. The vertical index denotes the trimming, where the top signal is the median filtered output, the second signal from the top is the output of the trimmed mean with r = 3, and the other trimmed means are displayed successively in Figure 7.2. The different characteristics of the filtered signals as the trimming is varied can be immediately seen. Notice that while the running mean results in a smooth blurring of the signal, the running median smooths the signal while retaining sharp discontinuities. This arises from the fact that the running median restricts the output value to be identical to the value of one of the input samples in the observation window. Depending on the amount of trimming, the alpha-trimmed filter removes narrow impulses but also does some edge smoothing. These smoothing properties are even more pronounced if the filtering is performed repeatedly over the same input signal. Figure 7.3 shows the speech signal that has been repeatedly filtered five consecutive times with the various trimmed mean filters. It is interesting to note that in the extreme trimming case of the median, after a few iterations the resultant signal is no longer modified by further median filtering.
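The progression from mean to median behavior can be sketched directly; the running trimmed mean below uses endpoint-replication padding as an implementation choice.

```python
import numpy as np

def trimmed_mean_filter(x, n, r):
    """Running trimmed mean: discard the r smallest and r largest order
    statistics in each length-n window and average the remaining n - 2r."""
    k = n // 2
    xp = np.pad(x, k, mode='edge')
    out = np.empty(len(x))
    for i in range(len(x)):
        s = np.sort(xp[i:i + n])
        out[i] = s[r:n - r].mean()   # r = 0: running mean; r = (n-1)//2: median
    return out

x = np.array([0., 0., 0., 50., 0., 0., 0.])   # a single narrow impulse
print(trimmed_mean_filter(x, n=5, r=0))       # the mean spreads the impulse
print(trimmed_mean_filter(x, n=5, r=1))       # one level of trimming removes it
```

With r = 0 the impulse is smeared across every window that contains it, while any positive amount of trimming rejects this single outlier entirely, which is the mean-to-median continuum discussed above.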
Figure 7.1 Rank-order AM demodulation. The window size is 9, and the output is the 8th largest sample in the window. The baseband signal is at 5 kHz with a carrier of 31 kHz; the sampling frequency is 250 kHz. (a) Noiseless reception. (b) Noisy reception with impulsive noise.
Figure 7.2 Trimmed mean filtering of a speech signal for various levels of trimming. The bottom waveform is the original speech signal.
Thus, it can be seen that there exist fixed-point signals of the median filter that are not affected by the running median. This is only true for the median filter and not for a general trimmed mean, since the averaging operation in the trimmed mean will inevitably modify the output of the filter after every filter pass.
7.3 Lℓ-FILTERS

Weighted medians admitting only positive weights were labeled "smoothers" in Chapter 5, since these lead to operations having low-pass characteristics. The low-pass characteristics of weighted median smoothers are well understood and are directly attributed to their nonnegative weights. L-smoothers, on the other hand, also exhibit low-pass characteristics even though positive and negative valued weights are allowed in their computation. In this case, the limitation does not arise as a consequence of the weight values, but from the fact that, prior to the weighting operation, the observation samples are sorted, and as a result their temporal ordering is lost. Thus, the L-smoother weights cannot exploit the time-ordering relationships of time series that are critical in signal processing applications.

Figure 7.3 Five passes of trimmed mean filtering of a signal with various levels of trimming.

To overcome this limitation of L-smoothers, methods that combine the rank-ordering and time-ordering of the observation samples have been developed. The class of Lℓ-filters is one approach, introduced by Palmieri and Boncelet (1989) [155] and independently by Gandhi and Kassam (1991) [79]. Their approach is to modify the weight structure of L-smoothers such that the weights depend on both the time-order and rank-order characteristics of the data.
DEFINITION 7.2 (Lℓ-FILTERS¹) Given the observation vector at time n, X(n) = [X_1(n), X_2(n), ..., X_N(n)]^T, where X_i(n) = X(n + i - (K + 1)) with N = 2K + 1, and the ranks R_i of each of the samples X_i, i = 1, 2, ..., N, the Lℓ-filter output is defined as

Y(n) = sum_{i=1}^{N} W_{i,R_i} X_i(n),    (7.39)
¹Also referred to as combination filters.
where the weight W_{i,R_i} given to the ith sample X_i depends on the sample's rank R_i. Since R_i can take on N possible values, there are up to N different weights per sample; the Lℓ-filter thus requires a total of N² weights. The name Lℓ-filter was coined borrowing notation from order statistics, L referring to the rank-ordering and ℓ referring to the time-ordering of the observation samples. While Definition 7.2 is succinct, it does not lead to a simple-to-optimize formulation, since the linear combination in (7.39) involves data-dependent weights. An equivalent definition, using vector notation, will prove useful in this regard. Dropping the temporal index n for notational simplicity, and expressing the temporal-order and rank-order observations as X = [X_1, X_2, ..., X_N]^T and X_L = [X_(1), X_(2), ..., X_(N)]^T, the N²-long vector X_Lℓ that combines the rank and temporal ordering is defined as [79, 155]

X_Lℓ = [X_1^(1), ..., X_1^(N) | X_2^(1), ..., X_2^(N) | ... | X_N^(1), ..., X_N^(N)]^T,    (7.40)

with

X_i^(j) = X_i if X_i ↔ X_(j), and X_i^(j) = 0 otherwise,    (7.41)
where X_i ↔ X_(j) denotes the event that the ith element in X is the jth smallest in the sample set. Thus, the ith input sample is mapped into the bin of samples X_i^(1), X_i^(2), ..., X_i^(N), of which N - 1 are zero and only one is nonzero, having the same value as X_i. The location of the nonzero sample, in turn, characterizes the ranking of X_i among the N input samples. Again, R_i represents the rank of X_i among the elements of X. For example, the ranks of the elements in the observation vector X = [3, 5, 2]^T are R_1 = 2, R_2 = 3, and R_3 = 1, leading to the Lℓ vector

X_Lℓ = [0, 3, 0 | 0, 0, 5 | 2, 0, 0]^T.
In the case of rank ties among a subset of input samples, stable sorting is performed, where a lower rank is assigned to the sample with the lower time index in the subset containing rank ties. The decomposition X ∈ R^N → X_Lℓ ∈ R^{N²} specified in (7.40) and (7.41) is a one-to-one nonlinear mapping, where X can be reconstructed from X_Lℓ as

X = (I_N ⊗ e_N^T) X_Lℓ,

where I_N is the N × N identity matrix, e_N is an N × 1 one-valued vector, and ⊗ is the matrix Kronecker product. Since the X_Lℓ vector contains both time- and rank-ordering information, it is not surprising that we can also obtain X_L from X_Lℓ as

X_L = (e_N^T ⊗ I_N) X_Lℓ.
EXAMPLE 7.2

Consider again the length-3 observation vector X = [3, 5, 2]^T, which is mapped to the Lℓ vector X_Lℓ = [0, 3, 0 | 0, 0, 5 | 2, 0, 0]^T. The vector X can be reconstructed from X_Lℓ as

X = (I_3 ⊗ e_3^T) [0, 3, 0 | 0, 0, 5 | 2, 0, 0]^T = [3, 5, 2]^T,

where e_3 = [1, 1, 1]^T and I_3 is the 3 × 3 identity matrix. Similarly, X_L is obtained from X_Lℓ as

X_L = (e_3^T ⊗ I_3) [0, 3, 0 | 0, 0, 5 | 2, 0, 0]^T = [I_3 I_3 I_3] [0, 3, 0 | 0, 0, 5 | 2, 0, 0]^T = [2, 3, 5]^T.

The decomposition X → X_Lℓ thus maps a vector with time-ordered samples into a vector whose elements are both time- and rank-ordered. The rank R_i of X_i determines the value of all the elements X_i^(1), X_i^(2), ..., X_i^(N), regardless of how the other samples in the window are ranked. Having the Lℓ vector representation, the filter output definition follows naturally.
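The decomposition and both Kronecker reconstruction identities can be verified numerically. In the sketch below, the helper name lell_vector is ours, and 0-based ranks stand in for the 1-based ranks of the text.

```python
import numpy as np

def lell_vector(x):
    """Map X (length N) to the N^2-long Lell vector: the i-th block holds
    X_i in the position given by its rank (stable sort breaks ties)."""
    n = len(x)
    ranks = np.argsort(np.argsort(x, kind='stable'))   # 0-based ranks
    xle = np.zeros(n * n)
    for i in range(n):
        xle[i * n + ranks[i]] = x[i]
    return xle

x = np.array([3., 5., 2.])
xle = lell_vector(x)
print(xle)                      # -> [0. 3. 0. 0. 0. 5. 2. 0. 0.]

n = len(x)
e = np.ones(n)
I = np.eye(n)
x_rec = np.kron(I, e) @ xle     # X   = (I_N kron e_N^T) X_Lell
x_sorted = np.kron(e, I) @ xle  # X_L = (e_N^T kron I_N) X_Lell
print(x_rec)                    # -> [3. 5. 2.]
print(x_sorted)                 # -> [2. 3. 5.]
```

Summing each block recovers the time-ordered sample, while summing across blocks position by position recovers the order statistics, exactly as in Example 7.2.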
DEFINITION 7.3 (Lℓ-FILTERS, VECTOR FORM) The output of the Lℓ-filter is given by the inner product

Y(n) = W^T X_Lℓ(n),

where the weight vector is W = [(W_1)^T | (W_2)^T | ... | (W_N)^T]^T, in which W_i = [W_i(1), W_i(2), ..., W_i(N)]^T is the N-long tap weight vector associated with the ith input sample.
7.3.1 Design and Optimization of Lℓ-filters

The simplest method to design robust Lℓ-filters is through the concept of trimming. Observations that are discordant with the main body of samples map to the most extreme order statistics; thus, Lℓ-filters that give zero weight to these order statistics are robust in nature. Trimming in the Lℓ weight vector is accomplished easily as

W_i = [0, W_i(2), ..., W_i(N-1), 0]^T,

where the first and last of the N weights associated with a sample are set to zero. The remaining N - 2 weights can be optimized, or they can all be assigned the corresponding weight values of an FIR filter so as to mimic the linear FIR spectral characteristics.

The optimization of Lℓ-filters, in general, is straightforward. The inner-product formulation of Lℓ-filters is well suited for the development of optimization algorithms, since the filter's output is linear with respect to the observation samples. The goal is to minimize the error e(n) between a desired signal D(n) and the Lℓ-filter estimate Y(n). The optimization follows the well-known Wiener filter solution for linear FIR filters, with the exception that the N² × N² correlation matrix of the Lℓ observation vector is used rather than the N × N correlation matrix associated with the observation vector X in the Wiener filter [99]. Under the mean-square-error criterion, the cost function is defined as J(W) = E{e²(n)}, where e(n) = D(n) - Y(n). For each value assigned to the filter weight vector W, a corresponding mean square estimation error occurs. The goal is then to obtain the weight vector values so that the error function is minimized. Using the vector definition of the Lℓ-filter, the cost function can be expressed as

J(W) = E[(D(n) - W^T X_Lℓ(n)) (D(n) - X_Lℓ^T(n) W)]    (7.50)
     = E{D²(n)} - 2 p_Lℓ^T W + W^T R_Lℓ W,    (7.51)

where p_Lℓ = E{D(n) X_Lℓ} and where R_Lℓ = E{X_Lℓ X_Lℓ^T} is the N² × N² symmetric correlation matrix. J(W) is quadratic in W; thus, for the different values of W_1(1), W_1(2), ..., W_N(N), J(W) defines a bowl-shaped surface in an (N² + 1)-dimensional space having a unique minimum. The global minimum is found through the gradient

∇_W J(W) = 2 R_Lℓ W - 2 p_Lℓ.

Setting the gradient to zero yields the optimal weights for the Lℓ-filter

W_o = R_Lℓ^{-1} p_Lℓ.
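In practice the moments above are replaced by sample averages over a training record, in which case W_o is the ordinary least-squares solution over the Lℓ regressors. The sketch below follows that route (the sinusoid-in-contaminated-Gaussian-noise training model is our own choice, and lstsq is used because finite-sample Lℓ correlation matrices can be near-singular).

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 5, 5000

# Training data: desired signal plus heavy-tailed (contaminated Gaussian) noise.
d = np.sin(2 * np.pi * 0.02 * np.arange(T))
noise = rng.normal(0, 0.3, T)
spikes = rng.random(T) < 0.05
noise[spikes] += rng.normal(0, 3.0, spikes.sum())
x = d + noise

def lell_vector(w):
    """N^2-long Lell vector of a window: block i holds X_i at its rank position."""
    n = len(w)
    ranks = np.argsort(np.argsort(w, kind='stable'))
    v = np.zeros(n * n)
    v[np.arange(n) * n + ranks] = w
    return v

# Stack the Lell regressors over running windows and solve the normal
# equations W_o = R^{-1} p in least-squares form.
k = N // 2
X = np.array([lell_vector(x[i - k:i + k + 1]) for i in range(k, T - k)])
D = d[k:T - k]
W, *_ = np.linalg.lstsq(X, D, rcond=None)
y = X @ W
print('training MSE:', np.mean((y - D) ** 2))
```

The fitted Lℓ weights jointly learn a temporal response and an order-statistic response, which is why the training error falls well below that of the raw noisy input.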
The optimal solution thus requires knowledge of the second-order moments of the rank- and time-ordered vector X_Lℓ and of the cross-correlation between this vector and the desired signal. In order to gain more intuition, the structure of the correlation matrix R_Lℓ = E{X_Lℓ X_Lℓ^T} can be further analyzed. Partitioning the Lℓ vector as

X_Lℓ = [X_1^T | X_2^T | ... | X_N^T]^T,

where X_i^T = [X_i^(1), X_i^(2), ..., X_i^(N)], the Lℓ correlation matrix can be written as the N × N block matrix

R_Lℓ = [R_uv], u, v = 1, 2, ..., N,    (7.54)

in which R_uv = E{X_u X_v^T} are the submatrices in (7.55). These submatrices are sparse, as a result of the fact that many ordered-sample combinations cannot occur; that is, two samples cannot have the same rank. In fact, all submatrices R_uu, for u = 1, 2, ..., N, are diagonal, having all their off-diagonal elements zero. Also, the submatrices R_uv for u ≠ v form off-diagonal matrices whose diagonal elements are all zero. The correlation matrix R_Lℓ for N = 3, for example, is the expectation of the elements in X_Lℓ X_Lℓ^T, where the structure of the submatrices is readily seen. This example provides a more intuitive understanding of the Lℓ correlation matrix. Note that for stationary observations, R_Lℓ is not Toeplitz element by element; in a stationary environment, however, the submatrices satisfy R_uv = R_{u+1,v+1}, and thus the correlation matrix R_Lℓ of all Lℓ estimators is block-Toeplitz.
EXAMPLE 7.3 (ROBUST WAVELET DENOISING) Denoising by wavelet shrinkage has evolved into a popular application of wavelet analysis. The key concept in wavelet denoising lies in the fact that most signals encountered in practice have a relatively small number of wavelet coefficients with significant energy. Wavelet shrinkage, developed by Johnstone and Donoho (1995) [64], decomposes a signal by a discrete wavelet transform and then selects the significant coefficients based on thresholding. Coefficients that fall below a threshold are removed, and those that exceed the threshold are either kept at their original magnitude (hard thresholding) or preserved with their magnitude reduced by the threshold level (soft thresholding). By an appropriate choice of the threshold value, signals corrupted by Gaussian noise are effectively denoised. The noise is assumed white, and its power is assumed to be much smaller than the signal power. Under these conditions, wavelet shrinkage has been shown to be superior to traditional linear filtering methods. Figure 7.4 depicts the wavelet shrinkage of the noisy blocks and doppler signals with soft thresholding. The noise is Gaussian, and a 3-level wavelet decomposition is used with Daubechies wavelet filter coefficients of length 6. The wavelet shrinkage removes some of the signal's power but is very effective at removing the white noise power that was uniformly distributed throughout the various levels of decomposition. A particular strength of wavelet shrinkage is its ability to preserve sharp features.
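A minimal one-level sketch of the shrinkage idea follows. The example in the text uses three levels with length-6 Daubechies filters; here a single Haar stage and the universal threshold of Donoho and Johnstone stand in, so the code stays self-contained.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (high-pass)
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft(c, t):
    """Soft thresholding: shrink coefficient magnitudes toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

rng = np.random.default_rng(1)
n, sigma = 1024, 0.5
clean = np.where(np.arange(n) % 256 < 128, 1.0, -1.0)   # a blocky signal
noisy = clean + rng.normal(0, sigma, n)

a, d = haar_dwt(noisy)
t = sigma * np.sqrt(2 * np.log(n))        # the universal threshold
denoised = haar_idwt(a, soft(d, t))
print('noisy MSE   :', np.mean((noisy - clean) ** 2))
print('denoised MSE:', np.mean((denoised - clean) ** 2))
```

Because the blocky signal's detail coefficients are essentially zero away from the edges, thresholding the detail band removes noise while the sharp transitions survive, illustrating the feature-preserving property noted above.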
Figure 7.4 Wavelet denoising of block and doppler signals corrupted by Gaussian noise.
When the noise power is no longer small in comparison to the signal power, or when the noise has heavier-than-Gaussian tails, traditional wavelet denoising is not as effective, since the noise components can lead to coefficients with significant magnitude in the wavelet decomposition that consequently cannot be removed by simple thresholding. Figure 7.5 depicts the block and doppler signals corrupted by contaminated Gaussian noise and the signal reconstructions after wavelet shrinkage. The "Gaussian" component of the noise in the reconstruction attained by wavelet shrinkage is effectively filtered out, but the majority of the outlier samples remain.
Figure 7.5 Wavelet denoising of block and doppler signals corrupted by contaminated Gaussian noise.
One approach to overcome the effects of outliers is to prefilter the input signal with a median or similar filter prior to applying wavelet shrinkage. Prefiltering will remove the impulsive noise, but it will also affect the underlying signal structure, as the characteristics of wavelet shrinkage are not used at this initial stage. Lau et al. (1996) [123] introduced a different approach to robust signal denoising. In their approach, the low-pass and high-pass filters used in the first stage of the wavelet decomposition are replaced by Lℓ filters designed to mimic the finite impulse response of the wavelet FIR filters. The Lℓ filter weights are selected such that the smallest and largest samples in the observation vector are weighted by zero. The Lℓ weights associated with the ith input sample in the running filter are thus W_i = [0, W_i(2), ..., W_i(N-1), 0]^T. The remaining coefficients in W_i are optimized so that the Lℓ filter response mimics that of the corresponding wavelet filter when no noise is present. Since outliers are for the most part removed in the first level of the decomposition, the filters used in the remaining levels of the decomposition, and in the reconstruction after shrinkage, are simply the traditional wavelet FIR filters. Figure 7.6 depicts the denoising of the block and doppler signals corrupted by contaminated Gaussian noise. The outputs of the optimal Lℓ filter with 12 data-dependent coefficients are shown in the figure. The output is free of outliers, but the "Gaussian" component of the noise is not removed adequately. The output of the robust wavelet shrinkage, on the other hand, eliminates both the Gaussian and non-Gaussian noise components. The robust wavelet shrinkage in this example uses three levels of decomposition and length-6 Daubechies filter coefficients at all levels, with the exception of the first level of decomposition, where Lℓ filters that mimic the Daubechies filter response are designed.
In Lℓ-filtering, the weight given to a sample X_i is determined by the sample's rank R_i; thus, X_i is weighted by one of N possible weights stored in memory. Kim and Arce (1994) [114] showed that in some cases it is useful to extend this concept such that the weight given to X_i is not only dependent on its rank R_i, but also on the rank of some other sample, X_j, in the observation window. In this case, X_i is given one of N(N-1) weights stored in memory². A total of N²(N-1) weights would be required for the weighting of the N observation samples. This concept can be generalized progressively such that, at the end, the weight given to a sample X_i depends not only on its rank R_i, but also on the ranks of the remaining N - 1 samples in the window. In the most general case, a sample in the so-called permutation filter would be assigned one of N! weights [26, 155].

DEFINITION 7.4 (PERMUTATION FILTERS) Given the observation vector at time n, X(n) = [X_1, X_2, ..., X_N]^T, and the ranks R_i of each of the samples X_i, i = 1, 2, ..., N, the permutation filter output is defined as the linear combination

²For each value of R_i, there are an additional N - 1 possible values that R_j can take.
Figure 7.6 Lℓ filtering and robust wavelet denoising of block and doppler signals corrupted by contaminated Gaussian noise.
Y(n) = sum_{i=1}^{N} W_{i,p_X} X_i(n),    (7.56)

where the weight W_{i,p_X} given to the ith sample X_i is indexed by the permutation p_X mapping the index elements in X into the corresponding ranks R_i:

p_X = ( 1    2   ...  N  )
      ( R_1  R_2 ...  R_N ).

As an example, the index permutation associated with the observation X = [3, 5, 2]^T is

p_X = ( 1  2  3 )
      ( 2  3  1 ).
A permutation filter, in this example, would allow 3! weights per sample; the sample permutation shown above would point to one weight in this set. Permutation filters in their most general form are not practical, as their memory requirements grow factorially with N. Several approaches to simplify their structure have been proposed by Barner and Arce (1986) [26], Kim and Arce (1994) [114], and Palmieri and Boncelet (1989) [155]. The formulation in [114] is particularly useful, as it provides a progressive coloring of the permutation space. Consider the observation vector X = [X_1, X_2, ..., X_N]^T and its corresponding sorted vector X_L = [X_(1), X_(2), ..., X_(N)]^T. Define the rank indicator vector
R_i = [R_i1, R_i2, ..., R_iN]^T,

where

R_ik = 1 if X_i ↔ X_(k), and R_ik = 0 otherwise,    (7.60)

and where X_i ↔ X_(k) denotes that the ith temporal sample occupies the kth order statistic. The variable R_i is then defined as the rank of X_i; hence, R_{iR_i} = 1 by definition. Assuming the rank indicator vector R_i is specified, if we would like to jointly characterize the ranking characteristics of X_i and its adjacent sample X_{i+1} in X, then an additional indicator vector is needed that does not contain the information provided by R_i. Hence, we define the reduced rank indicator of X_{i+1} as R_i^1, obtained by removing the R_i-th element from the rank indicator vector R_{i+1}. The two indicators R_i and R_i^1 fully specify the rank permutation characteristics of X_i and X_{i+1}. We can generalize this concept by characterizing the rank permutation characteristics of a set of j samples. Here, a reduced rank indicator R_i^a is formed by removing the R_i-th, R_{i⊕1}-th, ..., R_{i⊕(a-1)}-th elements from the rank indicator vector R_{i⊕a}, where ⊕ denotes modulo-N addition, i ⊕ a = (i + a) Mod N.³ The parameter a specifies the sample whose rank information is being considered in addition to the rank information of the samples (X_i, X_{i⊕1}, ..., X_{i⊕(a-1)}); that is, a = 1 considers the rank information of the sample X_{i⊕1} when the rank information of the sample X_i is known, and a = 2 considers the rank information of the sample X_{i⊕2} when the rank information of the samples (X_i, X_{i⊕1}) is known. For example, if X = [6, 3, 10, 1]^T and X_L = [1, 3, 6, 10]^T, then the rank indicator vectors and their respective rank parameters are
³The Modulo N operation defined here is in the group {1, 2, ..., N}, such that (N Mod N = N) and (N + 1 Mod N = 1).
R_1 = [0, 0, 1, 0]^T, R_1 = 3;  R_2 = [0, 1, 0, 0]^T, R_2 = 2;  R_3 = [0, 0, 0, 1]^T, R_3 = 4;  R_4 = [1, 0, 0, 0]^T, R_4 = 1.    (7.61)
The reduced rank indicator vectors R_3^1 and R_3^2 are, for instance,

R_3^1 = [1, 0, 0, ∅_{R_3}]^T = [1, 0, 0]^T,    (7.62)
R_3^2 = [∅_{R_4}, 0, 1, ∅_{R_3}]^T = [0, 1]^T,

where the R_3-th element was removed from R_{3⊕1} = R_4 to obtain R_3^1, and where the R_3-th and R_4-th elements were deleted from R_{3⊕2} = R_1 to get R_3^2. Note that the notation ∅_{R_i} used in (7.62) represents the deletion of the element occupying the R_i-th position of the rank indicator vector. R_3^1 indicates that the sample X_4 is the first-ranked sample among (X_1, X_2, X_4), and similarly R_3^2 indicates that X_1 is the second-ranked sample among (X_1, X_2). The general idea behind the reduced rank indicator R_i^a is that it characterizes the rank information of the sample X_{i⊕a} under the situation in which the rank information of the samples (X_i, X_{i⊕1}, ..., X_{i⊕(a-1)}) is known.

The rank indicator vector and the reduced rank indicator vectors are next used to define the rank permutation indicator P_i^j as

P_i^j = R_i ⊗ R_i^1 ⊗ ... ⊗ R_i^{j-1},    (7.63)

for 1 ≤ j ≤ N, where ⊗ denotes the matrix Kronecker product. Note that while the vector R_i is of length N, the vector P_i^j in (7.63) has length P_N^j ≜ N(N-1)···(N-j+1), which is the number of permutations of j samples chosen from N distinct samples. The vector P_i^j effectively characterizes the relative ranking of the samples (X_i, X_{i⊕1}, ..., X_{i⊕(j-1)}), that is, the rank permutation of (X_i, X_{i⊕1}, ..., X_{i⊕(j-1)}). Hence, P_i^0 does not unveil any rank information, whereas P_i^1 provides the rank information of X_i but ignores the ranks of the other N - 1 input samples. Similarly, P_i^2 provides the rank information of X_i and X_{i⊕1} but eludes the ranking information of the other N - 2 input samples. Clearly, P_i^N accounts for the ranks of all input samples X_1 through X_N.

In order to illustrate the formulation of the vector P_i^j, again let the observation vector take on the values X = [6, 3, 10, 1]^T and X_L = [1, 3, 6, 10]^T. The rank indicator vectors R_i for this example vector X were listed in (7.61); the rank permutation indicators for j = 2 are then found as

P_1^2 = R_1 ⊗ R_1^1 = [0, 0, 1, 0]^T ⊗ [0, 1, 0]^T,    (7.64)
P_2^2 = R_2 ⊗ R_2^1 = [0, 1, 0, 0]^T ⊗ [0, 0, 1]^T,
P_3^2 = R_3 ⊗ R_3^1 = [0, 0, 0, 1]^T ⊗ [1, 0, 0]^T,
P_4^2 = R_4 ⊗ R_4^1 = [1, 0, 0, 0]^T ⊗ [0, 1, 0]^T.
To see how P_i^2 characterizes the rank permutation, let us carry out the matrix Kronecker product in the first equation in (7.64), that is,

P_1^2 = [(0,0,0), (0,0,0), (0,1,0), (0,0,0)]^T,

where the parentheses are included for ease of reference. Note that the 1 located in the second position of the third group in P_1^2 implies that the rank of X_1 is 3 and that the rank of X_2 among (X_2, X_3, X_4) is 2. Thus, the P_1^2 obtained in this example clearly specifies the rank permutation of X_1 and X_2 as (R_1, R_2) = (3, 2). Notice that the vectors P_i^3 can be found recursively from (7.64) as P_i^3 = P_i^2 ⊗ R_i^2; in general, it can easily be seen from (7.63) that this recursion is given by P_i^j = P_i^{j-1} ⊗ R_i^{j-1}. The rank permutation indicator forms the basis for the rank permutation vector X_{Lʲℓ}, defined as the N P_N^j-long vector

X_Lʲℓ = [X_1 (P_1^j)^T | X_2 (P_2^j)^T | ... | X_N (P_N^j)^T]^T.    (7.66)
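The worked example above can be reproduced numerically. In the sketch below the helper names indicator and reduced are ours, sample indices are 0-based while ranks stay 1-based, and the Kronecker product is taken with np.kron.

```python
import numpy as np

X = np.array([6., 3., 10., 1.])
N = len(X)
ranks = np.argsort(np.argsort(X, kind='stable')) + 1    # 1-based ranks: [3 2 4 1]

def indicator(i):
    """Rank indicator R_i (0-based sample index i): a 1 at X_i's rank position."""
    r = np.zeros(N)
    r[ranks[i] - 1] = 1.0
    return r

def reduced(i, a):
    """Reduced indicator R_i^a: the indicator of X_{i(+)a} with the rank
    positions of X_i, ..., X_{i(+)(a-1)} deleted (indices modulo N)."""
    r = indicator((i + a) % N)
    drop = [ranks[(i + b) % N] - 1 for b in range(a)]
    return np.delete(r, drop)

# Rank permutation indicator P_1^2 = R_1 (kron) R_1^1.
P1_2 = np.kron(indicator(0), reduced(0, 1))
print(P1_2.reshape(N, N - 1))   # the single 1 encodes (R_1, R_2) = (3, 2)
```

The single nonzero entry of P_1^2 lands in row 3, column 2 of the reshaped array, matching the (R_1, R_2) = (3, 2) reading given in the text.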
Note that X_Lʲℓ places each X_i based on the ranks of j time-ordered samples (X_i, X_{i⊕1}, ..., X_{i⊕(j-1)}). Consequently, we refer to it as the Lʲℓ vector, where the superscript j stands for the j sample ranks used to determine the weight given to each observation sample. It should be mentioned here that there are other ways of defining rank permutation indicators. For instance, we could let P_i^j characterize the rank permutation of the samples (X_{i⊕1}, X_{i⊕3}, ..., X_{i⊕(2j+1)}), or it could characterize the rank permutation of (X_1, X_2, ..., X_j) regardless of the index i. Here, we use the definition of P_i^j in (7.63), since it provides a systematic approach to the design.
DEFINITION 7.5 (PERMUTATION Lʲℓ FILTERS) Given the observation vector X = [X_1, X_2, ..., X_N]^T and the corresponding Lʲℓ vector of length N P_N^j given in (7.66), the Lʲℓ estimate is defined as

Y_j(n) = W_Lʲℓ^T X_Lʲℓ,    (7.67)

where the weight vector is

W_Lʲℓ = [(W_1)^T | (W_2)^T | ... | (W_N)^T]^T,    (7.68)

in which W_i is the P_N^j-long tap weight vector, and where Y_j(n) is the Lʲℓ estimate. Notice that for j = 0, the permutation filter L⁰ℓ reduces to a linear FIR filter. For j = 1, the permutation filter is identical to the Lℓ filter introduced earlier.
Optimization. Given the Lʲℓ filtering framework, the goal is to minimize the error e(n) between the desired signal D(n) and the permutation filter estimate. Under the MSE criterion, the optimization is straightforward, since the output of the Lʲℓ filter is linear with respect to the samples in X_Lʲℓ. Hence, it is simple to show that the optimal Lʲℓ filter is found as

W_o^Lʲℓ = R_Lʲℓ^{-1} p_Lʲℓ,    (7.69)

where

p_Lʲℓ = E{D(n) X_Lʲℓ},    (7.70)

and R_Lʲℓ is the N P_N^j × N P_N^j correlation matrix

R_Lʲℓ = E{X_Lʲℓ X_Lʲℓ^T},    (7.71)

in which the submatrices are

R_uv^Lʲℓ = E{X_u X_v P_u^j (P_v^j)^T}.    (7.72)

From (7.72), it can be seen that each diagonal submatrix R_uu^Lʲℓ of R_Lʲℓ is a diagonal matrix whose off-diagonal elements are all zero. Also, the off-diagonal submatrices R_uv^Lʲℓ for u ≠ v form off-diagonal matrices whose diagonal elements are all zero. The solution of the optimal filter in (7.69) will be unique only when the correlation matrix in (7.71) is nonsingular. If the correlation matrix is singular, a solution can still be found by use of the singular value decomposition [99]. In most applications encountered in practice, however, where broadband noise is present, the Lʲℓ autocorrelation matrices will be nonsingular.
Lℓ filters and Lʲℓ filters exploit the rank and temporal ordering characteristics of the data. The major drawback of these filters, however, is the large number of filter weights needed. Their complexity increases very rapidly with the window size N and with the parameter j, limiting practical implementations to relatively small sample sizes. Alternative filtering approaches that combine the rank-ordering and temporal-ordering of the underlying signals have been developed, as described next.

7.5.1 Median and FIR Affinity Trimming
An alternative approach to combining the attributes of linear FIR filters and L-filters is through modified sample-trimming strategies that exploit the temporal and rank characteristics of the data. Recall that trimmed means discard a fraction α of all observation samples, regardless of their dispersion characteristics. When outliers are not present, trimmed means rapidly lose efficiency with increasing α. Several location estimators have been proposed to overcome this limitation of trimmed means. Lee and Kassam (1985) [125] introduced the modified trimmed mean (MTM), where samples that differ considerably from the sample median are discarded prior to averaging. The MTM location estimate is formed as

Y(n) = sum_{i=1}^{N} a_i X_i / sum_{i=1}^{N} a_i,    (7.73)

where

a_i = 1 if |X_i - MEDIAN(X_1, ..., X_N)| ≤ q, and a_i = 0 otherwise,

and where q is a user-defined parameter that determines the amount of trimming. A similar approach to trimming, using distances to a reference point, was proposed by Pomalaza and McGillem (1984) [164]. Much like trimmed means, modified trimmed means are robust estimators that can be used as running smoothers. A variation of (7.73), referred to as the double-window modified trimmed mean (DWMTM), employs two overlapping smoother windows [125]; in this case, the sample median of the smaller running window is used as a reference point. Another related class of smoothers takes on the form (7.73), with the difference that the trimming reference point is not the sample median but the center sample in the running window. The K-nearest neighbor smoother [59] and the sigma smoother [124] are two smoothers with this trimming structure, where the coefficients a_i take on the values

a_i = 1 if |X_i - X_c| ≤ q, and a_i = 0 otherwise,

where X_c is the center sample in the observation window. Thus, those samples with values close enough to the central sample X_c are averaged. The parameter q is used to trim out the samples whose values differ significantly from the value of X_c. This structure has remarkable detail-preserving characteristics but is very fragile to outliers and impulsive noise. Having the modified trimmed mean and the K-nearest neighbor estimators at hand, it is natural to extend these smoothers to a filtering structure where samples are not only trimmed and averaged, but are temporally weighted as well. In this manner, the rank and temporal ordering characteristics are captured at once.
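Both trimming rules above can be sketched as running smoothers; only the reference point differs between them. The endpoint padding is an implementation choice of this sketch.

```python
import numpy as np

def mtm_filter(x, n, q):
    """Modified trimmed mean: average only the samples within q of the
    window median (the median is the trimming reference point)."""
    k = n // 2
    xp = np.pad(x, k, mode='edge')
    out = np.empty(len(x))
    for i in range(len(x)):
        w = xp[i:i + n]
        keep = np.abs(w - np.median(w)) <= q
        out[i] = w[keep].mean()
    return out

def knn_filter(x, n, q):
    """K-nearest-neighbor-style smoother: the reference point is the center
    sample (detail-preserving, but fragile to impulsive noise)."""
    k = n // 2
    xp = np.pad(x, k, mode='edge')
    out = np.empty(len(x))
    for i in range(len(x)):
        w = xp[i:i + n]
        keep = np.abs(w - w[k]) <= q
        out[i] = w[keep].mean()
    return out

edge = np.array([0., 0., 0., 10., 10., 10.])
print(mtm_filter(edge, n=3, q=1.0))   # the step edge passes through unchanged
```

On a clean step edge both smoothers leave the signal untouched, while an isolated impulse is rejected by the MTM but passed by the K-nearest-neighbor rule when it sits at the window center, which is the fragility noted above.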
Weighted Median Affine Filters. Introduced by Flaig et al. (1998) [73], weighted median affine filters use a weighted median as the trimming reference point, the trimming is soft rather than hard, and the samples are weighted and averaged according to their temporal ordering.

DEFINITION 7.6 Given the set of N observations {X_1, X_2, ..., X_N} in an observation window, a set of N real-valued affinity weights {C_1, C_2, ..., C_N}, and a set of N filter weights {W_1, W_2, ..., W_N}, the trimming reference μ(n) is defined as the weighted median

μ(n) = MEDIAN(|C_1| ◊ sgn(C_1)X_1, ..., |C_N| ◊ sgn(C_N)X_N),

where |C| ◊ X denotes the replication operator, X, X, ..., X (|C| times). The (normalized) WM affine FIR filter is defined as

Y(n) = (1/K(n)) sum_{i=1}^{N} W_i g(X_i) X_i,    (7.77)

where K(n) is the normalization constant K(n) = sum_{i=1}^{N} |W_i| g(X_i). The function g(·) measures the affinity of the ith observation sample with respect to the weighted median reference μ(n). The dispersion parameter γ is user-defined.
Figure 7.7 The affinity function assigns a low or high affinity to the sample X_i depending on the location and dispersion parameters μ(n) and γ(n).

The filter structure in (7.77) weights each observation twice: first, according to its reliability through g(·), and second, according to its natural order through the W_i's. WM affine estimates are therefore based on observations that are both reliable and favorable due to their natural order. Observations that fail to meet either, or both, criteria have only a limited influence on the estimate. The affinity function can take on many forms. The exponential distance

g(X_i) = exp(-(X_i - μ(n))^2 / γ^2)   (7.78)

is commonly used. Figure 7.7 depicts the affinity measure assigned by (7.78) as samples deviate from the point of reference μ(n). When the context is clear, WM affine FIR filters are referred to as WM affine filters. Note that the values of g(·) depend on the distance of the observation X_i to the weighted median μ(n). While the affinity weighting leaves samples located close to μ(n) unaltered, the magnitude of samples distant from μ(n) is, in general, reduced. Note that the total weight ascribed to the observations in (7.77) varies. Thus, the normalization K(n) is needed to guarantee unbiasedness of the filter as a location estimator or low-pass filter, for instance. The flexibility provided by the tunable affinity function translates to the filter characteristics of the estimator. By varying the dispersion parameter γ, certain properties of the WM affine filter can be stressed: large values of γ emphasize the linear properties of the filter, whereas small values of γ put more weight on its order-statistic properties. Of special interest are the limiting cases. For γ → ∞, the affinity function is constant on its entire domain. The estimator therefore weights all observations merely according to their natural order, that is,

lim_{γ→∞} Ŷ(n) = Σ_{i=1}^N W_i X_i / Σ_{i=1}^N |W_i|,

and the WM affine estimator reduces to a normalized linear FIR filter. For γ → 0, on the other hand, the affinity function shrinks to a δ-impulse at μ(n). Thus, the weights W_i are disregarded and the estimate is equal to the weighted median μ(n), such that

lim_{γ→0} Ŷ(n) = μ(n).
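The two-stage weighting of (7.77) can be sketched directly in code. The Gaussian-shaped affinity g(x) = exp(-(x - μ)^2/γ^2), the sample data, and the unit weights below are illustrative assumptions, not design values from the text:

```python
import math

def weighted_median(samples, weights):
    """Weighted median: generalizes the MEDIAN(|C| <> X) replication
    definition to real-valued nonnegative weights."""
    pairs = sorted(zip(samples, weights))
    half = sum(w for _, w in pairs) / 2.0
    acc = 0.0
    for x, w in pairs:
        acc += w
        if acc >= half:
            return x

def wm_affine(samples, C, W, gamma):
    """Normalized WM affine FIR filter of (7.77), with an assumed
    Gaussian-shaped affinity g(x) = exp(-(x - mu)^2 / gamma^2)."""
    # Trimming reference mu(n): weighted median with signed affinity weights.
    mu = weighted_median([math.copysign(1.0, c) * x for x, c in zip(samples, C)],
                         [abs(c) for c in C])
    g = [math.exp(-((x - mu) ** 2) / gamma ** 2) for x in samples]
    K = sum(abs(w) * gi for w, gi in zip(W, g))
    # Each sample is weighted twice: by its reliability g and its weight W.
    return sum(w * gi * x for w, gi, x in zip(W, g, samples)) / K

x = [1.0, 1.2, 100.0, 0.9, 1.1]   # a tight cluster plus one gross outlier
C = [1.0] * 5
W = [1.0] * 5
print(wm_affine(x, C, W, gamma=1.0))   # ~1.05: the outlier has affinity ~0
print(wm_affine(x, C, W, gamma=1e6))   # ~20.84: reduces to the sample mean
```

For large γ all affinities approach one and the output is the normalized linear FIR estimate (here the sample mean); for small γ the filter trims samples far from the weighted median reference, as the limiting cases above state.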
The WM affine filter assumes a particularly simple form when the reference point is equal to the sample median, which has proven useful as a reference point in the MTM filter. Accordingly, this estimator is referred to as the median affine filter.
Figure 7.8 Structure of the median affine filter.
Figure 7.8 shows a schematic diagram of the (unnormalized) median affine filter. The similarity to a linear transversal FIR filter is obvious. It is interesting to note that the median affine
estimator subsumes the MTM filter [125]. The latter is obtained by using an affinity function with rectangular shape and uniform weights.
Medianization of Linear FIR Filters In order to apply median affine filters, design procedures to determine the dispersion parameter γ and the values of the weights C_i and W_i, for i = 1, 2, ..., N, are needed. A gradient optimization approach is derived in [73]. This adaptive method, similar in structure and complexity to the LMS algorithm, requires a desired training signal. The set of weights designed by this algorithm minimizes the mean square error criterion, assuming stationary observations. Table 7.3 summarizes the adaptive algorithm that can be used to optimize the affinity function parameters and filter weights. An alternate design approach that is simple and intuitive can be derived from the fact that the median affine filter behaves like a linear FIR filter for γ → ∞. Setting γ to a large initial value, one can take advantage of the multitude of linear filter design
Table 7.3 Summary of the Median Affine Adaptive Optimization Algorithm

Initial Conditions:
  N = number of taps
  ν_w = positive weight adaptation constant
  ν_γ = positive dispersion adaptation constant
  W_i(n) = 0, i = 1, 2, ..., N; γ set to a large value

Data:
  (a) Given: the N observation samples X_i at location n and the desired response at time n, D(n)
  (b) Compute: W_i(n+1) = estimate of tap weight at time n+1, i = 1, ..., N
methods to find the W_i coefficients of the median affine filter. Holding the W_i's constant, the filter performance can, in general, be improved by gradually reducing the value of γ until a desired level of robustness is achieved. During the actual filtering process γ is fixed. Since this process strengthens the median-like properties while weakening the influence of the FIR filter weights, this design approach is referred to as the medianization of a linear FIR filter.
FIR Affine L-Filters The trimming affinity function used in (7.74) can be used within the L-filter framework to define the class of FIR affine L-filters, which are dual to WM affine FIR filters. In this case, the affinity function is utilized to measure the distance of the order statistics to the reference point that is defined as an FIR estimate. Therefore, the mode of the affinity function is positioned on the FIR estimate. Order statistics that are distant from the FIR filter output reference are discarded from the L-estimate. The sigma and K-nearest neighbor estimates are special cases of this filter class. The FIR affine L-filter is defined as follows:

DEFINITION 7.7 Consider the observations X_1, X_2, ..., X_N and their corresponding order statistics X_(1), X_(2), ..., X_(N). Given a set of N affinity weights {C_1, C_2, ..., C_N}, and a set of N filter weights {W_1, W_2, ..., W_N}, the trimming reference μ(n) is the FIR filter output μ(n) = Σ_{i=1}^N C_i X_i. The (normalized) FIR affine L-filter is then defined as:

Ŷ(n) = (1/K(n)) Σ_{i=1}^N W_i g(X_(i)) X_(i),   (7.81)

where K(n) is the normalization constant K(n) = Σ_{i=1}^N |W_i| g(X_(i)). The function g(·) measures the affinity of the ith order statistic X_(i) with respect to the FIR filter output reference μ(n). The dispersion parameter γ is user defined.
When the context is clear, FIR affine L-filters are referred to as FIR affine filters. FIR affine filters weight the order statistics first according to their affinity to the FIR estimate μ(n), and second according to their rank. The estimate, therefore, is based mainly on those order statistics that are simultaneously close to the FIR estimate and preferable due to their rank-order. The affinity function can take on many forms. The exponential distance

g(X_(i)) = exp(-(X_(i) - μ(n))^2 / γ^2)   (7.82)

is commonly used. Like the WM affine filter, FIR affine filters reduce to their basic structure at the limits of the dispersion parameter γ:

lim_{γ→0} Ŷ(n) = Σ_{i=1}^N C_i X_i   (7.83)

and

lim_{γ→∞} Ŷ(n) = Σ_{i=1}^N W_i X_(i) / Σ_{i=1}^N |W_i|.   (7.84)

Thus, the FIR affine L-filter reduces to an FIR filter with coefficients C_i and to a (normalized) L-filter with coefficients W_i for γ → 0 and γ → ∞, respectively. A special case of the FIR affine filter emerges when the coefficients C_i are chosen such that the FIR filter reduces to an identity operation, such that the order statistics are related to the center observation sample. The obtained estimator is referred to as the center affine filter. Clearly, the center affine filter reduces to an identity operation, that is, it is linearized, for γ → 0.
A powerful tool for the analysis of signals is their time-frequency representations (TFRs). The Wigner distribution (WD)

WD_x(t, f) = ∫ x(t + τ/2) x*(t - τ/2) e^{-j2πfτ} dτ

in particular satisfies a number of desirable mathematical properties and features optimal time-frequency concentration; see Cohen (1995) [53]. Its use in practical applications, however, has been limited by the presence of cross terms. The Wigner distribution of the sum of two signals x(t) + y(t),

WD_{x+y}(t, f) = WD_x(t, f) + 2 Re(WD_{x,y}(t, f)) + WD_y(t, f),

includes the cross term 2 Re(WD_{x,y}(t, f)), where WD_{x,y} is defined as:

WD_{x,y}(t, f) = ∫ x(t + τ/2) y*(t - τ/2) e^{-j2πfτ} dτ.
Cross terms are problematic, especially if the WD is to be studied by a human analyst. Consequently, they have been studied extensively. It is known that cross terms lie between two auto components and are oscillatory with a frequency that increases with the time-frequency distance between the auto components, leading to oscillations of relatively high frequency (see Fig. 7.9a). These oscillations can be attenuated by a smoothing operation that usually produces: (a) a (desired) partial attenuation of the interference terms, (b) an (undesired) broadening of signal terms, that is, a loss of time-frequency concentration, and (c) a (sometimes undesired) loss of some of the mathematical properties of the WD (see Fig. 7.9b). A filtering scheme that achieves effect (a) while avoiding effect (b), and also effect (c) if required, is needed. Beginning with a time-frequency distribution, in this case the discrete WD (DWD), a filtering process should be carried out over the whole TFR plane. A center affine filter, that is, an affine filter whose reference point is the center sample in the observation window, provides a solution to this problem; see Arce and Hasan (2000) [11]. The requirements of the problem are to obtain a response equal to zero if the observation window is in the cross-term region, and equal to the center sample of the window if the observation window is in the auto-term region. To achieve this, some specifications about the filtering structure need to be made:

(1) The Gaussian affinity function is used to calculate the affinity of the samples.

(2) The absolute value of the samples is used instead of their actual value to calculate the affinities. The reference point is set as the absolute value of the center sample.

(3) The tuning parameter γ is made proportional to the local variance at which the window is centered. Thus, a higher value of γ is obtained for observations in the cross-term region and a smaller one for observations in the auto-term region.

(4) The TFR is obtained by filtering the original WD samples using the affinity functions of the corresponding absolute-valued samples.
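A minimal sketch of this scheme, assuming an unweighted (equal rank weight) center affine structure, a Gaussian affinity, and an illustrative proportionality constant c relating γ to the local variance, is:

```python
import math

def center_affine_tfr(plane, L=5, l=3, c=1.0):
    """Center affine filtering of a discrete WD plane, following the four
    specifications above. plane: 2-D list (time x frequency); L: window
    side; l: subwindow side for the local variance; c: assumed constant
    of proportionality for gamma."""
    rows, cols = len(plane), len(plane[0])
    r, s = L // 2, l // 2
    out = [row[:] for row in plane]          # borders are left unfiltered
    for t in range(r, rows - r):
        for f in range(r, cols - r):
            win = [plane[t + i][f + j]
                   for i in range(-r, r + 1) for j in range(-r, r + 1)]
            sub = [plane[t + i][f + j]
                   for i in range(-s, s + 1) for j in range(-s, s + 1)]
            # (3) gamma proportional to the local variance: large over the
            # oscillatory cross-term region, small over auto-term regions.
            m = sum(sub) / len(sub)
            gamma = c * sum((v - m) ** 2 for v in sub) / len(sub) + 1e-12
            # (1)-(2) Gaussian affinity of |sample| to the |center sample|.
            ref = abs(plane[t][f])
            g = [math.exp(-(abs(v) - ref) ** 2 / gamma) for v in win]
            # (4) filter the original signed WD samples with the affinities.
            out[t][f] = sum(gi * v for gi, v in zip(g, win)) / sum(g)
    return out

flat = [[5.0] * 9 for _ in range(9)]         # auto-term-like region
print(center_affine_tfr(flat)[4][4])         # 5.0: the center is preserved
checker = [[1.0 if (i + j) % 2 == 0 else -1.0 for j in range(9)]
           for i in range(9)]                # oscillatory cross-term-like
print(center_affine_tfr(checker, c=10.0)[4][4])  # ~0: the terms average out
```

On a constant (auto-term-like) patch the local variance vanishes, γ is tiny, and the center sample passes through; on an oscillatory patch γ is large, the whole window is averaged, and the alternating samples cancel toward zero.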
To estimate the tuning parameter γ, the variance of some selected samples of the observation window (a subwindow of size l × l around the center sample) is calculated. In the cross-term region, the samples chosen to calculate the variance are positive and negative and have similar magnitudes. The corresponding variance will be high, and so will be γ. On the other hand, the samples in the auto-term region are all positive and have similar magnitudes, leading to a much lower variance and a low value of γ. The test signal X(n) is a 128-point computer-generated signal made up of a Gaussian pulse and a parabolic frequency-modulated component, each localized in time by a gating function r_{a,b}(n):

r_{a,b}(n) = 1 for a ≤ n ≤ b, and 0 otherwise.   (7.89)

Figure 7.9a shows the DWD of the signal, having auto components that are well localized but numerous high-amplitude oscillating cross terms. Figure 7.9b shows the pseudo-smoothed DWD of the test signal (using a 13-point Gaussian time smoothing window and a 31-point Gaussian frequency smoothing window), reducing the cross terms by smoothing in both the frequency and time directions. Its interpretation is much easier, but the signal component localization becomes coarser. Figure 7.9c shows the Choi-Williams distribution (1989) [45] of the given signal with the kernel width σ = 1. Again, most of the cross terms are gone, at the cost of reduced localization. Obvious problems, whenever the signal components overlap in time and frequency, are also visible.
Figure 7.10a shows the representation given in Jones and Baraniuk (1993) [108]. A signal-dependent kernel is designed for the test signal using a radially Gaussian kernel. This representation is referred to as the Baraniuk-Jones method-1. This method fails to track the smoothly varying parabolic chirp; as a result, that component looks like two connected linear chirps. Figure 7.10b shows the representation given in Jones and Baraniuk (1995) [109]. Here a time-adaptive radially Gaussian kernel is used. The results are adequate, although some loss in auto component localization occurs. In addition, the computational cost is high as the kernel is computed in a local window sliding over the signal. Figure 7.10c shows the center affine filtered TFR. An almost complete reduction in cross terms is attained without losing the resolution and localization provided by the Wigner distribution.
EXAMPLE 7.5 (ISAR IMAGE DENOISING) To further illustrate the attributes of the center affine filter, consider the denoising of the ISAR image shown in Figure 7.11a⁴. ISAR images emerge from the mapping of the reflectivity density function of the target onto the range-Doppler plane.

⁴Data provided by Victor C. Chen, Airborne Radar, Radar Division, Naval Research Laboratory, Washington DC.
Figure 7.9 Time-frequency representation of the test signal using (a) the Wigner distribution, (b) the pseudo-smoothed Wigner distribution, and (c) the Choi-Williams distribution with spread factor σ = 1.

Figure 7.10 Time-frequency representation of the test signal (continued): (a) the Baraniuk-Jones distribution (method 1), (b) the Baraniuk-Jones distribution (method 2), and (c) the center affine filtered WD, L = 11.
Difficulties in target identification arise from the fact that radar backscatters from the target are typically embedded in heavy clutter noise. It is desirable to remove the noise without altering the target backscatters, which exhibit pulse-like features in nonGaussian noise. We compare the performance of a center affine filter to that of a weighted median filter and an Lℓ-filter. The ISAR image is a 128 × 128, 8 bits/pixel intensity image of a B-727. The various filters were adaptively optimized to extract signal features embedded in background ISAR noise. The Lℓ-filter was found by standard minimization of the MSE. All filters utilize a 5 × 5 observation window and were trained in a single run over the entire training images.
Figure 7.11 ISAR feature enhancing: (a) unprocessed ISAR image, (b) WM filter output, (c) Lℓ-filter output, (d) center affine filter output.
Figure 7.11 shows the WM, Lℓ, and center affine filtering outputs. The WM filter is effective at reducing the noise level, but the signal details are blurred. The Lℓ-filter output shown in Figure 7.11c preserves the plane details better but is not effective in removing the clutter noise. The center affine filter removes the background noise to a large extent while preserving the plane in all its details. While operating in the background noise, all observations are close to the center sample. Thus, the center affine filter behaves like an L-filter smoothing the clutter noise. When encountering backscatters from the plane, the center affine filter considers only those pixels with similar intensity as the center sample, and thus preserves the plane details. Both the WM filter and the Lℓ-filter suffer from the inflexibility of their structure. The WM filter assigns a high weight to the center sample. This weight, however, is not large enough to preserve single outlying observations. The Lℓ-filter puts stronger emphasis on the center observation. This results in a better preserved plane at the cost of poor noise suppression.
7.6 LINEAR COMBINATION OF WEIGHTED MEDIANS

Lℓ-filters and median/linear hybrid filters provide two distinct approaches to the combination of the attributes of linear and median filters. A third approach, proposed by Choi et al. (2001) [46], is the class of Linear Combination of Weighted Medians (LCWM). This class of filters represents a simple alternative to the ones shown previously. The LCWM is the dual of the FIR-Median Hybrid (FMH) filter described in Nieminen and Neuvo (1987) [147], depicted in Figure 7.12a. FMH filters are composed of two stages. In the first, the input signal is filtered by several FIR subfilters whose outputs are then filtered by a weighted median filter in the second stage to generate the final output. FIR-median hybrid filters are described in Astola et al. (1989) [20], and have been extensively studied in Nieminen et al. (1987) [147], Yin and Neuvo (1993) [200], and Yin et al. (1996) [201]. Since in the LCWM the first stage is dedicated to the weighted median filters, the outlier rejection capability of these filters is greater than that of FMH filters, which perform linear operations on the input data, including the outliers. The LCWM also lends itself to simple analysis and design. The structure of the LCWM is based on the structure used in the design of linear-phase FIR high-pass filters. These are easily obtained by changing the sign of the filter coefficients of an FIR low-pass filter in the odd positions. In consequence, this filter can be represented as the difference between two low-pass subfilters. A general
N-tap FIR filter is given by:
Y(n) = Σ_{k=0}^{N-1} h(k) X(n - k),   (7.90)
Figure 7.12 FIR-median hybrid filter and linear combination of weighted median filter structures.
where h = [h(0), h(1), ..., h(N-1)]^T represents the filter coefficients and X = [X(n), X(n-1), ..., X(n-N+1)]^T is the input vector. A low-pass FIR filter can be obtained by restricting the coefficients to positive values. Let these positive coefficients be denoted by h_L(k). To obtain the high-pass FIR filter, change the sign of the coefficients in the odd positions after reversing the filter window in the time domain. This results in a filter given by:

Y_H(n) = Σ_{k=0}^{N-1} (-1)^k h_L(N - k - 1) X(n - k) = Σ_{k=0}^{N-1} h_H(k) X(n - k).   (7.91)
This filter can be divided into two subfilters as follows:

Y_H(n) = Σ_{k=0}^{N-1} h_H(k) X(n - k)
       = Σ_{k=0}^{N-1} b_1(k) X(n - k) - Σ_{k=0}^{N-1} b_2(k) X(n - k)
       = Y_1(n) - Y_2(n),   (7.92)
where b_1(k) and b_2(k) are given by:

b_1(k) = h_H(k) if h_H(k) ≥ 0, and 0 otherwise;   b_2(k) = -h_H(k) if h_H(k) < 0, and 0 otherwise.   (7.93)

The two filters in (7.93) have only non-negative coefficients. They can be normalized so the final output can be written as:

Y(n) = a_1 Ỹ_1(n) - a_2 Ỹ_2(n),   (7.94)

where a_1 = Σ_{k=0}^{N-1} b_1(k), a_2 = Σ_{k=0}^{N-1} b_2(k), and Ỹ_1(n), Ỹ_2(n) are the outputs of the normalized subfilters. In essence, (7.94) synthesizes a general high-pass or band-pass filter based on a linear combination of low-pass FIR filters. This concept is next applied to the construction of linear combinations of weighted median filters.
7.6.1 LCWM Filters
The nonlinear counterpart of the linear combination of FIR filters in (7.94) is given by:

Ŷ(n) = a_1 Y_1^WM(n) - a_2 Y_2^WM(n),   (7.95)

where Y_1^WM and Y_2^WM are WM smoothers, that is, they admit only positive weights. These smoothers are designed based on their spectral response using the algorithms derived in Section 6.2.
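A sketch of (7.95) follows; the subfilter weights (standard medians over the even- and odd-indexed samples) and the DC/alternating test inputs are illustrative choices:

```python
def weighted_median(samples, weights):
    """Weighted median smoother MEDIAN[w <> X] with nonnegative weights."""
    pairs = sorted(zip(samples, weights))
    half = sum(w for _, w in pairs) / 2.0
    acc = 0.0
    for v, w in pairs:
        acc += w
        if acc >= half:
            return v

def lcwm(x, W1, W2, a1, a2):
    """Linear combination of weighted medians, Eq. (7.95):
    Yhat(n) = a1 * WM(W1; x) - a2 * WM(W2; x)."""
    return a1 * weighted_median(x, W1) - a2 * weighted_median(x, W2)

# Standard medians over the even- and odd-indexed samples, combined with
# a1 = a2 = 1/2, behave as a high-pass structure: a constant (DC) input
# is cancelled while an alternating input is passed.
W1, W2 = [1, 0, 1, 0, 1, 0], [0, 1, 0, 1, 0, 1]
a1 = a2 = 0.5
print(lcwm([3, 3, 3, 3, 3, 3], W1, W2, a1, a2))     # 0.0
print(lcwm([1, -1, 1, -1, 1, -1], W1, W2, a1, a2))  # 1.0
```

Because the first stage is median-based, an outlier in the window perturbs the output far less than it would in an FMH filter, whose first stage is linear.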
EXAMPLE 7.6

Consider the linear FIR high-pass filter h = [1/6, -1/6, 1/6, -1/6, 1/6, -1/6]^T. Its coefficients can be rewritten in the form of (7.94) as:

a_1 = 1/2,  a_2 = 1/2,
b_1 = [1/3, 0, 1/3, 0, 1/3, 0],  b_2 = [0, 1/3, 0, 1/3, 0, 1/3].

According to Mallows' theory, and using the spectral design of weighted medians shown in Section 6.2, it can be shown that the median equivalents to the filters b_1 and b_2 are W_1 = (1, 0, 1, 0, 1, 0) and W_2 = (0, 1, 0, 1, 0, 1). The frequency responses of both the linear and the median filters were approximated⁵ and the results are shown in Figure 7.13. Notice that both filters produce very similar responses.
The procedure described above can be generalized to include more than two WM filters with overlapping or non-overlapping windows. The general form of Equation (7.94) is:

Y(n) = Σ_{i=1}^K a_i Ỹ_i(n),   (7.97)

where K is the number of subfilters, and b_i(j) is the (j+1)st coefficient of the ith subfilter

b_i = [b_i(0), b_i(1), ..., b_i(N - 1)].

A matrix containing all the subfilters can be defined as:
⁵To approximate the frequency response of the LCWM filters, 10,000 standard Gaussian i.i.d. samples were input and the spectra of the outputs were calculated with the Welch method [192]. The procedure was repeated 50 times to obtain an ensemble average.
Figure 7.13 Estimated frequency response of a LCWM filter (dotted) and the linear FIR filter used as reference for its design (solid).
B = [b_1^T, b_2^T, ..., b_K^T]^T,

so that (7.97) can be represented as:

y(n) = ã^T B X = h^T X,   (7.101)

where ã = [a_1, ..., a_K]^T is the weighting factor vector and h = [h_1, h_2, ..., h_N]^T. Solving the previous for ã:

ã^T = h^T B^{-1}.   (7.102)

Using the theory developed in Section 6.2, the nonlinear equivalents of b_i and B can be found as w_i and W. The LCWM filter emerges from (7.97) as:

Ŷ(n) = Σ_{i=1}^K a_i Y_i^WM(n),   (7.103)

where Y_i^WM(n) = MEDIAN[w_i ◊ X] and W = [w_1^T, ..., w_K^T]^T.
The structure of the filter is depicted in Figure 7.12b.

7.6.2 Design of LCWM Filters
Having defined the structure of LCWM filters, a procedure to design them to follow a desired spectral profile is of interest. Choi et al. (2001) [46] provide a systematic approach. The first step is to design a prototype FIR filter using any of the standard FIR design tools. Next, a LCWM is obtained by using a transformation of the prototype. Before proceeding to the design of the LCWM filters, a brief review of some basic concepts of linear algebra is necessary. A real N-dimensional vector space R^N is spanned by N linearly independent vectors b_1, ..., b_N, each one with N components. This set of vectors is a basis of the space. That is, each vector in the space can be represented as a linear combination of elements of the basis as:

h = Σ_{i=1}^N a_i b_i.   (7.104)

Suppose h represents an FIR filter. Then the filter can be represented as a linear combination of subfilters with coefficients given by the vectors b_1, ..., b_N. Equation (7.104) is identical to (7.101). The central issue is now how to determine the basis
B. The first step is to define the vector space. In order to do that, the size M < N of the subfilters b_i in (7.104) must be predefined by the user. The number of ways a subvector of M elements can be chosen from a vector of N elements is (N choose M). These subvectors are represented in a (N choose M) × N matrix called B_{N,M}, and they constitute the vector space. Each row of the matrix represents a different subvector, and each one of its elements will be a one if the corresponding element belongs to the subvector or a zero if it does not. Once B_{N,M} has been built, a basis for the space of subvectors of M elements of an N-element vector can be built by choosing N linearly independent rows of B_{N,M} using a recursive row search algorithm. These vectors are stored in another matrix, B_P(N,M). The row search algorithm can be summarized as:

B_P(N,M) = [ 1  B_P(N-1, M-1)
             0  B_{N-1,M}(1, 1:N-1) ],   (7.105)

where the left block is a column of N-1 ones stacked over a single zero, and B_{N-1,M}(1, 1:N-1) represents the first row of the matrix B_{N-1,M}, a row vector composed of ones followed by zeros. B_P(N,1) = I_N, where I_N is the N × N identity matrix, and B_P(N,N) = 1_{1×N}.
EXAMPLE 7.7 (ROW SEARCH ALGORITHM) Use the row search algorithm to calculate B_P(6,3). According to (7.105), in order to calculate B_P(6,3), B_P(5,2) has to be calculated first, and the calculation of B_P(5,2) requires the calculation of B_P(4,1), that is, the 4 × 4 identity matrix. The process yields:

B_P(5,2) =
  1 1 0 0 0
  1 0 1 0 0
  1 0 0 1 0
  1 0 0 0 1
  0 1 1 0 0

B_P(6,3) =
  1 1 1 0 0 0
  1 1 0 1 0 0
  1 1 0 0 1 0
  1 1 0 0 0 1
  1 0 1 1 0 0
  0 1 1 1 0 0
In order to obtain the matrix B of linear filters in (7.104), the "1"s in B_P(N,M) should be replaced by their corresponding filter coefficients. For reasons that will become clear shortly, each "1" in B_P(N,M) will be replaced by 1/M to obtain B. In this way all the linear subfilters perform standard mean operations. Once B is found, ã can be calculated using (7.101). To design the LCWM filter, the median equivalent to each one of the subfilters in B should be found. According to the theory of Mallows, and given that all the coefficients in the linear filters are the same, their median equivalents will be standard median operators, that is, all their nonzero weights will be set to one. In consequence, W = B_P.
EXAMPLE 7.8 Design a LCWM with 3-tap subfilters from the 6-tap linear filter h = [-0.3327, 0.8069, -0.4599, -0.1350, 0.0854, 0.0352]^T. In this case B_{6,3} is the 20 × 6 matrix whose rows are all the subvectors of 3 elements of a 6-element vector, B_P(6,3) is the basis found in Example 7.7, and B is obtained by replacing each "1" in B_P(6,3) with 1/3. The weighting factor vector is then

ã^T = h^T B^{-1} = [0.0430, 1.0176, 0.2563, 0.1057, -2.4207, 0.9980].   (7.108)
The frequency responses of the filters obtained are shown in Figure 7.14.
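The coefficients above can be reproduced numerically: since ã^T B = h^T, solving B^T ã = h by plain Gaussian elimination recovers them. The basis rows below are those produced by the row search; small last-digit differences stem from the rounding of the printed h:

```python
# Basis B_P(6,3); B has the entry 1/3 wherever B_P has a 1.
BP = [[1, 1, 1, 0, 0, 0],
      [1, 1, 0, 1, 0, 0],
      [1, 1, 0, 0, 1, 0],
      [1, 1, 0, 0, 0, 1],
      [1, 0, 1, 1, 0, 0],
      [0, 1, 1, 1, 0, 0]]
h = [-0.3327, 0.8069, -0.4599, -0.1350, 0.0854, 0.0352]
n = len(h)

# a^T B = h^T is the linear system B^T a = h; solve it by Gaussian
# elimination with partial pivoting.
A = [[BP[i][j] / 3.0 for i in range(n)] for j in range(n)]   # A = B^T
b = h[:]
for k in range(n):
    p = max(range(k, n), key=lambda i: abs(A[i][k]))
    A[k], A[p] = A[p], A[k]
    b[k], b[p] = b[p], b[k]
    for i in range(k + 1, n):
        f = A[i][k] / A[k][k]
        for j in range(k, n):
            A[i][j] -= f * A[k][j]
        b[i] -= f * b[k]
a = [0.0] * n
for k in range(n - 1, -1, -1):
    a[k] = (b[k] - sum(A[k][j] * a[j] for j in range(k + 1, n))) / A[k][k]
print([round(v, 4) for v in a])
# ~ [0.0431, 1.0178, 0.2562, 0.1056, -2.4208, 0.998]: the values printed
# in (7.108), up to the rounding of h.
```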
7.6.3 Symmetric LCWM Filters

If the FIR filter used as a reference to design the LCWM filter is linear phase, the number of subfilters can be reduced, since a (2N+1)-tap linear filter has only N+1 independent coefficients (a 2N-tap linear filter has N). Denote by h' = [h'_1, h'_2, ..., h'_{N+1}]^T, with h'_i = h(N+i-1), the vector consisting of the independent coefficients of the linear FIR filter h (the second half of h). The reduced number of coefficients leads to a reduced (N+1) × (N+1) basis matrix B_P'. If the LCWM to be designed is made up of (2M+1)-tap subfilters, B_P' will consist of N+1 independent rows of B_{N+1,M+1}. The corresponding B_P is found by left unfolding B_P' with respect to its first column, given the relationship between h and h'. The matrix B_P contains the median subfilters of the LCWM. To calculate the linear combination coefficients ã, the linear equivalents of the filters in B_P have to be calculated using sample selection probabilities to obtain the matrix B.
Figure 7.14 Frequency response of a LCWM with 3-tap medians (dotted) and the linear FIR taken as a reference for its design (solid).
Given the symmetry properties that have been used during the process, the matrix B will also be symmetric. The last N+1 columns of B are taken to build the matrix B', from which ã is calculated as⁶ ã^T = h'^T B'^{-1}.
EXAMPLE 7.9 Design a LCWM with subfilters of 5 and 6 taps based on a 7-tap bandpass filter with cutoff frequencies [0.3, 0.7]. The symmetric linear filter is h = [0, 0.1597, 0, 0.6806, 0, -0.1597, 0]^T. For this case N = 3 and M = 2, resulting in:

B_P' = B_{4,3} =
  1 1 1 0
  1 1 0 1
  1 0 1 1
  0 1 1 1

The first three rows of B_P represent 5-tap weighted medians, so the SSPs for these vectors are 1/5. The last one represents a 6-tap weighted median, and the SSPs for this vector are 1/6. Having B', ã can be calculated as:

ã^T = [0.6806, 0, -0.1597, 0] × B'^{-1} = [0.8682, 1.6667, 0.8682, -3.0418].

The frequency response of the resultant LCWM and the original linear FIR are shown in Figure 7.15.

⁶If h' is built as the first half of h instead of the second, B_P' should be right unfolded to create B_P, and B' will be constituted by the first N+1 columns of B.
EXAMPLE 7.10 (ROBUST FREQUENCY-SELECTIVE FILTERING) Filters that are jointly robust and frequency-selective are of interest. Here, the filter coefficients of a linear FIR bandpass filter are used to design an Lℓ-filter, a median affine filter, and a LCWM filter. Let h_1, h_2, ..., h_N be the coefficients of an FIR bandpass filter of size N = 41 (designed with a Hamming window and cutoff frequencies 0.2ω and 0.4ω, where ω denotes the Nyquist frequency) and output Y_FIR. The Lℓ bandpass filter is designed by setting the weight associated with observation X_i equal to h_i whenever X_i is not an extreme order statistic. When X_i corresponds to the two smallest or largest observations, its weight is set to zero. Thus, the weight vector of the Lℓ-filter is given by

w = [0, 0, h_1, ..., h_1, 0, 0 | 0, 0, h_2, ..., h_2, 0, 0 | ... | 0, 0, h_N, ..., h_N, 0, 0]^T.

This weighting pattern rejects outlying observations and weights the remaining samples by the corresponding FIR filter coefficients.
Figure 7.15 Frequency response of a bandpass LCWM (dotted) and the symmetric linear filter (solid) used as a reference.
The median affine bandpass filter is designed according to the "FIR medianization" method, that is, the coefficients are chosen equal to those of the linear bandpass filter and γ is successively decreased until a desired level of robustness is achieved, in this case γ = 8. This value is fixed during the filtering process. To improve the signal tracking ability of the median affine filter, the sample median of a subobservation window of size M = 5 serves as a reference point. The LCWM filter was designed using standard medians of window size 5 and the method developed in Section 7.6.2, resulting in

ã = [0.0415, 0.0159, -0.0470, -0.0621, -0.0191, 0.0245, 0, -0.5561, -0.7722, -0.0391, 0.0491, 0.2609, 0.3294, 0, -0.2814, 0.5725, 0.9958, 0.5725, -0.2814, -0.7722, -0.5561, 0, 0.3294, 0.2609, 0.0491, -0.0391, 0, 0.0245, -0.0191, -0.0621, -0.0470, 0, 0.0256, 0.0187, 0.0033, -0.0026, 0, -0.0178, -0.0024, 0.0035, 0.0009].
The performance of the above filters on a 2048-sample quad-chirp (sinusoidal waveform with quadratically increasing frequency), clean and additively contaminated by α-stable noise (α = 1.2) to simulate an impulsive environment, is compared in Figures 7.16 and 7.17. The corresponding mean square and absolute errors for the noisy case are given in Table 7.4. As expected, the impulsive noise is detrimental for the linear filter. The Lℓ-filter is robust, but suffers from the blind rejection of the extreme order statistics, which results in artifacts over most of the frequency spectrum. This effect is even stronger for a higher signal-to-noise ratio (not shown). The median affine filter preserves the desired frequency band well, while strongly attenuating the impulsive noise, which is reflected in an MSE that is roughly half of that achieved by the Lℓ-filter. By comparison of the linear estimate and that of the median affine filter, it can be seen that the latter behaves similarly to the linear filter whenever no impulses are present. Finally, note that the Lℓ-filter uses N^2 coefficients, whereas the median affine filter requires only N+1 coefficients.
Table 7.4 Comparison of filter performance on noise-corrupted chirp.

Filter              MSE          MAE
(a) None            1.07 × 10^6  23.72
(b) FIR             6.0863       0.5169
(c) LCWM            0.1115       0.1665
(d) Lℓ              0.0261       0.1280
(e) Median affine   0.0198       0.1036
Problems

7.1 Let X_1, X_2, ..., X_N be N observations of a constant parameter β in noise, such that X_i = β + Z_i, where Z_i is a zero-mean sequence of i.i.d. noise samples with a parent density function f(z) = (1/2)δ(1 - z) + (1/2)δ(1 + z).

(a) Find the probability density function of Z_(i).
(b) Find the joint probability density function of Z_(i) and Z_(j) for i < j.
(c) For N = 2, find the best unbiased L-estimate of the constant parameter.
7.2 Derive the equations for the LMS algorithm for the design of the median affine filter shown in Table 7.3, following the next steps:

(a) Define the cost function J(γ, W) = E{(D - D̂)^2}, where D̂ represents the output of the median affine filter and D the desired signal.
(b) Find the derivative of the cost function with respect to γ as a function of ∂D̂/∂γ.
Figure 7.16 (a) The chirp signal and the outputs of (b) the linear FIR bandpass filter, (c) the LCWM, (d) the Lℓ-filter, and (e) the median affine filter.
(c) Calculate ∂D̂/∂γ and replace it in the gradient-based algorithm

γ(n + 1) = γ(n) - ν_γ ∂J/∂γ   (7.114)

to obtain the expression in Table 7.3.
Figure 7.17 (a) The noise-corrupted chirp and the outputs of (b) the linear FIR bandpass filter, (c) the LCWM, (d) the Lℓ-filter, and (e) the median affine filter.
(d) Repeat (b) and (c), differentiating with respect to the weight W_i and using the gradient-based algorithm

W_i(n + 1) = W_i(n) - ν_w ∂J/∂W_i

to obtain the expression in Table 7.3.
7.3 Consider a binary i.i.d. sequence {X(n)} with a parent density function f(X) = pδ(X - 1) + (1 - p)δ(X), where X = 1 with probability p and X = 0 with probability 1 - p.

(a) Find the correlation matrix of the observation vector X(n) = [X(n), X(n-1), ..., X(n-N+1)]^T, for N = 2. Is this correlation matrix singular?
(b) Find the correlation matrix of the Lℓ observation vector for N = 3 (use stable sorting if needed).

7.4
Consider the filtering problem where the observations X_i follow the model X_i = S + Z_i, where the signal is constant and the noise is i.i.d. The L-estimate of location (i.e., of the constant signal S) for an N-long observation is Ŝ = Σ_{j=1}^N W_j X_(j), where X_(j) is the jth sample order statistic. Constraining the estimate to be location invariant (i.e., W^T e = 1, where e^T = [1, 1, ..., 1]), find the expression for the optimal set of L-filter coefficients that minimizes the MSE. Assume the noise is symmetrically distributed about zero.
7.5 Design two LCWM filters based on an 11-tap high-pass FIR filter with a cutoff frequency of 0.5 and medians of size 5. Use the algorithms in Sections 7.6.2 and 7.6.3 and compare the results.
Part III
Signal Processing with the Stable Model
8 Myriad Smoothers
The motivations for using stable models, as described in Chapter 2, are simple yet profound. Firstly, good empirical fits are often found through the use of stable distributions on data exhibiting
skewness and heavy tails. Secondly, there is solid theoretical justification that nonGaussian stable processes emerge in practice. The third argument for modeling with stable distributions is perhaps
the most significant and compelling. Stable distributions satisfy an important generalization of the central limit theorem, which states that the only possible limit of normalized sums of independent
and identically distributed terms is stable. A wide variety of impulsive processes found in many applications arise as the superposition of many small independent effects. While Gaussian models are
clearly inappropriate, stable distributions thus have the theoretical underpinnings to accurately model these types of impulsive processes [149, 207]. Stable models are thus appealing since the
generalization of the central limit theorem explains the apparent contradictions of its ordinary version, which could not naturally explain the presence of heavy-tailed signals. Having the rich
modeling characteristics of stable distributions at hand, several signal processing approaches have been developed which are suitable for processing and analyzing stable processes. One approach which
has received considerable attention is based on the concept of Fractional Lower Order Moments (FLOMs) [149]. FLOM-based signal processing has been studied extensively, as detailed in Nikias and Shao (1995). Section 8.1 in this chapter provides an introduction to FLOM smoothers and their applications. A second approach to signal processing for stable models is described, based on the so-called
Weighted Myriad filters derived from the theory of M-estimation. Much like the Gaussian assumption has motivated the development of linear filtering theory, weighted myriad filters are motivated by
the need of a flexible filter class with increased efficiency in nonGaussian impulsive environments. The remainder sections of this chapter focus on this second approach. The foundation for these
algorithms lies in the definition of the sample myriad as a maximum likelihood estimate of location derived from the stable model, leading to cost functions of the form
ρ(x) = log[K² + x²],  (8.1)
where K is a tunable parameter. Along the range of tuning values of K, the sample myriad enjoys optimality properties in several practical impulsive models. The possibility of tuning the parameter K provides the myriad filter with a rich variety of modes of operation that range from highly resistant mode-type estimators to the very efficient class of linear FIR filters. Since the myriad filter class subsumes that of linear FIR filters, weighted myriad filters can also be tuned to operate efficiently under the Gaussian model. Much like the mean and median have had a profound impact on signal processing, the sample myriad and its related myriad filtering framework lead to a powerful theory upon which efficient signal processing algorithms have been developed for applications exhibiting impulsive processes.
8.1 FLOM SMOOTHERS

Under the framework of Gaussian processes, the sample mean of a random variable can be described as the parameter that minimizes the second moment of the shifted variable X − β over all possible shifts β. This is

E(X) = β = arg min_β E(X − β)².  (8.2)

Given that second-order moments do not exist with stable processes, but fractional-order moments do, the second moment in (8.2) can be replaced by fractional lower-order moments (FLOMs) to obtain the following measure of location

β_p = arg min_β E(|X − β|^p),  p < 2.  (8.3)
FLOM smoothers follow from (8.3), where FLOM estimates are computed in the running window X(n) = [X₁(n), X₂(n), …, X_N(n)]^T as

β̂_p(n) = arg min_β Σ_{i=1}^N |X_i(n) − β|^p,  (8.4)

with p < 2. The behavior of FLOM smoothers is markedly dependent on the choice of p. As p → 2, FLOM smoothers resemble the running mean. As p is reduced in value, FLOM smoothers become more robust and their output can track discontinuities more effectively. Figure 8.1 depicts the effect of varying the value of p in the smoothing of a speech signal. As described in Chapter 4, FLOM smoothers also arise from
the location estimation problem under the generalized Gaussian distribution. A plot of the FLOM smoother cost function, found in Figure 4.2 for different values of p, reveals that for p < 1, FLOM smoothers are selection type, where the output value is equal to that of one of the input samples. The cost function exhibits several local minima, located at the values of the input samples. For p > 1, the cost function is convex and the output is not necessarily equal in value to one of the input samples. FLOM smoothers thus represent a family of smoothers indexed by the value assigned to the parameter p.
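The minimizations in (8.3) and (8.4) have no closed form for general p, but they can be carried out numerically. The sketch below is one possible implementation (the function and variable names are my own, not from the text): it computes a FLOM location estimate with SciPy's bounded scalar minimizer, together with the selection-type variant that restricts the output to the input samples.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def flom_location(x, p):
    """FLOM location estimate: the beta minimizing sum_i |x_i - beta|^p (Eq. 8.4).
    For p >= 1 the cost is convex, so a bounded search over the sample range
    finds the global minimum."""
    x = np.asarray(x, dtype=float)
    cost = lambda beta: np.sum(np.abs(x - beta) ** p)
    res = minimize_scalar(cost, bounds=(x.min(), x.max()), method="bounded")
    return res.x

def gamma_location(x, p):
    """Selection-type (gamma) variant: the output is constrained to be one of
    the input samples, sidestepping the general minimization entirely."""
    x = np.asarray(x, dtype=float)
    costs = [np.sum(np.abs(x - b) ** p) for b in x]
    return x[int(np.argmin(costs))]
```

For p = 2 the estimate reduces to the sample mean and for p = 1 to the sample median, in line with the limiting behaviors described above.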
Figure 8.1 FLOM smoothing of a speech signal for different values of p and window size 5.
A drawback of the FLOM smoother class, however, is that their computation in (8.4) is in general nontrivial. A method to overcome this limitation is to force the output of the smoother to be identical in value to one of the input samples. Selection-type FLOM smoothers are suboptimal in the sense that the output does not achieve the minimum of the cost function. Selection-type FLOM smoothers have been studied by Astola (1999) [23] and are referred to as gamma filters. The gamma filter for p = 2 is particularly interesting, as its output can be shown to be the input sample that is closest in value to the sample mean [86].
EXAMPLE 8.1 (IMAGE DENOISING) Consider the denoising of the image shown in Figure 8.2a, which is a contaminated version of the image shown in Figure 8.4a. The contaminating noise is salt-and-pepper with a probability of 5%. FLOM and gamma smoothers are used with window sizes of 3 × 3 and 5 × 5. Figure 8.2 shows the output of a FLOM smoother for different values of the parameter p and window size 3 × 3. When p < 1, as in Figures 8.2b and c, the smoother tends to choose as the output one of the most repeated samples. This explains the clusters of positive or negative impulses that are still visible in the output of the smoother, especially for the smallest value of p. Figure 8.2d shows the output for p = 1. In this case, the FLOM smoother is equivalent to the median operator. When p = 2 the smoother is equivalent to a mean operator. In this case (Fig. 8.3a), the blurriness in the output is caused by the averaging of the samples in the observation window. Figure 8.3b shows the output of the smoother for p = 10. As the parameter p grows, the FLOM operator gets closer to a midrange smoother. Figure 8.4 shows the denoising of the image in Figure 8.2a, where the gamma smoother is used. Since the FLOM and gamma smoothers are equivalent for p ≤ 1, only the results for greater values of p are shown. Figure 8.4b shows the output for p = 2. The result is similar to the one shown in Figure 8.3a, except for certain details that reveal the selection-type characteristics of the gamma smoother. The image in Figure 8.3a is smoother, while Figure 8.4b has more artifacts. These effects are more visible in Figures 8.4c and 8.5b, where the images seem to be composed of squares, giving them the look of a tiled floor. Figures 8.4d and 8.5a and b show the output of a 5 × 5 gamma smoother. The smoothing of these images is greater than that of their 3 × 3 counterparts. The artifacts are more severe for the larger window size when p is too large or too small.
8.2 RUNNING MYRIAD SMOOTHERS
The sample myriad emerges as the maximum likelihood estimate of location under a set of distributions within the family of α-stable distributions, including the well-known Cauchy distribution. Since their introduction by Fisher in 1922 [70], myriad-type estimators have been studied and applied in very different contexts as an efficient alternative to cope with the presence of impulsive noise [2, 3, 39, 90, 167, 181]. The most general form of the myriad, where the potential of tuning the so-called linearity parameter in order to control its behavior is fully exploited, was first introduced by Gonzalez and Arce in 1996 [82]. Depending on the value of this
Figure 8.2 FLOM smoothing of an image for different values of p. (a) Image contaminated with salt-and-pepper noise (PSNR=17.75dB) and outputs of the FLOM smoother for: (b) p = 0.01 (PSNR=26.12dB), (c) p = 0.1 (PSNR=31.86dB), (d) p = 1 (median smoother, PSNR=37.49dB).
Figure 8.3 FLOM smoothing of an image for different values of p (continued). (a) p = 2 (mean smoother, PSNR=33.53dB), (b) p = 10 (PSNR=31.15dB).

free parameter, the sample myriad can present drastically different behaviors, ranging from highly resistant mode-type estimators to the familiar (Gaussian-efficient) sample average. This rich variety of operation modes is the key concept explaining
optimality properties of the myriad in the class of symmetric α-stable distributions. Given an observation vector X(n) = [X₁(n), X₂(n), …, X_N(n)] and a fixed positive (tunable) value of K, the running myriad smoother output at time n is computed as

Y_K(n) = MYRIAD[K; X₁(n), X₂(n), …, X_N(n)]
       = arg min_β Π_{i=1}^N [K² + (X_i(n) − β)²].  (8.5)
The myriad Y_K(n) is thus the value of β that minimizes the cost function in (8.5). Unlike the sample mean or median, the definition of the sample myriad in (8.5) involves the free tunable parameter K. This parameter will be shown to play a critical role in characterizing the behavior of the myriad. For reasons that will become apparent shortly, the parameter K is referred to as the linearity parameter. Since the log function is monotonic, the myriad is also defined by the equivalent expression

Y_K(n) = arg min_β Σ_{i=1}^N log[K² + (X_i(n) − β)²].  (8.6)
Figure 8.4 Gamma smoothing of an image for different values of p and different window sizes. (a) Original image and output of the 3 × 3 gamma smoother for (b) p = 2 (sample closest to the mean, PSNR=32.84dB), (c) p = 10 (PSNR=32.32dB), and the 5 × 5 gamma smoother for (d) p = 0.1 (PSNR=28.84dB).

Figure 8.5 Gamma smoothing of an image for different values of p and different window sizes (continued). (a) p = 1 (PSNR=29.91dB), (b) p = 10 (PSNR=28.13dB).
In general, for a fixed value of K, the minima of the cost functions in (8.5) and (8.6) lead to a unique value. It is possible, however, to find sample sets for which the myriad is not unique. The event of getting more than one myriad is not of critical importance, as its associated probability is either negligible or zero for most cases of interest. To illustrate the calculation of the sample myriad and the effect of the linearity parameter, consider the sample myriad, β̂_K, of the set {−3, 10, 1, −1, 6}:

β̂_K = MYRIAD(K; −3, 10, 1, −1, 6),

for K = 20, 2, 0.2. The myriad cost functions in (8.5), for these three values of K, are plotted in Figure 8.6. The corresponding minima are attained at approximately β̂_20 = 2.49, β̂_2 = 0.08, and β̂_0.2 = −0.99, respectively. The different values taken on by the myriad as the parameter K is varied are best understood through the results provided in the following properties.
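Because the myriad always lies within the sample range, a dense grid search over [X_(1), X_(N)] is a simple (if brute-force) way to compute it. The sketch below (names are my own) minimizes the equivalent log-cost of (8.6) for the three values of K in this example:

```python
import numpy as np

def sample_myriad(x, K, grid_step=1e-3):
    """Sample myriad (Eq. 8.5): the beta minimizing prod_i [K^2 + (x_i - beta)^2].
    A dense grid search is used; the minimum always lies in [min(x), max(x)].
    Logs are summed instead of multiplying terms, for numerical stability."""
    x = np.asarray(x, dtype=float)
    betas = np.arange(x.min(), x.max() + grid_step, grid_step)
    # cost(beta) = sum_i log(K^2 + (x_i - beta)^2), evaluated on the whole grid
    cost = np.sum(np.log(K**2 + (x[:, None] - betas[None, :])**2), axis=0)
    return betas[np.argmin(cost)]

x = [-3, 10, 1, -1, 6]
for K in (20, 2, 0.2):
    print(K, sample_myriad(x, K))
```

For K = 20 the output lands near the sample mean (13/5 = 2.6), while for K = 0.2 it selects a value inside the tightest cluster, illustrating the two limiting behaviors developed next.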
Figure 8.6 Myriad cost functions for different values of K.

PROPERTY 8.1 (LINEAR PROPERTY) Given a set of samples, X₁, X₂, …, X_N, the sample myriad β̂_K converges to the sample average as K → ∞. That is,

lim_{K→∞} β̂_K = lim_{K→∞} MYRIAD(K; X₁, …, X_N) = (1/N) Σ_{i=1}^N X_i.
To prove this property, first note that¹ β̂_K ≤ X_(N), by checking that for any i, and for β > X_(N), K² + (X_i − β)² > K² + (X_i − X_(N))². In the same way, β̂_K ≥ X_(1). Hence,

Π_{i=1}^N [K² + (X_i − β)²] = K^{2N} + K^{2N−2} Σ_{i=1}^N (X_i − β)² + f(K),  (8.10)

where f(K) = O(K^{2N−4}) and O denotes the asymptotic order as K → ∞.² Since adding or multiplying by constants does not affect the arg min operator, Equation (8.10) can be rewritten as

β̂_K = arg min_β [ Σ_{i=1}^N (X_i − β)² + O(K^{2N−4})/K^{2N−2} ].

¹Here, X_(i) denotes the ith order statistic of the sample set.
²Given nonnegative functions f and g, we write f = O(g) if and only if there is a positive constant C and an integer N such that f(x) ≤ Cg(x) for all x > N.
Letting K → ∞, the term O(K^{2N−4})/K^{2N−2} becomes negligible, and

β̂_∞ = arg min_β Σ_{i=1}^N (X_i − β)² = (1/N) Σ_{i=1}^N X_i.
Plainly, an infinite value of K converts the myriad into the sample average. This behavior explains the name linearity given to this parameter: the larger the value of K, the closer the behavior of the myriad to a linear estimator. As the myriad moves away from the linear region (large values of K) to lower linearity values, the estimator becomes more resistant to the presence of impulsive noise. In the limit, when K tends to zero, the myriad leads to a location estimator with particularly good performance in the presence of very impulsive noise. In this case, the estimator treats every observation as a possible outlier, assigning more credibility to the most repeated values in the sample. This "mode-type" characteristic is reflected in the name mode-myriad given to this estimator.
DEFINITION 8.1 (SAMPLE MODE-MYRIAD) Given a set of samples X₁, X₂, …, X_N, the mode-myriad estimator, β̂₀, is defined as

β̂₀ = lim_{K→0} β̂_K,

where β̂_K = MYRIAD(K; X₁, X₂, …, X_N).
The following property explains the behavior of the mode-myriad as a kind of generalized sample mode, and provides a simple method for determining the mode-myriad without resorting to the definition in (8.5).

PROPERTY 8.2 (MODE PROPERTY) The mode-myriad β̂₀ is always equal to one of the most repeated values in the sample. Furthermore,

β̂₀ = arg min_{X_j ∈ M} Π_{i=1, X_i ≠ X_j}^N |X_i − X_j|,  (8.13)

where M is the set of most repeated values.
Proof: Since K is a positive constant, the definition of the sample myriad in (8.5) can be reformulated as β̂_K = arg min_β P_K(β), where

P_K(β) = Π_{i=1}^N [1 + ((X_i − β)/K)²].  (8.14)

When K is very small, it is easy to check that

P_K(β) = O((1/K²)^{N−r(β)}),

where r(β) is the number of times the value β is repeated in the sample set, and O denotes the asymptotic order as K → 0. In the limit, the exponent N − r(β) must be minimized in order for P_K(β) to be minimum. Therefore, the mode-myriad will lie on a maximum of r(β), or in other words, β̂₀ will be one of the most repeated values in the sample. Now, let r̃ = max_j r(X_j). Then, for X_j ∈ M, expanding the product in (8.14) gives

P_K(X_j) = (1/K²)^{N−r̃} [ Π_{i=1, X_i≠X_j}^N (X_i − X_j)² + O(K²) ].  (8.15)

Since the first term in (8.15) is O((1/K²)^{N−r̃}), the second term is negligible for small values of K, and β̂₀ can be calculated as

β̂₀ = arg min_{X_j ∈ M} Π_{i=1, X_i≠X_j}^N |X_i − X_j|.  (8.16)
An immediate consequence of the mode property is the fact that running-window smoothers based on the mode-myriad are selection type, in the sense that their output is always, by definition, one of the samples in the input window. The mode-myriad output will always be one of the most repeated values in the sample, resembling the behavior of a sample mode. This mode property indicates the high effectiveness of the estimator in locating heavy impulsive processes. The selection property, shared also by the median, makes mode-myriad smoothers a suitable framework for image processing, where the application of selection-type smoothers has been shown to be convenient.
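Property 8.2 turns the limiting definition into a direct computation. A minimal sketch (the function name is my own):

```python
from collections import Counter

def mode_myriad(samples):
    """Mode-myriad via the mode property (Property 8.2): among the most
    repeated sample values, pick the one minimizing the product of
    distances to the remaining samples."""
    x = [float(v) for v in samples]
    counts = Counter(x)
    r_max = max(counts.values())
    modes = [v for v, c in counts.items() if c == r_max]

    def cost(v):
        # product over all samples not equal to v of |x_i - v|
        prod = 1.0
        for xi in x:
            if xi != v:
                prod *= abs(xi - v)
        return prod

    return min(modes, key=cost)
```

For a set with a repeated value, such as {1, 1, 2, 10}, the repeated value 1 is selected; for an all-distinct set, every sample is a candidate and the product-of-distances rule picks the most clustered one.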
EXAMPLE 8.2 (BEHAVIOR OF THE MODE-MYRIAD) Consider the sample set Z = [1, 4, 2.3, S, 2.5, 2, 5, 4.25, 6]. The mode-myriad of Z is calculated as the sample S varies from 0 to 7. The results are shown in Figure 8.7. It can be seen how the myriad with K → 0 favors clusters of samples. The closest samples are 2, 2.3, and 2.5 and, when the value of S is not close to any other sample, the output of the mode-myriad is the center sample of this cluster. It can also be seen that, when two samples have the same value, the output of the myriad takes that value. Look, for example, at the spikes at S = 1 or S = 6. There is also another cluster of samples: 4 and 4.25. The plot shows how, when S gets closer to these values, this becomes the most clustered set of samples and the myriad takes on one of the values of this set.
Figure 8.7 Mode-myriad of a sample set with one variable sample. The constant samples are indicated with "o".
EXAMPLE 8.3 (MODE-MYRIAD PERFORMANCE IN α-STABLE NOISE)
To illustrate the performance of the mode-myriad, in comparison to the sample mean and sample median, this example considers the location estimation problem in i.i.d. α-stable noise for a wide range of values of the tail parameter α. Figure 8.8 shows the estimated mean absolute errors (MAE) of the sample mean, the sample median, and the mode-myriad when used to locate the center of an i.i.d. symmetric α-stable sample of size N = 5. The result comes from a Monte Carlo simulation with 200,000 repetitions. The values of the tail parameter range from α = 2 (Gaussian case) down to α = 0.3 (very impulsive). Values of α slightly smaller than 2 indicate a distribution close to the Gaussian, in which case the sample mean outperforms both the median and the mode-myriad estimator. As α is decreased, the noise becomes more impulsive and the sample mean rapidly loses efficiency, being outperformed by the sample median for values of α less than 1.7. As α approaches 1, the estimated MAE of the sample mean explodes. In fact, it is known that for α < 1, it is more efficient to use any of the sample values than the sample mean itself. As α continues to decrease, the sample median loses progressively more efficiency with respect to the mode-myriad estimator, and at α ≈ 0.87, the mode-myriad begins to outperform the sample median. This is an expected result given the optimality of the mode-myriad estimator for small values of α. For the last value in the plot, α = 0.3, the mode-myriad estimator has an estimated efficiency ten times larger than the median. This increase in relative efficiency is expected to grow without bounds as α approaches 0 (recall that α = 0 is the optimality point of the mode-myriad estimator).
Figure 8.8 Estimated mean absolute error of the sample mean, sample median, and mode-myriad location estimator in α-stable noise (N = 5).
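A small-scale version of this Monte Carlo comparison can be sketched with SciPy's α-stable sampler. The parameter choices below (α = 0.8, 2000 trials) are my own and far smaller than the 200,000 repetitions used in the text; the qualitative ranking for impulsive noise is the point.

```python
import numpy as np
from scipy.stats import levy_stable

alpha, N, trials = 0.8, 5, 2000   # impulsive case; the text sweeps alpha over [0.3, 2]

# i.i.d. symmetric alpha-stable samples centered at the true location 0
X = levy_stable.rvs(alpha, 0.0, size=(trials, N), random_state=0)

def mode_myriad(row):
    # With continuous data all values are distinct with probability 1, so
    # Property 8.2 reduces to minimizing the product of distances over the samples.
    costs = [np.prod(np.abs(np.delete(row, j) - row[j])) for j in range(len(row))]
    return row[int(np.argmin(costs))]

mae = lambda estimates: float(np.mean(np.abs(estimates)))
mae_mean = mae(X.mean(axis=1))
mae_median = mae(np.median(X, axis=1))
mae_myriad = mae(np.array([mode_myriad(r) for r in X]))
print(mae_mean, mae_median, mae_myriad)
```

For α < 1 the sample mean degrades badly, while the median and mode-myriad remain stable, matching the behavior of Figure 8.8.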
EXAMPLE 8.4 (DENOISING OF THE "BLOCKS" SIGNAL) This example illustrates the denoising of a signal which has been corrupted with very impulsive noise. The performance of the mode-myriad location estimator is compared with that of the FLOM estimator. The observation is a corrupted version of the "blocks" signal shown in Figure 8.9(a). The signal corrupted with additive stable noise with α = 0.2 is shown in Figure 8.9(b), where a different scale is used to illustrate the very impulsive noise environment. The value of α is not known a priori. The mean square error between the original signal and the noisy observation is 8.3 × 10⁸. The following running smoothers (location estimators) are applied, all using a window of size N = 121: the sample mean (MSE = 6.9 × 10^…) shown in Figure 8.9(c), the sample median (MSE = 3.2 × 10⁴) shown in Figure 8.9(d), the FLOM with p = 0.8 (MSE = 77.5) in Figure 8.9(e), and the mode-myriad location estimator (MSE = 4.1) shown in Figure 8.9(f). As shown in the figure, at this level of impulsiveness, the sample median and mean break down. The FLOM does not perform as well as the mode-myriad due to the mismatch of p and α. The performance of the FLOM estimator would certainly improve, but the parameter p would have to be matched closely to the stable noise index, a task that can be difficult. The mode-myriad, on the other hand, performs well without the need of parameter tuning.
Geometrical Interpretation of the Myriad  Myriad estimation, defined in (8.5), can be interpreted in a more intuitive manner. As depicted in Figure 8.10(a), the sample myriad is the value that minimizes the product of distances from point A to the sample points X₁, X₂, …, X₆. Any other value, such as x = β′, produces a higher product of distances. This can be shown as follows. Let D_i be the distance between the point A and the sample X_i. The points A, β, and X_i form a right triangle with hypotenuse D_i. In consequence, D_i is calculated as

D_i² = K² + (X_i − β)².  (8.17)

Taking the product of the squared distances to all the samples and searching for the minimum over β results in

β̂_K = arg min_β Π_{i=1}^N D_i² = arg min_β Π_{i=1}^N [K² + (X_i − β)²],  (8.18)

matching the definition in (8.5).
Figure 8.9 Running smoothers in stable noise (α = 0.2). All smoothers of size 121; (a) original blocks signal, (b) corrupted signal with stable noise, (c) the output of the running mean, (d) the running median, (e) the running FLOM smoother, and (f) the running mode-myriad smoother.
As K is reduced, the myriad searches for clusters, as shown in Figure 8.10(b). If K is made large, all distances become close and it can be shown that the myriad tends to the sample mean.
EXAMPLE 8.5 Given the sample set X = {1, 1, 2, 10}, compute the sample myriad for values of K of 0.01, 5, and 100. The outputs of the sample myriad are 1.0001, 2.1012, and 3.4898, respectively. In the first case, since the value of K is small, the output goes close to the mode of the sample set, that is, 1. Intermediate values of K give an output that is close to the most clustered set of samples. The largest value of K outputs a value that is very close to the mean of the set, 3.5. Figure 8.11 shows the sample set and the location of the myriad for the different values of K. It is noticeable how raising the value of K displaces the value of the myriad from the mode to the mean of the set.

Figure 8.10 (a) The sample myriad, β̂, minimizes the product of distances from point A to all samples. Any other value, such as x = β′, produces a higher product of distances; (b) the myriad as K is reduced.
Figure 8.11 Sample myriad of the sample set {1, 1, 2, 10} for (a) K = 0.01, (b) K = 5, (c) K = 100.
The Tuning of K  The linear and mode properties indicate the behavior of the myriad estimator for large and small values of K. From a practical point of view, it is important to determine if a given value of K is large (or small) enough for the linear (or mode) property to hold approximately. With this in mind, it is instructive to look at the myriad as the maximum likelihood location estimator generated by a Cauchy distribution with dispersion K (geometrically, K is equivalent to half the interquartile range). Given a fixed set of samples, the ML method locates the generating distribution in a position where the probability of the specific sample set to occur is maximum. When K is large, the generating distribution is highly dispersed, and its density function looks flat (see the density function corresponding to K₂ in Fig. 8.12). If K is large enough, all the samples can be accommodated inside the interquartile range of the distribution, and the ML estimator visualizes them as well behaved (no outliers). In this case, a desirable estimator would be the sample average, in complete agreement with the linear property. From this consideration, it should be clear that a fair approximation to the linear property can be obtained if K is large enough so that all the samples can be seen as well behaved under the generating Cauchy distribution. It has been observed experimentally that values of K on the order of the data range, K ≈ X_(N) − X_(1), often make the myriad an acceptable approximation to the sample average.
Figure 8.12 The role of the linearity parameter when the myriad is viewed as a maximum likelihood estimator. When K is large, the generating density function is spread and the data are visualized as well behaved (the optimal estimator is the sample average). For small values of K, the generating density becomes highly localized, and the data are visualized as very impulsive (the optimal estimator is a cluster locator).
Intermediate values of K assume a sample set with some outliers and some well-behaved samples. For example, when K = ½[X_(3N/4) − X_(N/4)], half the samples will be outside an interval around the myriad of length 2K = X_(3N/4) − X_(N/4) and will be considered as outliers. On the other hand, when K is small, the generating Cauchy distribution is highly localized, and its density function looks similar to a positive impulse. The effect of such a localized distribution is conceptually equivalent to observing the samples through a magnifying lens. In this case, most of the data look like possible outliers, and the ML estimator has trouble locating a large number of observations inside the interquartile range of the density (see the density function corresponding to K₁ in Fig. 8.12). Putting in doubt most of the data at hand, a desirable estimator would tend to maximize the number of samples inside the interquartile range, attempting to position the density function in the vicinity of a data cluster. In the limit case, when K → 0, the density function gets infinitely localized, and the only visible clusters will be made of repeated-value sets. In this case, one of the most crowded clusters (i.e., one of the most repeated values in the sample) will be located by the estimator, in accordance with the mode property. From this consideration, it should be clear that a fair approximation to the mode property can be obtained if K is made significantly smaller than the distances between sample elements. Empirical observations show that K on the order of

min_{i≠j} |X_i − X_j|

is often enough for the myriad to be considered approximately a mode-myriad. The myriad estimator thus offers a rich class of modes of operation that can be easily controlled by tuning the linearity
parameter K. When the noise is Gaussian, for example, large values of the linearity parameter can provide the optimal performance associated with the sample mean, whereas for highly impulsive noise statistics, the resistance of mode-type estimators can be achieved by using myriads with low linearity. The tradeoff between efficiency at the Gaussian model and resistance to impulsive noise can be managed by selecting appropriate values of K (see Fig. 8.13).
Figure 8.13 Functionality of the myriad as K is varied. Tuning the linearity parameter K adapts the behavior of the myriad from impulse-resistant mode-type estimators (small K) to the Gaussian-efficient sample mean (large K).
To illustrate the above, it is instructive to look at the behavior of the sample myriad shown in Figure 8.14. The solid line shows the values of the myriad as a function of K for the data set {0, 1, 3, 6, 7, 8, 9}. It can be observed that, as K increases, the myriad tends asymptotically to the sample average. On the other hand, as K is decreased, the myriad favors the value 7, which indicates the location of the cluster formed by the samples 6, 7, 8, 9. This is a typical behavior of the myriad for small K: it tends to favor values where samples are more likely to occur or cluster. The term myriad is coined as a result of this characteristic.
1oo 10’ Linearity Parameter ( K )
Figure 8.14 Values of the myriad as a function of K for the following data sets: (solid) original data set = {0, 1, 3, 6, 7, 8, 9}; (dash-dot) original set plus an additional observation at 20; (dotted) additional observation at 100; (dashed) additional observations at 800, −500, and 700.
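The solid curve of Figure 8.14 can be approximated numerically. The sketch below (the grid-search helper is my own, not from the text) sweeps K for the data set {0, 1, 3, 6, 7, 8, 9} and shows the transition from the cluster value 7 toward the sample mean 34/7 ≈ 4.86:

```python
import numpy as np

def sample_myriad(x, K, step=1e-3):
    """Grid-search sample myriad (Eq. 8.5); the optimum lies in [min(x), max(x)]."""
    x = np.asarray(x, dtype=float)
    betas = np.arange(x.min(), x.max() + step, step)
    cost = np.sum(np.log(K**2 + (x[:, None] - betas[None, :])**2), axis=0)
    return betas[np.argmin(cost)]

data = [0, 1, 3, 6, 7, 8, 9]
for K in (0.01, 0.1, 1.0, 10.0, 1000.0):
    print(K, sample_myriad(data, K))
```

Small K locks onto the 6, 7, 8, 9 cluster; large K recovers the sample average, as the linear property predicts.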
The dotted line shows how the sample myriad is affected by an additional observation of value 100. For large values of K, the myriad is very sensitive to this new observation. On the contrary, for small K, the variability of the data is assumed to be small, and the new observation is considered an outlier, not influencing significantly the value of the myriad. More interestingly, if the additional observations are the very large data 800, −500, 700 (dashed curve), the myriad is practically unchanged for moderate values of K (K < 10). This behavior exhibits a very desirable outlier rejection property, not found, for example, in median-type estimators.
Scale-Invariant Operation  Unlike the sample mean or median, the operation of the sample myriad is not scale invariant, that is, for fixed values of the linearity parameter, its behavior can vary depending on the units of the data. This is formalized in the following property.

PROPERTY 8.3 (SCALE INVARIANCE) Let β̂_K(X) denote the myriad of order K of the data in the vector X. Then, for c > 0,

β̂_K(cX) = c β̂_{K/c}(X).  (8.19)

Proof: Let X₁, X₂, …, X_N denote the data in X. Then, substituting β = cβ′,

β̂_K(cX) = arg min_β Π_{i=1}^N [K² + (cX_i − β)²] = c arg min_{β′} Π_{i=1}^N [(K/c)² + (X_i − β′)²] = c β̂_{K/c}(X).

According to (8.19), a change of scale in the data is preserved in the myriad only if K experiences the same change of scale. Thus, the scale dependence of the myriad can be easily overcome if K carries the units of the data, or in other words, if K is a scale parameter of the data.
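Property 8.3 is straightforward to check numerically. In the sketch below (the grid-search myriad is redefined so the example is self-contained), scaling the data by c while scaling K by the same factor scales the output by c:

```python
import numpy as np

def sample_myriad(x, K, step=1e-4):
    # Grid-search sample myriad (Eq. 8.5) over the sample range
    x = np.asarray(x, dtype=float)
    betas = np.arange(x.min(), x.max() + step, step)
    cost = np.sum(np.log(K**2 + (x[:, None] - betas[None, :])**2), axis=0)
    return betas[np.argmin(cost)]

x = np.array([-3.0, 10.0, 1.0, -1.0, 6.0])
c, K = 2.5, 2.0
lhs = sample_myriad(c * x, c * K)      # myriad of the scaled data with scaled K
rhs = c * sample_myriad(x, K)          # scaled myriad of the original data
print(lhs, rhs)
```

Up to the grid resolution, the two quantities agree, which is exactly the statement that scaling the data and K together preserves the estimate.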
8.3 OPTIMALITY OF THE SAMPLE MYRIAD

Optimality in the α-Stable Model  In addition to its optimality for the Cauchy distribution (α = 1), the sample myriad presents optimality properties in the α-stable framework. First, it is well known that the sample mean is the optimal location estimator at the Gaussian model; thus, by assigning large values to the linearity parameter, the linear property guarantees the optimality of the sample myriad for the Gaussian distribution (α = 2). The following result states the optimality of the myriad when α → 0, that is, when the impulsiveness of the distribution is very high. The proof of Proposition 8.1 can be found in [82].
PROPOSITION 8.1 Let T_{α,γ}(X₁, X₂, …, X_N) denote the maximum likelihood location estimator derived from a symmetric α-stable distribution with characteristic exponent α and dispersion γ. Then,

lim_{α→0} T_{α,γ}(X₁, X₂, …, X_N) = MYRIAD{0; X₁, X₂, …, X_N}.
This proposition states that the ML estimator of location derived from an α-stable distribution with small α behaves like the sample mode-myriad. Proposition 8.1 completes what is called the α-stable triplet of optimality points satisfied by the myriad. On one extreme (α = 2), when the distributions are very well behaved, the myriad reaches optimal efficiency by making K = ∞. In the middle (α = 1), the myriad reaches optimality by making K = γ, the dispersion parameter of the Cauchy distribution. On the other extreme (α → 0), when the distributions are extremely impulsive, the myriad reaches optimality again, this time by making K = 0.
The α-stable triplet demonstrates the central role played by myriad estimation in the α-stable framework. The very simple tuning of the linearity parameter empowers the myriad with good estimation capabilities under markedly different types of impulsiveness, from the very impulsive (α → 0) to the nonimpulsive (α = 2). Since lower values of K correspond to increased resistance to impulsive noise, it is intuitively pleasing that, for maximal impulsiveness (α → 0), the optimal K takes precisely its minimal value, K = 0. The same condition occurs at the other extreme: minimal levels of impulsiveness (α = 2) correspond to the maximal tuning value, K = ∞. Thus, as α is increased from 0 to 2, it is reasonable to expect, somehow, a progressive increase of the optimal K, from K = 0 to K = ∞. The following proposition provides information about the general behavior of the optimal K. Its proof is a direct consequence of Property 8.3 and the fact that γ is a scale parameter of the α-stable distribution.
PROPOSITION 8.2 Let α and γ denote the characteristic exponent and dispersion parameter of a symmetric α-stable distribution. Let K₀(α, γ) denote the optimal tuning value of K in the sense that it minimizes a given performance criterion (usually the variance) among the class of sample myriads with nonnegative linearity parameter. Then,

K₀(α, γ) = K₀(α, 1) γ.  (8.22)
Proposition 8.2 indicates a separability of K₀ in terms of α and γ, reducing the optimal tuning problem to that of determining the function K(α) = K₀(α, 1). This function is of fundamental importance for the proper operation of the myriad in the α-stable framework, and will be referred to as the α-K curve. Its form is conditioned on the performance criterion chosen, and it may even depend on the sample size. In general, as discussed above, the α-K curve is expected to be monotonically increasing, with K(0) = 0 (very impulsive point) and K(2) = ∞ (Gaussian point). If the performance criterion is the asymptotic variance, for example, then K(1) = 1, corresponding to the Cauchy point of the α-stable triplet. The exact computation of the α-K curve for α-stable distributions is still not determined. A simple empirical form that has consistently provided efficient results in a variety of conditions is
K(α) = √(α / (2 − α)),  (8.23)
which is plotted in Figure 8.15. The α-K curve is a valuable tool for estimation and filtering problems that must adapt to the impulsiveness conditions of the environment. α-K curves in the α-stable framework have been used, for example, to develop myriad-based adaptive detectors for channels with uncertain impulsiveness [84].

Optimality in the Generalized t Model  The family of generalized t distributions was introduced by Hall in 1966 as an empirical model for atmospheric radio noise [90]. These distributions have been found to provide accurate fits to different types of atmospheric noise found in practice. Because of its simplicity and parsimony, it has been used by Middleton as a mathematically tractable approximation to
Figure 8.15 Empirical α-K curve for α-stable distributions. The curve values at α = 0, 1, and 2 constitute the optimality points of the α-stable triplet.
his widely accepted models of electromagnetic radio noise [143]. Long before the introduction of the model by Hall, the generalized t distributions had been known in statistics as a family of heavy-tailed distributions categorized under type VII of Pearson's distributional system [177]. Generalized t density functions can be conveniently parameterized as

f(x) = c [ασ² + x²]^{−(α+1)/2},  (8.24)

where σ > 0, α > 0, and c is a normalizing constant given by

c = (Γ((α+1)/2) / (Γ(1/2) Γ(α/2))) (ασ²)^{α/2}.  (8.25)
It is easy to check that the distribution defined by (8.24) is algebraic-tailed, with tail constant a and scale parameter 0 . Although Q may take values larger than 2 , its meaning is conceptually
equivalent to the characteristic exponent of the astable framework. At one extreme, when Q + 00,the generalized t distribution is equivalent to a zero-mean Gaussian distribution with variance CJ 2 .
As it is the case
with α-stable distributions, decreased values of α correspond to increased levels of impulsiveness. For values of α ≤ 2, the impulsiveness becomes high enough to make the variance infinite, and when α = 1, the model corresponds to the Cauchy distribution. At the other extreme, when α → 0, the distribution exhibits the highest levels of impulsiveness. The maximum likelihood estimator of location derived from the t density in (8.24) is precisely the sample myriad with linearity parameter

    K = √α σ.    (8.26)
The optimality of the myriad for all the distributions in the generalized t family indicates its adequacy across a wide variety of noise environments, from the very impulsive (α → 0) to the well-behaved Gaussian (α → ∞). Expression (8.26) gives the optimal tuning law as a function of α and σ (note the close similarity with Equation (8.22) for α-stable distributions). Setting σ = 1, the α-K curve for generalized t distributions is obtained as K(α) = √α. Like the α-K curve of α-stable distributions, this curve is monotonically increasing and contains the optimality points of the α-stable triplet, namely the Gaussian point (K(∞) = ∞), the Cauchy point (K(1) = 1), and the very impulsive point (K(0) = 0). The generalized t model provides a simple framework to assess the performance of the sample myriad as the impulsiveness of the distribution is changed. It can be proven that the normalized asymptotic variance³ of the optimal sample myriad at the generalized t model is (for a derivation, see for example [81]):

    V_myr = α(α + 3)σ² / (α + 1).    (8.27)
A plot of V_myr versus α is shown in Figure 8.16 for σ = 1. The asymptotic variances of the sample mean (V_mean) and sample median (V_med) are also included for comparison [81]. The superiority of the sample myriad over both the mean and the median in the generalized t distribution model is evident from the figure.
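The tuning law (8.26) and the sample myriad itself are easy to exercise numerically. The sketch below is an assumption-laden illustration (hypothetical helper names; a dense grid search stands in for the fixed-point algorithm developed later in the chapter):

```python
import math

def sample_myriad(samples, k, grid=2001):
    """Sample myriad via grid search on the objective
    Q(beta) = sum_i log(K^2 + (X_i - beta)^2).  The output always lies
    within the range of the samples, so the grid spans [min, max]."""
    lo, hi = min(samples), max(samples)
    best_b, best_q = lo, float("inf")
    for j in range(grid):
        b = lo + (hi - lo) * j / (grid - 1)
        q = sum(math.log(k * k + (x - b) ** 2) for x in samples)
        if q < best_q:
            best_q, best_b = q, b
    return best_b

def optimal_k_generalized_t(alpha, sigma):
    """ML-optimal linearity parameter (8.26) under the generalized t model."""
    return math.sqrt(alpha) * sigma
```

For α = 1 and σ = 1 the rule returns K = 1, the Cauchy point, as expected.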
8.4 WEIGHTED MYRIAD SMOOTHERS

The sample myriad can be generalized to the weighted myriad smoother by assigning positive weights to the input samples (observations); the weights reflect their varying levels of reliability. To this end, the observations are assumed to be drawn from independent Cauchy random variables that are, however, not identically distributed. Given N observations {X_i}_{i=1}^N and nonnegative weights {W_i ≥ 0}_{i=1}^N, let the input and weight vectors be defined as X = [X_1, X_2, ..., X_N]^T and W = [W_1, W_2, ..., W_N]^T, respectively. For a given nominal scale factor K,
³Let V_T(N) be the variance of the estimator T when the sample size is N. Then, the normalized asymptotic variance V is defined as V = lim_{N→∞} N V_T(N).
Figure 8.16 Normalized asymptotic variance of the sample mean, sample median, and optimal sample myriad in generalized t noise. The myriad outperforms the mean and median for any level of impulsiveness.
the underlying random variables are assumed to be independent and Cauchy distributed with a common location parameter β but varying scale factors {S_i}_{i=1}^N: X_i ~ Cauchy(β, S_i), where the density function of X_i has the form

    f_{X_i}(x; β, S_i) = (1/π) · S_i / [S_i² + (x − β)²],  −∞ < x < ∞,    (8.28)

and where
    S_i = K / √W_i,  i = 1, 2, ..., N.    (8.29)

A larger value for the weight W_i (smaller scale S_i) makes the distribution of X_i more concentrated around β, thus increasing the reliability of the sample X_i. Note that the special case when all the weights are equal to unity corresponds to the sample myriad at the nominal scale factor K, with all the scale factors reducing to S_i = K. Again, the location estimation problem being considered here is closely related to the problem of smoothing a time series {X(n)} using a sliding window. The output Y(n), at time n, can be interpreted as an estimate of location based on the input samples {X_1(n), X_2(n), ..., X_N(n)}. Further, the aforementioned model of independent but not identically distributed samples synthesizes the temporal correlations usually present among the input samples. To see this, note that the output Y(n), as an estimate of location, would rely more on (give more weight to) the sample X(n), when compared with samples that are farther away in time. By assigning varying scale factors in modeling the input samples, leading to different weights (reliabilities), their temporal correlations can be effectively accounted for.
The weighted myriad smoother output β̂_K(W, X) is defined as the value of β that maximizes the likelihood function Π_{i=1}^N f_{X_i}(X_i; β, S_i). Using (8.28) for f_{X_i}(X_i; β, S_i) leads to

    β̂_K(W, X) = arg max_β Π_{i=1}^N (1/π) · S_i / [S_i² + (X_i − β)²],    (8.30)

which is equivalent to

    β̂_K(W, X) = arg min_β P(β),    (8.31)

where

    P(β) = Π_{i=1}^N [K² + W_i(X_i − β)²].    (8.32)

Alternatively, we can write β̂_K(W, X) as

    β̂_K(W, X) = arg min_β Q(β),  with  Q(β) = Σ_{i=1}^N log[K² + W_i(X_i − β)²];    (8.33)

thus β̂_K is the global minimizer of P(β) as well as of Q(β) = log(P(β)). Depending on the context, we refer to either of the functions P(β) and Q(β) as the weighted myriad smoother objective function. Note that when W_i = 0, the corresponding term drops out of P(β) and Q(β); thus a sample X_i is effectively ignored if its weight is zero. The definition of the weighted myriad is then formally stated.
DEFINITION 8.2 (WEIGHTED MYRIAD) Let W = [W_1, W_2, ..., W_N] be a vector of nonnegative weights. Given K > 0, the weighted myriad of order K for the data X_1, X_2, ..., X_N is defined as

    β̂_K = MYRIAD{K; W_1 ◦ X_1, ..., W_N ◦ X_N}
        = arg min_β Σ_{i=1}^N log[K² + W_i(X_i − β)²],    (8.34)

where W_i ◦ X_i represents the weighting operation in (8.34). In some situations, the following equivalent expression can be computationally more convenient:

    β̂_K = arg min_β Π_{i=1}^N [K² + W_i(X_i − β)²].    (8.35)
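Definition 8.2 translates directly into code. The sketch below evaluates Q(β) on a dense grid over the sample range [X_(1), X_(N)] (the output cannot fall outside this range); the function name and grid resolution are illustrative:

```python
import math

def weighted_myriad(x, w, k, grid=4001):
    """Weighted myriad of Definition 8.2 by grid search over the
    objective Q(beta) = sum_i log(K^2 + W_i (X_i - beta)^2).
    A sample with zero weight is effectively ignored."""
    lo, hi = min(x), max(x)          # output lies in [X_(1), X_(N)]
    best_b, best_q = lo, float("inf")
    for j in range(grid):
        b = lo + (hi - lo) * j / (grid - 1)
        q = sum(math.log(k * k + wi * (xi - b) ** 2) for xi, wi in zip(x, w))
        if q < best_q:
            best_q, best_b = q, b
    return best_b
```

For a very large K the output approaches the normalized weighted mean, and a zero-weighted sample has no influence on the result.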
It is important to note that the weighted myriad has only N independent parameters (even though there are N weights plus the parameter K). Using (8.35), it can be inferred that if the value of K is changed, the same smoother output can be obtained provided the smoother weights are appropriately scaled. Thus, the following is true for any c > 0:

    β̂_K(W, X) = β̂_{cK}(c²W, X),    (8.36)

since

    Π_{i=1}^N [(cK)² + c²W_i(X_i − β)²] = c^{2N} Π_{i=1}^N [K² + W_i(X_i − β)²].

Hence, the output depends only on the ratios W_i/K². The objective function P(β) in (8.35) is a polynomial in β of degree 2N, with well-defined derivatives of all orders. Therefore, P(β) (and the equivalent objective function Q(β)) can have at most 2N − 1 local extrema. The output β̂_K is thus one of the local minima of Q(β):

    Q'(β̂_K) = 0.
Figure 8.17 depicts a typical objective function Q(β) for various values of K and two different sets of weights. Note in the figure that the number of local minima of the objective function Q(β) depends on the value of the parameter K. Note also that the effect of the weight assigned to the outlier on the cost function (dashed lines in Fig. 8.17) is minimal for small values of K, but severe for large values of K. While the minimum is the same for both sets of weights when the small value of K is used, the minimum is shifted toward the outlier in the other case. As K gets larger, the number of local minima of Q(β) decreases. In fact, it can be proved [111] (by examining the second derivative Q''(β)) that a sufficient (but not necessary) condition for P(β) (and Q(β) = log(P(β))) to be convex and, therefore, have a unique local minimum, is that K > X_(N) − X_(1). This condition is not necessary, however; the onset of convexity can occur at a much lower K.
Figure 8.17 Sketch of a typical weighted myriad objective function Q(β) for the weights [1, 2, 3, 2, 1] (solid line) and [1, 100, 3, 2, 1] (dashed line), and the sample set [−1, 10, 3, 5, −3].
As stated in the next property, in the limit as K → ∞, with the weights {W_i} held constant, Q(β) exhibits a single local extremum. The proof is a generalized form of that used to prove the linear property of the unweighted sample myriad.

PROPERTY 8.4 (LINEAR PROPERTY) In the limit as K → ∞, the weighted myriad reduces to the normalized linear estimate

    β̂_∞ = lim_{K→∞} β̂_K = Σ_{i=1}^N W_i X_i / Σ_{j=1}^N W_j.    (8.40)

Again, because of the linear structure of the weighted myriad as K → ∞, the name "linearity parameter" is used for the parameter K. Equation (8.40) provides the link between the weighted
myriad and a constrained linear FIR filter: the weighted myriad smoother is analogous to the weighted mean smoother having its weights constrained to be nonnegative and normalized (summing to unity).
Figure 8.18 also illustrates that the output of the weighted myriad smoother is restricted to the dynamic range of the input. In consequence, this smoother is unable to amplify the dynamic range of an input signal.
PROPERTY 8.5 (NO UNDERSHOOT/OVERSHOOT) The output of a weighted myriad smoother is always bracketed by

    X_(1) ≤ β̂_K(W, X) ≤ X_(N),

where X_(1) and X_(N) denote the minimum and maximum samples in the input window.

Proof: For β < X_(1),

    K² + W_i(X_i − X_(1))² < K² + W_i(X_i − β)²,  i = 1, 2, ..., N,

and consequently

    Π_{i=1}^N [K² + W_i(X_i − X_(1))²] < Π_{i=1}^N [K² + W_i(X_i − β)²].

This implies that any value of β smaller than X_(1) leads to a larger value of the myriad objective function than X_(1) does. Therefore, the weighted myriad cannot be less than X_(1). A similar argument can be constructed for X_(N), leading to the conclusion that the weighted myriad cannot be larger than X_(N).

At the other extreme of linearity values (K → 0), the weighted myriad becomes what is referred
to as the weighted mode-myriad. Weighted mode-myriad smoothers maintain the same mode-like behavior as the unweighted mode-myriad, as stated in the following.
PROPERTY 8.6 (MODE PROPERTY) Given a vector of positive weights W = [W_1, ..., W_N], the weighted mode-myriad β̂_0 is always equal to one of the most repeated values in the sample. Furthermore,

    β̂_0 = arg min_{X_j ∈ M} Π_{X_i ≠ X_j} √W_i |X_i − X_j|,

where M is the set of most repeated values, and r is the number of times a member of M is repeated in the sample set.

Proof: Following the steps of the proof for the unweighted version, it is straightforward that

    β̂_0 = arg min_{X_j ∈ M} Π_{X_i ≠ X_j} W_i(X_i − X_j)².

Taking the square root of the expression to be minimized, the desired result is obtained.
PROPERTY 8.7 (OUTLIER REJECTION PROPERTY) Let K < ∞, and let W denote a vector of positive and finite weights. The outlier rejection property states that

    lim_{X_N→±∞} β̂_K(W; X_1, X_2, ..., X_N) = β̂_K(W; X_1, X_2, ..., X_{N−1}).    (8.44)
Figure 8.18 Sketch of a typical weighted myriad objective function Q(β).
This result can be shown as follows:

    lim_{X_N→±∞} β̂_K(W; X_1, ..., X_N) = lim_{X_N→±∞} arg min_β Π_{i=1}^N [K² + W_i(X_i − β)²].    (8.45)

Evaluating the limit on the right,

    lim_{X_N→±∞} β̂_K(W; X_1, ..., X_N) = arg min_β W_N Π_{i=1}^{N−1} [K² + W_i(X_i − β)²],    (8.46)

and, since W_N is positive,

    lim_{X_N→±∞} β̂_K(W; X_1, ..., X_N) = arg min_β Π_{i=1}^{N−1} [K² + W_i(X_i − β)²] = β̂_K(W; X_1, ..., X_{N−1}).    (8.47)
According to Property 8.7, large gross errors are efficiently eliminated by any weighted myriad smoother with a finite linearity parameter. Note that this is not the
case for the weighted median smoother, in which large positive (negative) errors can always shift the value of the smoother to the right (left).
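Property 8.7 is easy to verify numerically with the selection weighted myriad, which restricts the minimization of the objective to the input samples themselves; the helper names below are illustrative:

```python
import math

def q_obj(beta, x, w, k):
    """Weighted myriad objective Q(beta) = sum_i log(K^2 + W_i (X_i - beta)^2)."""
    return sum(math.log(k * k + wi * (xi - beta) ** 2) for xi, wi in zip(x, w))

def selection_myriad(x, w, k):
    """Selection weighted myriad: minimize Q over the input samples only."""
    return min(x, key=lambda xi: q_obj(xi, x, w, k))
```

Appending a gross outlier (here 10⁶) to a cluster of well-behaved samples leaves the output among the well-behaved values, in line with (8.44); a weighted mean or weighted median would instead be dragged toward (or bounded by) the outlier.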
PROPERTY 8.8 (UNBIASEDNESS) Let X_1, X_2, ..., X_N be independent and symmetrically distributed around the point of symmetry c. Then, β̂_K is also symmetrically distributed around c. In particular, if E[β̂_K] exists, then E[β̂_K] = c.

Proof: If X_i is symmetric about c, then 2c − X_i has the same distribution as X_i. It follows that β̂_K(X_1, X_2, ..., X_N) has the same distribution as β̂_K(2c − X_1, 2c − X_2, ..., 2c − X_N), which, from the property stated in Problem 8.8, is identical to 2c − β̂_K(X_1, X_2, ..., X_N). It follows that β̂_K(X_1, X_2, ..., X_N) is symmetrically distributed about c.
Geometrical Interpretation  The weighted myriad as defined in (8.35) can also be interpreted in a more intuitive manner. Allow a vertical bar to run horizontally along the real line, as depicted in Figure 8.19a. Then the sample myriad indicates the position of the bar for which the product of distances from point A to the sample points X_1, X_2, ..., X_N is minimum. If weights are introduced, each sample point X_i is assigned a different point A_i on the bar, as illustrated in Figure 8.19b. The geometrical interpretation of the myriad is intuitively insightful. When K approaches 0, it gives a conceptually simple pictorial demonstration of the mode-myriad formula in (8.13).
8.5 FAST WEIGHTED MYRIAD COMPUTATION

Unlike the weighted mean or weighted median, the weighted myriad is not available in explicit form. Its direct computation is therefore a nontrivial task, since it involves the minimization of the weighted myriad objective function Q(β) in (8.33). The myriad objective function, however, has a number of characteristics that can be exploited to construct fast iterative methods to compute its minimum. Recall that the weighted myriad is given by
    β̂_K = arg min_β log(P(β)) = arg min_β Q(β),    (8.48)

where Q(β) is the weighted myriad objective function. Having well-defined derivatives, β̂_K is one of the local minima of Q(β), that is, one of the values for which Q'(β) = 0. Since Q(β) = log(P(β)), the derivative of Q(β) can be written as

    Q'(β) = P'(β) / P(β).    (8.49)
Figure 8.19 (a) The sample myriad indicates the position of a moving bar such that the product of distances from point A to the sample points X_1, X_2, ..., X_N is minimum. (b) If the weight W_q > 1 is introduced, the product of distances is more sensitive to variations of the corresponding segment, very likely resulting in a weighted myriad closer to X_q.
Expanding Q(β) using (8.33),

    Q'(β) = Σ_{i=1}^N 2W_i(β − X_i) / [K² + W_i(X_i − β)²].    (8.50)

From (8.49) and (8.50), it follows that the local extrema of Q(β) satisfy

    Σ_{i=1}^N W_i(X_i − β) / [K² + W_i(X_i − β)²] = 0.    (8.51)

Using the fact that S_i = K/√W_i, the above can be written as

    Σ_{i=1}^N (1/S_i) · 2[(X_i − β)/S_i] / [1 + ((X_i − β)/S_i)²] = 0.    (8.52)

Defining

    ψ(v) = 2v / (1 + v²),    (8.53)

and referring to (8.52), the following equation is obtained for the local extrema of Q(β):

    Σ_{i=1}^N (1/S_i) ψ((X_i − β)/S_i) = 0.    (8.54)

By introducing the positive functions

    h_i(β) = (1/S_i²) φ((X_i − β)/S_i) = W_i / [K² + W_i(X_i − β)²],    (8.55)

for i = 1, 2, ..., N, where

    φ(v) = ψ(v)/(2v) = 1 / (1 + v²),    (8.56)

the local extrema of Q(β) in (8.54) can be formulated as

    Σ_{i=1}^N h_i(β) · (X_i − β) = 0.    (8.57)

This formulation implies that the sum of weighted deviations of the samples is zero, with the (positive) weights themselves being functions of β. This property, in turn, leads to a simple iterative approach to compute the weighted myriad, as detailed next.
Fixed Point Formulation  Equation (8.57) can be written as

    β = Σ_{i=1}^N h_i(β) X_i / Σ_{i=1}^N h_i(β),    (8.58)

where it can be seen that each local extremum of Q(β), including the weighted myriad β̂_K, can be written as a weighted mean of the input samples X_i. Since the weights h_i(β) are always positive, the right-hand side of (8.58) lies in (X_(1), X_(N)), confirming that all the local extrema lie within the range of the input samples. By defining the mapping

    T(β) = Σ_{i=1}^N h_i(β) X_i / Σ_{i=1}^N h_i(β),    (8.59)

the local extrema of Q(β), or the roots of Q'(β), are seen to be the fixed points of T:

    β* = T(β*).    (8.60)

The following fixed point iteration results in an efficient algorithm to compute these fixed points:

    β_{m+1} = T(β_m) = Σ_{i=1}^N h_i(β_m) X_i / Σ_{i=1}^N h_i(β_m).    (8.61)

In the classical literature, this is also called the method of successive approximation for the solution of the equation β = T(β) [112]. It has been proven that the iterative method of (8.61) converges to a fixed point of T(·); thus,

    lim_{m→∞} β_m = β* = T(β*).    (8.62)
The recursion of (8.61) can be benchmarked against the update in Newton's method [112] for the solution of the equation Q'(β) = 0:

    β_{m+1} = β_m − Q'(β_m) / Q''(β_m),    (8.63)

which is interpreted by considering the tangent of Q'(β) at β = β_m:

    l(β) = Q'(β_m) + Q''(β_m)(β − β_m).

Here, l(β) is used as a linear approximation of Q'(β) around the point β_m, and β_{m+1} is the point at which the tangent l(β) crosses the β axis: l(β_{m+1}) = 0. Although Newton's method can have fast (quadratic) convergence, its major disadvantage is that it may converge only if the initial value β_0 is sufficiently close to the solution β* [112]. Thus, only local convergence is guaranteed. On the other hand, Kalluri and Arce [112] have shown that the fixed point iteration method of (8.61) decreases the objective function Q(β) at each step, leading to global convergence (convergence from an arbitrary starting point).
Table 8.1 Summary of the fast iterative weighted myriad search.

    Parameters:            L = number of iterations in the search
    Select initial point:  β_0 = arg min_{X_i} P(X_i)
    Computation:           For m = 0, 1, ..., L − 1:  β_{m+1} = T(β_m)

The speed of convergence of the iterative algorithm (8.61) depends on the initial value β_0. A simple approach to selecting β_0 is to set it equal to the input sample X_i that leads to the smallest cost P(X_i).
Fixed Point Weighted Myriad Search
Step 1: Select the initial point β_0 among the values of the input samples:

    β_0 = arg min_{X_i} P(X_i).

Step 2: Using β_0 as the initial value, perform L iterations of the fixed point recursion β_{m+1} = T(β_m) of (8.61). The final value of these iterations is then chosen as the weighted myriad: β̂_K = T^(L)(β_0).

This algorithm can be compactly written as

    β̂_K = T^(L)( arg min_{X_i} P(X_i) ).    (8.64)

Note that for the special case L = 0 (meaning that no fixed point iterations are performed), the above algorithm computes the selection weighted myriad. Table 8.1 summarizes the iterative weighted myriad search algorithm.
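The two-step search of Table 8.1 can be sketched as follows (a minimal implementation; the default iteration count is illustrative):

```python
import math

def fixed_point_weighted_myriad(x, w, k, iters=10):
    """Fixed-point weighted myriad search of Table 8.1.
    Step 1: start from the input sample with the smallest objective value
    (the selection myriad, i.e. the L = 0 case).
    Step 2: iterate beta <- T(beta), the weighted mean (8.61) whose
    data-dependent weights are h_i(beta) = W_i / (K^2 + W_i (X_i - beta)^2)."""
    def q(beta):
        # log-form objective Q(beta); equivalent to P(beta) for the argmin
        return sum(math.log(k * k + wi * (xi - beta) ** 2)
                   for xi, wi in zip(x, w))

    beta = min(x, key=q)                 # Step 1: selection myriad as the start
    for _ in range(iters):               # Step 2: successive approximation
        h = [wi / (k * k + wi * (xi - beta) ** 2) for xi, wi in zip(x, w)]
        beta = sum(hi * xi for hi, xi in zip(h, x)) / sum(h)
    return beta
```

Because each update is a positively weighted mean of the samples, every iterate stays inside [X_(1), X_(N)], consistent with the no undershoot/overshoot property.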
8.6 WEIGHTED MYRIAD SMOOTHER DESIGN

8.6.1 Center-Weighted Myriads for Image Denoising

Median smoothers are effective at image denoising, especially for impulsive noise, which often results from bit errors in the transmission and/or acquisition stages. As a subset of traditional weighted median smoothers, center-weighted median (CWM) smoothers provide similar performance with much less complexity. In CW medians, only the center sample in the processing window is assigned a weight, and all other samples are treated equally without emphasis, that is, they are assigned a weight of 1. The larger the center weight, the less smoothing is achieved. Increasing the center weight beyond a certain threshold turns the CW median into an identity operation. On the other hand, when the center weight is set to unity (the same as the other weights), the CW median becomes the sample median. The same notion of center weighting can be applied to the myriad structure as well, thus leading to the center-weighted myriad (CWMy) smoother, defined as
    β̂ = MYRIAD{K; X_1, ..., W_c ◦ X_c, ..., X_N}.    (8.65)

The cost function in (8.33) is now modified to

    Q(β) = log[K² + W_c(X_c − β)²] + Σ_{i ≠ c} log[K² + (X_i − β)²].    (8.66)
While similar to the CW median, the center-weighted myriad smoother above has significant differences. First, in addition to the center weight W_c, the CWMy has the free parameter K that controls impulse rejection. This provides a simple mechanism to attain better smoothing performance. Second, the center weight in the CWMy smoother is inevitably data dependent, according to the definition of the objective function in (8.66). For different applications, the center weight should be adjusted based on the data range. For grayscale image denoising applications where pixel values are normalized between 0 and 1, the two parameters of the CWMy smoother can be chosen as follows:
(1) Choose K = (X_(U) + X_(L))/2, where 1 ≤ L < U ≤ N, with X_(U) being the Uth smallest sample in the window and X_(L) the Lth smallest sample.

(2) Set W_c = 10,000.⁴

The linearity parameter K is dynamically calculated based on the samples in the processing window. When there is "salt" noise in the window (outliers having large values), the myriad structure ensures that it is deemphasized because of the outlier rejection property associated with a finite K. The center weight W_c is chosen to achieve a balance between outlier rejection and detail preservation. It should be large enough to emphasize the center sample and preserve signal details, but small enough that it does not let impulsive noise through. It can also be shown [129] that the CWMy smoother, with K and W_c defined as above, has the capability of rejecting "pepper"-type noise (values close to 0). This can be seen as follows. For a single "pepper" outlier sample, the cost function (8.66) evaluated at β = K will always be smaller than that at β = 0.

⁴This is an empirical value. A larger value of the center weight will retain details in the image, but it will also cause some of the impulses to show in the output. A smaller value will eliminate these impulses, but it might cause some loss of detail. These characteristics are shown in Figure 8.22.
Thus, "pepper" noise will never pass through the smoother if the parameters K and W_c are chosen as indicated. Denote by X the corrupted image, by Y the output smoothed image, and by CWMy(·) the smoother operation. A two-pass CWMy smoother can then be defined as

    Y = 1 − CWMy(1 − CWMy(X)).    (8.67)
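A sketch of the center-weighted myriad for a single 5 × 5 window of a normalized grayscale image follows. The rank choices (L = 5, U = 21), the form of K, and the grid minimization reflect the rules quoted above as reconstructed here, so treat them as assumptions to verify against the application:

```python
import math

def cwmy(window, center_idx, wc=10_000.0, lo_rank=5, up_rank=21, grid=513):
    """One output of the center-weighted myriad (CWMy) smoother for a
    window of pixel values normalized to [0, 1].  K is set adaptively
    from the order statistics, K = (X_(U) + X_(L)) / 2, and only the
    center sample carries the large weight Wc (all others weigh 1)."""
    s = sorted(window)
    k = 0.5 * (s[up_rank - 1] + s[lo_rank - 1])   # adaptive linearity parameter

    def cost(beta):
        # cost function (8.66): weighted center term, plain terms elsewhere
        total = 0.0
        for i, x in enumerate(window):
            w = wc if i == center_idx else 1.0
            total += math.log(k * k + w * (x - beta) ** 2)
        return total

    # grid search over [0, 1], the dynamic range of the normalized image
    return min((j / (grid - 1) for j in range(grid)), key=cost)
```

On a flat 5 × 5 patch of value 0.5 whose center pixel is hit by a salt (1.0) or pepper (0.0) impulse, the output stays near 0.5: the impulse is rejected despite the very large center weight.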
Figure 8.20 depicts the results of the algorithm defined in (8.67). Figure 8.20a shows the original image, and Figure 8.20b the same image corrupted by 5% salt-and-pepper noise. The impulses occur randomly and were generated with MATLAB's imnoise function. Figure 8.20c is the output of a 5 × 5 CWM smoother with W_c = 15; Figure 8.20d is the CWMy smoother output with W_c = 10,000 and K = (X_(21) + X_(5))/2. The superior performance of the CWMy smoother can be readily seen in this figure. The CWMy smoother preserves the original image features significantly better than the CWM smoother. The mean square error of the CWMy output is consistently less than half that of the CWM output for this particular image. The effect of the center weight can be appreciated in Figures 8.22 and 8.23. A low value of the center weight removes all the impulsive noise at the expense of oversmoothing the image. On the other hand, a value higher than the recommended one maintains the details of the image, but some of the impulsive noise begins to leak into the output of the filter.
8.6.2 Myriadization

The linear property indicates that for very large values of K, the weighted myriad smoother reduces to a constrained linear FIR smoother. The meaning of K suggests that a linear smoother can be provided with resistance to impulsive noise by simply reducing the linearity parameter from K = ∞ to a finite value. This transforms the linear smoother into a myriad smoother with the same weights. In the same way that the term linearization commonly denotes the transformation of an operator into a linear one, the above transformation is referred to as myriadization. Myriadization is a simple but powerful technique that brings impulse resistance to constrained linear filters. It also provides a simple methodology for designing suboptimal myriad smoothers in impulsive environments. Basically, a constrained linear smoother can be designed for Gaussian or noiseless environments using FIR filter (smoother) design techniques, and then provided with impulse resistance capabilities by means of myriadization. The value to which K is to be reduced can be chosen according to the impulsiveness of the environment, for example by means of an α-K curve. It must be taken into account that a linear smoother has to be in constrained form before myriadization can be applied. This means that the smoother coefficients W_i must be nonnegative and satisfy the normalization condition Σ_i W_i = 1. A smoother for which Σ_i W_i ≠ 1 must first be decomposed into the cascade of its
Figure 8.20 (a) Original image; (b) image with 5% salt-and-pepper noise (PSNR = 17.75 dB); (c) smoothed with a 5 × 5 center-weighted median with W_c = 15 (PSNR = 37.48 dB); (d) smoothed with a 5 × 5 center-weighted myriad with W_c = 10,000 and K = (X_(21) + X_(5))/2 (PSNR = 39.98 dB).

Figure 8.21 Comparison of different filtering schemes (enlarged). (a) Original image; (b) image smoothed with a center-weighted median (PSNR = 37.48 dB); (c) image smoothed with a 5 × 5 permutation weighted median (PSNR = 35.55 dB); (d) image smoothed with the center-weighted myriad (PSNR = 39.98 dB).

Figure 8.22 Output of the center-weighted myriad smoother for different values of the center weight W_c: (a) original image; (b) 100 (PSNR = 36.74 dB); (c) 10,000 (PSNR = 39.98 dB); (d) 1,000,000 (PSNR =

Figure 8.23 Output of the center-weighted myriad smoother for different values of the center weight W_c (enlarged): (a) original image; (b) 100; (c) 10,000; (d) 1,000,000.
normalized version with an amplifier of gain Σ_i W_i. Design by myriadization is illustrated in the following example.

EXAMPLE 8.6 (ROBUST LOW-PASS FILTERING)
Figure 8.24a depicts a unit-amplitude linearly swept-frequency cosine signal spanning instantaneous frequencies from 0 to 400 Hz. The chirp was generated with MATLAB's chirp function with a sampling interval of 0.0005 seconds. Figure 8.24b shows the chirp immersed in additive Cauchy noise (γ = 1); the plot is truncated to the same scale as the other signals in the figure. A low-pass linear FIR smoother with 30 coefficients processes the chirp with the goal of retaining its low-frequency components. The FIR low-pass smoother weights were designed with MATLAB's fir1 function with a normalized frequency cutoff of 0.05. Under ideal, no-noise conditions, the output of the linear smoother would be that of Figure 8.24c. However, the impulsive nature of the noise introduces severe distortions in the actual output, as depicted in Figure 8.24d. Myriadizing the linear smoother by reducing K to a finite value of 0.5 significantly improves the smoother performance (see Figs. 8.24e and 8.24f). Further reduction of K to 0.2 drives the myriad closer to a selection mode, where some distortion of the smoother output under ideal conditions can be seen (see Fig. 8.24g). The output under noisy conditions is not improved by reducing K to 0.2 or lower, as the smoother in this case is driven toward a selection operation mode.
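Myriadization of one window of a normalized FIR smoother can be sketched as below (the fixed-point computation mirrors Section 8.5; the 5-tap moving-average weights and the K value are illustrative, not the 30-tap design of the example):

```python
import math

def myriadize(x_window, weights, k, iters=8):
    """One output sample of a myriadized smoother: keep the nonnegative,
    normalized FIR weights, replace K = infinity by a finite value, and
    compute the weighted myriad by the fixed-point search of Table 8.1."""
    assert all(w >= 0.0 for w in weights) and abs(sum(weights) - 1.0) < 1e-9

    def q(beta):
        return sum(math.log(k * k + w * (xi - beta) ** 2)
                   for xi, w in zip(x_window, weights))

    beta = min(x_window, key=q)                 # selection myriad as the start
    for _ in range(iters):
        h = [w / (k * k + w * (xi - beta) ** 2)
             for xi, w in zip(x_window, weights)]
        beta = sum(hi * xi for hi, xi in zip(h, x_window)) / sum(h)
    return beta
```

With an impulse of amplitude 100 inside the window, the linear moving average is pulled above 20, while the myriadized smoother with K = 0.5 stays near the clean signal level.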
EXAMPLE 8.7 (MYRIADIZATION OF PHASE-LOCKED LOOP FILTERS)

First-order phase-locked loop (PLL) systems, depicted in Figure 8.25, are widely used for recovering the carrier phase in coherent demodulators [146]. Conventional PLLs utilize a linear FIR low-pass filter intended to pass only the low frequencies generated by the multiplier. The output of the low-pass filter represents the phase error between the incoming carrier and the recovered tone provided by the controlled oscillator. The system is working properly (i.e., achieving synchronism) whenever the output of the low-pass filter is close to zero. To test the PLL mechanisms, the FIR filter was synthesized with 13 normalized coefficients; the incoming signal is a high-frequency sinusoid of unit amplitude, immersed in additive white Gaussian noise of variance 10⁻³, yielding a signal-to-noise ratio (SNR) of 30 dB. The parameters of the system, including the (linear) filter weights and the oscillator gain, were manually adjusted so that the error signal had minimum variance. Three different scenarios, corresponding to three different low-pass filter structures, were simulated. The incoming and noise signals were identical for the three systems. At three arbitrary time points (t ≈ 400, 820, 1040), short bursts of high-power Gaussian noise were added to the noise
Figure 8.24 Myriadizing a linear low-pass smoother in an impulsive environment: (a) chirp signal; (b) chirp in additive impulsive noise; ideal (no noise) myriad smoother output with (c) K = ∞, (e) K = 0.5, and (g) K = 0.2; myriad smoother output in the presence of noise with (d) K = ∞, (f) K = 0.5, and (h) K = 0.2.
signal. The length of the bursts was relatively short (between 4 and 10 sampling times) compared with the length of the filter impulse response (12 sampling times). The SNR during the burst periods was very low (about −10 dB), making the noise highly impulsive. Figure 8.26a shows the phase error over time when the standard linear filter was used. It is evident from the figure that this system is very likely to lose synchronism after a heavy burst. Figure 8.26b shows the phase error in a second scenario, in which a weighted median filter was designed to imitate the low-pass characteristics of the original linear filter [6, 201]. Although the short noise bursts do not affect the estimate of the phase, the variance of the estimate is very large. This noise amplification behavior can be explained by the inefficiency introduced by the selection property of the median, that is, the fact that the filter output is always constrained to be one of its inputs. Finally, Figure 8.26c shows the phase error after the low-pass filter has been myriadized using a parameter K equal to half the carrier amplitude. Although the phase error increases during the bursts, the performance of the myriadized PLL is not degraded, and the system does not lose synchronism.
Figure 8.25 Block diagram of the Phase-Locked Loop system (phase detector, low-pass filter, and controlled oscillator).
Figure 8.26 Phase error plot for the PLL with (a) a linear FIR filter; (b) an optimal weighted median filter; and (c) a myriadized version of the linear filter.
Problems

8.1 Show that:

(a) A FLOM smoother with p < 1 is selection type.
(b) The gamma smoother with p = 2 outputs the sample that is closest to the mean of the input samples.

8.2 Prove that the sample mode-myriad is shift and scale invariant. Thus, given Z_i = aX_i + b, for i = 1, ..., N, show that

    β̂_0(Z_1, ..., Z_N) = a β̂_0(X_1, ..., X_N) + b.

8.3 Prove that the sample mode-myriad satisfies the "no overshoot/undershoot" property; that is, β̂_0 is always bounded by

    X_(2) ≤ β̂_0 ≤ X_(N−1),

where X_(i) denotes the ith order statistic of the sample.

8.4 Show that if N = 3, β̂_0 is equivalent to the sample median.

8.5 Show that for a Cauchy distribution with dispersion parameter K, the semi-interquartile range equals K.
8.6 For the weighted myriad smoother defined in (8.34), show that:

8.7 Prove Property 8.4, the linear property of the weighted myriad.

8.8 (Shift and sign invariance properties of the weighted myriad) Let Z_i = X_i + b. Show that, for any K and W,

(a) β̂_K(Z_1, ..., Z_N) = β̂_K(X_1, ..., X_N) + b;
(b) β̂_K(−Z_1, ..., −Z_N) = −β̂_K(Z_1, ..., Z_N).

8.9 (Gravitational property of the weighted myriad) Let W denote a vector of positive, finite weights. Show that there always exists a sample X_i such that

    |β̂_K − X_i| ≤ K/√W_i,    (8.71)

where W_i is the weight assigned to X_i.
Weighted Myriad Filters

Myriad smoothers, admitting positive weights only, are in essence low-pass-type filters. Weighted myriad smoothers are thus analogous to normalized linear FIR filters with nonnegative weights summing to unity. There is a clear need to extend these smoothers into a general filter structure, comparable to linear FIR filters, that admits real-valued weights. In the same way that weighted median smoothers are extended to the weighted median filter, a generalized weighted myriad filter structure that admits real-valued weights is feasible. This chapter describes the structure and properties of this class of filters, which admit positive as well as negative weights. Adaptive optimization algorithms are also presented. As would be expected, weighted myriad filters reduce to weighted myriad smoothers whenever the filter coefficients are constrained to be positive.
The approach used to generalize median smoothers to a general class of median filters can be used to develop a generalized class of weighted myriad filters. To this end, the set of real-valued
weights are first decoupled in their sign and magnitude. The sign of each weight is then attached to the corresponding input sample and the weight magnitude is used as a positive weight in the
weighted myriad smoother structure. Starting from the definition of the weighted myriad smoothers (Def. 8.2), the class of weighted myriad filters admitting real-valued weights emerges as follows:
DEFINITION 9.1 (WEIGHTED MYRIAD FILTERS) Given a set of N real-valued weights (W_1, W_2, ..., W_N) and the observation vector X = [X_1, X_2, ..., X_N]^T, the weighted myriad filter output is defined as

β̂_K = arg min_β Q(β),   (9.1)

where

Q(β) = Σ_{i=1}^N log [ K² + |W_i| (sgn(W_i) X_i − β)² ]   (9.2)

is the objective function of the weighted myriad filter. Since the log function is monotonic, the weighted myriad filter is also defined by the equivalent expression

β̂_K = arg min_β P(β),   (9.3)

where

P(β) = Π_{i=1}^N [ K² + |W_i| (sgn(W_i) X_i − β)² ];   (9.4)

thus β̂_K is the global minimizer of P(β) as well as of Q(β) = log(P(β)). Like the weighted myriad smoother, the weighted myriad filter has only N independent parameters. Using (9.2), it can be inferred that if the value of K is changed, the same filter output can be obtained provided the filter weights are appropriately scaled, since

Q(β) = N log K² + Σ_{i=1}^N log [ 1 + (|W_i|/K²) (sgn(W_i) X_i − β)² ],   (9.5)

so that

β̂_K(W_1, ..., W_N) = β̂_1(W_1/K², ..., W_N/K²).   (9.6)

Hence, the output depends only on the ratios W_i/K². The objective function P(β) in (9.4) is a polynomial in β of degree 2N, with well-defined derivatives of all orders. Therefore, P(β) (and the equivalent objective function Q(β)) can have at most 2N − 1 local extrema. The output is thus one of the local minima of Q(β):

Q'(β̂_K) = 0.
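The minimization in the definition is easy to visualize numerically. The following Python sketch (an illustration added here, not part of the original text; the sample values are arbitrary) evaluates Q(β) on a dense grid and returns the minimizing β:

```python
import numpy as np

def weighted_myriad_grid(x, w, k, n_grid=20001):
    """Weighted myriad output by brute-force minimization of
    Q(beta) = sum_i log(K^2 + |w_i| (sgn(w_i) x_i - beta)^2)
    over a dense grid of beta values (illustrative only)."""
    s = np.sign(w) * x                                   # signed samples sgn(W_i) X_i
    grid = np.linspace(s.min(), s.max(), n_grid)         # minimizer lies in their range
    q = np.log(k**2 + np.abs(w)[:, None] * (s[:, None] - grid[None, :])**2)
    return grid[np.argmin(q.sum(axis=0))]

x = np.array([-1.0, 10.0, 3.0, 5.0, -3.0])
w = np.array([1.0, 2.0, 3.0, -2.0, 1.0])
print(weighted_myriad_grid(x, w, k=1.0))
```

A grid search is of course far too slow for filtering; the fixed-point algorithm of Section 9.2 is the practical computation, but the brute-force version is convenient for checking the properties discussed next.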
Figure 9.1 depicts a typical objective function Q(β) for various values of K. The effect of assigning a negative weight on the cost function is illustrated in this figure. As in the smoother case, the number of local minima in the objective function Q(β) depends on the value of the parameter K. When K is very large, only one extremum exists.
Figure 9.1 Weighted myriad cost function for the sample set X = [−1, 10, 3, 5, −3] with weights W = [1, 2, 3, −2, 1], for K = 0.1, 1, 10.

In the limit as K → ∞, with the weights {W_i} held constant, it can be shown that Q(β) exhibits a single local extremum. The proof is a generalized form of that used to prove the linear property of weighted myriad smoothers.
PROPERTY 9.1 (LINEAR PROPERTY) In the limit as K → ∞, the weighted myriad filter reduces to the normalized linear FIR filter

lim_{K→∞} β̂_K = ( Σ_{i=1}^N W_i X_i ) / ( Σ_{i=1}^N |W_i| ).
Once again, the name "linearity parameter" is used for the parameter K. At the other extreme of linearity values (K → 0), the weighted myriad filter maintains a mode-like behavior, as stated in the following property.
PROPERTY 9.2 (MODE PROPERTY) Given a vector of real-valued weights W = [W_1, ..., W_N], the weighted mode-myriad β̂_0 is always equal to one of the most repeated values in the signed sample set {sgn(W_1)X_1, sgn(W_2)X_2, ..., sgn(W_N)X_N}. Furthermore,

β̂_0 = arg min_{X_j ∈ M} Π_{i : sgn(W_i)X_i ≠ X_j} |W_i| (sgn(W_i) X_i − X_j)²,

where M is the set of most repeated signed values, and r is the number of times a member of M is repeated in the signed sample set (each product above thus runs over the N − r samples different from X_j).
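Both limiting behaviors are easy to check numerically. The sketch below (illustrative only; the data are arbitrary) minimizes Q(β) on a grid and confirms that a large K reproduces the normalized linear FIR output, while a small K drives the output to a most repeated signed sample:

```python
import numpy as np

def weighted_myriad(x, w, k, n_grid=200001):
    # Grid search over Q(beta); for illustration only.
    s = np.sign(w) * x
    grid = np.linspace(s.min() - 1.0, s.max() + 1.0, n_grid)
    q = np.log(k**2 + np.abs(w)[:, None] * (s[:, None] - grid[None, :])**2)
    return grid[np.argmin(q.sum(axis=0))]

x = np.array([-1.0, 10.0, 3.0, 5.0, 3.0])
w = np.array([1.0, 2.0, 3.0, -2.0, 1.0])
# Signed samples: [-1, 10, 3, -5, 3]; the value 3 is repeated twice.

# Linear property: a very large K gives the normalized linear FIR output.
lin = np.dot(w, x) / np.abs(w).sum()
print(weighted_myriad(x, w, k=1e3), lin)

# Mode property: a very small K snaps the output to a most repeated signed sample.
print(weighted_myriad(x, w, k=1e-3))
```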
Using the same technique developed in Section 8.5, a fast computation of the real-valued weighted myriad can be derived. Recall that the myriad objective function is now

Q(β) = Σ_{i=1}^N log [ K² + |W_i| (sgn(W_i) X_i − β)² ].   (9.10)
Its derivative with respect to β is easily found as

Q'(β) = Σ_{i=1}^N 2 |W_i| (β − sgn(W_i) X_i) / [ K² + |W_i| (sgn(W_i) X_i − β)² ].   (9.11)

By introducing the positive functions

h_i(β) = |W_i| / [ K² + |W_i| (sgn(W_i) X_i − β)² ],   (9.12)

for i = 1, 2, ..., N, the local extrema of Q(β) are seen to satisfy the condition

Σ_{i=1}^N h_i(β) (sgn(W_i) X_i − β) = 0.   (9.13)

Since β̂_K is one of the local minima of Q(β), we have Q'(β̂_K) = 0. Equation (9.13) can be further written so that each local extremum appears as a weighted mean of the signed samples:

β = Σ_{i=1}^N h_i(β) sgn(W_i) X_i / Σ_{i=1}^N h_i(β).   (9.14)

By defining the mapping

T(β) = Σ_{i=1}^N h_i(β) sgn(W_i) X_i / Σ_{i=1}^N h_i(β),   (9.15)

the local extrema of Q(β), or the roots of Q'(β), are seen to be the fixed points of T:

β* = T(β*).   (9.16)
The following efficient fixed point iteration algorithm is used to compute the fixed points:

β_{m+1} = T(β_m) = Σ_{i=1}^N h_i(β_m) sgn(W_i) X_i / Σ_{i=1}^N h_i(β_m).   (9.17)

In the limit, the above iteration converges to one of the fixed points of T(·):

lim_{m→∞} β_{m+1} = β* = T(β*).   (9.18)
It is clear that the global convergence feature of this fixed point iteration can be assured from the analysis in Section 8.5, since the only difference is the sample set.
Fixed Point Weighted Myriad Search Algorithm

Step 1: Couple the signs of the weights and the samples to form the signed sample vector [sgn(W_1)X_1, sgn(W_2)X_2, ..., sgn(W_N)X_N].

Step 2: Compute the selection weighted myriad:

β̂_0 = arg min_{β ∈ {sgn(W_i)X_i}} P(β).

Step 3: Using β̂_0 as the initial value, perform L iterations of the fixed point recursion β_{m+1} = T(β_m) of (9.17). The final value of these iterations is then chosen as the weighted myriad: β̂_FP = T^{(L)}(β̂_0).

The compact expression of the above algorithm is

β̂_FP = T^{(L)} ( arg min_{β ∈ {sgn(W_i)X_i}} P(β) ).   (9.19)
9.3 WEIGHTED MYRIAD FILTER DESIGN

9.3.1 Myriadization
The linear property indicates that for very large values of K, the weighted myriad filter reduces to a constrained linear FIR filter. This characteristic of K suggests that a linear FIR filter can be provided with resistance to impulsive noise by simply reducing the linearity parameter from K = ∞ to a finite value. This would transform the linear FIR filter into a myriad filter with the same weights. This transformation is referred to as myriadization of linear FIR filters. Myriadization is a simple but powerful technique that brings impulse resistance to linear FIR filters. It also provides a simple methodology to design suboptimal myriad filters in impulsive environments. A linear FIR filter can first be designed for Gaussian or noiseless environments using FIR filter design tools, and then provided with impulse resistance capabilities by means of myriadization. The value to which K is to be reduced can be chosen according to the impulsiveness of the environment. Design by myriadization is illustrated in the following example.
Example: Robust Band-Pass Filter Design. Figure 9.2a depicts a unit-amplitude linearly swept-frequency cosine signal spanning instantaneous frequencies ranging from 0 to 400 Hz. The chirp was generated with MATLAB's chirp function using a sampling interval of 0.0005 seconds. Figure 9.3a shows the chirp immersed in additive Cauchy noise (γ = 0.05). The plot is truncated to the same scale as the other signals in the figure. A band-pass linear FIR filter with 31 coefficients processes the chirp with the goal of retaining its mid-frequency components. The FIR band-pass filter weights were designed with MATLAB's fir1 function with normalized frequency cutoffs of 0.15 and 0.25. Under ideal, no-noise conditions, the output of the FIR filter would be that of Figure 9.2b. However, the impulsive nature of the noise introduces severe distortions in the actual output, as depicted in Figure 9.3b. Myriadizing the linear filter by reducing K to a finite value of 0.5 significantly improves the filter performance (see Figs. 9.2c and 9.3c). Further reduction of K to 0.2 drives the myriad closer to a selection mode, where some distortion of the filter output under ideal conditions can be seen (see Fig. 9.2d). The output under the noisy conditions is not improved by further reducing K to 0.2, or lower, as the filter in this case is driven to a selection operation mode (Fig. 9.3d).
Figure 9.2 Myriadizing a linear band-pass filter in an impulsive environment: (a) chirp signal, (b) ideal (no noise) myriad smoother output with K = ∞, (c) K = 0.5, and (d) K = 0.2.

Figure 9.3 Myriadizing a linear band-pass filter in an impulsive environment (continued): (a) chirp in additive impulsive noise. Myriad filter output in the presence of noise with (b) K = ∞, (c) K = 0.5, and (d) K = 0.2.
9.3.2 Optimization

The optimization of the weighted myriad filter parameters for the case where the linearity parameter K satisfies K > 0 was first described in [111]. The goal is to design the set of weighted myriad filter weights that optimally estimate a desired signal according to a statistical error criterion. Although the mean absolute error (MAE) criterion is used here, the solutions are applicable to the mean square error (MSE) criterion with simple modifications.
Given an input (observation) vector X = [X_1, X_2, ..., X_N]^T, a weight vector W = [W_1, W_2, ..., W_N]^T, and linearity parameter K, denote the weighted myriad filter output as Y = Y_K(W, X), sometimes abbreviated as Y(W, X). The filtering error in estimating a desired signal D(n) is then defined as e(n) = Y(n) − D(n). Under the mean absolute error (MAE) criterion, the cost function is defined as

J(W, K) = E{ |e| } = E{ |Y_K(W, X) − D| },   (9.20)

where E{·} represents statistical expectation. The mean square error (MSE) is defined as

J(W, K) = E{ e² } = E{ (Y_K(W, X) − D)² }.   (9.21)
When the error criterion adopted is clear from the context, the cost function is written as J(W, K). Further, the optimal filtering action is independent of K (the filter weights can be scaled to keep the output invariant to changes in K). The cost function is therefore sometimes written simply as J(W), with an assumed arbitrary choice of K. Obtaining conditions for a global minimum that are both necessary and sufficient is quite a formidable task. Necessary conditions, on the other hand, can be attained by setting the gradient of the cost function equal to zero. The necessary conditions to be satisfied by the optimal filter parameters are obtained as

∂J(W)/∂W_i = 0,  i = 1, 2, ..., N.   (9.22)
The nonlinear nature of the equations in (9.22) prevents a closed-form solution for the optimal parameters. The method of steepest descent is thus applied, which continually updates the filter parameters in an attempt to converge to the global minimum of the cost function J(W):

W_i(n+1) = W_i(n) − (μ/2) (∂J/∂W_i)(n),  i = 1, 2, ..., N,   (9.23)

where W_i(n) is the ith parameter at iteration n, μ > 0 is the step size of the update, and the gradient at the nth iteration is given by

(∂J/∂W_i)(n) = 2 E{ e(n) (∂Y/∂W_i)(n) },  i = 1, 2, ..., N.   (9.24)

When the underlying signal statistics are unavailable, instantaneous estimates of the gradient are used, since the expectation in (9.24) cannot be evaluated. Thus, removing the expectation operator in (9.24) and using the result in (9.23), the following weight update is found:

W_i(n+1) = W_i(n) − μ e(n) (∂Y/∂W_i)(n),  i = 1, 2, ..., N.   (9.25)
All that remains is to find an expression for (∂β̂/∂W_i)(n). Recall that the output of the weighted myriad filter is β̂_K(W, X) = arg min_β Q(β), where Q(β) is given by

Q(β) = Σ_{j=1}^N log [ K² + |W_j| (sgn(W_j) X_j − β)² ].   (9.27)

The derivative of β̂_K(W, X) with respect to the weight W_i, holding all other quantities constant, must be evaluated. Since β̂_K(W, X) = β̂ is one of the local minima of Q(β), it follows that

Q'(β̂) = 0.   (9.28)

Differentiating (9.27) and substituting into (9.28) results in

G(β̂, W_i) ≜ Q'(β̂) = Σ_{j=1}^N 2 |W_j| (β̂ − sgn(W_j) X_j) / [ K² + |W_j| (sgn(W_j) X_j − β̂)² ] = 0,   (9.29)

where the function G(·, ·) is introduced to emphasize the implicit dependency of the output β̂ on the weight W_i, since we are interested in evaluating ∂β̂/∂W_i while holding all other quantities constant. Implicit differentiation of (9.29) with respect to W_i leads to

(∂G/∂β̂) (∂β̂/∂W_i) + (∂G/∂W_i) = 0,   (9.30)

from which ∂β̂/∂W_i can be found once ∂G/∂β̂ and ∂G/∂W_i are evaluated. Using (9.29), it is straightforward to show that

∂G/∂β̂ = 2 Σ_{j=1}^N |W_j| [ K² − |W_j| (β̂ − sgn(W_j) X_j)² ] / [ K² + |W_j| (sgn(W_j) X_j − β̂)² ]².   (9.31)
Evaluation of ∂G/∂W_i from (9.29) is, however, a more difficult task. The difficulty arises from the term sgn(W_i) that occurs in the expression for G(β̂, W_i). It would seem impossible to differentiate G(β̂, W_i) with respect to W_i, since this would involve the quantity (d/dW_i) sgn(W_i), which clearly cannot be found. Fortunately, this problem can be circumvented by rewriting the expression for G(β̂, W_i) in (9.29), expanding it as follows:

G(β̂, W_i) = Σ_{j=1}^N 2 ( |W_j| β̂ − W_j X_j ) / [ K² + |W_j| X_j² − 2 W_j X_j β̂ + |W_j| β̂² ],   (9.32)

where the fact that |W_j| · sgn(W_j) = W_j was used. Equation (9.32) presents no mathematical difficulties in differentiating it with respect to W_i, since the term sgn(W_i) is no longer present. Differentiating with respect to W_i, and performing some straightforward manipulations, leads to

∂G/∂W_i = 2 [ (sgn(W_i) β̂ − X_i) ( K² + |W_i| (sgn(W_i) X_i − β̂)² ) − |W_i| (sgn(W_i) β̂ − X_i) (sgn(W_i) X_i − β̂)² ] / [ K² + |W_i| (sgn(W_i) X_i − β̂)² ]²,   (9.33)

which, multiplying out and canceling common terms, reduces to

∂G/∂W_i = 2 K² sgn(W_i) (β̂ − sgn(W_i) X_i) / [ K² + |W_i| (sgn(W_i) X_i − β̂)² ]².   (9.34)

Substituting (9.31) and (9.34) into (9.30), the following expression for ∂β̂/∂W_i is obtained:

(∂/∂W_i) β̂_K(W, X) = { K² sgn(W_i) (sgn(W_i) X_i − β̂) / [ K² + |W_i| (sgn(W_i) X_i − β̂)² ]² } / { Σ_{j=1}^N |W_j| [ K² − |W_j| (sgn(W_j) X_j − β̂)² ] / [ K² + |W_j| (sgn(W_j) X_j − β̂)² ]² }.   (9.35)
Using (9.25), the following adaptive algorithm is obtained to update the weights {W_i}, i = 1, ..., N:

W_i(n+1) = W_i(n) − μ e(n) (∂β̂/∂W_i)(n),   (9.36)

with (∂β̂/∂W_i)(n) given by (9.35). Considerable simplification of the algorithm can be achieved by removing the denominator of (9.35) from the update term; this does not change the direction of the gradient estimate or the values of the final weights. This leads to the following computationally attractive algorithm:

W_i(n+1) = W_i(n) − μ e(n) K² sgn(W_i) (sgn(W_i) X_i(n) − β̂(n)) / [ K² + |W_i| (sgn(W_i) X_i(n) − β̂(n))² ]².   (9.37)
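A compact adaptive loop implementing the simplified update (9.37) is sketched below in Python. It is an illustration only: the training signal, window length, and step size are arbitrary choices, not taken from the book. At each step the output is computed by a fixed-point myriad search, the error against the desired (clean) center sample is formed, and the weights are updated:

```python
import numpy as np

def wmy(x, w, k, n_iter=10):
    # Fixed-point weighted myriad computation.
    s = np.sign(w) * x
    aw = np.abs(w)
    obj = [np.sum(np.log(k**2 + aw * (s - si)**2)) for si in s]
    beta = s[int(np.argmin(obj))]
    for _ in range(n_iter):
        h = aw / (k**2 + aw * (s - beta)**2)
        beta = np.dot(h, s) / np.sum(h)
    return beta

rng = np.random.default_rng(1)
n_taps, k, mu = 5, 1.0, 0.05
clean = np.sin(2 * np.pi * 0.05 * np.arange(600))
noisy = clean + 0.1 * rng.standard_cauchy(600)       # impulsive noise

w = np.full(n_taps, 0.2)                             # initial weights
errors = []
for n in range(n_taps, 600):
    xwin = noisy[n - n_taps:n]
    y = wmy(xwin, w, k)
    e = y - clean[n - n_taps // 2 - 1]               # desired: clean center sample
    s = np.sign(w) * xwin
    denom = (k**2 + np.abs(w) * (s - y)**2)**2
    # Update (9.37); note a weight that reaches exactly zero stays zero (sgn(0) = 0).
    w = w - mu * e * k**2 * np.sign(w) * (s - y) / denom
    errors.append(e)
print(w)
```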
It is important to note that the optimal filtering action is independent of the choice of K; the filter depends only on the ratios W_i/K². In this context, one might ask how the algorithm scales as the value of K is changed, and how the step size μ and the initial weight vector w(0) should be chosen as K is varied. To answer this, let g_o ≜ w_{o,1} denote the optimal weight vector for K = 1. Then, from (9.6), w_{o,K} = K² g_o, or g_o = w_{o,K}/K². Now consider two situations. In the first, the algorithm in (9.37) is used with K = 1, step size μ = μ_1, weights denoted as g_i(n), and initial weight vector g(0); this is expected to converge to the weights g_o. In the second, the algorithm uses a general value of K, step size μ = μ_K, and initial weight vector w_K(0); since (9.37) should converge to w_{o,K}, rewriting (9.37) by dividing throughout by K² and expressing it as an update of w_K(n)/K² yields an iteration that is expected to converge to g_o = w_{o,K}/K². The two situations can thus be compared, and the initial weight vector w_K(0) and the step size μ_K can be chosen such that the algorithms have the same behavior in both cases and converge, as a result, to the same filter. This means that g_i(n) = W_i(n)/K² at each iteration n. It can be shown that this results in

μ_K = K⁴ μ_1  and  w_K(0) = K² g(0).

This also implies that if K is changed from K_1 to K_2, the new parameters should satisfy

μ_2 = (K_2/K_1)⁴ μ_1  and  w_2(0) = (K_2/K_1)² w_1(0).
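The scale invariance underlying these relations, namely that the output depends only on W_i/K², is easy to verify numerically (an illustrative sketch with arbitrary data):

```python
import numpy as np

def wmy(x, w, k, n_iter=50):
    # Fixed-point weighted myriad computation.
    s = np.sign(w) * x
    aw = np.abs(w)
    obj = [np.sum(np.log(k**2 + aw * (s - si)**2)) for si in s]
    beta = s[int(np.argmin(obj))]
    for _ in range(n_iter):
        h = aw / (k**2 + aw * (s - beta)**2)
        beta = np.dot(h, s) / np.sum(h)
    return beta

x = np.array([-1.0, 10.0, 3.0, 5.0, -3.0])
w1 = np.array([1.0, 2.0, 3.0, -2.0, 1.0])
k1, k2 = 1.0, 5.0
w2 = (k2 / k1)**2 * w1        # rescale the weights as K changes, per (9.6)

b1, b2 = wmy(x, w1, k1), wmy(x, w2, k2)
print(b1, b2)                 # the two outputs agree
```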
EXAMPLE 9.1 (ROBUST HIGH-PASS FILTER DESIGN) Figure 9.4 illustrates several high-pass filtering operations with various filter structures on a two-tone signal corrupted by impulsive noise. The signal has two sinusoidal components, with normalized frequencies 0.02 and 0.4, respectively. The sampling frequency is 1000 Hz. Figure 9.4a shows the two-tone signal in α-stable noise with exponent parameter α = 1.4 and dispersion γ = 0.1. The result of filtering with a high-pass linear FIR filter with 30 taps is depicted in Figure 9.4b. The FIR high-pass filter coefficients are designed with MATLAB's fir1 function with a normalized cutoff frequency of 0.2. It is clear that the impulsive noise has a strong effect on the linear filter output, and the high-frequency component of the original two-tone signal is severely distorted. The myriadization of the above linear filter gives somewhat better performance, as shown in Figure 9.4c, in the sense that the impulsiveness is greatly reduced, but the frequency characteristics are still not satisfactorily recovered. Figure 9.4e shows the result of optimal weighted myriad filtering with step size μ = 3. As a comparison, the optimal weighted median filtering result is shown in Figure 9.4d, with step size μ = 0.15. Though both nonlinear optimal filters perform significantly better than the linear filter, the optimal weighted myriad filter performs best, since its output exhibits neither the impulsiveness nor the perceptual distortion (beyond magnitude fluctuations) seen in the myriadization and optimal weighted median cases. Moreover, signal details are better preserved in the optimal weighted myriad realization as well.
Figure 9.4 Robust high-pass weighted myriad filter: (a) two-tone signal corrupted by α-stable noise, α = 1.4, γ = 0.1, (b) output of the 30-tap linear FIR filter, (c) myriadization, (d) optimal weighted median filter, (e) optimal weighted myriad filter.
Another interesting observation is depicted in Figure 9.5, where ensemble performances are compared. Though both median and myriad filters converge faster when the step size is large and more slowly when the step size is small, they reach the same convergence rate at different step sizes. This is expected from their different filter structures. As shown in the plot, when the step size μ is chosen to be 0.15 for the median, comparable performance is found when the step size μ is in the vicinity of 3 for the myriad. A slight performance improvement can be seen from the plot in the stable region, where the myriad has a lower excess error floor than the median.

Figure 9.5 Comparison of convergence rate of the optimal weighted median and the optimal weighted myriad: (a) optimal weighted median at μ = 0.04, 0.15, (b) optimal weighted myriad at μ = 0.8, 3.
EXAMPLE 9.2 (ROBUST BLIND EQUALIZATION) The constant modulus algorithm (CMA) may be the most analyzed and deployed blind equalization algorithm. The CMA is often regarded as the workhorse of blind channel equalization, just as the least mean square (LMS) algorithm is the benchmark for supervised adaptive filtering [101]. Communication technologies such as digital cable TV, DSL, and the like are ideally suited for blind equalization implementations, mainly because in their structures training is extremely costly, if not impossible. In CMA applications, the linear FIR filter structure is assumed by default. In applications such as DSL, however, it has been shown that impulsive noise is prevalent [204], where inevitably CMA blind equalization using the FIR structure collapses. Here we describe a real-valued blind equalization algorithm that combines the constant modulus criterion with the weighted myriad filter structure. Using myriad filters, one should expect performance close to that of linear filters when the linearity parameter K is set to values far larger than the data samples. When the noise contains impulses, by reducing K to a suitable level one can remove their influence without losing the capability of keeping the communication eye open. Consider a pulse amplitude modulation (PAM) communication system, where the signal and channel are both real. The constant modulus cost function is defined as follows:
J(W, K) = E{ ( |Y(n)|² − R₂ )² },   (9.40)

where R₂ = E{|S(n)|⁴} / E{|S(n)|²} is the constellation-dependent dispersion constant, S(n) is the signal constellation, and Y(n) is the filter output. The gradient of the above cost function can be calculated as

∂J/∂W_i = 4 E{ Y(n) ( |Y(n)|² − R₂ ) (∂Y/∂W_i)(n) }.   (9.41)
A scaled version of the real-valued weighted myriad filter is used, where the sum of the magnitudes of the filter weights is used as the scaling factor:

Y(n) = ( Σ_{i=1}^N |W_i| ) β̂_K(W, X(n)).   (9.43)

This is particularly important for equalization applications, since the signal energy needs to be considered in these cases. The derivative of the filter output with respect to a single weight is expressed as

∂Y/∂W_i = sgn(W_i) β̂ + ( Σ_{j=1}^N |W_j| ) (∂β̂/∂W_i),   (9.44)

and the derivative of β̂ has already been given in (9.35). Finally, the weight update can be carried out using the following equation:
W_i(n+1) = W_i(n) − μ Y(n) ( |Y(n)|² − R₂ ) (∂Y/∂W_i)(n).   (9.45)

Unlike the regular WMy filters, which have only N independent parameters, as described in [113], all N + 1 parameters of the scaled WMy filter, that is, the N weights and the linearity parameter, are independent. Thus, to best exploit the proposed structure, K needs to be updated adaptively as well. This requires reconsidering the objective function of the weighted myriad, since K is now a free parameter:
β̂ = arg min_β Q(β, K),  where

Q(β, K) = Σ_{i=1}^N [ log( K² + |W_i| (sgn(W_i) X_i − β)² ) − log K ].   (9.46)

Denote G(β, K) ≜ (∂Q/∂β)(β, K), so that G(β̂, K) = 0.   (9.47)

Following a similar analysis as in the weight update, one can develop an update algorithm for K. However, two reasons make it more attractive to update the squared linearity parameter 𝒦 ≜ K² instead of K itself. First, in myriad filters, K always occurs in its squared form. Second, an adaptive algorithm for K would face an ambiguity in determining the sign of K. Rewriting (9.47) as G(β̂, 𝒦) = 0 and implicitly differentiating both sides with respect to 𝒦 leads to

(∂G/∂β̂) (∂β̂/∂𝒦) + (∂G/∂𝒦) = 0.   (9.49)

Thus,

∂β̂/∂𝒦 = − (∂G/∂𝒦) / (∂G/∂β̂).   (9.50)

Finally, the update for 𝒦 can be expressed as

𝒦(n+1) = 𝒦(n) − μ_K Y(n) ( |Y(n)|² − R₂ ) (∂Y/∂𝒦)(n),  with  ∂Y/∂𝒦 = ( Σ_{i=1}^N |W_i| ) (∂β̂/∂𝒦).   (9.51)
Figure 9.6 depicts a blind equalization experiment where the signal constellation is BPSK and the channel impulse response is simply [1 0.5]. Additive α-stable noise with α = 1.5, γ = 0.002 corrupts the transmitted data. Figure 9.6a shows traditional linear CMA equalization, while Figure 9.6b shows myriad CMA equalization. It can be seen that, under the influence of impulsive noise, the linear equalizer diverges, but the myriad equalizer is more robust and still gives very good performance. Figure 9.7 shows the adaptation of the parameter K in the corresponding realization.
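A minimal end-to-end sketch of the myriad CMA idea is given below. It is deliberately simplified relative to the experiment in Figure 9.6: K is held fixed rather than adapted, and the analytic gradient (9.44) is replaced by a finite-difference estimate so that the loop stays short. All signal, step-size, and tap-count values are illustrative choices of mine:

```python
import numpy as np

def wmy(x, w, k, n_iter=8):
    # Fixed-point weighted myriad; initialized at the largest-weight sample
    # (a simplification of the selection-myriad initialization).
    s = np.sign(w) * x
    aw = np.abs(w)
    beta = s[int(np.argmax(aw))]
    for _ in range(n_iter):
        h = aw / (k**2 + aw * (s - beta)**2)
        beta = np.dot(h, s) / np.sum(h)
    return beta

def scaled_output(x, w, k):
    return np.abs(w).sum() * wmy(x, w, k)     # scaled WMy filter, cf. (9.43)

rng = np.random.default_rng(2)
symbols = rng.choice([-1.0, 1.0], size=2000)             # BPSK
received = np.convolve(symbols, [1.0, 0.5])[:symbols.size]
received += 0.002 * rng.standard_cauchy(symbols.size)    # impulsive noise

n_taps, k, mu, R2 = 4, 2.0, 1e-4, 1.0     # R2 = E{S^4}/E{S^2} = 1 for BPSK
w = np.zeros(n_taps)
w[0] = 1.0                                 # spike initialization
eps = 1e-4
for n in range(n_taps, symbols.size):
    xwin = received[n - n_taps:n][::-1]
    y = scaled_output(xwin, w, k)
    grad = np.array([(scaled_output(xwin, w + eps * e, k) - y) / eps
                     for e in np.eye(n_taps)])           # dY/dW, numerically
    w -= mu * y * (y * y - R2) * grad                    # CM update, cf. (9.45)
print(w)
```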
Figure 9.6 Blind equalization of a BPSK signal using (a) linear blind equalization, (b) myriad blind equalization.
Figure 9.7 Adaptation of K.

Problems

9.1 Prove the scale relationships established in Equations (9.5) and (9.6).

9.2 Prove the linear property of the weighted myriad filters (Property 9.1).

9.3 Prove the mode property of the weighted myriad filters (Property 9.2).

9.4 Prove whether or not weighted myriad filters satisfy the shift and sign invariance properties described in Problem 8.8.
1. G. Aloupis, M. Soss, and G. Toussaint. On the computation of the bivariate median and a Fermat-Torricelli problem. Technical Report SOCS-01.2, School of Computer Science, McGill University, Montreal, Canada, February 2001.
2. S. Ambike and D. Hatzinakos. A new filter for highly impulsive α-stable noise. In Proc. of the 1995 Int. Workshop on Nonlinear Signal and Image Proc., Halkidiki, Greece, June 1995.
3. D. F. Andrews, P. J. Bickel, F. R. Hampel, P. J. Huber, W. H. Rogers, and J. W. Tukey. Robust Estimates of Location: Survey and Advances. Princeton University Press, Princeton, NJ, 1972.
4. A. Antoniou. Digital Filters: Analysis, Design and Applications. McGraw-Hill, Inc., New Jersey, U.S.A., 1993.
5. G. R. Arce. Statistical threshold decomposition for recursive and nonrecursive median filters. IEEE Trans. on Information Theory, IT-32(2):243-253, March 1986. 6. G. R. Arce. A general weighted
median filter structure admitting negative weights. IEEE Trans. on Signal Proc., 46(12), December 1998. 7. G. R. Arce and R. E. Foster. Detail-preserving ranked-order based filters for image
processing. IEEE Trans. on Acoustics, Speech, and Signal Proc., 37( 1):83-98, January 1989. Also reprinted in Digital Image Processing, by R. Chellappa, IEEE Press 1992. 365
8. G. R. Arce and N. C. Gallagher. Stochastic analysis of the recursive median filter process. IEEE Trans. on Information Theory, IT-34(4), July 1988.
9. G. R. Arce and N. C. Gallagher, Jr. State description of the root set of median filters. IEEE Trans. on Acoustics, Speech, and Signal Proc., 30(6):894-902, December 1982.
10. G. R. Arce, T. A. Hall, and K. E. Barner. Permutation weighted order statistic filters. IEEE Trans. on Image Proc., 4, August 1995.
11. G. R. Arce and R. Hasan. Elimination of interference terms in the Wigner-Ville distribution using nonlinear filtering. IEEE Trans. on Signal Proc., 48(8):2321-2331, August 2000.
12. G. R. Arce and Y. Li. Median power and median correlation theory. IEEE Trans. on Signal Proc., 50(11):2768-2776, November 2002.
13. G. R. Arce and J. Paredes. Image enhancement with weighted medians. In S. Mitra and G. Sicuranza, editors, Nonlinear Image Processing, pages 27-67. Academic Press, 2000.
14. G. R. Arce and J. L. Paredes. Recursive weighted median filters admitting negative weights and their optimization. IEEE Trans. on Signal Proc., 48(3):768-779, March 2000.
15. R. D. Armstrong, E. L. Frome, and D. S. Kung. A revised
simplex algorithm for the absolute deviation curve fitting problem. Communications in Statistics, B8:175 - 190, 1979. 16. B. C. Arnold, N. Balakrishnan, and H. N. Nagaraja. A First Course in Order
Statistics. John Wiley & Sons, New York, NY, 1992.
17. A. Asano, K. Itoh, and Y. Ichioka. Rondo: Rank-order based nonlinear differential operator. Pattern Recognition, 25(9):1043-1059, 1992.
18. A. Asano, K. Itoh, and Y. Ichioka. A generalization of the weighted median filter. In Proc. 20th Joint Conference on Image Technology, Tokyo, Japan, 2003.
19. J. Astola, P. Haavisto, and Y. Neuvo. Vector median filters. Proc. of the IEEE, 78(4):678-689, April 1990.
20. J. Astola, P. Heinonen, and Y. Neuvo. Linear median hybrid filters. IEEE Trans. on Circuits and Systems, CAS-36:1430-1438, November 1989.
21. J. Astola and P. Kuosmanen. Nonlinear Digital Filtering. CRC Press, New York, NY, 1997.
22. J. Astola and P. Kuosmanen. Representation and optimization of stack filters. In E.
Dougherty and J. Astola, editors, Nonlinear Filters for Image Processing, chapter 7. SPIE, 1999.
23. J. Astola and Y. Neuvo. Optimal median type filters for exponential noise distributions. Signal Proc., 17:95-104, June 1989.
24. N. Balakrishnan and C. Rao. Order statistics: Applications. In Handbook of Statistics, volume 17. Elsevier Science, Amsterdam, 1998.
25. A. Bangham. Properties of a series of nested median filters, namely the data sieve. IEEE Trans. on Signal Proc., 41(1):31-42, 1993.
26. K. E. Barner and G. R. Arce. Coloring schemes for the design of permutation filters: A group theoretic method for filter class reduction. IEEE Trans. on Circuits and Systems, July 1996.
27. V. Barnett. The ordering of multivariate data. J. R. Stat. Soc. A, 139:331-354, 1976.
28. V. Barnett and T. Lewis. Outliers in Statistical Data. John Wiley & Sons, New York, second edition, 1984.
29. I. Barrodale and F. D. K. Roberts. An improved algorithm for discrete l1 linear approximation. SIAM J. Numer. Anal., 10(5):839-848, October 1973.
30. J. B. Bednar and T. L. Watt. Alpha-trimmed
means and their relationship to median filters. IEEE Trans. on Acoustics, Speech, and Signal Proc., ASSP-32(2), February 1984.
31. J. Beran, R. Sherman, M. S. Taqqu, and W. Willinger. Long-range dependence in variable bit-rate video traffic. IEEE Trans. on Communications, 43:1566-1579, 1995.
32. P. Bickel and K. Doksum. Mathematical Statistics. Holden-Day, San Francisco, CA, 1977.
33. G.
Blom. Nearly best linear estimates of location and scale parameters. In A. E. Sarhan and B. G. Greenberg, editors, Contributions to Order Statistics, pages 34-46. John Wiley & Sons, New York, NY,
1962.
34. P. Bloomfield and W. Steiger. Least absolute deviations curve-fitting. SIAM J. Sci. Statist. Comput., 1:290-301, 1980.
35. P. Bloomfield and W. L. Steiger. Least Absolute Deviations: Theory, Applications, and Algorithms. Birkhauser, Boston, MA, 1983.
36. C. G. Boncelet, Jr. Algorithms to compute order statistic distributions. SIAM J. Sci. Stat. Comput., 8(5), September 1987.
37. A. C. Bovik. Streaking in median filtered images. IEEE Trans. on Acoustics, Speech, and Signal Proc., ASSP-35, April 1987.
38. A. C. Bovik, T. S. Huang, and D. C. Munson, Jr. A generalization of median filtering using linear combinations of order statistics. IEEE Trans. on Acoustics, Speech, and Signal Proc., 31(12), December 1983.
39. K. L. Boyer, M. J. Mirza, and G. Ganguly. The robust sequential estimator: A general approach and its application to surface organization in range data. IEEE Trans. on Pattern Analysis and Machine Intelligence, 16(10):987-1001, October 1994.
40. B. Wade Brorsen and S. R. Yang. Maximum likelihood estimates of symmetric stable distribution parameters. Commun. Statist.-Simula., 19(4):1459-1464, 1990.
41. D. R. K. Brownrigg. The weighted median filter. Communications of the ACM, 27(8):807-818, August 1984.
42. O. Cappé, E. Moulines, J. C. Pesquet, A. Petropulu, and X. Yang. Long-range dependence and heavy-tail modeling for teletraffic data. IEEE Signal Proc. Magazine, May 2002.
43. J. M. Chambers, C. Mallows, and B. W. Stuck. A method for simulating stable
random variables. J. Amer. Stat. Association, 71(354):340-344, 1976.
44. A. Charnes, W. W. Cooper, and R. O. Ferguson. Optimal estimation of executive compensation by linear programming. Management Science, 1(2):138-151, January 1955.
45. H.-I. Choi and W. J. Williams. Improved time-frequency representation of multicomponent signals using exponential kernels. IEEE Trans. on Acoustics, Speech, and Signal Proc., 37:862-871, 1989.
46. K. S. Choi, A. W. Morales, and S. J. Ko. Design of linear combination of weighted medians. IEEE Trans. on Signal Proc., 49(9):1940-1952, September 2001.
47.
T. C. Chuah, B. S. Sharif, and O. R. Hinton. Nonlinear decorrelator for multiuser detection in non-Gaussian impulsive environments. IEE Electronic Letters, 36(10):920-922, 2000.
48. T. C. Chuah, B. S. Sharif, and O. R. Hinton. Nonlinear space-time decorrelator for multiuser detection in non-Gaussian channels. IEE Electronic Letters, 36(24):2041-2043, 2000.
49. T. C. Chuah, B. S. Sharif, and O. R. Hinton. Robust adaptive spread-spectrum receiver with neural-net preprocessing in non-Gaussian noise. IEEE Trans. on Neural Networks, 12:546-558, 2001.
50. T. C. Chuah, B. S. Sharif, and O. R. Hinton. Robust decorrelating decision-feedback multiuser detection in non-Gaussian channels. Signal Processing, 81:1997-2004, 2001.
51. G. A. Churchill. Fundamentals of experimental design for cDNA microarrays. Nature Genetics, 32:490-495, 2002.
52. P. M. Clarkson and G. A. Williamson. Minimum variance signal estimation with adaptive order statistic filters. In Proc. of the 1992 IEEE International Conference on Acoustics, Speech and Signal Processing, volume 4, pages 253-256, 1992.
53. L. Cohen. Time-Frequency Analysis. Prentice-Hall, Upper Saddle River, NJ, 1st edition, 1995.
54. E. J. Coyle. Rank order operators and the mean absolute error criterion. IEEE Trans. on Acoustics, Speech, and Signal Proc., 36(1):63-76,
January 1988.
55. E. J. Coyle and J.-H. Lin. Stack filters and the mean absolute error criterion. IEEE Trans. on Acoustics, Speech, and Signal Proc., 36(8), August 1988. 56. H. Cramer. Mathematical Methods of
Statistics. Princeton University Press, New Jersey, 1946.
57. E. L. Crow and M. M. Siddiqui. Robust estimation of location. J. Amer. Statist. Association, 62:353-389, 1967.
58. H. A. David. Order Statistics. Wiley Interscience, New York, 1981.
59. L. Davis and A. Rosenfeld. Noise cleaning by iterated local averaging. IEEE Trans. on Systems, Man, and Cybernetics, 8:705-710, September 1978.
60. Y. Dodge, editor. Statistical Data Analysis: Based on the L1-Norm and Related Methods. Elsevier Science, The Netherlands, 1987.
61. Y. Dodge, editor. L1-Statistical Analysis and Related Methods. North-Holland, The Netherlands, 1992.
62. Y. Dodge, editor. L1-Statistical Procedures and Related Topics. Institute of Mathematical Statistics, 1997.
63. Y. Dodge and W. Falconer, editors. Statistical Data Analysis Based on the L1-Norm and Related Methods. Barika Photography & Productions, 2002.
64. D. L. Donoho, I. M. Johnstone, G. Kerkyacharian, and
D. Picard. Wavelet shrinkage: Asymptopia? J. R. Statist. Soc. B, 57(2):301-337, 1995.
65. R. Durrett. Probability: Theory and Examples. Duxbury Press, New York, second edition, 1996.
66. F. Y. Edgeworth. A new method of reducing observations relating to several quantities. Phil. Mag. (Fifth Series), 24:222-223, 1887.
67. R. J. Adler et al., editors. A Practical Guide to Heavy Tails: Statistical Techniques and Applications. Birkhauser, Boston, MA, 2002.
68. E. F. Fama and R. Roll. J. Amer. Stat. Association, 66(6), June 1971.
69. W. Feller. An Introduction to Probability Theory and its Applications, volume I of Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons, New York, 1970.
70. R. A. Fisher. On the mathematical foundation of theoretical statistics. Philosophical Trans. of the Royal Society of London, 222(Series A), 1922.
71. J. Fitch. Software and VLSI algorithms for generalized ranked order filtering. IEEE Trans. on Circuits and Systems, 34(5), May 1987.
72. J. P. Fitch, E. J.
Coyle, and N. C. Gallagher. Median filtering by threshold decomposition. IEEE Trans. on Acoustics, Speech, and Signal Proc., 32(12), December 1984.
73. A. Flaig, G. R. Arce, and K. E. Barner. Affine order statistic filters: Medianization of FIR filters. IEEE Trans. on Signal Proc., 46(8):2101-2112, August 1998.
74. M. Fréchet. Sur la loi des erreurs d'observation. Matematicheskii Sbornik, 32:1-8, 1924.
75. G. Samorodnitsky and M. S. Taqqu. Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance. Chapman & Hall, New York, 1994.
76. M. Gabbouj and E. J. Coyle. Minimum
mean absolute error stack filtering with structural constraints and goals. IEEE Trans. on Acoustics, Speech, and Signal Proc., 38(6):955-968, June 1990. 77. J. Galambos. The Asymptotic Theory of
Extreme Order Statistics. John Wiley & Sons, New York, 1978. 78. N. C . Gallagher, Jr. and G. L. Wise. A theoretical analysis of the properties of median filters. IEEE Trans. on Acoustics, Speech,
and Signal Proc., 29(12), December 1981. 79. P. Ghandi and S. A. Kassam. Design and performance of combination filters. IEEE Trans. on Signal Proc., 39(7), July 1991. 80. C. Gini and L. Galvani. Di
talune estensioni dei concetti di media ai caratteri qualitative. Metron, 8:3-209, 1929. 81. J. G. Gonzalez. Robust Techniques for Wireless Communications in NonGaussian Environments. Ph. D.
dissertation, Dept. of Electrical and Computer Engineering, University of Delaware, Newark, DE, 1997. 82. J. G. Gonzalez and G. R. Arce. Weighted myriad filters: A robust filtering framework derived
from alpha-stable distributions. In Proceedings of the 1996
IEEE International Conference on Acoustics, Speech, and Signal Processing, Atlanta, Georgia, May 1996. 83. J. G. Gonzalez and G. R. Arce. Optimality of the myriad filter in practical impulsive-noise
environments. IEEE Trans. on Signal Proc., 49:438-441, February 2001. 84. J. G. Gonzalez, D. W. Griffith, Jr., A. B. Cooper, III, and G. R. Arce. Adaptive reception in impulsive noise. In Proc. IEEE
Int. Symp. on Information Theory, Ulm, Germany, June 1997.
85. J. G. Gonzalez, D. Griffith, and G. R. Arce. Zero order statistics. In Proc. of the 1997 Workshop on Higher-Order Statistics, Banff, Canada, July 1997.
86. J. G. Gonzalez and D. L. Lau. The closest-to-mean filter: an edge preserving smoother for Gaussian environments. In Proc. IEEE ICASSP 1997, Munich, Germany, April 1997.
87. C. Goodall. M-estimators of location: An outline of the theory. In D. C. Hoaglin et al., editor, Understanding Robust and Exploratory Data Analysis, chapter 11, pages 339-403. John Wiley & Sons, New York, 1983.
88. C. Groß and T. Strempel. On generalizations of conics and on a generalization of the Fermat-Torricelli problem. American Mathematical Monthly, 105(8):732-743, 1998.
89. J. B. S. Haldane. Note on the median of a multivariate distribution. Biometrika, 35(3/4):414-415, December 1948.
90. H. M. Hall. A new model for "impulsive" phenomena: Application to atmospheric noise communication channels. Technical Reports 3412-8 and 7050-7, Stanford Electronics Lab., Stanford University, August 1966.
91. F. Hampel, E. Ronchetti, P. Rousseeuw, and W. Stahel. Robust Statistics: The Approach Based on Influence Functions. John Wiley & Sons, New York, NY, 1986.
92. R. Hardie and G. R. Arce. Ranking in R^p and its use in multivariate image estimation. IEEE Trans. on Circuits and Systems for Video Technology, 1(2), June 1991.
93. R. C. Hardie and K. E. Barner. Rank conditioned rank selection filters for signal restoration. IEEE Trans. on Image Proc., 3(2), March 1994.
94. R. C. Hardie and C. G. Boncelet, Jr. LUM filters: A class of rank-order-based filters for smoothing and sharpening. IEEE Trans. on Signal Proc., 41(5), May 1993.
95. T. E. Harris. Regression using minimum absolute deviations. American Statistician, 4(1):14-15, February 1950.
96. H. L. Harter. Order Statistics and Their Use in Testing and Estimation, volumes 1 & 2. U.S. Government Printing Office, Washington, D.C., 1970.
97. R. W. Hawley and N. C. Gallagher. On Edgeworth's method for minimum error linear regression. IEEE Trans. on Signal Proc., 43(8), August 1994.
98. J. F. Hayford. What is the center of an area or the center of a population? J. Amer. Stat. Association, 8(58):47-58, 1902.
99. S. Haykin. Adaptive Filter Theory. Prentice Hall, New Jersey, 1995.
100. S. Haykin. Neural Networks: A Comprehensive Foundation. Prentice Hall, New Jersey, 1998.
101. S. Haykin. Unsupervised Adaptive Filtering, Volume I and Volume II. Wiley-Interscience, 2000.
102. J. R. M. Hosking. L-estimation. In C. R. Rao and N. Balakrishnan, editors, Order Statistics and Their Applications, Handbook of Statistics. Elsevier, 1998.
103. S. Hoyos, J. Bacca, and G. R. Arce. Spectral design of weighted median filters: A general iterative approach. IEEE Trans. on Signal Proc., 2004. Accepted for publication.
104. S. Hoyos, Y. Li, J. Bacca, and G. R. Arce. Weighted median filters admitting complex-valued weights and their optimization. IEEE Trans. on Signal Proc., 2004. Accepted for publication.
105. P. J. Huber. Robust Statistics. John Wiley & Sons, New York, 1981.
106. J. Ilow and D. Hatzinakos. Analytic alpha-stable noise modeling in a Poisson field of interferers or scatterers. IEEE Trans. on Signal Proc., 46(6):1601-1611, June 1998.
107. A. K. Jain. Fundamentals of Digital Image Processing. Prentice Hall, New Jersey, 1989.
108. D. L. Jones and R. G. Baraniuk. A signal dependent time-frequency representation: Optimal kernel design. IEEE Trans. on Signal Proc., 41:1589-1601, April 1993.
109. D. L. Jones and R. G. Baraniuk. An adaptive optimal-kernel time-frequency representation. IEEE Trans. on Signal Proc., 43:2361-2371, October 1995.
110. T. Kailath. Linear Systems. Prentice-Hall, New Jersey, 1980.
111. S. Kalluri and G. R. Arce. Adaptive weighted myriad filter optimization for robust signal processing. In Proc. of the 1996 Conference on Information Science and Systems, Princeton, NJ, 1996.
112. S. Kalluri and G. R. Arce. Fast algorithms for weighted myriad computation by fixed point search. IEEE Trans. on Signal Proc., 48:159-171, January 2000.
113. S. Kalluri and G. R. Arce. Robust frequency-selective filtering using weighted myriad filters admitting real-valued weights. IEEE Trans. on Signal Proc., 49(11):2721-2733, November 2001.
114. Y.-T. Kim and G. R. Arce. Permutation filter lattices: a general order statistic filtering framework. IEEE Trans. on Signal Proc., 42(9), September 1994.
115. S.-J. Ko and Y. H. Lee. Center weighted median filters and their applications to image enhancement. IEEE Trans. on Circuits and Systems, 38(9), September 1991.
116. V. Koivunen. Nonlinear filtering of multivariate images under robust error criterion. IEEE Trans. on Image Proc., 5, June 1996.
117. C. Kotropoulos and I. Pitas. Constrained adaptive LMS L-filters. Signal Proc., 26(3):335-358, 1992.
118. C. Kotropoulos and I. Pitas. Adaptive LMS L-filters for noise suppression in images. IEEE Trans. on Image Proc., 5(12):1596-1609, December 1996.
119. P. Kuosmanen and J. Astola. Optimal stack filters under rank selection and structural constraints. Signal Proc., 41, February 1995.
120. P. Kuosmanen, P. Koivisto, P. Huttunen, and J. Astola. Shape preservation criteria and optimal soft morphological filtering. J. Math. Imag. Vis., 5, December 1995.
121. E. E. Kuruoglu. Density parameter estimation of skewed α-stable distributions. IEEE Trans. on Signal Proc., 49(10), 2001.
122. P. S. Laplace. Mémoire sur la probabilité des causes par les évènemens. Mémoires de Mathématique et de Physique, 6, 1774.
123. D. L. Lau, G. R. Arce, and N. C. Gallagher. Robust image wavelet shrinkage for denoising. In Proceedings International Conference on Image Processing 1996, volume 1, pages 371-374, 1996.
124. J. S. Lee. Digital image smoothing and the sigma filter. Computer Vision, Graphics, and Image Processing, 24:255-269, November 1983.
125. Y. H. Lee and S. A. Kassam. Generalized median filtering and related nonlinear filtering techniques. IEEE Trans. on Acoustics, Speech, and Signal Proc., 33(6), June 1985.
126. E. L. Lehmann. Theory of Point Estimation. John Wiley & Sons, New York, NY, 1983.
127. W. E. Leland, M. S. Taqqu, W. Willinger, and D. V. Wilson. On the self-similar nature of Ethernet traffic (extended version). IEEE/ACM Trans. on Networking, 2(1):1-15, February 1994.
128. P. Lévy. Calcul des probabilités. Gauthier-Villars, Paris, 1925.
129. Y. Li and G. R. Arce. Center weighted myriad smoother in image denoising. Technical report, Dept. of Elect. Comp. Eng., University of Delaware, February 2003.
130. Y. Li, G. R. Arce, and J. Bacca. Generalized vector medians for correlated channels. In Proceedings of the 2004 EUSIPCO European Signal Processing Conference, Vienna, Austria, September 2004.
131. J. Lin and Y.-T. Kim. Fast algorithms for training stack filters. IEEE Trans. on Signal Proc., 42(4), April 1994.
132. J.-H. Lin, T. M. Sellke, and E. J. Coyle. Adaptive stack filtering under the mean absolute error criterion. IEEE Trans. on Acoustics, Speech, and Signal Proc., 38(6), June 1990.
133. R. Y. Liu. On a notion of data depth based on random simplices. Annals of Statistics, 18(1):405-414, March 1990.
134. E. H. Lloyd. Least-squares estimation of location and scale parameters using order statistics. Biometrika, 39, 1952.
135. X. Ma and C. L. Nikias. Parameter estimation and blind channel identification in impulsive signal environments. IEEE Trans. on Signal Proc., 43(12):2884-2897, December 1995.
136. X. Ma and C. L. Nikias. Joint estimation of time delay and frequency delay in impulsive noise using fractional lower-order statistics. IEEE Trans. on Signal Proc., 44(11):2669-2687, November 1996.
137. C. L. Mallows. Some theory of nonlinear smoothers. Annals of Statistics, 8(4), 1980.
138. B. Mandelbrot. Long-run linearity, locally Gaussian processes, H-spectra, and infinite variances. Internat. Econ. Rev., 10:82-111, 1969.
139. I. Mann, S. McLaughlin, W. Henkel, R. Kirkby, and T. Kessler. Impulse generation with appropriate amplitude, length, interarrival, and spectral characteristics. IEEE Journal on Selected Areas in Communications, 20(5):901-912, June 2002.
140. P. A. Maragos and R. W. Schafer. Morphological filters - Part I: Their set theoretic analysis and relations to linear shift invariant filters, and Morphological filters - Part II: Their relations to median, order-statistic, and stack filters. IEEE Trans. on Acoustics, Speech, and Signal Proc., 35(8):1153-1169, August 1987.
141. V. J. Mathews and G. Sicuranza. Polynomial Signal Processing. John Wiley & Sons, New York, NY, 1994.
142. J. H. McCulloch. Simple consistent estimators of stable distribution parameters. Communications in Statistics - Simulation and Computation, 1986.
143. D. Middleton. Statistical-physical models of electromagnetic interference. IEEE Trans. on Electromagnetic Compatibility, EMC-19:106-127, 1977.
144. S. K. Mitra. Digital Signal Processing: A Computer-Based Approach, 2e with DSP Laboratory using MATLAB. McGraw-Hill Higher Education, 2nd edition, 2001.
145. S. Muroga. Threshold Logic and Its Applications. John Wiley & Sons, New York, NY, 1971.
146. H. Meyr and G. Ascheid. Synchronization in Digital Communications. John Wiley & Sons, New York, NY, 1990.
147. A. Nieminen, P. Heinonen, and Y. Neuvo. A new class of detail-preserving filters for image processing. IEEE Trans. on Pattern Analysis and Machine Intelligence, 9(1), January 1987.
148. C. L. Nikias and A. T. Petropulu. Higher-Order Spectra Analysis: A Nonlinear Signal Processing Framework. Prentice-Hall, New Jersey, 1993.
149. C. L. Nikias and M. Shao. Signal Processing with Alpha-Stable Distributions and Applications. Wiley-Interscience, New York, 1995.
150. T. A. Nodes and N. C. Gallagher, Jr. Median filters: some modifications and their properties. IEEE Trans. on Acoustics, Speech, and Signal Proc., 30(2):739-746, October 1982.
151. J. P. Nolan. Stable Distributions. Birkhäuser, first edition, June 2002.
152. H. Oja. Descriptive statistics for multivariate distributions. Statistics & Probability Letters, 1(6):327-332, October 1983.
153. R. Oten and R. J. P. de Figueiredo. An efficient method for L-filter design. IEEE Trans. on Signal Proc., 51:193-203, January 2003.
154. J. M. Paez-Borrallo and S. Zazo-Bello. On the joint statistical characterization of the input and output of an order statistic filter. IEEE Trans. on Signal Proc., 42(2):456-459, February 1994.
155. F. Palmieri and C. G. Boncelet, Jr. Ll-filters - a new class of order statistic filters. IEEE Trans. on Acoustics, Speech, and Signal Proc., 37(5), May 1989.
156. F. Palmieri and C. G. Boncelet, Jr. Frequency analysis and synthesis of a class of nonlinear filters. IEEE Trans. on Acoustics, Speech, and Signal Proc., 38(8), August 1990.
157. A. Papoulis. Probability, Random Variables, and Stochastic Processes. McGraw-Hill, New York, NY, 1991.
158. J. L. Paredes and G. R. Arce. Stack filters, stack smoothers, and mirrored threshold decomposition. IEEE Trans. on Signal Proc., 47:2757-2767, October 1999.
159. V. Paxson and S. Floyd. Wide-area traffic: The failure of Poisson modeling. IEEE/ACM Trans. on Networking, 3:226-244, 1995.
160. I. Pitas and A. N. Venetsanopoulos. Non-linear Filters. Kluwer, 1989.
161. I. Pitas and A. N. Venetsanopoulos. Adaptive filters based on order statistics. IEEE Trans. on Signal Proc., 39(2), February 1991.
162. I. Pitas and A. N. Venetsanopoulos. Order statistics in digital image processing. Proceedings of the IEEE, 80(12):1893-1921, December 1992.
163. I. Pitas and S. Vougioukas. LMS order statistic filters adaptation by back propagation. Signal Processing, 25(3), December 1991.
164. C. A. Pomalaza-Raez and C. D. McGillem. An adaptive, nonlinear edge-preserving filter. IEEE Trans. on Acoustics, Speech, and Signal Proc., 32(3):571-576, June 1984.
165. M. K. Prasad and Y. H. Lee. Stack filters and selection probabilities. IEEE Trans. on Signal Proc., 42:2628-2642, October 1994.
166. J. G. Proakis and D. G. Manolakis. Digital Signal Processing: Principles, Algorithms and Applications. Prentice Hall, third edition, 1996.
167. S. Rappaport and L. Kurz. An optimal nonlinear detector for digital data transmission through non-Gaussian channels. IEEE Trans. on Communications, 14(3), March 1966.
168. E. A. Robinson. Multichannel Time Series Analysis. Goose Pond Press, Houston, Texas, 1983.
169. S. Ross. A First Course in Probability. Prentice Hall, Upper Saddle River, New Jersey, 1998.
170. P. J. Rousseeuw and M. Hubert. Depth in an arrangement of hyperplanes. Discrete and Computational Geometry, 22(2):167-176, September 1999.
171. A. E. Sarhan and B. G. Greenberg. Contributions to Order Statistics. John Wiley & Sons, New York, NY, 1962.
172. M. I. Shamos. Geometry and statistics: Problems at the interface. In J. F. Traub, editor, Algorithms and Complexity: New Directions and Recent Results. Academic Press, Inc., New York, 1976.
173. Y. Shen and K. Barner. Fast optimization of weighted vector median filters. IEEE Trans. on Signal Proc., 2004. Submitted.
174. C. L. Sheng. Threshold Logic. The Ryerson Press, Ontario, Canada, 1969.
175. I. Shmulevich and G. R. Arce. Spectral design of weighted median filters admitting negative weights. IEEE Signal Processing Letters, 8(12):313-316, December 2001.
176. J. Shynk. Adaptive IIR filtering. IEEE ASSP Magazine, 6(2):4-21, April 1989.
177. H. S. Sichel. The method of frequency moments and its application to Type VII populations. Biometrika, pages 404-425, 1949.
178. C. Small. A survey of multidimensional medians. Int. Stat. Rev., 58(3), 1990.
179. E. Souza. Performance of a spread spectrum packet radio network in a Poisson field of interferences. IEEE Trans. on Information Theory, IT-38(6):1743-1754, November 1992.
180. K. S. Srikantan. Recurrence relations between the pdfs of order statistics and some applications. Annals of Mathematical Statistics, 33:169-177, 1962.
181. F. Steiner. Most frequent value and cohesion of probability distributions. Acta Geod., Geophys. et Mont. Acad. Sci. Hung., 8(3-4):381-395, 1973.
182. B. W. Stuck. Minimum error dispersion linear filtering of scalar symmetric stable processes. IEEE Trans. on Automatic Control, 23(3):507-509, 1978.
183. M. T. Subbotin. On the law of frequency of errors. Matematicheskii Sbornik, 31:296-301, 1923.
184. T. Sun, M. Gabbouj, and Y. Neuvo. Center weighted median filters: Some properties and their applications in image processing. Signal Proc., 35:213-229, 1994.
185. P. E. Trahanias and A. N. Venetsanopoulos. Directional processing of color images: Theory and experimental results. IEEE Trans. on Image Proc., 5, June 1996.
186. I. Tabus, D. Petrescu, and M. Gabbouj. A training framework for stack and Boolean filtering: fast optimal design procedures and robustness case study. IEEE Trans. on Image Proc., Special Issue on Nonlinear Image Processing, 5:809-826, June 1996.
187. J. Tukey. A problem of Berkson and minimum variance orderly estimators. Annals of Mathematical Statistics, 29:588-592, 1958.
188. J. W. Tukey. A survey of sampling from contaminated distributions. In I. Olkin, S. G. Ghurye, W. Hoeffding, W. G. Madow, and H. B. Mann, editors, Contributions to Probability and Statistics, Essays in Honor of Harold Hotelling, pages 448-485. Stanford University Press, Stanford, CA, 1960.
189. J. W. Tukey. Nonlinear (nonsuperimposable) methods for smoothing data. In Conf. Rec., EASCON, 1974.
190. J. W. Tukey. Mathematics and the picturing of data. In Proc. Int. Congress of Mathematicians, pages 523-531, Vancouver, 1975.
191. V. V. Uchaikin and V. M. Zolotarev. Chance and Stability, Stable Distributions and Their Applications (Modern Probability and Statistics). VSP, Utrecht, Netherlands, 1999.
192. P. D. Welch. The use of fast Fourier transforms for the estimation of power spectra: A method based on time averaging over short, modified periodograms. IEEE Trans. on Audio and Electroacoustics, 15:70-73, June 1967.
193. P. D. Wendt. Nonrecursive and recursive stack filters and their filtering behavior. IEEE Trans. on Acoustics, Speech, and Signal Proc., 33, August 1986.
194. P. D. Wendt, E. J. Coyle, and N. C. Gallagher, Jr. Some convergence properties of median filters. IEEE Trans. on Circuits and Systems, 33(3), March 1986.
195. P. D. Wendt, E. J. Coyle, and N. C. Gallagher, Jr. Stack filters. IEEE Trans. on Acoustics, Speech, and Signal Proc., 34(8), August 1986.
196. G. O. Wesolowsky. A new descent algorithm for the least absolute value regression. Commun. Statist., B10(5):479-491, 1981.
197. B. Widrow and S. D. Stearns. Adaptive Signal Processing. Prentice Hall, Englewood Cliffs, NJ, 1985.
198. R. Yang, L. Yin, M. Gabbouj, J. Astola, and Y. Neuvo. Optimal weighted median filters under structural constraints. IEEE Trans. on Signal Proc., 43(6), June 1995.
199. L. Yin, J. Astola, and Y. Neuvo. Adaptive stack filtering with applications to image processing. IEEE Trans. on Acoustics, Speech, and Signal Proc., 41(1), January 1993.
200. L. Yin and Y. Neuvo. Fast adaptation and performance characteristics of FIR-WOS hybrid filters. IEEE Trans. on Signal Proc., 42(7), July 1994.
201. L. Yin, R. Yang, M. Gabbouj, and Y. Neuvo. Weighted median filters: a tutorial. IEEE Trans. on Circuits and Systems II, 41, May 1996.
202. O. Yli-Harja, J. Astola, and Y. Neuvo. Analysis of the properties of median and weighted median filters using threshold logic and stack filter representation. IEEE Trans. on Acoustics, Speech, and Signal Proc., 39(2), February 1991.
203. P. T. Yu and E. Coyle. On the existence and design of the best stack filter based on associative memory. IEEE Trans. on Circuits and Systems, 39(3), March 1992.
204. W. Yu, D. Toumpakaris, J. M. Cioffi, D. Gardan, and F. Gauthier. Performance of asymmetric digital subscriber lines (ADSL) in an impulse noise environment. IEEE Trans. on Communications, 51(10):1653-1657, October 2003.
205. B. Zeng. Optimal median-type filtering under structural constraints. IEEE Trans. on Image Proc., pages 921-931, July 1995.
206. Y. Zhang. Primal-dual interior point approach for computing l1-solutions and l∞-solutions of overdetermined linear systems. J. Optim. Theory Appl., 77(2):323-341, May 1993.
207. V. M. Zolotarev. One-dimensional Stable Distributions. American Mathematical Society, Providence, Rhode Island, 1986. Translation from the Russian, 1983.
Appendix A Software Guide
Chapter 2

astable         Generate an α-stable distributed data set.
astable-flom    Estimate the density parameters of α-stable distributions.
astable-logmom  Estimate the density parameters of α-stable distributions.
contgaussrnd    Generates a random data set with a contaminated Gaussian distribution function.
Laplacian       Generates a data set with a Laplacian distribution.
parestssd       Parameter estimates for symmetric stable distributions.
Chapter 4

tmean  Calculates the trimmed mean of a data vector.
wmean  Calculates the winsorized mean of a data vector.
Chapter 5

g3hvd5filt  Smooths an image keeping horizontal, vertical, and diagonal lines.
pwmedfilt2  Two-dimensional permutation weighted median filtering of a sequence.
wmedfilt    One-dimensional weighted median filtering.
wmedfilt2   Two-dimensional weighted median filtering.
wmedian     Compute the weighted median of an observation vector.
Chapter 6

LMS-MPCCWM       Designs an optimal marginal phase coupled complex weighted median filter.
marginalWMMI     Multivariate weighted median filtering for stationary correlated channels.
mcwmedfilt       One-dimensional marginal phase coupled complex weighted median filtering.
mcwmedian        Calculates the marginal phase coupled complex weighted median of an observation vector.
optmarginalWMMI  Optimization algorithm for the marginal WMM I.
optvmedfilt      Calculates the optimum weights for the weighted vector median filter.
optWMMII         Optimum weights for the weighted multivariate median filter II.
rwmedfilt        One-dimensional recursive weighted median filter.
rwmedfilt2       Two-dimensional recursive weighted median filter.
rwmedopt         Design one-dimensional recursive weighted median filters using the fast "recursive decoupling" adaptive optimization algorithm.
rwmedopt2        Design two-dimensional recursive weighted median filters using the fast "recursive decoupling" adaptive optimization algorithm.
SSP2WM           Finds the closest weighted median filter to a given linear FIR filter.
                 Finds the closest marginal phase coupled complex weighted median filter to a given complex valued linear filter.
Vwmedfilt        Weighted vector median filtering.
Vwmedian         Weighted vector median of an observation window.
WM2SSPreal       Finds the linear filter closest in the MSE sense to a given real valued weighted median filter.
                 Same as the previous, including also the first derivative of the cost function in (6.36).
wmedopt          Design one-dimensional weighted median filters using a fast adaptive optimization algorithm.
wmedopt2         Design two-dimensional weighted median filters using a fast adaptive optimization algorithm.
WMMII            Weighted multivariate median II filtering.
wmsharpener      Sharpens a gray-scale image using permutation high pass median filters.
Chapter 7

LCWM             One-dimensional LCWM filtering.
LCWM-design      Designs a LCWM filter based on a given linear filter.
LCWMsymmetric    Designs a LCWM filter based on a symmetric linear filter.
Lfilter          Performs L-filtering of the sequence X.
Llfilter         Performs Ll-filtering of the sequence X.
medianaffine     Performs median affine filtering of the vector X.
optmedianaffine  Designs an optimal median affine filter (linear weights and dispersion parameter) with an adaptive algorithm.
opt-weightsl     Finds the optimum weights for a location estimator using L-filters.
opt-weightsll    Performs Ll-filtering of the sequence X.
Chapter 8

cwmyrfilt2  Smoothing of images with the use of weighted myriad filters.
fwmyriad    Compute the fast weighted myriad of an observation vector.
wmyrfilt    One-dimensional weighted myriad filter.
wmyriad     Compute the weighted myriad of an observation vector.

Additional

impoint  Pointillize image.
medcor   Sample median correlation.
medcov   Sample median covariation.
medpow   Sample median power.
astable

Generate an α-stable distributed data set.
x = astable(m,n,alpha,delta,gamma,beta)
astable returns an m × n dimensional data set satisfying an α-stable distribution described by the parameters α, δ, γ, and β. The index of stability α ∈ (0, 2] measures the thickness of the tails and, therefore, the impulsiveness of the distribution. The symmetry parameter δ ∈ [−1, 1] sets the skewness of the distribution. The scale parameter γ > 0, also called the dispersion, is similar to the variance. The location parameter β ∈ (−∞, ∞) sets the shift of the probability distribution function.
Generate an α-stable distributed data sequence

x = astable(10,2,1.5,0.5,1,20);

The result is
x =
22.8381 20.0192 18.4857 20.7009 20.0218 19.4775 20.3570 20.3778 20.6206 20.8048
21.5816 18.7585 15.4310 19.8038 16.9714 20.4440 20.2192 21.1832 19.8470 19.4305
Refer to [43] for details of the algorithm.
See Also
astable-logmom, astable-flom, Section 2.2.4.
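Outside MATLAB, the symmetric (δ = 0) case of this kind of generator can be sketched with the Chambers-Mallows-Stuck method on which the algorithm of [43] is based. The Python below is illustrative only; the function name and arguments are assumptions, not part of the toolbox:

```python
import math
import random

def sas_sample(alpha, gamma=1.0, loc=0.0, rng=random):
    """One draw from a symmetric alpha-stable law (Chambers-Mallows-Stuck).

    alpha in (0, 2] is the index of stability, gamma > 0 the dispersion,
    and loc the location parameter.
    """
    v = rng.uniform(-math.pi / 2, math.pi / 2)  # uniform angle
    w = rng.expovariate(1.0)                    # unit-mean exponential
    if alpha == 1.0:
        x = math.tan(v)                         # Cauchy special case
    else:
        x = (math.sin(alpha * v) / math.cos(v) ** (1.0 / alpha)
             * (math.cos((1.0 - alpha) * v) / w) ** ((1.0 - alpha) / alpha))
    return loc + gamma ** (1.0 / alpha) * x

random.seed(0)
data = [sas_sample(1.5, gamma=1.0, loc=20.0) for _ in range(10000)]
```

For α = 2 the draw reduces to a Gaussian, and for α = 1 to a Cauchy variable; since the distribution is symmetric, the sample median of a large draw estimates the location parameter.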
astable-flom

Estimate the density parameters of α-stable distributions.
[alpha,delta,gamma,beta]= astable-flom(x)
[alpha,delta,gamma,beta] = astable-flom(x)

astable-flom returns the density parameters α, δ, γ, and β of the skewed α-stable distribution of a sample set x, calculated by fractional lower-order moment methods. The index of stability α ∈ (0, 2] measures the thickness of the tails and, therefore, the impulsiveness of the distribution. The symmetry parameter δ ∈ [−1, 1] sets the skewness of the distribution. The scale parameter γ > 0, also called the dispersion, is similar to the variance. The location parameter β ∈ (−∞, ∞) sets the shift of the probability distribution function.
Generate an α-stable distributed data sequence

x = astable(10000,1,1.5,0.5,1,20);

Then estimate the density parameters α, δ, γ, and β of the distribution

[alpha,delta,gamma,beta] = astable-flom(x);

The result is
alpha = 1.5270
delta = 0.4493
gamma = 1.0609
beta = 20.4428
The algorithm is based on the properties of the fractional lower-order moments of α-stable distributions. Refer to [121] for details.
See Also
astable, astable-logmom, Section 2.3.3.
astable-logmom

Estimate the density parameters of α-stable distributions.
[alpha,delta,gamma,beta] = astable-logmom(x)
astable-logmom returns the density parameters α, δ, γ, and β of the skewed α-stable distribution of a data set x. The parameters are calculated by logarithmic moment methods. The index of stability α ∈ (0, 2] measures the thickness of the tails and, therefore, the impulsiveness of the distribution. The symmetry parameter δ ∈ [−1, 1] sets the skewness of the distribution. The scale parameter γ > 0, also called the dispersion, is similar to the variance. The location parameter β ∈ (−∞, ∞) sets the shift of the probability distribution function.
Generate an α-stable distributed data sequence

x = astable(10000,1,1.5,0.5,1,20);

Then estimate the density parameters α, δ, γ, and β of the distribution

[alpha,delta,gamma,beta] = astable-logmom(x);

The result is
alpha = 1.4772
delta = 0.4564
gamma = 0.9790
beta = 20.4522
The algorithm is based on the properties of the logarithmic moments of α-stable distributions. Refer to [121] for details.
See Also
astable, astableflom, Section 2.3.3.
contgaussrnd

Generates a random data set with a contaminated Gaussian distribution function.
Y = contgaussrnd(M, N, mean, sigmal, sigma2, p)
contgaussrnd returns an M × N data set that satisfies a contaminated Gaussian distribution with parameters mean, sigma1, sigma2, and p. mean is the mean of the distribution, sigma1 is the standard deviation of the original Gaussian distribution, sigma2 is the standard deviation of the contaminating Gaussian distribution, and p is the proportion of contaminated samples.
Generate a contaminated Gaussian data sequence

z = contgaussrnd(10,2,20,1,10,0.1);

The result is
z =
20.8013 19.9186 20.4586 19.9364 21.5763 19.9550 19.0683 17.8455 20.7491 19.7286
20.6137 17.8818 17.1818 20.4960 18.7519 20.4427 18.2451 11.2404 20.4547 18.6845
Initially an M × N vector of uniformly distributed random variables is generated. This vector is read component by component. Every time a component of the vector is greater than p, a Gaussian random variable with standard deviation sigma1 is generated. Otherwise, a Gaussian random variable with standard deviation sigma2 is generated.
See Also
Laplacian and Section 2.1.
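The mixture mechanism just described is easy to reproduce. In the Python sketch below (the names are illustrative assumptions, not the toolbox routine), each sample comes from the nominal Gaussian with probability 1 − p and from the wider contaminating Gaussian with probability p:

```python
import random

def contgauss(n, mean, sigma1, sigma2, p, rng=random):
    """n draws from a contaminated Gaussian: with probability p the sample
    comes from the contaminating N(mean, sigma2^2), otherwise from the
    nominal N(mean, sigma1^2)."""
    out = []
    for _ in range(n):
        sigma = sigma1 if rng.random() > p else sigma2
        out.append(rng.gauss(mean, sigma))
    return out

random.seed(1)
y = contgauss(10000, 20.0, 1.0, 10.0, 0.1)
```

The mixture variance is (1 − p)·sigma1² + p·sigma2²; with the values above that is 10.9, so the sample standard deviation is near 3.3 even though 90% of the samples have unit standard deviation, which is exactly what makes such data impulsive.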
cwmyrfilt2

Smoothing of images with the use of weighted myriad filters.
y = cwmyrfilt2(X, Wc, N, L, U)
cwmyrfilt2 performs a double center weighted myriad operation on the image X to remove impulsive noise. The input parameters are:

X   Input image.
Wc  Center weight (a value of 10,000 is recommended for gray-scale images).
N   Window size.
L   Lower order statistic used in the calculation of K.
U   Upper order statistic used in the calculation of K.
cwmyrfilt2 is used to clean an image corrupted with 5% salt-and-pepper noise. The clean, noisy, and filtered images are shown below.
The output of the algorithm is obtained as Y = 1 − CWMy(1 − CWMy(X)), where CWMy represents the center weighted myriad with linearity parameter K computed from the Lth and Uth order statistics of the samples in the window.
See Also
wmyrfilt, Section 8.6.1 and [129].
fwmyriad

Compute the fast weighted myriad of an observation vector.
Owmy = fwmyriad(x, w, k)
fwmyriad(x,w,k) finds the approximate value of the weighted myriad of a vector x using a fast algorithm. w is an N-component vector that contains the real-valued weights and k is the linearity
parameter that takes on values greater than zero. The default values for the weights and the linearity parameter are the all-one vector and one respectively.
x = [3 2 4 5 8];
w = [0.15 -0.2 0.3 -0.25 0.1];
Owmf = fwmyriad(x,w,1);
Owmf = 3.1953
fwmyriad(x,w,k) implements a fast algorithm introduced by S. Kalluri et al. to compute an approximate value for the weighted myriad of an observation vector.
See Also
wmyriad, wmyrfilt, wmyropt, Section 8.5 and [112].
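The quantity being approximated can be made concrete with a brute-force search. For positive weights, the weighted myriad is the value β minimizing Σᵢ log[k² + wᵢ(xᵢ − β)²], and it always lies in [min(x), max(x)]. fwmyriad locates it efficiently with the fixed-point search of [112]; the Python grid search below only illustrates the definition (it is an assumed sketch, not the toolbox algorithm, and handles positive weights only):

```python
import math

def weighted_myriad_grid(x, w, k, steps=2000):
    """Brute-force weighted myriad for positive weights: minimize
    sum_i log(k^2 + w_i * (x_i - beta)^2) over a grid of beta values
    spanning [min(x), max(x)]."""
    lo, hi = min(x), max(x)
    best_beta, best_cost = lo, float("inf")
    for j in range(steps + 1):
        beta = lo + (hi - lo) * j / steps
        cost = sum(math.log(k * k + wi * (xi - beta) ** 2)
                   for xi, wi in zip(x, w))
        if cost < best_cost:
            best_beta, best_cost = beta, cost
    return best_beta

# A small k makes the myriad ignore the gross outlier at 100.
out = weighted_myriad_grid([1.0, 1.2, 0.9, 100.0], [1, 1, 1, 1], k=0.5)
```

As k grows the myriad approaches the weighted sample mean; as k shrinks it becomes a mode-seeking, highly impulse-resistant estimator, which is the behavior exploited throughout Chapter 8.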
g3hvd5filt

Smooths an image keeping horizontal, vertical, and diagonal lines.
y = g3hvd5filt(X)
y = g3hvd5filt(X) slides a 5 × 5 running window over the black and white image X and performs a smoothing operation with a mask that keeps the horizontal, vertical, and diagonal lines of length 3.
The following shows a clean image, the same image contaminated with salt & pepper noise and the output of the stack smoother applied to the noisy signal.
See Section 5.3.1 and [205].
impoint

Pointillize image.
impoint first scales the image 'infilename' by a factor of n. It then applies brushstrokes to the image to give it a "painted" feeling. Smaller 7 × 7 pixel strokes are used near high frequency edges of the image. Larger 14 × 10 pixel strokes are used on the rest of the image. The red, green, and blue components of each brushstroke are independently leveled to 0, 64, 128, 192, or 255, so there are at most 5^3 = 125 colors used in the finished image, which is written out to 'outfilename'.
The algorithm was applied to the following picture:
Figure A.1 Luminance of the original and pointillized images.
Laplacian

Generates a data set with a Laplacian distribution.
Y = Laplacian(M, N, lambda)
Laplacian returns an M × N array of random variables with a Laplacian distribution with parameter lambda.
x = Laplacian(10,2,1);

The result is
x =
0.5424 -0.7765 0.0290 0.3497 0.1406 -0.4915 0.2874 1.9228 -2.7274 0.2739
-0.4697 -0.6927 -1.7152 1.1511 2.3975 -0.4506 0.3298 0.0151 -1.4150 -0.2175
The program generates an M × N vector U of uniformly distributed random variables between 0 and 1 and applies

Y = F⁻¹(U)

where F⁻¹ represents the inverse of the distribution function of a Laplacian random variable with parameter lambda.
See Also
contgaussrnd, Section 2.1 and [169].
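The inverse-CDF recipe can be written out explicitly. For a zero-mean Laplacian with parameter lambda, F⁻¹(u) = ln(2u)/lambda for u < 1/2 and −ln(2(1 − u))/lambda otherwise. A Python sketch (illustrative assumption, not the toolbox code):

```python
import math
import random

def laplacian_samples(n, lam, rng=random):
    """Inverse-CDF sampling of a zero-mean Laplacian with parameter lam."""
    out = []
    for _ in range(n):
        u = rng.random()
        if u == 0.0:          # avoid log(0); random() lies in [0, 1)
            u = 0.5
        if u < 0.5:
            out.append(math.log(2.0 * u) / lam)
        else:
            out.append(-math.log(2.0 * (1.0 - u)) / lam)
    return out

random.seed(2)
x = laplacian_samples(10000, 1.0)
```

The resulting variable has variance 2/lam², so for lam = 1 a large sample should show a standard deviation near 1.41 and a median near zero.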
LCWM

One-dimensional LCWM filtering.
y = LCWM(alpha,Bp,x)
LCWM(alpha,Bp,x) filters the sequence x using the LCWM filter described by the vector of linear coefficients alpha and the matrix of median weights Bp.
The program performs weighted median operations over the input sequence x using the weighted medians described by the rows of the matrix Bp. The outputs of the medians are scaled using the
coefficients in the vector alpha and added together to obtain the final output of the filter.
See Also
combmat, rowsearch, LCWMsymmetric, LCWM-design, Section 7.6.1 and [46].
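The two-stage structure, a bank of weighted medians followed by a linear combination, can be sketched for a single output sample. The Python below is an illustrative assumption (the toolbox routine operates on whole sequences); it uses the standard weighted-median rule for nonnegative weights: sort the samples and return the first one at which the cumulative weight reaches half the total:

```python
def weighted_median(x, w):
    """Weighted median for nonnegative weights: the smallest sample at
    which the cumulative weight of the sorted samples reaches half the
    total weight."""
    pairs = sorted(zip(x, w))
    half = sum(w) / 2.0
    acc = 0.0
    for xi, wi in pairs:
        acc += wi
        if acc >= half:
            return xi
    return pairs[-1][0]

def lcwm_output(x, alpha, Bp):
    """One LCWM output: each row of Bp defines a weighted median over the
    window x, and the median outputs are combined linearly with the
    coefficients in alpha."""
    return sum(a * weighted_median(x, row) for a, row in zip(alpha, Bp))

# With a single all-ones row and alpha = [1], the LCWM reduces to the median.
y = lcwm_output([3, 1, 4, 1, 5], [1.0], [[1, 1, 1, 1, 1]])
```

Rows of Bp with zero entries simply exclude those window positions from the corresponding median, which is how the design routines below select subsets of samples.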
LCWM-design

Designs a LCWM filter based on a given linear filter.
[alpha,Bp] = LCWM-design(h,M)
LCWM-design(h,M) designs a LCWM filter based on medians of size M and the linear filter h. The matrix Bp contains the weights for the medians and the vector alpha the coefficients for the linear combination.

Design a LCWM filter from a high-pass, 7-tap linear filter with a cutoff frequency of 0.5 using medians of length 3.
The spectra of the original linear filter and the LCWM filter are shown in Figure A.2.
The program uses the routines combmat and rowsearch to generate a set of linearly independent combinations of m elements from a set of n elements (n is the length of h). These combinations are grouped in a matrix that is used to calculate the coefficients of the linear combination.
See Also
combmat, rowsearch, LCWM, LCWMsymmetric, Section 7.6.2 and [46].
Figure A.2 Approximated spectra of a LCWM filter (dotted) and the linear filter used as a reference to design it (solid).
LCWMsymmetric

Designs a LCWM filter based on a symmetric linear filter.
[alpha,Bp] = LCWMsymmetric(h,M)
LCWMsymmetric(h,M) designs a LCWM filter based on medians of size 2M + 1 and 2(M + 1) and the symmetric linear filter h. The matrix Bp contains the weights for the medians and the vector alpha the coefficients for the linear combination.
Design a LCWM filter from a high-pass, 7-tap linear filter with a cutoff frequency of 0.5 using medians of length 3 and 4 (M = 1).

h = [0.0087, 0, -0.2518, 0.5138, -0.2518, 0, 0.0087]^T
alpha = [0.3798, 1.1353, 0.0262, -1.5138]
Bp rows include [0 0 1 1 1 0 0], [0 1 0 1 0 1 0], and [0 1 1 0 1 1 0].

The spectra of the original linear filter and the LCWM filter are shown in Figure A.3.
The program uses the routines combmat and rowsearch to generate a set of linearly independent combinations of m elements from a set of n elements, where n depends on whether the length l of h is even or odd. These combinations are grouped in a matrix that is used to calculate the coefficients of the linear combination.
See Also
combmat, rowsearch, LCWM, LCWM-design, Section 7.6.3 and [46].
Figure A.3 Approximated spectra of a symmetric LCWM filter (dotted) and the linear filter used as a reference to design it (solid).
Performs L-filtering of the sequence X
y=Lfilter(X,W) filters the data in vector X with the Lfilter described by the weight vector W .
Algorithm

Y(n) = Σ_{i=1}^{N} Wi X(i)

where X(i) is the ith order statistic of the samples in the observation window.
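The order-statistic weighted sum above can be sketched in Python for a single observation window (an illustrative translation, not the toolbox routine itself):

```python
def lfilter_window(x, w):
    """L-filter output for one window: apply the weights to the sorted samples."""
    return sum(wi * xi for wi, xi in zip(w, sorted(x)))

# All weight on the central order statistic recovers the sample median:
lfilter_window([3, 1, 2], [0, 1, 0])
# Uniform weights recover the sample mean:
lfilter_window([3, 1, 2], [1/3, 1/3, 1/3])
```

Special weight choices thus reduce the L-filter to familiar estimators, which is what makes it a flexible compromise between the median and the mean.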
See Also
opt-weights-L, Llfilter, Section 7.2 and [16, 58, 102].
Performs Ll-filtering of the sequence X.
y=Llf ilter(X,W)
y=Llfilter(X,W) filters the data in vector X using the L1-filter described by the weight vector W .
Llfilter passes a window of length N (the length of W is N²) over the data X. With the data in the window it generates the Ll vector XLl and multiplies it by W to obtain the output.
See Also
Lfilter, opt-weights-Ll, Section 7.3 and [79, 155].
Designs an optimal marginal phase coupled complex weighted median filter.
[Wmpc,e,y]=LMS-MPCCWM(mu,taps,reference, received)
LMS-MPCCWM(mu,taps,reference,received) uses an adaptive LMS algorithm to design an optimum marginal phase coupled complex weighted median. The filter is optimum in the sense that the MSE between its output and a reference signal is minimized. The input parameters are:
mu: Step size of the adaptive algorithm.
taps: Number of weights of the filter to be designed.
reference: Reference signal (desired output) for the LMS algorithm.

received: Received signal. Input to the complex weighted median being designed.
Design a high-pass complex weighted median filter with cutoff frequencies equal to ±0.3 and test its robustness against impulsive noise.

Fs = 2000;                                % Sampling freq.
t = 0:1/Fs:1;
d = exp(2*pi*i*400*t);                    % Desired signal
x = exp(2*pi*i*20*t) + d;                 % Training signal
n = 41;                                   % Number of filter coefficients
mu = 0.001;                               % Step size
h = cremez(n-1, [-1 -.3 -.2 .2 .3 1], 'highpass');  % Linear coeff.
[Wmpc, e, y] = LMS-MPCCWM(mu, n, d, x);   % Training stage
yo = mcwmedfilt(x, Wmpc);                 % WM filter output
yl = filter(h, 1, x);                     % FIR output
Tn = x + astable(1, length(t), 1.5, 0, 0.5, 0) ...
       + j*astable(1, length(t), 1.5, 0, 0.5, 0);   % Noisy signal
yon = mcwmedfilt(Tn, Wmpc);               % WM filtering of Tn
yln = filter(h, 1, Tn);                   % FIR filtering of Tn
The next figure shows the results:
Figure A.4 Filtering of a sum of exponentials in a noiseless and noisy environment. The first plot represents the original signal, the second the signal filtered with a FIR filter and the third the
signal filtered with an optimal marginal phase coupled complex weighted median filter.
The weights are updated according to an LMS-type equation in which the real and imaginary parts of the weights are adapted separately.¹ At time n, eR(n) and eI(n) represent the difference between the output of the filter and the desired output.

¹The subindexes R and I represent real and imaginary parts, respectively.
See Also
mcwmedfilt, mcwmedian, Section 6.6.4 and [1041.
Performs marginal multivariate weighted median filtering of stationary cross-channel correlated signals.
[y] = marginalWMMI(X,W,V,wsize)
[y] = marginalWMMI(X,W,V,wsize) filters the data in vector X with the marginal multivariate weighted median filter I described by the N-dimensional vector V and the M x M matrix W. N is the number of samples in the observation window and M is the dimension of the components of X. The window size is specified by the parameter wsize. This parameter has the form [m n], where m x n = N.

marginalWMMI passes a window over the data in X and computes, for each component of each sample in the window, the weighted median described by the columns of W. After that it calculates the output of the marginal weighted median described by V applied to the outputs of the previous operation.
See Also
optmarginalWMMI, WMMll and Section 6.7.3.
One-dimensional marginal phase-coupled complex weighted median filtering.
y = mcwmedfilt(X,W)
y=mcwmedfilt(X,W) filters the data in vector X with the complex weighted median filter described by the vector W using the marginal phase coupling algorithm. The window size is specified by the
dimensions of the vector W .
mcwmedfilt passes a window over the data in X and computes, at each instant n, the marginal phase-coupled complex weighted median of the samples X(n - (N-1)/2), ..., X(n), ..., X(n + (N-1)/2), where N = length(W). The resultant value is the filter output at instant n.
See Also
mcwmedian, Section 6.6.2 and [104].
Calculates the marginal phase-coupled complex weighted median of an observation vector.
y = mcwmedian(X, W)
mcwmedian computes the phase-coupled complex-weighted median of an observation vector X. W is a complex-valued vector of the same length as X containing the filter weights.
X = [… + 0.5547i, 0.8858 - 0.3101i, 0.5131 - 0.2349i, 0.9311 + 0.4257i, -0.8017 + 0.6254i]

W = [-0.0430 + 0.0592i, 0.3776 - 0.1924i, 0.6461, 0.3776 + 0.1924i, -0.0430 - 0.0592i]

P = [-0.0297 - 0.9027i, 0.6484 - 0.6785i, 0.5131 - 0.2349i, 0.6363 + 0.8020i, -0.0347 + 1.0162i]

y = mcwmedian(X, W) = 0.6363 - 0.2349i
Given a set of N complex-valued weights (W1, W2, ..., WN) and an observation vector X = [X1, X2, ..., XN]T, the output of the marginal phase coupled complex-weighted median filter is defined in Section 6.6.2.
See Also
mcwmedfilt, wmedian, Section 6.6.2 and [104].
Figure A.5 Marginal phase-coupled complex-weighted median. The samples are represented by o, the weights by A, the phase-coupled samples by x, and the output by 0. The dashed lines show how the output is composed of the real part of P4 and the imaginary part of P3.
Sample median correlation.
mcor = medcor(x,y)
medcor(x,y) returns the sample median correlation between the sequences x and y. Both sequences should have the same length.

Given the observation sets {xi(.)} and {yi(.)} taken from two joint random variables x and y, the sample median correlation is defined in [12].
See Also
medpow, medcov and [12].
Sample median covariation.
mcov = medcov(x,y)
medcov(x,y) computes the sample median covariation between the sequences x and y. Both sequences should have the same length.
Given the observation sets {xi(.)} and {yi(.)} taken from two joint random variables x and y, the sample median covariation is given by

Rxy = MEDIAN(|yi| ◊ sgn(yi)xi |_{i=1}^{n})

where n is the length of the observation sequences and |yi| ◊ sgn(yi)xi |_{i=1}^{n} = |y1| ◊ sgn(y1)x1, |y2| ◊ sgn(y2)x2, ..., |yn| ◊ sgn(yn)xn.
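Reading the weighting operator ◊ as a weighted median with weights |yi| applied to the signed samples sgn(yi)xi, the sample median covariation might be sketched in Python as follows (an illustrative reading of the definition, not the toolbox code):

```python
def weighted_median(samples, weights):
    """Weighted median: smallest sample whose cumulative weight reaches half the total."""
    pairs = sorted(zip(samples, weights))
    half = sum(w for _, w in pairs) / 2.0
    c = 0.0
    for v, w in pairs:
        c += w
        if c >= half:
            return v

def medcov(x, y):
    """Sample median covariation: weighted median of sgn(y_i)*x_i with weights |y_i|."""
    signed = [xi if yi >= 0 else -xi for xi, yi in zip(x, y)]
    return weighted_median(signed, [abs(yi) for yi in y])
```

With unit weights the helper reduces to the ordinary median, so medcov plays the role the ordinary covariance plays for linear filters, but built from medians.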
See Also
medpow, medcor and [12].
Performs median affine filtering of the vector X.
median-affine filters the data in vector X using the median weights C to calculate the reference, the parameter γ to calculate the affinity, and the weights W to perform the linear combination.

median-affine passes a window over the data vector X that selects N = length(W) samples. Then it takes a window of size M = length(C) around the center sample and calculates the reference as the weighted median of these samples with the weights in C. Once the reference point is obtained, the exponential affinity function is calculated for each sample in the original window. With the weights and the affinity function, the normalization constant K is calculated. With all these elements the final value of the median affine filter can be calculated.
See Section 7.5.1 and [73].
Sample median power.
mpow = medpow(x) mcor = medpow(x,type)
medpow returns the sample median power of the observation sequence x. medpow(x,type) uses a second input parameter to specify the type of median power to be returned.

If type = cor, medpow(x,type) returns the sample median correlation power of x. If type = cov, medpow(x,type) returns the sample median covariation power of x. By default, medpow(x) returns the sample median correlation power.
For an observation sequence {x(.)} of length n, the sample median correlation power and the sample median covariation power are respectively defined as
See Also
medcov, medcor and [12].
Finds the optimal weights for the marginal multivariate weighted median filter.
[v,w,y,e] = Opt-marginalWMMl(x,wsize,d,muv,muw)
Opt-marginalWMMI finds the optimum weights for the marginal weighted multivariate median filter I using an adaptive LMS algorithm. The parameters of the function are:
x is the input signal to be filtered.
wsize is the window size of the filter in vector form ([m n]).
d reference signal for the adaptive algorithm.
muv step size used in the calculation of the outer weights v.
muw step size used in the calculation of the inner weights w.
v N-dimensional vector of optimum outer weights (N = m x n).
w M x M matrix of optimum inner weights (M is the dimension of the space).
y Output signal.
e Absolute error of the output signal.
Filter a color image contaminated with 10% salt and pepper noise using the marginal WMM filter I and a 5 x 5 window.
See Also
marginalWMMI, Opt-WMMII and Section 6.7.4.
Figure A.6 Luminance of the noiseless image, noisy image, and output of the marginal WMM filter I.
Designs an optimal median affine filter (linear weights and dispersion parameter) with an adaptive algorithm.
[Wopt, gopt, y] = opt-median-affine(x, Wini, C, gini, stepW, stepg, yd)

opt-median-affine calculates the optimal linear weights and dispersion parameter for a median affine filter using a given median operator and the exponential distance as affinity function. The input parameters are:

x is the input signal to be filtered.
Wini is the initial value for the linear weights of the filter.
C contains the median weights of the filter.
gini is the initial value for the dispersion parameter.
stepW is the step size for the adaptive algorithm that calculates the weights. It should be much smaller than the step size used in the adaptive algorithm of the dispersion parameter.
stepg is the step size for the adaptive algorithm of the dispersion parameter.
yd is the reference signal.
Design a median affine filter to recover a sinusoid contaminated with α-stable noise.

t = 1:1:1000;
yd = sin(2*pi*t/50);
x = yd + astable(1, 1000, 1, 0, 0.2, 0);
[Wopt, gopt, y] = opt-median-affine(x, ones(1,9), ones(1,5), 1, 0.001, 0.1, yd);
y = median-affine(x, Wopt, ones(1,5), gopt);
The original signal, the noisy signal and the output of the filter are shown below
Figure A.7 Reference signal, input, and output of a median affine filter.
The weights and the dispersion parameter are updated according to the following equations:

Wi(n + 1) = Wi(n) + μW e(n)(gi - tanh(Wi)Xk)

γ(n + 1) = γ(n) + …

where gi stands for the abbreviated affinity function of the ith sample.
See Also
median-affine, Section 7.5.1, and [73].
Opt-Vwmedfilt
Finds the optimum weights for the vector-weighted median filter.

[w, wCurve] = Opt-Vwmedfilt(I-n, I, w0, mu)

Opt-Vwmedfilt calculates the optimum weights for a vector-weighted median filter using a fast adaptive greedy algorithm. The parameters of the algorithm are:

I-n Input signal.
I Reference signal.
w0 Initialization of the weights (usually an all-ones m x n matrix).
mu Step size for the adaptive algorithm.
w Matrix of optimum weights with the same dimensions as w0.
wCurve Values of the weights during the adaptive algorithm.
Filter a color image contaminated with 10% salt-and-pepper noise using the vector-weighted median filter and a 5 x 5 window.
The weights are optimized according to:
Wi(n + 1) = Wi(n) + μ ΔWi,   i = 1, 2, ..., N,   (A.4)
See Also
Vwmedfilt, Vwmedian, Section 6.7.2 and [173].
Figure A.8 Luminance of the noiseless image, noisy image, and output of the VWM filter.
Finds the optimum weights for a location estimator using Lfilters.
wopt=opt-weights-L(x,w,d) calculates a w x w correlation matrix and the expected values of the order statistics of the vector x and uses them to calculate the optimum weights for the location estimator. w is the filter window size and d is the reference signal.
Suppose a 3V DC signal is embedded in alpha-stable noise with parameters α = 1, β = 0, γ = 0.2, δ = 3. Find the optimum 9-tap L-filter to estimate the DC signal.

x = astable(1, 1000, 1, 0, 0.2, 3);
wopt = opt-weights-L(x, 9, 3);
y = Lfilter(x, wopt);
The resulting weights are: wopt = [-0.012, -0.0209, 0.0464, 0.3290, 0.5528, 0.1493, 0.1010, -0.0630, -0.0012]. The noisy signal has an average value of 4.2894 and a MSE of 1.036 x 10^3. The filtered signal has an average value of 2.9806 and a MSE of 0.0562.
Algorithm

wopt = R^{-1} p d
where R is the correlation matrix of the order statistics of X and p is the vector of expected values of the order statistics.
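The computation can be sketched with numpy; here R and p are estimated from the sorted non-overlapping windows of the observed data, which is one plausible estimation choice and not necessarily the one the toolbox routine makes:

```python
import numpy as np

def opt_weights_L(x, N, d):
    """Optimal L-filter weights wopt = R^{-1} p d, with R and p estimated
    from the order statistics of non-overlapping length-N windows of x."""
    x = np.asarray(x, dtype=float)
    S = np.sort(x[: len(x) // N * N].reshape(-1, N), axis=1)  # sorted windows
    R = S.T @ S / S.shape[0]      # correlation matrix of the order statistics
    p = S.mean(axis=0)            # expected values of the order statistics
    return d * np.linalg.solve(R, p)

# Estimate a 3V DC level in unit-variance noise, as in the example above:
x = 3 + np.random.default_rng(0).standard_normal(9000)
w = opt_weights_L(x, 9, 3)
```

Since the weights solve a least-squares regression of the constant d onto the order statistics, applying them to the sorted windows yields outputs concentrated near the DC level.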
See Also
Lfilter, opt-weights-Ll, Section 7.3.1 and [38].
Finds the optimum weights for a location estimator using Ll filters.

opt-weights-Ll(x,w,d) calculates a w² x w² correlation matrix and the expected values of the vector XLl and uses them to calculate the optimum weights for the location estimator. w is the window size and d is the desired signal.
Suppose a 3V DC signal is embedded in alpha-stable noise with parameters α = 1, β = 0, γ = 0.2, δ = 3. Find the optimum 9-tap Ll-filter to estimate the DC signal.

x = 3 + astable(1, 1000, 1, 0, 0.2, 0);
wopt = opt-weights-Ll(x, 9, 3);
y = Llfilter(x, wopt);
The noisy signal has an average value of 4.2894 and a MSE of 1.036 x 10^3. The filtered signal has an average value of 2.9810 and a MSE of 0.0554.
See Also
Llfilter, opt-weights-L, Section 7.3.1 and [79, 1551.
Opt-WMMII
Finds the optimal weights for the multivariate weighted median filter II.
[v,w,y,e]= Opt-WMMll(x,d,wsize,mu)
Opt-WMMII(x,d,wsize,mu) uses an adaptive algorithm to calculate the optimal weights V and W for the WMM filter II. The parameters of the function are:

x Input data vector.
wsize Window size in vector form ([m n]).
d Desired signal.
mu Vector containing the step sizes for the outer and inner weights respectively ([muv muw]).
v N-dimensional vector containing the optimal time/space weights (N = m x n).
w M x M matrix containing the optimal cross-channel weights. M is the dimension of the input samples.
y Marginal multivariate weighted median filtered sequence.
e Absolute error of the output.
See Also
WMMII and Section 6.7.4.
Parameter estimates for Symmetric Stable Distributions
[alpha, disp, loc] = parestssd(x)
parestssd(x) uses a method based on sample fractiles to estimate the characteristic exponent (alpha), the dispersion (disp) and the location parameter (loc) of the symmetric alpha-stable distribution that outputs x.
x = astable(1, 10000, 1.75, 0, 2, 5);
[alpha, disp, loc] = parestssd(x);
alpha = 1.7436
disp = 2.0188
loc = 5.0289
parestssd uses a simplified method based on McCulloch's fractile method for estimating the characteristic exponent and the dispersion of a symmetric α-stable distribution. This method is based on the computation of four sample quantiles and simple linear interpolations of tabulated index numbers. The location parameter (loc) is estimated as a p-percent truncated mean. A 75% truncated mean is used for α > 0.8, whereas for α ≤ 0.8 a 25% truncated mean is used. McCulloch's method provides consistent estimators for all the parameters if α ≥ 0.6.
See Also
astable, Section 2.3.3 and [142, 149, 75].
Two-dimensional permutation weighted median filtering.
pwmedfilt2 filters X with a weighted median filter of size N² whose weights depend on the ranking of the center sample.
load portrait.mat
X = imnoise(I, 'salt & pepper', .03);
Y = pwmedfilt2(X, 5, 6, 20, 0);
imshow([I X Y])
pwmedfilt2 passes a window over the data in X that selects N² samples. It sorts them and finds the rank of the center sample. The program performs an identity operation (i.e., the center weight is set equal to the window size N² while all the others are set to one) when the rank of the center sample is in the interval [Tl Tu], and a standard median operation (i.e., all the weights are set to one) when the rank of the center sample is outside the interval.
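The rank-based switching described above can be sketched for a single window position (a Python illustration of the rule, not the toolbox code; ranks are counted from 1):

```python
def pwmed_window(window, Tl, Tu):
    """Permutation WM for one flat window: identity if the center sample's
    rank lies in [Tl, Tu], standard median otherwise."""
    center = window[len(window) // 2]
    s = sorted(window)
    rank = s.index(center) + 1          # rank of the center sample
    if Tl <= rank <= Tu:
        return center                   # identity operation
    return s[len(s) // 2]               # standard median operation

# A centered impulse has an extreme rank, so it is replaced by the median:
pwmed_window([1, 2, 3, 4, 100, 6, 7, 8, 9], 3, 7)   # 6
# A well-ranked center sample passes through unchanged:
pwmed_window([1, 2, 3, 4, 5, 6, 7, 8, 9], 3, 7)     # 5
```

The switch is what lets the filter remove outliers while leaving well-behaved samples (and hence fine detail) untouched.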
See Also
wmedfilt2, Section 6.1.1 and [10, 93].
One-dimensional recursive weighted median filter.
y = rwmedfilt(x, a, b) y = rwmedfilt(x, a, b, oper)
rwmedfilt(x, a, b) filters the one-dimensional sequence x using a recursive weighted median filter with weights a and b, where a is an N-component vector containing the feedback filter coefficients and b is an M-component vector containing the feedforward filter coefficients. oper indicates the kind of filtering operation to be implemented: oper = 0 for low-pass filter applications, whereas for high-pass or band-pass applications oper = 1. The default value for oper is 0.
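A simplified recursive median with unit weights illustrates the feedback/feedforward structure, with previously computed outputs occupying the first part of the window (a Python sketch, not the toolbox routine):

```python
def rmedfilt(x, Na, Nb):
    """Recursive median sketch: window = last Na outputs + next Nb inputs."""
    y = []
    for n in range(len(x)):
        feedback = y[max(0, n - Na):n]          # previous outputs
        feedforward = x[n:min(len(x), n + Nb)]  # current and upcoming inputs
        window = (feedback or [x[n]]) + list(feedforward)
        s = sorted(window)
        y.append(s[len(s) // 2])
    return y

# Feeding back earlier outputs makes impulse rejection stronger than in
# the non-recursive case:
rmedfilt([1, 1, 9, 1, 1], 2, 2)   # [1, 1, 1, 1, 1]
```

Because the feedback samples are already filtered, the recursive structure achieves, with a short window, smoothing comparable to a much longer non-recursive median.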
See Also
wmedfilt, rwmedopt, Section 6.4 and [14].
rwmedfilt2 Purpose
Two-dimensional recursive weighted median filter.
Y = rwmedfilt2(X, W) Y = rwmedfilt2(X, W, a)
Y = rwmedfilt2(X, W) filters the data in X with the two-dimensional recursive weighted median with real-valued weights W. The weight matrix, W, contains the feedback and feedforward filter coefficients according to the following format

where the Ai,j's are the feedback coefficients, the Bi,j's are the feedforward coefficients, and (2m + 1) x (2n + 1) is the observation window size.
rwmedfilt2(X, W, a) uses a third input parameter to indicate the filtering operation to be implemented: a = 0 for low-pass operations, a = 1 for band-pass or high-pass operations.
load portrait.mat
X = imnoise(I, 'salt & pepper', .05);
W = [1 1 1; 1 4 1; 1 1 1];
Y = rwmedfilt2(X, W, 0);
imshow(X, []), figure, imshow(Y, [])
rwmedfilt2 passes a window over the image X that selects, at each window position, a set of samples to compose the observation array. The observation array for a window of size (2m+1) x (2n+1) positioned at the ith row and jth column is given by [Y(i-m:i-1, j-n:j+n); Y(i, j-n:j-1), X(i, j:j+n); X(i+1:i+m, j-n:j+n)] where Y(k,l) is the previous filter output. rwmedfilt2 calls wmedian and passes the observation array and the weights W as defined above.
See Also
wmedian, rwmedopt2, Section 6.4 and [14].
Design one-dimensional recursive weighted median filters using the fast "recursive decoupling" adaptive optimization algorithm.
[fb, ff] = rwmedopt(x, d, fbO, ffO, mu, a) [fb, ff, e, y] = rwmedopt(x, d, fbO, ff0, mu, a)
rwmedopt implements the fast "recursive decoupling" adaptive optimization algorithm for the design of recursive WM filters. The optimal recursive WM filter minimizes the mean absolute error between the observed process, x, and the desired signal, d. The input parameters are as follows.

x is the training input signal.
d is the desired signal. The algorithm assumes that both x and d are available.
fb0 is an N-component vector containing the initial values for the feedback coefficients. It is recommended to initialize the feedback coefficients to small positive random numbers (on the order of 10^-3).
ff0 is an M-component vector containing the initial values for the feedforward coefficients. It is recommended to initialize the feedforward coefficients to the values outputted by Matlab's fir1 with M taps and the same passband of interest.
mu is the step-size of the adaptive optimization algorithm. A reliable guideline is to select a step-size on the order of that required for the standard LMS algorithm.
a is the input parameter that defines the type of filtering operation to be implemented. a = 0 for low-pass applications. a = 1 for high-pass and band-pass applications.
[fb, ff] = rwmedopt(x, d, fb0, ff0, mu, a) outputs the optimal feedback and feedforward filter coefficients. [fb, ff, e, y] = rwmedopt(x, d, fb0, ff0, mu, a) also outputs the error between the desired signal and the recursive WM filter output (e), and the recursive WM filter output (y) as the training progresses.
Design a one-dimensional band-pass recursive WM filter with passband between 0.075 and 0.125 (normalized frequency, where 1.0 corresponds to half the sampling rate). Compare the performance of the designed recursive WM filter to that yielded by a linear IIR filter with the same number of taps and passband of interest.
% TRAINING STAGE
x = randn(1,700);                           % training data
lfir = fir1(121, [0.075 0.125]);            % model
d = filter(lfir, 1, x);                     % Desired signal
fb0 = 0.001*rand(1, 31);                    % Initial weights
ff0 = fir1(31, [0.075 0.125]);
mu = 0.01; a = 1;
[fb, ff] = rwmedopt(x, d, fb0, ff0, mu, a); % Training

% TESTING STAGE
Fs = 2000;
t = 0:1/Fs:2;
z = chirp(t, 0, 1, 400);                    % Test signal

% Linear IIR filter with the same passband of interest
[fbiir, ffiir] = yulewalk(30, [0 0.075 0.075 0.125 0.125 1], [0 0 1 1 0 0]);
Orwm = rwmedfilt(z, fb, ff, 1);             % RWM filter output
Oiir = filter(fbiir, ffiir, z);             % IIR filter output
figure
subplot(3,1,1); plot(z);    axis([1 1200 -1 1]);
subplot(3,1,2); plot(Oiir); axis([1 1200 -1 1]);
subplot(3,1,3); plot(Orwm); axis([1 1200 -1 1]);

% Test stage with a-stable noise
zn = z + 0.2*astable(1, length(z), 1.4);
Orwmn = rwmedfilt(zn, fb, ff, 1);           % RWM filter output
Oiirn = filter(fbiir, ffiir, zn);           % IIR filter output
figure
subplot(3,1,1); plot(zn);    axis([1 1200 -4 4]);
subplot(3,1,2); plot(Oiirn); axis([1 1200 -1.5 1.5]);
subplot(3,1,3); plot(Orwmn); axis([1 1200 -1 1]);

See Also

rwmedian, wmedopt, Section 6.4.2 and [14, 176].
Design two-dimensional recursive weighted median filters using the fast “recursive decoupling” adaptive optimization algorithm.
Wopt = rwmedopt2(X, D, W0, mu, a)
[Wopt, e, Y] = rwmedopt2(X, D, W0, mu, a)
rwmedopt2(X, D, W0, mu, a) implements the fast "recursive decoupling" optimization algorithm for the design of two-dimensional recursive weighted median filters. X is the input image used as training data, D is the desired image, W0 is an m x n matrix containing the initial values for the filter coefficients, mu is the step-size used in the adaptive algorithm and a is an input parameter that specifies the type of filtering operation on training. a = 0 for low-pass filtering operations, whereas for high-pass or band-pass filtering operations a = 1. Wopt is an m x n matrix that contains the optimal feedback and feedforward filter coefficients in accordance with the following format

feedback = Wopt(i,j) for i = 1, ..., (m-1)/2 and j = 1, ..., n, and for i = (m+1)/2 and j = 1, ..., (n-1)/2

feedforward = Wopt(i,j) for i = (m+1)/2 and j = (n+1)/2, ..., n, and for i = (m+3)/2, ..., m and j = 1, ..., n

where n and m are assumed to be odd numbers.
[Wopt, e, Y] = rwmedopt2(X, D, WO, mu, a) outputs the optimal filter weights, the error signal, e = d - Y, and the recursive WM filter output, Y, as the training progresses.
load portrait.mat
D = I;                                          % Desired image
Xn = 255*imnoise(D/255, 'salt & pepper', 0.1);  % Training data
W0 = ones(3,3); W0(2,2) = 5;                    % Initialization of filter coefficients
mu = 0.001;
Wopt = rwmedopt2(Xn(1:60,1:60), D(1:60,1:60), W0, mu, 0);
Y0 = rwmedfilt2(Xn, W0, 0);
Yopt = rwmedfilt2(Xn, Wopt, 0);
imshow([D, Xn; Y0, Yopt], []);
See Also
rwmedfilt2, rwmedopt, Section 6.4.2 and [14, 176].
Finds the closest marginal phase-coupled complex weighted median filter to a given complex-valued linear filter.
W = SSP2MPCCWM(h,u)
SSP2MPCCWM(h,u) calculates the marginal phase-coupled complex weighted median filter W that is closest in the mean square error sense to the complex-valued linear filter h according to the theory of Mallows.

Example

h = [-0.0147 + 0.0930i, -0.1044 + 0.1437i, 0.3067 - 0.1563i, 0.5725, 0.3067 + 0.1563i, -0.1044 - 0.1437i, -0.0147 - 0.0930i]

W = SSP2MPCCWM(h, u) = [-0.0055 + 0.0346i, -0.0662 + 0.0911i, 0.1857 - 0.0946i, 0.2878, 0.1857 + 0.0946i, -0.0662 - 0.0911i, -0.0055 - 0.0346i]
The algorithm divides the complex weights into magnitude and phase, normalizes the magnitudes, and runs SSP2WM with the normalized magnitudes of the weights as the input and the parameter u as the step size. The output of this algorithm is then coupled with the phases of the original linear weights to get the final median weights.
See Also
SSP2WM, Section 6.6.5 and [103].
Finds the closest weighted median filter to a given linear FIR filter.
W = SSP2WM(h,u)
SSP2WM(h,u) calculates the weighted median filter W that is closest in the mean square error sense to the linear filter h according to the theory of Mallows. The code implements an adaptive algorithm that tries to minimize the MSE between the output of the reference linear filter h and the tentative weighted median filter W. The weights of the median filter are adjusted according to the first derivative of the MSE and the step size u.
Example

h = [0.0548, 0.1881, -0.1214, -0.2714, -0.1214, 0.1881, 0.0548]

SSP2WM(h, 0.01) = [0.0544, 0.1891, -0.1223, -0.2685, -0.1223, 0.1891, 0.0544]
The weights of the median filter are updated according to the first derivative of the cost function J(W), which can be found in [1].

See Also
WM2SSP_real, WM2SSP_realfd, Section 6.2 and [103].
Calculates the trimmed mean of a data vector.
T = t-mean(X,alpha)
t-mean sorts the samples in the input vector X, then discards the highest and the lowest alpha order statistics and calculates the average of the remaining samples.
Example

x = [-2, 2, -1, 3, 6, 8]; α = 1

t-mean(x, α) = (1/4)(-1 + 2 + 3 + 6) = 2.5

See Also

w-mean, wmedian and Section 4.3.
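The trimmed mean is straightforward to sketch in Python (illustrative, not the toolbox code):

```python
def t_mean(x, alpha):
    """Trimmed mean: drop the lowest and highest alpha order statistics, then average."""
    s = sorted(x)[alpha:len(x) - alpha]
    return sum(s) / len(s)

t_mean([-2, 2, -1, 3, 6, 8], 1)   # (-1 + 2 + 3 + 6)/4 = 2.5
```

With alpha = 0 this is the sample mean, and trimming all but the central sample(s) approaches the median, so alpha tunes the estimator between the two.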
Performs vector-weighted median filtering of the data X.
Vwmedfilt(X,W) filters the vector-valued data in X using the real-valued weights in the m x n matrix W.

Vwmedfilt(X,W) passes a window over the data in X and passes the data in the window to Vwmedian to calculate the output of the filter at a given instant.
See Also
Opt-Vwmedfilt, Vwmedian and Section 6.7.2.
Performs vector-weighted median filtering of an observation vector with real-valued weights
Vwmedian(X,W,dist) filters the vector-valued data in X using the real-valued weights in the m x n matrix W. X and W should have the same size. The parameter dist is a D x D matrix (D = m x n) that initializes the values of the distances between samples; it is used to avoid recalculation of distances in Vwmedfilt and, if unknown, it should be initialized to a zero matrix.
Vwmedian(X,W,dist) calculates, for each sample, the distances to all the other samples and obtains a weighted sum of them using the weights in W. Then it compares the results and chooses the minimum weighted sum. The output of the filter is the sample corresponding to this weighted sum.
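The minimization described above — each sample scored by a weighted sum of its distances to the other samples — can be sketched with numpy (an illustrative reading of the procedure, not the toolbox code; the dist caching argument is omitted):

```python
import numpy as np

def vwmedian(X, W):
    """Vector weighted median: the sample minimizing the W-weighted
    sum of Euclidean distances to all samples."""
    X = np.asarray(X, dtype=float)
    W = np.asarray(W, dtype=float).ravel()
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    return X[np.argmin(D @ W)]

# The outlier [5, 5] is never selected; the output is one of the input vectors:
vwmedian([[0, 0], [1, 0], [0, 1], [5, 5]], [1, 2, 1, 1])
```

Because the output is always one of the observed vectors, the filter never creates colors or vector values that were not present in the input, which is the key property for multichannel (e.g., color image) filtering.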
See Also
Opt-Vwmedfilt, Vwmedfilt and Section 6.7.2.
Calculates the windsorized mean of a data vector.
w-mean sorts the samples in the vector X, then removes the lowest and highest r order statistics and replaces them with the (r + 1)st and the (N - r)th order statistics of the vector to calculate the mean.

Example

x = [-2, 2, -1, 3, 6, 8]; r = 1

w-mean(x, r) = (1/6)[2 + 3 + 2(-1 + 6)] = 2.5

See Also

t-mean, wmedian and Section 4.3.
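A Python sketch of the windsorized mean (illustrative, not the toolbox code):

```python
def w_mean(x, r):
    """Windsorized mean: replace the r smallest samples with the (r+1)st
    order statistic and the r largest with the (N-r)th, then average."""
    s = sorted(x)
    n = len(s)
    s[:r] = [s[r]] * r          # clamp the low tail
    s[n - r:] = [s[n - r - 1]] * r  # clamp the high tail
    return sum(s) / n

w_mean([-2, 2, -1, 3, 6, 8], 1)   # (-1 - 1 + 2 + 3 + 6 + 6)/6 = 2.5
```

Unlike the trimmed mean, which discards the extreme samples, windsorizing keeps the sample count fixed by clamping the extremes to their nearest retained order statistics.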
Finds the linear filter closest in the MSE sense to a given real-valued weighted median filter.
h = WM2SSP_real(W)
WM2SSP_real(W) calculates the linear filter h that is closest in the mean square error sense to the real-valued weighted median filter W according to the theory of Mallows.
Example

W = (1, -2, 3, -4, 3, -2, 1)

WM2SSP_real(W) = [17/140, -79/420, 19/70, …, 19/70, -79/420, 17/140]

The algorithm is based on the closed-form function for the sample selection probabilities developed in [1].
See Also
SSP2WM, WM2SSP_realfd, Section 6.2.2 and [103, 137].
Finds the closest linear filter to a given real-valued weighted median filter in the MSE sense and the first derivative (gradient) of the cost function
J(W) = ||P(W) - h||² = Σ_{j=1}^{N} (Pj(W) - hj)²
where W is a vector of median weights and h is a normalized linear filter.
[h,fd] = WM2SSP_realfd(W)
WM2SSP_realfd(W) calculates the linear filter h that is closest in the mean square error sense to the real-valued weighted median filter W according to the theory of Mallows. It also calculates the gradient of the cost function indicated above. This algorithm is used in the iterative calculation of the weighted median closest to a given linear filter in SSP2WM.
The algorithm is based on the closed-form function for the sample selection probabilities developed in [1].
See Also
SSP2WM, WM2SSP_real, Sections 6.2.2, 6.2.4 and [103].
One-dimensional weighted median filtering.
y = wmedfilt(x, w) y = wmedfilt(x, w, a)
y = wmedfilt(x, w) filters the data in vector x with the weighted median filter described by weight vector W. The window size is specified by the dimensions of the vector w. y = wmedfilt(x, w, a)
uses the third input argument to specify the filtering operation at hand. For low-pass filtering operation a is set to zero whereas for band-pass or high-pass filtering application a is set to one.
wmedfilt passes a window over the data x and computes, at each instant n, the weighted median value of the samples x(n-(m-1)/2), ..., x(n), ..., x(n+(m-1)/2) where m = length(w). The resultant value is the filter output at instant n. Due to the symmetric nature of the observation window, m/2 samples are appended at the beginning and at the end of the sequence x. Those samples appended at the beginning of the data have the same value as the first signal sample and those appended at the end have the same value as the last signal sample.
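The sliding-window and boundary-replication convention can be sketched for the unit-weight case (a Python illustration, not the toolbox routine; m is assumed odd):

```python
def medfilt_replicate(x, m):
    """Running median with replicated boundaries: the first and last samples
    are repeated (m-1)/2 times at each end, as described above."""
    h = (m - 1) // 2
    padded = [x[0]] * h + list(x) + [x[-1]] * h
    return [sorted(padded[i:i + m])[m // 2] for i in range(len(x))]

medfilt_replicate([1, 9, 2, 3, 8], 3)   # [1, 2, 3, 3, 8]
```

Replicating the end samples (rather than zero-padding) avoids spurious transients at the signal boundaries.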
See Also
wmedian, wmedopt, Lfilter, Section 6.1 and [6, 189].
Two-dimensional weighted median filtering.
Y = wmedfilt2(X, W, a)
Y = wmedfilt2(X, W) filters the image in X with the two-dimensional weighted median filter with real-valued weights W. Each output pixel contains the weighted median value in the m-by-n neighborhood around the corresponding pixel in the input image, where [m, n] = size(W). The third input argument is set to zero for low-pass filtering and to one for band-pass or high-pass filtering. The program appends m/2 (n/2) rows (columns) at the top (left) and bottom (right) of the input image to calculate the values in the borders.

load portrait.mat
X = imnoise(I, 'salt & pepper', .03);
W = [1 1 1; 1 4 1; 1 1 1];
Y = wmedfilt2(X, W, 0);
imshow(X, []), figure, imshow(Y, [])
wmedfilt2 uses wmedian to perform the filtering operation using an m-by-n moving window.
See Also
wmedian, wmedopt2, Section 6.1 and [13, 115].
Figure A.9 Image contaminated with salt-and-pepper noise and output of the weighted median filter.
Compute the weighted median of an observation vector.
y = wmedian(x, w, a)
wmedian(x, w, a) computes the weighted median value of the observation vector x. w is a real-valued vector of the same length as x that contains the filter weights. For a = 0 the output is one of the signed samples, whereas for a = 1 the output is the average of two signed samples. The default value for a is zero.

x = [-2, 2, -1, 3, 6, 8]; w = [0.2, 0.4, 0.6, -0.4, 0.2, 0.2];
wmedian(x, w, 0) = -1
wmedian(x, w, 1) = -1.5
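For real-valued (possibly negative) weights, the usual computation couples the sign of each weight to its sample and then thresholds the running sum of absolute weights; a Python sketch reproducing the example above (illustrative, not the toolbox code):

```python
def wmedian(x, w, a=0):
    """Weighted median with real weights: a negative weight negates its sample.
    a = 0 returns a signed sample; a = 1 averages it with the preceding one."""
    pairs = sorted((xi if wi >= 0 else -xi, abs(wi)) for xi, wi in zip(x, w))
    half = sum(aw for _, aw in pairs) / 2.0
    c = 0.0
    for i, (v, aw) in enumerate(pairs):
        c += aw
        if c >= half:              # cumulative weight reaches half the total
            if a == 0 or i == 0:
                return v
            return (v + pairs[i - 1][0]) / 2.0

x = [-2, 2, -1, 3, 6, 8]
w = [0.2, 0.4, 0.6, -0.4, 0.2, 0.2]
wmedian(x, w, 0)   # -1
wmedian(x, w, 1)   # -1.5
```

Sign coupling is what lets weighted medians realize band-pass and high-pass responses, which positive-weight medians cannot.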
See Also
wmedfilt, wmedfilt2, Section 6.1 and [6].
Design one-dimensional weighted median filters using a fast adaptive optimization algorithm.
wopt = wmedopt(x, d, w0, mu, a) [wopt, e, y] = wmedopt(x, d, w0, mu, a)
wmedopt implements the fast adaptive optimization algorithm for the design of weighted median filters. The filters are optimal in the sense that the mean absolute error between the observed process,
X, and the desired signal, d, is minimized. w0 contains the initial values for the filter coefficients. As good initial values, use the filter coefficients of a linear FIR filter designed for the same application; that is, w0 = fir1(n-1, Wn), where n is the number of filter coefficients and Wn represents the frequency of interest. See MATLAB's fir1 function for further information.
mu is the step size of the adaptive optimization algorithm. A reliable guideline to select the algorithm step-size is to select a step size on the order of that required for the standard LMS.
a = 0 for low-pass filter applications, 1 otherwise. wopt = wmedopt(x, d, w0, mu, a) returns the row vector, wopt, containing the n optimal filter coefficients. [wopt, e, y] = wmedopt(x, d, w0, mu, a) also returns the error between the desired signal and the WM filter output (e), and the WM filter output (y) as the training progresses.
Design a high-pass WM filter with cutoff frequency equal to 0.3 (normalized frequency, where 1 corresponds to half the sampling frequency), and then test its robustness against impulsive noise.
FS = 2000;                              % Sampling frequency
t = [0:1/FS:1];
x = sin(2*pi*20*t) + sin(2*pi*400*t);   % Training signal
d = sin(2*pi*400*t);                    % Desired signal
n = 40;                                 % Number of filter coefficients
mu = 0.001;                             % Step size parameter
w0 = fir1(n, 0.3, 'high');              % Initialization of coefficients
wopt = wmedopt(x, d, w0, mu, 1);        % Training stage
Owmf = wmedfilt(Ts, wopt, 1);           % WM filter output
Ofir = filter(w0, 1, Ts);               % Linear FIR filter output
% Testing stage with a-stable noise
Tn = Ts + 0.5*astable(1, length(Ts), 1.5, 0);
Owmfn = wmedfilt(Tn, wopt, 1);
Ofirn = filter(w0, 1, Tn);
See Also
wmedian, wmedopt , Section 6.3.2 and [6,200].
Design two-dimensional weighted median filters using a fast adaptive optimization algorithm.
Wopt = wmedopt2(X, D, W0, mu, a) [Wopt, e, Y] = wmedopt2(X, D, W0, mu, a)
wmedopt2(X, D, W0, mu, a) outputs the optimal two-dimensional filter coefficients, where X is the training input image, D is the desired image, W0 is a matrix containing the initial values of the weights, mu is the step size, and a describes the kind of WM filtering operation. Use a = 0 for low-pass operations and a = 1 for band-pass or high-pass operations. wmedopt2 also outputs the
difference between the training input data and the desired signal, and the filter output as the training progresses.
Xn = 255*imnoise(D/255, 'salt & pepper', 0.1);  % Training data
W0 = ones(3,3); W0(2,2) = 5;                    % Initialization of filter coefficients
[Wopt, e, Y] = wmedopt2(Xn(1:60,1:60), D(1:60,1:60), W0, 0.001, 0);
Y0 = wmedfilt2(Xn, W0, 0);
Yopt = wmedfilt2(Xn, Wopt, 0);
imshow([D, Xn, Y0, Yopt], []);
See Also
wmedopt, wmedfilt2, Section 6.3.2 and [6,200].
Figure A.70 Noiseless image, image contaminated with 10% salt-and-pepper noise, output of a weighted median filter, and output of an optimized weighted median filter.
Performs multivariate weighted median filtering of a vector-valued signal.
[y] = WMMII(X,W,V, wsize)
WMMII(X, W, V, wsize) filters the data in vector X with the multivariate weighted median filter II described by the M-dimensional vector V and the M x M matrix W. Each component of V is an N-dimensional vector. N is the window size and M is the dimension of the input samples. The window size is specified by the parameter wsize in the form [m n], where m*n = N.
A running window moves over the data in X and computes, for each component of each sample in the window, the weighted median described by the columns of W. It then computes the output of the marginal weighted median described by each component of V, applied to the outputs of the previous operation.
See Also
marginalWMMI, Opt-WMMII, Section 6.7.4 and [10].
Sharpens a grayscale image using permutation high pass median filters.
s=wmsharpener(X, N, lambda1, lambda2, L)
wmsharpener performs a linear combination of the original image and two high-pass filtered versions of it (positive edges enhanced and negative edges enhanced) to obtain a sharper version of it.
Y = wmsharpener(I, 3, 2, 2, 1)
To obtain the two high-pass filtered images, the same permutation high-pass weighted median is applied to the original image, to enhance positive edges, and to its negative, to enhance negative
edges. Once the two filtered images are obtained, they are
scaled with the coefficients lambda1 and lambda2 and added to the original image to obtain the final output.
See Also
pwmedfilt2, Example 6.1 and [10].
One-dimensional weighted myriad filter.
y = wmyrfilt(x, w, k) y = wmyrfilt(x, w, k, method)
y = wmyrfilt(x, w, k) performs the weighted myriad filtering of the data X. w is an N-component vector that contains the myriad filter coefficients and k is the linearity parameter. The observation
window size is defined by the length of the weight vector w.
The nth element of the output vector y is the weighted myriad value of the observation vector [x(n-(N-1)/2), ..., x(n), ..., x(n+(N-1)/2)]. Due to the symmetric nature of the observation window, wmyrfilt(x, w, k) pads the input sequence x with (N-1)/2 zeros at the beginning and at the end. If the fourth input parameter is used, it indicates the method used to compute the weighted myriad. method is a string that can have one of these values:
’exact’ (default) uses the exact method to compute the weighted myriad value of the observation vector. At each window position, wmyrfilt(x, w, k,’exact’) calls the wmyriad function.
’approximate’ uses the approximate method to compute the weighted myriad value of the observation vector. At each window position, wmyrfilt(x, w, k,’approximate’) calls the fwmyriad function.
If you omit the method argument, wmyrfilt uses the default value of ’exact’.
Test the robustness properties of weighted myriad filter.
t = 0:.001:0.5;
x = sin(2*pi*10*t);
xn = x + .05*astable(1, length(x), 1.5);
w = fir1(7, 0.3);
linearFIR = filter(w, 1, xn);                 % FIR filter output
wmyrE = wmyrfilt(xn, w, 0.1, 'exact');        % WMy exact output
wmyrA = wmyrfilt(xn, w, 0.1, 'approximate');  % WMy approx output
subplot(4,1,1); plot(xn);        axis([0 500 -2 4]); axis off
subplot(4,1,2); plot(linearFIR); axis([0 500 -2 4]); axis off
subplot(4,1,3); plot(wmyrE);     axis([0 500 -2 4]); axis off
subplot(4,1,4); plot(wmyrA);     axis([0 500 -2 4]); axis off
See Also
wmyropt, wmyriad, fwmyriad, Section 9.1 and [112].
Compute the weighted myriad of an observation vector.
Omyr = wmyriad(x,w,k)
wmyriad(x,w,k) computes the weighted myriad of an observation vector X. The length of the observation vector defines the observation window size, N.
w is an N-component real-valued vector that contains the coefficients of the filter. k is the linearity parameter that takes on positive values. As k goes to +infinity, wmyriad(x,w,k) reduces to a weighted
sum of the observation samples. If k tends to zero, the weighted myriad reduces to a selection filter. If k is not specified, it is set to one.
x = [3 2 4 5 8];
w = [0.15 -0.2 0.3 -0.25 0.1];
Owmf = wmyriad(x, w, 1);
Owmf = 3.3374
Given a set of observation samples x1, x2, ..., xN, real-valued weights w1, w2, ..., wN, and a real parameter k > 0, the sample weighted myriad of order k is defined as

    beta_k = arg min over beta of  sum_{i=1..N} log[ k^2 + w_i (x_i - beta)^2 ]

According to the window size, wmyriad uses different algorithms to find the value of beta that minimizes the above equation. For small window sizes, N <= 11, wmyriad treats the above expression as a polynomial function. The global minimum is found by examining all the local extrema, which are found as the roots of the derivative of the polynomial. For large window sizes, N > 11, wmyriad uses MATLAB's fmin function to find the global minimum.
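The definition above can be checked numerically. This is only an illustrative Python sketch (not the toolbox's polynomial-root or fmin algorithm), assuming nonnegative weights and using a brute-force grid search over the sample range:

```python
import math

def weighted_myriad(x, w, k, grid_points=20001):
    # Objective from the definition above: sum_i log(k^2 + w_i*(x_i - beta)^2),
    # minimized over beta by scanning a dense grid between min(x) and max(x).
    lo, hi = min(x), max(x)

    def cost(beta):
        return sum(math.log(k * k + wi * (xi - beta) ** 2)
                   for xi, wi in zip(x, w))

    candidates = (lo + (hi - lo) * i / (grid_points - 1)
                  for i in range(grid_points))
    return min(candidates, key=cost)
```

Consistent with the description above, a small k makes the output cling to a cluster of samples (selection/mode behavior), while a large k drives it toward the weighted mean (linear behavior).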
See Also
fwmyriad, fmin, Section 9.1 and [83].
Affinity function, 277 α-K curve, 323 α-stable triplet, 322, 325 AM Demodulation, 259 Asymmetric Digital Subscriber Lines (ADSL), 4 Asymptotic order, 311 Asymptotic similarity, 30 Attenuation, 156
Autocorrelation, 13,275 Band-pass gain, 225 Barrodale and Roberts’ (BR) algorithm, 134 Block-Toeplitz, 267 Boolean function linearly separable, 205 self-dual, 205 self-dual linearly separable, 205
Boolean operator, 114 Bounded-input Bounded-output (BIBO), 188 Breakdown probability, 85 Cauchy-Schwartz inequality, 254 Central Limit Theorem, 7, 10, 17,21,28,303 Characteristic function, 23 of
symmetric a-stable, 23 Chirp signal, 1, 145, 282, 296, 343, 352 Cholesky decomposition, 254 Concomitant, 97, 143, 156,159-160 Constant Modulus Algorithm (CMA), 359 Constant neighborhood, 89 Cost
function, 163,207 approximated, 168
Covariance matrix, 140 Cramér-Rao bound, 61-65, 75 Cross-channel correlation, 235 matrix, 235 Cumulants, 30 Cutoff frequencies, 156 Dirac impulse, 177, 193 Distribution Cauchy, 10,22,25,42,69,306,
319, 322,325, 346 complex Gaussian, 210 complex Laplacian, 210 contaminated Gaussian, 18 double exponential, 9, 19, 85 Gaussian, 2, 10, 19,22, 25,29-30,42, 76, 95, 255,322,324 generalized t , 324
generalized Gaussian, 9, 18-19,66,95,305 joint Gaussian, 235 LBvy, 22,25 Laplacian, 9, 17-19,30,95, 126, 255 multivariate Gaussian, 140 multivariate Laplacian, 140 nonGaussian, 140 of order
statistics, 44 stable, 9-10, 19,68, 303,324 parameter estimation of, 36 simulation of stable sequences, 29 symmetric a-stable, 23, 322 Characteristic Function of, 23
uniform, 255 Double weight, 207 Edge, 89 detection, 150 diagonal, 152 horizontal, 150 indicators, 152 negative-slope, 148 positive-slope, 148 vertical, 150 Elimination of Interference, 280 Estimate
censored, 25 1 consistent, 64 efficient, 63 instantaneous, 354 L-estimate, 252 nearly best, 256 M-Estimate, 73, 303 Maximum Likelihood, 64-65, 140,212, 304 robust, 72,251 unbiased, 62, 252 Excess
error floor, 359 Exponential distance, 277 Feed-back coefficients, 186 Feed-forward coefficients, 186 Filtering, 1 , 12 band-pass, 1,3, 145,352 high-pass, 357 low-pass, 343 optimal, 12 Filters L3 .t
Permutation, 270 Le-filters, 263 design, 265 vector formulation, 265 center affine, 280 combination, 263 FIR-Median hybrid, 286 FIR Affine L-Filters, 279 gamma, 305 hybrid mediadinear FIR, 275 linear
combination of weighted medians design, 291 linear complex-valued, 219 FIR, 12,304,347,357 IIR, 185 LUM, 104 median, 262 permutation PI?274 , permutation, 270 selection-type, 188 stack, 202-203
weighted median, 140 Wiener, 266
Fixed-point signals, 262 Fixed point formulation, 334 Fixed point iteration, 335 Fractional Lower-Order Moments(FLOMs), 30, 32,303 Frequency response, 225 Gamma filters, 305 function, 19 Generalized
t density functions, 324 model, 323 Generalized Central Limit Theorem, 10,28 Geometric power, 34 Global convergence, 335 Global minimizer, 348 Heavy tails, 303 Higher-order statistics (HOS), 30 Image
processing denoising, 306 with CWMy, 336 with recursive WM, 193 edge detection, 150 ISAR image denoising, 282 sharpening, 7, 146, 155 zooming, 100 Implicit differentiation, 355 Impulse, 89 Impulse
response, 156 Influence function, 74 Internet traffic, 3 JPEG compression, 7 Kronecker product, 264 Lagrange multipliers, 253 Learning curve, 182 Least Absolute Deviation (LAD) Regression, 124 with
weighted medians, 131 Least logarithmic deviation, 69 Least Mean Absolute (LMA), 176, 195,239,246 fast, 180, 195 for recursive Wh4, 193 Least Mean Square (LMS) algorithm, 176 Least Squares (LS), 125
Likelihood function, 65,211 Line Enhancement, 219 Linear part, 158,226 Linear regression, 124 Linear transformation, 251 Linearity parameter, 308 squared, 361 LZ correlation matrix, 267 Le vector,
264 Location estimation, 11,251 complex sample median, 210 complex sample mean, 210 FLOM, 316
INDEX in Gaussian Noise, 65 in Generalized Gaussian Noise, 66 in Stable Noise, 68 M-estimates, 73 midrange, 255 mode-myriad, 316 sample mean, 252,65,314,316 sample median, 68, 314, 316 Log likelihood
function, 64 LOMO Sequence, 91 LUM sharpener, 156 Mallows theorem, 158 theory, 158,289 MAX-MIN representation of medians, 93 Mean Absolute Error (MAE), 176, 182, 190, 314, 353 Mean Square Error
(MSE), 12,353 Medianization of linear FIR filters, 278 Method of successive approximation, 335 Mirror sample, 202 Mirrored vector, 202 Model contaminated Gaussian, 18 Gaussian, 9 Middleton's class, 18
non-Gaussian, 17 stable, 303 Moments first-order, 13 Fractional Lower-Order, 303-304 logarithmic, 33 of stack smoothers, 119 second-order, 13,304 Multi-tone signal, 181 Multinomial coefficient, 45
Myriad smoothers, 347 linear property, 310 mode property, 312 weighted, 347 Myriad mode-myriad, 312 Noise α-stable, 145, 314 characterization of, 18 contaminated Gaussian, 269 correlated
salt-and-pepper, 242 Gaussian, 61, 156 impulsive, 316,357,359 nonGaussian, 3,285 salt-and-pepper, 104, 306, 338 stable, 182, 316,357 Norm L1, 125 L2, 233 L∞, 234
Normalized asymptotic variance, 325 Normalized weighted average, 140 Optimality of the sample myriad, 322 in the α-stable model, 322 in the generalized t model, 323 Order statistics, 43,97, 251
containing outliers, 54 correlation, 49 from uniform distributions, 50 linear combination of, 251 Moments of, 48 Oscillation, 89 Parameter estimation of stable distributions, 36 Peak Signal to Noise
Ratio (PSNR), 106 Phase-coupled samples, 213-214 Phase coupling, 213 Phase Lock Loop, 343 Polyphase interpolation, 100 Positive Boolean Function (PBF), 115, 203 self-dual, 119 self dual linearly
separable, 163 Pulse Amplitude Modulation (PAM), 360 Quadratic optimization, 253 Random processes heavy-tailed, 225 impulsive, 303 nonGaussian, 7, 17 stable, 303 Random variable stable, 10,22 index
of stability, 19 location parameter, 19 scale parameter, 19 skewness parameter, 19 uniform, 256 Rank-order based nonlinear differential operator (RONDO), 154 Rank-order dependent weights, 154 Rank
indicator vector, 272 Rank permutation indicator, 273 Rank permutation vector, 274 Recurrence Relations, 52 Reduced rank indicator, 272 Replication operator, 214 Robust blind equalization, 359 Root
convergence property, 91 Root signal set, 91 Root signals, 88 Root signals, 91 Round Trip Time (RTT), 3 Row search algorithm, 291 Sample mean, 304 Sample myriad, 304 Sample Selection Probabilities
(SSPs), 158 Signed observation vector, 174
Signed samples, 142, 174, 188,233 threshold decomposition of, 175 Simplex, 163 Skewness, 303 Smoothers KOM, 304 K-nearest neighbors, 276 L-Smoothers, 258 median, 81, 259 breakdown probability, 85
root signals, 88 statistical properties, 83 myriad, 303,306 geometrical interpretation of, 316 rank-smoother, 259 recursive median, 83 statistical properties, 85 Running, 11 running mean, 11 running
median, 12, 81 sigma, 276 stack, 114 Sobel operator, 150 Spatial correlation, 235 Spectral analysis, 158 Spectral design of complex-valued weighted medians, 226 of weighted median filters, 156
Spectral profile, 156 Spectrum of a nonlinear smoother, 158 Stack filters continuous, 204 MAX-MIN representation of, 204 Stack smoothers, 114,202 continuous, 115 MAX-MIN Representation of, 116
moments, 119 optimization, 121 statistical properties, 117 under structural constraints, 121 Standard deviation, 19 Statistical error criterion, 353 Statistics fractional lower-order, 14, 32 higher
order, 14, 30 robust, 9 zero-order, 33 Steepest descent. 191,354 Tail of the distribution, 30 TCPIIP, 3 Threshold, 142, 159, 186 Threshold decomposition, 172,216 complex, 2 15 mirrored, 202 of
recursive weighted median filters, 188
property, 111 real-valued, 170,239 representation, 111 stacking constraints, 113, 203 statistical, 118 Thresholding function, 204 Time-frequency distributions discrete Wigner, 281 Wigner, 280
Transfer function, 185, 188 Trimmed-mean, 62,72, 260 double window modified, 276 modified, 275 Unbiasedness constraint, 253 Unsharp masking, 146 Vector median, 212 weighted, 233 Wavelet denoising,
267 Wavelet shrinkage, 267 Weak superposition property, 114 Weight mask, 155 Weighted median, 276 Weighted median filter, 139, 141 affine, 276 complex-valued, 210 design, 221 marginal phase coupled,
214 optimal marginal phase-coupled, 216 phase-coupled, 214 computation, 141 cost function, 143-144 cost function interpretation, 143 double, 206 recursive, 208 edge detection with, 150 for
multichannel signals, 231 image sharpening with, 146 in bandpass filtering, 145 marginal, 232 marginal multichannel median, 237 multichannel median I, 236 optimization, 238 multichannel median 11,
237 optimization, 245 multichannel structures, 235 optimal, 169 optimal high-pass, 181 permutation, 154 recursive, 185 bandpass, 195 computation, 186 equation error formulation, 190 first-order
approximation, 209 for image denoising, 193 optimization, 190 second-order approximation, 209
stability, 188 stability of, 188 stack filter representation of, 207 third-order approximation, 209 threshold decomposition of, 188 spectral design of, 156 stack filter representation of, 205 unsharp
mask, 155 Weighted median smoothers, 94, 145, 158 Center-WM, 102 classes, 162 representative of the, 162 computation, 96 permutation, 107, 110 synthesis, 162 Weighted median LAD regression with, 131
linear combination of, 286 symmetric, 293 Weighted myriad, 10, 327 Weighted myriad filter, 303,347 design, 351 fast computation of, 350
fixed point search, 351 linear property, 349 mode property, 349 myriadization, 351 objective function, 348 optimization, 353 Weighted myriad center, 336 fast computation of, 332 fixed point search,
336 myriadization, 338 of PLL, 343 objective function, 332 smoother, 325 geometrical interpretation of, 332 linear property, 329 mode property, 330 objective function, 327 outlier rejection property,
330 unbiasedness, 332 Welch method, 169,225,231,289 Windsorized mean, 72 Winsor’s principle, 19 Zero-order statistics (ZOS),33
Wave Period Calculator - Savvy Calculator
About Wave Period Calculator (Formula)
A Wave Period Calculator is a tool used in oceanography, physics, and engineering to calculate the period of a wave. The period of a wave is the time it takes for one complete cycle of the wave to
pass a given point. This calculation is crucial for understanding wave behavior, analyzing wave patterns, and designing structures that interact with waves. The formula used to calculate wave period
involves the frequency of the wave.
The formula for calculating wave period (T) based on wave frequency (f) is:
Wave Period (T) = 1 / Frequency (f)
• Wave Period (T) is the time it takes for one complete wave cycle to pass a point, typically measured in seconds.
• Frequency (f) is the number of wave cycles that pass a point per unit time, typically measured in hertz (Hz).
Using the Wave Period Calculator involves these steps:
1. Input: Enter the frequency of the wave into the calculator.
2. Calculation: The calculator applies the formula to calculate the wave period.
3. Output: The calculator displays the calculated wave period in seconds.
This tool is particularly useful for oceanographers, physicists, and engineers who study wave behavior and design structures that interact with waves, such as coastal defenses and offshore platforms.
For example, if a wave has a frequency of 2 Hz, the Wave Period Calculator will provide you with the period of the wave.
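The calculation can be sketched in a few lines of code. This is an illustrative Python helper, not part of the calculator itself:

```python
def wave_period(frequency_hz):
    # T = 1 / f; a frequency of zero or less has no defined period.
    if frequency_hz <= 0:
        raise ValueError("frequency must be positive")
    return 1.0 / frequency_hz
```

For the example above, wave_period(2) returns 0.5 seconds.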
In the fields of oceanography, fluid dynamics, and coastal engineering, understanding wave periods is crucial for analyzing wave phenomena, predicting wave behavior, and designing structures that can
withstand wave forces.
Dimensions 31681 - math word problem (31681)
The welded sheet metal tub is shaped a prism with dimensions of 6x2x2 m. How many m^3 of water can fit in it, and what is its surface?
Correct answer:
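The answer itself is not reproduced on this page, but it follows directly from the given dimensions. A hedged computation (the problem says "tub", so the open-top surface is the likely intended reading, though the fully closed prism is also shown):

```python
length, width, height = 6, 2, 2  # prism dimensions in metres

# Water capacity of the prism.
volume = length * width * height

# Open-top tub: bottom plus the four side walls.
surface_open = length * width + 2 * (length * height + width * height)

# Fully closed rectangular prism, for comparison.
surface_closed = 2 * (length * width + length * height + width * height)
```

This gives a volume of 24 m^3, an open-top surface of 44 m^2, and a closed-prism surface of 56 m^2.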
Kiloyards to Twip Converter
How to use this Kiloyards to Twip Converter
Follow these steps to convert given length from the units of Kiloyards to the units of Twip.
1. Enter the input Kiloyards value in the text field.
2. The calculator converts the given Kiloyards into Twip in real time using the conversion formula, and displays the result under the Twip label. You do not need to click any button. If the input changes, the Twip value is re-calculated, just like that.
3. You may copy the resulting Twip value using the Copy button.
4. To view a detailed step by step calculation of the conversion, click on the View Calculation button.
5. You can also reset the input by clicking on the reset button below the input field.
What is the Formula to convert Kiloyards to Twip?
The formula to convert given length from Kiloyards to Twip is:
Length[(Twip)] = Length[(Kiloyards)] / 1.9290123486052167e-8
Substitute the given value of length in kiloyards, i.e., Length[(Kiloyards)], in the above formula and simplify the right-hand side. The resulting value is the length in twip, i.e., Length[(Twip)].
Consider that a race track is 2 kiloyards long.
Convert this distance from kiloyards to Twip.
The length in kiloyards is:
Length[(Kiloyards)] = 2
The formula to convert length from kiloyards to twip is:
Length[(Twip)] = Length[(Kiloyards)] / 1.9290123486052167e-8
Substitute the given length Length[(Kiloyards)] = 2 in the above formula.
Length[(Twip)] = 2 / 1.9290123486052167e-8
Length[(Twip)] = 103679999.8427
Final Answer:
Therefore, 2 kyd is equal to 103679999.8427 twip.
The length is 103679999.8427 twip, in twip.
Consider that a golf course has a fairway measuring 1.5 kiloyards.
Convert this distance from kiloyards to Twip.
The length in kiloyards is:
Length[(Kiloyards)] = 1.5
The formula to convert length from kiloyards to twip is:
Length[(Twip)] = Length[(Kiloyards)] / 1.9290123486052167e-8
Substitute the given length Length[(Kiloyards)] = 1.5 in the above formula.
Length[(Twip)] = 1.5 / 1.9290123486052167e-8
Length[(Twip)] = 77759999.882
Final Answer:
Therefore, 1.5 kyd is equal to 77759999.882 twip.
The length is 77759999.882 twip, in twip.
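For reference, the conversion can be sketched in code. The exact factor is 51,840,000 twip per kiloyard (1000 yd x 36 in/yd x 1440 twip/in); the page's constant is a slightly rounded reciprocal of that value. This illustrative Python helper uses the page's constant so the results match the examples above:

```python
# The page's conversion constant (reciprocal of ~51,840,000 twip per kiloyard).
KYD_TO_TWIP_DIVISOR = 1.9290123486052167e-8

def kiloyards_to_twip(kyd):
    # Length[(Twip)] = Length[(Kiloyards)] / 1.9290123486052167e-8
    return kyd / KYD_TO_TWIP_DIVISOR
```

For example, kiloyards_to_twip(2) gives approximately 103679999.8427 twip, matching Example 1.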
Kiloyards to Twip Conversion Table
The following table gives some of the most used conversions from Kiloyards to Twip.
Kiloyards (kyd) Twip (twip)
0 kyd 0 twip
1 kyd 51839999.9214 twip
2 kyd 103679999.8427 twip
3 kyd 155519999.7641 twip
4 kyd 207359999.6854 twip
5 kyd 259199999.6068 twip
6 kyd 311039999.5282 twip
7 kyd 362879999.4495 twip
8 kyd 414719999.3709 twip
9 kyd 466559999.2923 twip
10 kyd 518399999.2136 twip
20 kyd 1036799998.4272 twip
50 kyd 2591999996.0681 twip
100 kyd 5183999992.1362 twip
1000 kyd 51839999921.3616 twip
10000 kyd 518399999213.616 twip
100000 kyd 5183999992136.16 twip
A kiloyard (ky) is a unit of length equal to 1,000 yards or approximately 914.4 meters.
The kiloyard is defined as one thousand yards, providing a convenient measurement for longer distances that are not as extensive as miles but larger than typical yard measurements.
Kiloyards are used in various fields to measure length and distance where a scale between yards and miles is appropriate. They offer a practical unit for certain applications, such as in land
measurement and engineering.
A twip is a unit of length used in digital typography and graphic design. One twip is equivalent to 1/20 of a point, or 1/1440 of an inch, which is about 0.0007 inches or 0.0176 mm.
The twip is defined as a very small unit of measurement, providing fine granularity for specifying small increments in digital design and layout.
Twips are used in digital typography, graphic design, and computer programming to achieve precise control over the placement and spacing of text and graphical elements. The unit allows for detailed
adjustments and fine-tuning in digital documents and layouts.
Frequently Asked Questions (FAQs)
1. What is the formula for converting Kiloyards to Twip in Length?
The formula to convert Kiloyards to Twip in Length is:
Kiloyards / 1.9290123486052167e-8
2. Is this tool free or paid?
This Length conversion tool, which converts Kiloyards to Twip, is completely free to use.
3. How do I convert Length from Kiloyards to Twip?
To convert Length from Kiloyards to Twip, you can use the following formula:
Kiloyards / 1.9290123486052167e-8
For example, if you have a value in Kiloyards, you substitute that value in place of Kiloyards in the above formula, and solve the mathematical expression to get the equivalent value in Twip.
Interpolated table lookups using SSE2 [2/2]
Continued from part 1…
So now we are ready for putting a version together in SSE2. I mentioned that we would look separately at the “floor” function, so lets start by looking at that.
SSE2 Floor function
In SSE2 there isn’t any instruction that can do float rounding. It has been added in SSE 4.1, but that is only available on a minority of computers.
The most obvious way is to do a float -> int -> float conversion. Unfortunately, if you look at the instruction timing of the “cvtps2dq” and “cvtdq2ps” it has a latency of up to 50 cycles on Pentium
Netburst (P4) and AMD K8. That means you risk having to wait 100 cycles before the result is available. Intel Core 1 and newer have much better timing for these instructions, so in most cases it is
not a huge problem.
But digging around on the internet, I found this rather nice trick, which exploits floating point precision: it pushes a 32-bit float to a magnitude where all information below the single digits is rounded off.
static const float _two_to_23_ps[4] __attribute__ ((aligned (16))) = { 0x1.0p23f, 0x1.0p23f, 0x1.0p23f, 0x1.0p23f };
/* Floor for positive numbers */
static inline __m128 _mm_floor_positive_ps( __m128 v )
__m128 two_to_23_ps = _mm_load_ps(_two_to_23_ps);
return _mm_sub_ps( _mm_add_ps( v, two_to_23_ps ), two_to_23_ps );
What it does is pretty simple: it adds a "very large number" and subtracts it right away. In C terms it does this:
float output = (input+0x1.0p23f)-0x1.0p23f;
On Core 2, it has the same timing as a conversion, but it is considerably faster on older systems, assuming you can hide the cost of the load in other code. Do note, you can NOT use this trick in C, since x87 floating point operations have greater internal precision. I have only tested this with positive numbers; I think it will have to be modified to work with negative ones too (masking out the sign and re-applying it).
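To see why the rounding mode matters here, below is an illustrative Python sketch that emulates float32 arithmetic with the standard struct module. With the default round-to-nearest mode, the big-constant trick rounds rather than floors, which is exactly why the article adjusts the rounding mode:

```python
import struct

def f32(x):
    # Round a Python float to the nearest representable float32.
    return struct.unpack('f', struct.pack('f', x))[0]

BIG = f32(2.0 ** 23)

def round_via_big_constant(x):
    # In float32, values in [2^23, 2^24) are spaced exactly 1.0 apart, so the
    # addition discards the fraction bits. Under the default round-to-nearest
    # mode this ROUNDS instead of flooring: 3.7 becomes 4.0, not 3.0.
    return f32(f32(x + BIG) - BIG)
```

Setting the FPU/SSE rounding mode to "round down" turns the same operation into a floor.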
Another thing you will have to do is adjust the rounding mode. This is pretty simple using intrinsics though:
int _mm_rounding = _MM_GET_ROUNDING_MODE();
_MM_SET_ROUNDING_MODE(_MM_ROUND_DOWN);
(... all processing code goes here ...)
_MM_SET_ROUNDING_MODE(_mm_rounding);
Implementing the curve adjustment
So lets move on to the actual implementation of the algorithm described in the first part of this article. The first thing we need is a couple of constants:
static float _twofiftysix_ps[4] __attribute__ ((aligned (16))) = {255.9999f,255.9999f,255.9999f,255.9999f};
static float _ones_ps[4] __attribute__ ((aligned (16))) = {1.0f, 1.0f, 1.0f, 1.0f};
Here is the algorithm. Input and output is “__m128 v”, which contains 4 values to look up and are values between 0 and 1.
/* Convert v to lookup values and interpolate */
int xfer[4] __attribute__ ((aligned (16)));
__m128 v_mul = _mm_mul_ps(v, _mm_load_ps(_twofiftysix_ps));
__m128i lookup = _mm_cvtps_epi32(v_mul);
_mm_store_si128((__m128i*)&xfer[0], lookup);
/* Calculate fractions */
__m128 frac = _mm_sub_ps(v_mul, _mm_floor_positive_ps(v_mul));
__m128 inv_frac = _mm_sub_ps(_mm_load_ps(_ones_ps), frac);
/* Load two adjacent curve values and interpolate between them */
__m128 p0p1 = _mm_castsi128_ps(_mm_loadl_epi64((__m128i*)&curve[xfer[0]]));
__m128 p2p3 = _mm_castsi128_ps(_mm_loadl_epi64((__m128i*)&curve[xfer[2]]));
p0p1 = _mm_loadh_pi(p0p1, (__m64*)&curve[xfer[1]]);
p2p3 = _mm_loadh_pi(p2p3, (__m64*)&curve[xfer[3]]);
/* Pack all lower values in v0, high in v1 and interpolate */
__m128 v0 = _mm_shuffle_ps(p0p1, p2p3, _MM_SHUFFLE(2,0,2,0));
__m128 v1 = _mm_shuffle_ps(p0p1, p2p3, _MM_SHUFFLE(3,1,3,1));
v = _mm_add_ps(_mm_mul_ps(inv_frac, v0), _mm_mul_ps(frac, v1));
There are several interesting points in this piece of code, but lets start by looking at the overall picture:
• For 4 pixels we have 4 lookups into the curve table, because we load the two adjacent values with a single 64 bit load.
• All the math in the “Calculate fractions” is independent from much of the code above and below, so the cost of this calculation is basically hidden by the memory lookups on superscalar CPUs.
• “shuffle_ps” (shufps) isn’t the fastest operation in the world, but I haven’t found any good alternatives in SSE2.
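For reference, here is an illustrative scalar Python version of what the four-wide SIMD code computes per sample (assuming a 257-entry curve table so the i + 1 lookup is always valid):

```python
def curve_lookup(v, curve):
    # Scalar reference for one sample: v is in [0, 1], curve holds 257 floats.
    s = v * 255.9999
    i = int(s)            # floor of s -- s is non-negative here
    frac = s - i          # fractional distance to the next table entry
    # Linear interpolation between the two adjacent curve values.
    return (1.0 - frac) * curve[i] + frac * curve[i + 1]
```

The SSE2 code simply performs this for four samples at once, with the index extraction and pair loads arranged to suit the instruction set.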
But there are still a few questions that remain unanswered.
Why are you transferring lookup values through memory?
This might seem a bit strange, but unfortunately the alternatives are quite slow. The most obvious alternative is "pextrw". This instruction has a latency of 2.5 cycles on new platforms and considerably more on Netburst/AMD K8.
All modern CPUs have a feature called "store forwarding". There are a lot of restrictions on this, but what it basically means is that if you store an aligned value, and read aligned values, the CPU
will bypass the cache and forward the value directly. So again, this method is considerably faster on older systems, but if you do a separate SSE 4.1 version you might want to use pextrw instead.
Aren’t unaligned loads of 64 bit value slow?
Yes – you are absolutely correct, and that brings us to the last optimization possibility. Right now there are two things that slow this algorithm down:
• Unaligned 64 bit loads (50% of all loads)
• Cache splits (1 in 16 loads)
The first issue is not optimal, but the second issue is pretty bad – the penalty is more than 20 cycles on Intel CPUs. But luckily we can fix the issue by re-ordering our data.
The basic principle is that you duplicate each pair of values, so you create a table with the following layout:
new_curve[0] = curve[0]; /* first pair */
new_curve[1] = curve[1];
new_curve[2] = curve[1]; /* second pair */
new_curve[3] = curve[2];
new_curve[4] = curve[2]; /* third pair */
Obviously this doubles the size of the lookup table, but for this example we are still well within level 1 cache size. The “new_curve” array must be 8 byte aligned to get the desired effect. So we
modify the lookups to this:
/* Load two adjacent curve values and interpolate between them */
__m128 p0p1 = _mm_castsi128_ps(_mm_loadl_epi64((__m128i*)&new_curve[xfer[0]*2]));
__m128 p2p3 = _mm_castsi128_ps(_mm_loadl_epi64((__m128i*)&new_curve[xfer[2]*2]));
p0p1 = _mm_loadh_pi(p0p1, (__m64*)&new_curve[xfer[1]*2]);
p2p3 = _mm_loadh_pi(p2p3, (__m64*)&new_curve[xfer[3]*2]);
This solves both our problems, since all our loads are aligned and we don’t read across cache line boundaries.
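The re-ordered table can be generated up front. This is an illustrative Python sketch of the duplication scheme described above:

```python
def duplicate_pairs(curve):
    # new_curve[2*i] and new_curve[2*i + 1] hold the pair (curve[i], curve[i+1]),
    # so every 64-bit pair load starts at an 8-byte-aligned offset and never
    # straddles a cache line.
    out = []
    for i in range(len(curve) - 1):
        out.extend((curve[i], curve[i + 1]))
    return out
```

For a 257-entry curve this produces 512 floats (2 KB), still comfortably inside the level 1 cache.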
“Preheating” lookup tables
A final issue when dealing with lookup tables is to preheat – or pre-load the values before you start using them. If the table values are not in cache, you will get a massive penalty when you hit a
value that hasn’t been used before.
Lookup tables are not accessed in a linear fashion, so the hardware prefetcher will not be able to predict which values you will be needing next. Therefore you are quite certain that the value will
be read from memory, which gives a penalty of more than 100 cycles. To avoid this you simply read all the values in the table in a linear fashion with a simple loop:
float unused = 0;
const int cache_line_bytes = 64;
/* Preloads cache with lookup data */
for (int i = 0; i < 514; i+=(cache_line_bytes/sizeof(float)))
unused += new_curve[i];
Be sure your compiler doesn't optimize out your load because it thinks you won't be using the value. The cache line size is 64 bytes on most modern machines, and as far as I recall no SSE2 machines have a cache line size less than 64 bytes.
This optimization is also one that can be used in your C-code, but you might want to use a cacheline size of 32 bytes to accommodate older machines like Pentium 3 and AMD K7.
Conclusion & More Information
So by now we have managed to turn a relatively simple operation into something quite complex, but it should also be faster, yield better quality, and fit within an SSE2 render chain. Here are some places to get more information:
By popular demand, a quick hack that modifies the pyVCP meter widget to have two independent needles. It's used inside the <meter> tag by specifying <halpin2>"my2ndpin"</halpin2> and hooking up
something to that pin. If <halpin2> is not used meter works as before, showing only one needle.
There's an XML file for this test-panel, a short HAL-file that hooks up the pins, and a shell script to run it all here: pyvcp_dual-needle-test
The modifications to linuxcnc source required are in lib/python/pyvcp_widgets.py: 0002-dual-needle-meter-use-with-halpin2-meter2-halpin2.patch
NOTE: This is a quick hack to make it work - don't take my code/patch too seriously...
Real-Time Tuning
I tried a number of things that are supposed to improve real-time performance, as described in this forum post.
But not much changed. This series of jitter-histograms shows little or no changes:
The things I tried are roughly
1. measure first latency histogram 0.png
2. uninstall the package irqbalance using synaptic. reboot.
3. measure 1.png
4. in /etc/default/grub modify GRUB_CMDLINE_LINUX_DEFAULT="isolcpus=1 acpi_irq_nobalance noirqbalance" (Aside: why are the files in /etc/grub.d/ made so incredibly hard to read? Someone should
re-write them in Python!). Run sudo update-grub. reboot.
5. measure 2.png
6. Add irq-affinity.conf to /etc/init/
7. Add set-irq-affinity and watchirqs to /usr/local/sbin. reboot
8. measure 3.png
9. Try to tweak BIOS settings. Turn off power-saving features, etc.
10. measure 4.png
The output of watchirqs looks like this:
The scripts mentioned above: irqstuff
Temperature PID control - Part Deux
Update: this version of the component may compile on 10.04LTS without errors/warnings: frequency2temperature.comp (thanks to jepler!)
There's been some interest in my 2-wire temperature PID control from 2010. It uses one parallel port pin for a PWM-heater, and another connected to a 555-timer for temperature measurement. I didn't
document the circuits very well, but they should be simple to reproduce for someone with an electronics background.
Here's the HAL setup once again:
The idea is to count the 555 output-frequency with an encoder, compare this to a set-point value from the user, and use a pid component to drive a pwm-generator that drives the heater.
Now it might be nicer to set the temperature in degrees C instead of a frequency. I've hacked together a new component called frequency2temperature that can be inserted after the encoder. This
obviously required the thermistor B-parameters as well as the 555-astable circuit component values as input (these are hard-coded constants in frequency2temperature.comp). Like this:
I didn't have the actual circuits and extruder at hand when coding this. So instead I made a simulated extruder (sim_extruder) component and generated simulated 555-output. Like this:
This also requires a conversion in the reverse direction called temperature2frequency. A stepgen is then used to generate a pulse-train (simulating the 555-output).
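To make the two conversions concrete, here is a standalone Python sketch. All component values and constants below are illustrative assumptions of mine, not the ones hard-coded in frequency2temperature.comp; it assumes the usual 555 astable relation f = 1.44/((R1 + 2*R2)*C) with the thermistor as R2, and the B-parameter equation for the thermistor.

```python
import math

# Illustrative values only -- NOT the constants hard-coded in the component
R1 = 10e3    # fixed 555 resistor (ohm)
C = 100e-9   # timing capacitor (F)
B = 3950.0   # thermistor B parameter (K)
R0 = 100e3   # thermistor resistance at the reference temperature (ohm)
T0 = 298.15  # reference temperature: 25 C in kelvin

def frequency2temperature(f):
    """555 astable: f = 1.44 / ((R1 + 2*R2)*C), with the thermistor as R2."""
    r2 = (1.44 / (f * C) - R1) / 2.0                   # recover thermistor resistance
    kelvin = 1.0 / (1.0 / T0 + math.log(r2 / R0) / B)  # B-parameter equation
    return kelvin - 273.15

def temperature2frequency(t_celsius):
    """Reverse conversion, as used by the simulated extruder."""
    kelvin = t_celsius + 273.15
    r2 = R0 * math.exp(B * (1.0 / kelvin - 1.0 / T0))
    return 1.44 / ((R1 + 2.0 * r2) * C)

for t in (25.0, 100.0, 200.0):
    f = temperature2frequency(t)
    print("%6.1f C -> %8.1f Hz -> %6.1f C" % (t, f, frequency2temperature(f)))
```

The two functions are exact inverses of each other, so the round trip recovers the original temperature; as the thermistor resistance drops with temperature, the 555 frequency rises.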
"heartyGFX" has made some progress on this. He has a proper circuit diagram for the PWM-heater and 555-astable. His circuits look much nicer than mine!
The diagrams above were drawn with Inkscape in SVG format: temp_pid_control_svg_diagrams
Why Real-Time?
Why bother with these real-time kernels and APIs at all? Isn't timing on a modern PC good enough? Look at this:
This histogram shows latency-numbers from the same 1ms thread test run compiled without (red) and with (green) real-time enabled. All the green real-time data clusters around zero +/- 20us. Without
real-time enabled the event we are expecting to happen every 1 ms might happen almost 1 ms too early, or up to 3 ms late. With real-time the timing is mostly consistent to better than 1% (10 us) with
a worst-case jitter of 2% (20 us).
Latency Histogram
This shows a latency-histogram for a 1 ms thread running on Xenomai on my recently acquired ITX-board. Note how badly the histogram is approximated by a normal distribution (Gaussians look like
parabolas with logarithmic y-scale!). See also Michael's recent RPi data and Kent's Athlon/P4 data.
The usual latency-test number people report is the maximum latency, a measure of how far out to the left or right the most distant single data point lies. The histogram can probably be used to extract many more numbers, but for real-time-critical applications like cnc-machine control the maximum latency is probably an OK figure of merit.
The latency numbers are recorded with a simple HAL component:lhisto.comp
The instantaneous latency-number is then put in a FIFO by the real-time component sampler and written to a text-file using halsampler. I'm setting this up with the following HAL commands (put this in
a file myfile.halrun and run with "halrun -f myfile.halrun")
loadrt threads name1=servo period1=1000000
loadrt sampler depth=1000 cfg=S
loadrt lhisto names=shisto
addf shisto servo
addf sampler.0 servo
net latency shisto.latency sampler.0.pin.0
loadusr halsampler -c 0 latencysamples.txt
The numbers can now be plotted with matplotlib. I'm using the following script:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
# load data from file
x = np.loadtxt('latencysamples.txt')
x = x/1e3 # convert to microseconds

fig = plt.figure()
ax = fig.add_subplot(111)
nbins = len(x)/1000
n, bins, patches = ax.hist(x, nbins, facecolor='green', alpha=0.5, log=True)
bincenters = 0.5*(bins[1:]+bins[:-1]) # from matplotlib example code
mu = np.mean(x)
sigma = np.std(x)
area = np.trapz(n, bincenters) # scale normpdf to have the same area as the dataset
y = area * mlab.normpdf(bincenters, mu, sigma)
l = ax.plot(bincenters, y, 'r--', linewidth=1) # add a 'best fit' line for the normal PDF

ax.set_xlabel('Latency ( $ \mathrm{ \mu s } $ ) ')
ax.set_ylabel('Counts')
ax.set_title('Latency Histogram\n 12.04LTS + 3.2.21-xenomai+')
ax.set_ylim(1e-1, 10*max(n))
ax.grid(True)
plt.show()
LinuxCNC on Ubuntu 12.04LTS
Recent developments have made it possible to run LinuxCNC on the latest LTS release of Ubuntu. This is experimental work, so it is not recommended for controlling a real machine just yet. The main obstacle
for moving LinuxCNC from 10.04LTS to a more recent distribution has been the RTAI real-time kernel, which has not been kept up-to-date with development of the normal Linux kernel. Fortunately there
are alternatives such as Xenomai or RT_PREEMPT.
Here is a step-by-step description of the install/build process, if you want to experiment with this.
1. Download and install a normal 32-bit 12.04LTS Ubuntu (ubuntu-12.04.1-desktop-i386.iso). Note that the 64-bit version is not supported for the steps that follow further down. I could not get
Ubuntu's startup-disk-creator to work, so I used unetbootin to write the ISO-file to a USB-stick.
2. It's possible to compile the xenomai-kernel from scratch, along with the runtime etc., but I used pre-compiled deb-packages by Michael Haberler from here: http://static.mah.priv.at/public/
3. Install the xenomai kernel:
sudo dpkg -i linux-headers-3.2.21-xenomai+_0.1_i386.deb
sudo dpkg -i linux-image-3.2.21-xenomai+_0.1_i386.deb
4. make sure it will show up as a GRUB-entry when booting:
sudo update-initramfs -c -k 3.2.21-xenomai+
sudo update-grub
5. reboot. uname -r should now show: 3.2.21-xenomai+
6. now install the xenomai runtime:
sudo dpkg -i libxenomai1_2.6.1_i386.deb
sudo dpkg -i libxenomai-dev_2.6.1_i386.deb
sudo dpkg -i xenomai-runtime_2.6.1_i386.deb
This installs the xenomai system, on top of which a recently available version of LinuxCNC can be built. There are probably many ways to obtain the required tools and dependencies; I used the following:
1. sudo apt-get install synaptic
sudo apt-get install git
2. Now using synaptic, install the following packages (I found these are required for a minimal linuxcnc build):
3. Get Michael's version of LinuxCNC that can be compiled for Xenomai:
git clone git://git.mah.priv.at/emc2-dev emc2-dev
cd emc2-dev
git branch --track rtos origin/rtos-integration-preview1
git checkout rtos
4. Configure and build for Xenomai:
cd src
./configure --with-threads=xenomai-user --enable-run-in-place
sudo make setuid
5. Test:
. ./scripts/rip-environment
This new version of LinuxCNC can be built without a real-time kernel (previously called "simulator" or "sim") or with any of the real-time kernel alternatives: RTAI, Xenomai, RT_PREEMPT. It should be
possible to compare real-time performance in the form of latency-numbers with different hardware and kernels.
EMC2 Filters
I hacked together a few python-scripts that can be run as "filters" in EMC2. They are opened/run from AXIS and produce G-code into EMC2.
The first one is ttt2ngc which simply demonstrates my C++ port of Chris Radek's truetype-tracer. The original code is a rather monolithic C-program while my C++ port is divided into smaller files and
offers python-bindings and more options (for example arc, cubic, conic output can be turned on/off independently).
The second script is ttt2offset which takes ttt-geometry, builds a VD, and produces offsets. By reversing the list of points from ttt, either inward or outward offsets can be produced. Currently
the toolpaths are machined in the order they are produced, i.e. in order of increasing offset value. An improvement would be to order the loops so that for e.g. pocketing the innermost loop is
machined first, and rapid-traverses are minimized.
The third script is ttt2medial. Here the VD is filtered down to an (approximate) medial-axis, and the edges of the medial axis are chained together into a toolpath. The chaining-algorithm could
probably be improved much, again to minimize rapid-traverses.
If this is run with a V-shaped cutter with a 90-degree angle we can push the cutter into the material by an amount equal to the clearance-disk radius of the edge. This is a "V-carving" toolpath which
should produce a cut-out very similar to the outline of the font. For added effect choose a material with contrasting surface and interior colors.
It would be interesting to know if this v-carving g-code is anywhere near to correct. If someone has a cutting-simulator, or is adventurous enough to run this on an actual machine, I'd be very
interested in the results! (here is the g-code: emc2_vcarve.ngc)
Here is a metric version. The max depth is around -3mm, so a 10mm diameter 90-degree V-cutter should be OK. The text should be roughly 100mm long: emc2_vcarve_mm_ver2.ngc
Disclaimer: This is experimental code. Warnings, Errors, and Segfaults are common.
A/B Quadrature from EMC2
By popular demand a simple example of how to modify the stepper_mm sample configuration to output phase-A/phase-B quadrature signals (stepgen type=2).
In core_stepper.hal we specify step type 2, and re-name/wire the stepgen output:
loadrt stepgen step_type=2,2,2
net XA <= stepgen.0.phase-A
net XB <= stepgen.0.phase-B
net YA <= stepgen.1.phase-A
net YB <= stepgen.1.phase-B
net ZA <= stepgen.2.phase-A
net ZB <= stepgen.2.phase-B
Then in standard_pinout.hal we wire the phases to the parport:
net XA => parport.0.pin-03-out
net XB => parport.0.pin-02-out
net YA => parport.0.pin-05-out
net YB => parport.0.pin-04-out
net ZA => parport.0.pin-07-out
net ZB => parport.0.pin-06-out
Since I have neither a parport nor an oscilloscope at hand right now I'm using some pyvcp LEDs to look at the A/B signals. These are set up with two changes to the INI-file:
PYVCP = phaseleds.xml
POSTGUI_HALFILE = pyvcp_phaseleds.hal
The files I'm using are here: phaseleds.tar
Now it is possible to look at the blinking of the LEDs when the machine moves and see the 90-degree out-of-phase square waveform (see also image here).
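For reference, the signal a type-2 stepgen produces steps through a 2-bit Gray code. This little model is mine (not the stepgen source), but it shows the state sequence the LEDs blink through:

```python
# The four A/B states form a 2-bit Gray code: exactly one phase toggles
# per step, and reversing direction simply walks the cycle backwards.
FORWARD = [(0, 0), (0, 1), (1, 1), (1, 0)]

def step(state, direction):
    """Next (A, B) state; direction is +1 (forward) or -1 (reverse)."""
    return FORWARD[(FORWARD.index(state) + direction) % 4]

state = (0, 0)
for _ in range(4):
    state = step(state, +1)
    print(state)  # cycles through (0, 1), (1, 1), (1, 0), (0, 0)
```

Because only one phase changes per transition, the two square waves end up 90 degrees out of phase, which is exactly what the LEDs show.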
EMC2 simulator build on Ubuntu 11.10
I thought I would build EMC2-simulator on 64-bit Ubuntu 11.10 following the instructions from the wiki. To get the source and dependencies:
$ git clone git://git.linuxcnc.org/git/emc2.git emc2-dev
$ cd emc2-dev
$ cd debian
$ ./configure sim
$ cd ..
$ dpkg-checkbuilddeps
Then install all the required packages with "sudo apt-get install". dpkg-checkbuilddeps suggests installing tk8.4 and tcl8.4, but I found that in order to get the configure script to run without errors I needed tk8.5, tk8.5-dev, tcl8.5 and tcl8.5-dev, and I removed all the 8.4 packages of tk and tcl. That makes configure run without errors. Then try building:
$ cd src
$ ./autogen.sh
$ ./configure --enable-simulator
$ make
However that produces a number of linking errors. Don't ask me exactly why but this patch: 0001-changes-to-make-sim-build-on-ubuntu-11.10.patch.tar (updated corrected version!) seems to fix things,
and I get emc2 sim built and running. Just in case anyone else wants to build on 64-bit Ubuntu 11.10.
EMC2 tpRunCycle revisited
I started this EMC2 wiki page in 2006 when trying to understand how trajectory control is done in EMC2. Improving the trajectory controller is a topic that comes up on the EMC2 discussion list every
now and then. The problem is just that almost nobody actually takes the time and effort to understand how the trajectory planner works and documents it...
A recent post on the dev-list has asked why the math I wrote down in 2006 isn't what's in the code, so here we go:
I will use the same shorthand symbols as used on the wiki page. We are at coordinate P ("progress"), we want to get to T ("target"), we are currently travelling at vc ("current velocity"), the next velocity suggestion we want to calculate is vs ("suggested velocity"), the maximum allowed acceleration is am ("max accel"), and the move takes tm ("move time") to complete. The cycle time is ts ("sampling time"). The new addition compared to my 2006 notes is that now the current velocity vc as well as the cycle time ts is taken into account.
As before, the area under the velocity curve is the distance we will travel, and that needs to be equal to the distance we have left, i.e. (T-P). (now trying new latex-plugin for math:) (EQ1)

$T - P = \frac{1}{2} v_c t_s + \frac{1}{2} v_s (t_s + t_m)$

Note how the first term is the area of the red triangle and the second term is the area of the green triangle. Now we want to calculate a new suggested velocity vs so that using the maximum deceleration our move will come to a halt at time tm, so (EQ2):

$v_s = a_m t_m$

Inserting this into the first equation gives (EQ3):

$T - P = \frac{1}{2} v_c t_s + \frac{1}{2} a_m t_m (t_s + t_m)$

which leads to a quadratic equation in tm (EQ4):

$t_m^2 + t_s t_m + \frac{v_c t_s - 2(T-P)}{a_m} = 0$

with the solution (we obviously want the plus sign) (EQ5):

$t_m = -\frac{t_s}{2} + \sqrt{\frac{t_s^2}{4} - \frac{v_c t_s - 2(T-P)}{a_m}}$

which we can insert back into EQ2 to get (EQ6) the new suggested max velocity (we obviously apply the accel and velocity clamps to this as noted before):

$v_s = -\frac{1}{2} a_m t_s + a_m \sqrt{\frac{t_s^2}{4} - \frac{2}{a_m}\left(\frac{1}{2} v_c t_s - (T-P)\right)}$
It is left as an (easy) exercise for the reader to show that this is equivalent to the code below (note how those silly programmers save memory by first using the variable discr for a distance with
units of length and then on the next line using it for something else which has units of time squared):
discr = 0.5 * tc->cycle_time * tc->currentvel - (tc->target - tc->progress);
discr = 0.25 * pmSq(tc->cycle_time) - 2.0 / tc->maxaccel * discr;
newvel = maxnewvel = -0.5 * tc->maxaccel * tc->cycle_time +tc->maxaccel * pmSqrt(discr);
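As a quick numeric sanity check (my own Python, not part of LinuxCNC), we can transliterate the snippet and verify that the suggested velocity really makes the area under the velocity profile equal the distance we have left:

```python
import math

def suggested_velocity(target, progress, currentvel, maxaccel, cycle_time):
    """Direct transliteration of the tpRunCycle snippet quoted above."""
    discr = 0.5 * cycle_time * currentvel - (target - progress)
    discr = 0.25 * cycle_time ** 2 - 2.0 / maxaccel * discr
    return -0.5 * maxaccel * cycle_time + maxaccel * math.sqrt(discr)

# Arbitrary (assumed) test values: 6 units left, vc = 1.5, am = 2, ts = 1 ms
T, P, vc, am, ts = 10.0, 4.0, 1.5, 2.0, 0.001
vs = suggested_velocity(T, P, vc, am, ts)
tm = vs / am  # halting time when decelerating from vs at the maximum rate

# The area under the velocity profile should equal the remaining distance T - P
area = 0.5 * vc * ts + 0.5 * vs * (ts + tm)
print(vs, area - (T - P))  # the difference is zero up to rounding
```

The identity holds exactly in exact arithmetic, so any residual here is floating-point rounding.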
over and out.
cunmr2.f - Linux Manuals (3)
subroutine cunmr2 (SIDE, TRANS, M, N, K, A, LDA, TAU, C, LDC, WORK, INFO)
CUNMR2 multiplies a general matrix by the unitary matrix from a RQ factorization determined by cgerqf (unblocked algorithm).
Function/Subroutine Documentation
subroutine cunmr2 (character SIDE, character TRANS, integer M, integer N, integer K, complex, dimension( lda, * ) A, integer LDA, complex, dimension( * ) TAU, complex, dimension( ldc, * ) C, integer LDC, complex, dimension( * ) WORK, integer INFO)
CUNMR2 multiplies a general matrix by the unitary matrix from a RQ factorization determined by cgerqf (unblocked algorithm).
CUNMR2 overwrites the general complex m-by-n matrix C with
Q * C if SIDE = 'L' and TRANS = 'N', or
Q**H* C if SIDE = 'L' and TRANS = 'C', or
C * Q if SIDE = 'R' and TRANS = 'N', or
C * Q**H if SIDE = 'R' and TRANS = 'C',
where Q is a complex unitary matrix defined as the product of k
elementary reflectors
Q = H(1)**H H(2)**H . . . H(k)**H
as returned by CGERQF. Q is of order m if SIDE = 'L' and of order n
if SIDE = 'R'.
SIDE is CHARACTER*1
= 'L': apply Q or Q**H from the Left
= 'R': apply Q or Q**H from the Right
TRANS is CHARACTER*1
= 'N': apply Q (No transpose)
= 'C': apply Q**H (Conjugate transpose)
M is INTEGER
The number of rows of the matrix C. M >= 0.
N is INTEGER
The number of columns of the matrix C. N >= 0.
K is INTEGER
The number of elementary reflectors whose product defines
the matrix Q.
If SIDE = 'L', M >= K >= 0;
if SIDE = 'R', N >= K >= 0.
A is COMPLEX array, dimension
(LDA,M) if SIDE = 'L',
(LDA,N) if SIDE = 'R'
The i-th row must contain the vector which defines the
elementary reflector H(i), for i = 1,2,...,k, as returned by
CGERQF in the last k rows of its array argument A.
A is modified by the routine but restored on exit.
LDA is INTEGER
The leading dimension of the array A. LDA >= max(1,K).
TAU is COMPLEX array, dimension (K)
TAU(i) must contain the scalar factor of the elementary
reflector H(i), as returned by CGERQF.
C is COMPLEX array, dimension (LDC,N)
On entry, the m-by-n matrix C.
On exit, C is overwritten by Q*C or Q**H*C or C*Q**H or C*Q.
LDC is INTEGER
The leading dimension of the array C. LDC >= max(1,M).
WORK is COMPLEX array, dimension
(N) if SIDE = 'L',
(M) if SIDE = 'R'
INFO is INTEGER
= 0: successful exit
< 0: if INFO = -i, the i-th argument had an illegal value
Univ. of Tennessee
Univ. of California Berkeley
Univ. of Colorado Denver
NAG Ltd.
September 2012
Definition at line 159 of file cunmr2.f.
Generated automatically by Doxygen for LAPACK from the source code.
When you use a SUM statement and a BY statement with one BY variable, PROC PRINT sums the SUM variables for each BY group that contains more than one observation and totals them over all BY groups. (See Summing Numeric Variables with One BY Group.)
When you use a SUM statement and a BY statement with multiple BY variables, PROC PRINT sums the SUM variables for each BY group that contains more than one observation, just as it does if you use only one BY variable. However, it provides sums only for those BY variables whose values change when the BY group changes. (See Summing Numeric Variables with Multiple BY Variables.)
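For illustration only, here is the same bookkeeping for the one-BY-variable case in Python with itertools.groupby (not SAS, and the data is made up): a subtotal appears only for BY groups with more than one observation, plus a grand total over all groups.

```python
from itertools import groupby

# (BY value, amount) observations, already sorted by the BY variable,
# just as PROC PRINT requires sorted input
rows = [("A", 10), ("A", 15), ("B", 7), ("C", 2), ("C", 3), ("C", 5)]

group_sums = {}
for key, group in groupby(rows, key=lambda row: row[0]):
    values = [amount for _, amount in group]
    if len(values) > 1:  # subtotal only for groups with more than one observation
        group_sums[key] = sum(values)

grand_total = sum(amount for _, amount in rows)
print(group_sums)   # {'A': 25, 'C': 10} -- group "B" has a single observation
print(grand_total)  # 42
```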
CONSTRUCTION CALCULATORS - Civil Material Calculator
Calculate quantity of Cement, Sand and Aggregates in Concrete for your project
• Calculate quantities for any Grade of Concrete.
• Calculate quantities for any shape of casting.
• Click Below to Use the Concrete calculator.
Calculate quantity of Cement, Sand and Bricks in walls for your brick work.
• Calculate quantities for any mix ratios.
• Click Below to Use the Brick calculator.
Calculate quantity of Cement, Sand in plaster for your project
• Calculate quantities for any mix ratios of plaster.
• Click Below to Use the Plaster calculator.
Calculate number of tiles required for your room
• Calculates the exact number of tiles required.
• Click Below to Use the Tile calculator.
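As a sketch of the arithmetic such a concrete calculator typically performs for a nominal mix (the constants are conventional rule-of-thumb values I am assuming, not taken from this site: a 1.54 dry-volume factor, cement bulk density of 1440 kg/m3, and 50 kg bags):

```python
def concrete_quantities(wet_volume_m3, mix=(1, 2, 4)):
    """Nominal-mix estimate, e.g. 1:2:4 (cement : sand : aggregate).

    Assumed rule-of-thumb constants (not from the calculator itself):
      * dry volume = wet volume * 1.54
      * cement bulk density = 1440 kg/m3, bagged at 50 kg
    """
    dry = wet_volume_m3 * 1.54
    total_parts = sum(mix)
    cement_m3 = dry * mix[0] / total_parts
    sand_m3 = dry * mix[1] / total_parts
    agg_m3 = dry * mix[2] / total_parts
    cement_bags = cement_m3 * 1440.0 / 50.0
    return cement_bags, sand_m3, agg_m3

bags, sand, agg = concrete_quantities(1.0)
print("%.2f bags cement, %.2f m3 sand, %.2f m3 aggregate" % (bags, sand, agg))
```

For 1 m3 of 1:2:4 concrete this gives roughly 6.3 bags of cement, 0.44 m3 of sand and 0.88 m3 of aggregate, which matches the usual hand calculation.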
Question (from Asksia.ai): If water vapor at 60% RH, a temperature of 23 °C, and a pressure of 1 atm is cooled to 10 °C, will it condense?
Answer from Sia
Understand Relative Humidity (RH): Relative humidity is the ratio of the partial pressure of water vapor in the air to the equilibrium vapor pressure of water at a given temperature.
Determine the Dew Point: The dew point is the temperature at which air becomes saturated with water vapor and water begins to condense. If the air is cooled below this temperature, condensation will occur.
Use the RH to Find the Dew Point: Since the air has a relative humidity of 60% at 23 °C, we need to find the dew point for these conditions. If the dew point is above 10 °C, then cooling to 10 °C will cause condensation.
Consult Psychrometric Charts or Formulas: To find the exact dew point, one would typically use a psychrometric chart or formulas that relate temperature, pressure, and relative humidity. These tools give the temperature at which the dew point is reached for a given relative humidity.
Conclusion Based on the Dew Point: If the dew point calculated from the psychrometric data is above 10 °C, then cooling the air to 10 °C will result in condensation; if it is below, it will not.
Without the exact psychrometric data or formulas, we cannot determine the exact dew point and thus cannot definitively say whether condensation will occur. However, if the dew point is above 10 °C, then condensation will occur when the air is cooled to this temperature.
Key Concept
Dew Point and Condensation
The dew point is the critical temperature below which water vapor begins to condense into liquid water. If the air is cooled below its dew point, condensation occurs.
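If one replaces the psychrometric chart with the Magnus approximation (my substitution, not part of the original answer; the constants a = 17.27 and b = 237.7 °C are one common parameterization), the dew point can be estimated directly. For 23 °C at 60% RH it comes out near 15 °C, which is above 10 °C, so condensation would be expected:

```python
import math

def dew_point(temp_c, rh_percent):
    """Magnus approximation of the dew point in deg C.
    The constants a, b are one common parameter choice, valid roughly 0-60 C."""
    a, b = 17.27, 237.7
    gamma = math.log(rh_percent / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

td = dew_point(23.0, 60.0)
print("dew point: %.1f C" % td)  # about 14.8 C, so cooling to 10 C condenses
```

At 100% RH the formula returns the air temperature itself, a quick consistency check on the approximation.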
This concept defines the representation of an envelope diagram of a set of planar curves. The envelope diagram is a subdivision of the \( x\)-axis into 0-dimensional cells (vertices) and 1-dimensional cells (edges), such that the identity of the curves that induce the lower envelope (or the upper envelope) over each cell is fixed.
A vertex in an envelope diagram is therefore associated with a point on the envelope, and corresponds to either a curve endpoint or to an intersection point of two (or more) curves. Therefore each
vertex is associated with a set of \( x\)-monotone curves that induce the envelope over this point. Each vertex is incident to two edges, one lying to its left and the other to its right.
An edge in the envelope diagram represents a continuous portion of the \( x\)-axis, and is associated with a (possibly empty) set of curves that induce the envelope over this portion of the \( x\)-axis. An edge may be bounded by two vertices, one to its left and the other to its right. However, the diagram contains two unbounded edges: its leftmost edge, representing the interval \( (-\infty, x_l)\), and its rightmost edge, representing the interval \( (x_r, \infty)\), where \( x_l\) and \( x_r\) are the \( x\)-coordinates of the leftmost and the rightmost vertices in the diagram, respectively. Note that a diagram may contain no vertices at all, in which case it comprises a single edge.
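As a toy illustration of this structure (plain Python over straight lines rather than CGAL curves, with made-up names), a lower envelope of lines y = m*x + c can be computed and reported as edges over x-intervals, with the breakpoints playing the role of the diagram's vertices:

```python
def lower_envelope(lines):
    """Toy lower envelope of non-vertical lines y = m*x + c.
    Returns a list of edges ((m, c), x_left, x_right) covering the x-axis."""
    def cross(l1, l2):
        # x-coordinate where two non-parallel lines intersect
        (m1, c1), (m2, c2) = l1, l2
        return (c2 - c1) / (m1 - m2)

    # On a lower envelope the active slope decreases from left to right,
    # so process lines by decreasing slope (ties: keep the lower offset).
    hull = []
    for line in sorted(set(lines), key=lambda mc: (-mc[0], mc[1])):
        if hull and hull[-1][0] == line[0]:
            continue  # parallel to the current top and not lower: never lowest
        while len(hull) >= 2 and cross(hull[-2], line) <= cross(hull[-2], hull[-1]):
            hull.pop()  # the old top is nowhere strictly lowest: discard it
        hull.append(line)

    edges = []
    for i, line in enumerate(hull):
        xl = float("-inf") if i == 0 else cross(hull[i - 1], line)
        xr = float("inf") if i == len(hull) - 1 else cross(line, hull[i + 1])
        edges.append((line, xl, xr))
    return edges

# Three lines give three edges, with vertices at x = -1 and x = 1
for edge in lower_envelope([(1, 0), (-1, 0), (0, -1)]):
    print(edge)
```

A single input line reproduces the degenerate case mentioned above: a diagram with no vertices, comprising one unbounded edge.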
Note that any model of the EnvelopeDiagram_1 concept must define a geometric traits class, which in turn defines the Point_2 and X_monotone_curve_2 types defined with the diagram features.
See also
typedef unspecified_type Traits_2
the geometric traits class.
typedef Traits_2::Point_2 Point_2
the point type.
typedef Traits_2::X_monotone_curve_2 X_monotone_curve_2
the \( x\)-monotone curve type.
typedef unspecified_type Size
the size type (convertible to size_t).
typedef unspecified_type Curve_const_iterator
an iterator for the \( x\)-monotone curves that induce a diagram feature, with value type X_monotone_curve_2.
typedef unspecified_type Vertex
the vertex type, a model of the concept EnvelopeDiagramVertex.
typedef unspecified_type Edge
the edge type, a model of the concept EnvelopeDiagramEdge.
typedef unspecified_type Vertex_handle
a handle to a diagram vertex.
typedef unspecified_type Vertex_const_handle
a non-mutable handle to a diagram vertex.
typedef unspecified_type Edge_handle
a handle to a diagram edge.
typedef unspecified_type Edge_const_handle
a non-mutable handle to a diagram edge.
void set_leftmost (Edge_const_handle e)
sets the leftmost edge of the diagram to be e.
void set_rightmost (Edge_const_handle e)
sets the rightmost edge of the diagram to be e.
Vertex_handle new_vertex (const Point_2 &p)
creates a new diagram vertex, associated with the point p.
Edge_handle new_edge ()
creates a new diagram edge.
void delete_vertex (Vertex_handle v)
deletes the given vertex v.
void delete_edge (Edge_handle e)
deletes the given edge e.
Jiří Vala's research works | Brno University of Technology and other places
Use of Cohesive Approaches for Modelling Critical States in Fibre-Reinforced Structural Materials
During the operation of structures, stress and deformation fields occur inside the materials used, which often ends in fatal damage of the entire structure. Therefore, the modelling of this damage,
including the possible formation and growth of cracks, is at the forefront of numerical and applied mathematics. The finite element method (FEM) and its modification will allow us to predict the
behaviour of these structural materials. Furthermore, some practical applications based on cohesive approach are tested. The main effort is devoted to composites with fibres and searching for
procedures for their accurate modelling, mainly in the area where damage can be expected to occur. The use of the cohesive approach of elements that represent the physical nature of energy release in
front of the crack front has proven to be promising not only in the direct use of cohesive elements, but also in combination with modified methods of standard finite elements.
Crypto magic - zero knowledge
There are some applications of cryptography that can only be classified as magical. It is only fitting that my first crypto post begins with one of them - zero knowledge proofs. I will try and
explain it without all the rather complicated mathematical stuff.
Let’s start with a simple definition.
This is where one party (called the Prover) convinces a second party (called the Verifier) that they know the answer to some question. The catch is that the Verifier never gains any knowledge of
what the actual answer is.
I’ll start with a crude example to explain further (don’t worry, I’ll give a detailed one later). Let’s say I want to sell a super secret formula about how to cure a disease to a powerful company. A
problem arises where the company needs proof that I indeed know the super secret formula yet I cannot tell them what it is (my payday depends on it). The company should gain no information of what
the super secret formula is from my proof, so I cannot let them test it. In the real world we would use a third party to hold both the payment and formula, and do some independent tests of the
formula, but where’s the magic in that :). In any case I don’t trust anybody with the formula till I get paid. And so arises our dilemma.
So, are zero knowledge proofs possible? Can I have my cake and eat it?
The answer is yes, and let me explain with the detailed example I promised :). Let's say I have two balls, identical except for color: one is red and one is blue. I want to prove to a color-blind guy that they are of different colors. What I'll do is ask him to hold one ball in each hand so that he has a red ball in one hand and a blue one in the other. I can tell the color of each but he cannot (color blindness). Next I tell him to put his hands behind his back so that I cannot see them. Then he can decide whether or not to switch the balls, after which he shows me what ball he's holding in
each hand. He then asks me to tell him if he switched them or not. I obviously will get the answer correct as I know what color he was holding in each hand before. Now here is the interesting part,
the only way I can tell whether he switched the balls behind his back is if they are of different colors. So I easily prove to him they are of different colors if I consistently can tell if he
switched them (which I can). I will have to get the answer correct a number of times to convince him am not guessing (Probability and Maths come in which I promised to avoid in this post). He is
color blind, so at the end of it all he still doesn’t get to know which ball is red and which one is blue. And so the zero knowledge proof is complete. I prove that the balls are of different colors
while he gains no knowledge at all which of them is blue and which is red.
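The protocol is easy to simulate (a sketch of mine, not from the post): an honest prover who can tell the colors apart answers every round correctly, while a guessing prover survives n rounds with probability (1/2)^n.

```python
import random

def run_protocol(rounds, prover_sees_colors):
    """True if the prover answers every 'did I switch?' query correctly."""
    for _ in range(rounds):
        switched = random.choice([True, False])    # verifier's hidden choice
        if prover_sees_colors:
            answer = switched                      # distinguishable balls: always right
        else:
            answer = random.choice([True, False])  # color-blind cheater must guess
        if answer != switched:
            return False
    return True

random.seed(1)  # reproducible demo
trials = 10000
honest = run_protocol(20, True)
cheat_success = sum(run_protocol(20, False) for _ in range(trials))
print(honest)                         # True: the honest prover always convinces
print(cheat_success / float(trials))  # about (1/2)**20 per trial, essentially 0
```

Note the verifier learns nothing about which ball is which from either run; he only observes whether the answers match his own coin flips.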
Can this be applied in the information Security world?
Yes it can :) Let’s say I want to authenticate with a server. I however don’t want to reveal my password to the server. I just want to prove to the server that I know the password to be
authenticated. I also don’t want the server to gain any knowledge of what my password is. In the real world, the server stores passwords in a hashed form. You send over your plain password and the
server calculates its hash value and compares it with what it has stored. This fails our zero knowledge proof as the server still sees your password before conversion. So we need a different
implementation. I will explain it in detail in a different post - there’s mathematics involved and I want to avoid information overload here :).
I’ll leave you with the following fun zero knowledge proofs for now.
See how a person can prove to another that they know the solution to a sudoku puzzle without actually revealing the solution. Check it out here.
Also check out this cool explanation with a modified version of Ali Baba and the forty thieves.
In part two of Crypto Magic, I’ll talk about homomorphic encryption :).
ECON 159 - Lecture 20 - Subgame Perfect Equilibrium: Wars of Attrition
Game Theory
Chapter 1. Wars of Attrition: The Rivalry Game [00:00:00]
Professor Ben Polak: Last time we looked at how to apply our new idea of sub-game perfect equilibrium to a whole bunch of games, and our general idea for how to solve the sub-game perfect equilibrium
is as follows. We looked at each sub-game. We solved for the Nash equilibrium in the sub-game, that is something we learned to do long ago. And then we rolled back the payoffs: we rolled them back up
the tree. And towards the end we learned something interesting. I’m not going to go back to it today–I just want to emphasize it. We learned that strategic effects matter.
So in that investment game we looked at last time, when you’re considering whether to rent a new piece of machinery, it made a very big difference whether you considered how this action would affect
the actions of the other side; in this case, how it affects your competition. This is a very general idea, a very general point. So just to give you a couple of more examples, when you’re designing
tax systems–I mentioned this last time–when you’re designing a tax system, to make some changes in the U.S. tax system, it’s not good enough to look at how people are behaving in the old tax system
and just calculate in an accounting manner how much more money you’re going to raise, or how much money it’s going to cost you. You have to take into account how that’s going to lead to changes in behavior.
Once again, that’s a strategic effect, and in the homework that you’re handing in today, all of you will have had a nice example of that in the toll booth problem. So in the toll booth problem, when
you’re putting tolls on roads–or more generally, when you’re building new roads, new bridges, new flyovers, new bypasses, you need to take into account how those new tolls, how those new roads will
affect all of traffic flow. Traffic flow down the tree will form a new equilibrium and you need to consider that in designing your tolls and designing your road system. So that’s another example of
So today I want to do something quite different, a little bit like what we did with duel, I want to play a game today, and probably spend the whole of today analyzing this one game. So it’s quite a
complicated game, but it’s quite a fun game. So what’s the game we’re going to look at? The game is going to involve two players, and each player in each period, they choose–or each chooses I should
say–each chooses, whether to fight or to quit. So F means fight and Q means quit, and they make this choice simultaneously. The game ends as soon as someone quits.
So there’s good news and bad news for this game. Let’s do the good news first. The good news is that if the other player quits first you win a prize. Generally we’ll call this prize V, but we’ll play
for some cash in a minute. The bad news is, each period in which both fight–so each period in which both players choose to fight–each player pays a cost, so they pay -C. Just to keep things
interesting let’s fill in the other thing here which is if both quit at once–so if both quit at once then they get 0 that period. So this is a game we’ve seen a little bit before. We saw a little bit
under the auspices of Hawk Dove. Those people in the MBA class saw a game a lot like this. But we’re going to analyze this in much more detail then we did before.
As I said, we’re going to spend the whole of today talking about it. So, to start this out, let’s actually play this game. So I want two volunteers. Let me just, since this is college football
season, let me see if I can play off the rivalries. So do I have anybody here from the great state of Texas? A whole bunch of Texans, keep your hands up. I want to use you a second and I guess the
rivalry here is Oklahoma. Anybody from Oklahoma? No Oklahomans? What we’ll do is we’ll pick two Texans then. We’ll assume this is Texas and Texas A&M. So Texans raise their hands again. All right,
we’re going to pick out two Texans and I’m going to give you a mike each. So your name is?
Student: Nick.
Professor Ben Polak: Why don’t you keep hold of the mike and just point it towards you when you speak, but still shout because everyone here wants to hear you. So this is Nick, where was my other
Texan back here? Why don’t I go for the closer one. We’ll start here. And your name is?
Student: Alec.
Professor Ben Polak: Alec, so shout it out.
Student: Alec.
Professor Ben Polak: That’s better, okay. So the game is this. They’re going to have to write down for the first period whether they choose fight or quit. Each player will have a referee. So the
person behind Alec is going to be Alec’s referee to make sure that Alec is actually saying what he says he’s going to do. And what happened to my other Texan. I’ve lost my other Texan. There he is.
Your name again was?
Student: Nick.
Professor Ben Polak: Nick is going to write down fight or quit. And to make this real, let’s play for some real cash. So we’ll make the prize–why don’t we make the prize equal to a $1 and the cost
equal to $.75. So I’ve got some dollars here. Here we go. What do we call this? In Texas we call this a fist full of dollars, is that right? So where are my players? Why don’t you stand up, you guys.
So everyone can see you. I made this difficult, because now you are going to have to write down your strategy. So you are going to have to grab a pen. I didn’t make that easy for you. Let me come
down to make it easier for the camera person. So why don’t you write down what your strategy is going to be, and tell your neighbor what it’s going to be, and we’ll see what happened. Show your
referee. Have you shown your referee? Nick, speaking into the microphone what did you do?
Student: I quit.
Professor Ben Polak: He quit.
Student: I quit as well.
Professor Ben Polak: What happened to remember the Alamo? All right, so Texas didn’t work very well. Let’s try a different state. I have to say my wife’s from Texas and my wife’s family is from
Texas, and I thought they had more fight in them than that. Maybe that’s why they are sliding in the polls. Let’s try somebody from Ohio, anyone from Ohio? Nobody from Ohio in the whole class, that’s
no good. I was going to pick Ohio against Michigan. How about some people from some of our own teams? Are there any players on the football team other than the two I’ve picked on before? There we go.
I need a different team, anybody from the hockey team? Anybody from the baseball team? Okay good. So our friend from the baseball team, your name is?
Student: Chris.
Professor Ben Polak: Chris and our new football team player is?
Student: Ryland.
Professor Ben Polak: Ryland. Okay so Ryland and Chris are going to play this. And neither of you is from Texas, I take it, so we have some hope of something happening here. So write down what it is
you’re going to choose. Have you both written something down? Yeah, all right Ryland what did you choose?
Student: Fight.
Professor Ben Polak: Chris?
Student: I’m going to quit.
Professor Ben Polak: Well that was easy too. So the football team is looking pretty good here. So we’re not getting much in the way of action going here. Anyone else want to try here? Another little
state rivalry here, I don’t suppose I’ve got anyone from Oregon, that’s asking too much, anyone from Oregon? You guys must be from somewhere. There’s got to be a state where at least one of you is
from. Well let’s try something like New Jersey, how about that? There’s some players from New Jersey that’s good. Here we go. And we’ll try New Jersey and New York. That seems like there’s a bit of a
rivalry there. Are you from New York? Excellent, here we go, and your name is?
Student: Geersen.
Professor Ben Polak: Your name is?
Student: Andy.
Professor Ben Polak: Andy. Okay so Geersen and Andy. So stand up so everyone can see where you are. Let’s see if there’s any fight in New York and New Jersey. So write down your strategies. Andy what
did you choose?
Student: I’m going to fight.
Student: Fight.
Professor Ben Polak: Here we go, this is better now. I was getting worried there for a second. I know it’s near the Thanksgiving break, but there has to be some sort of spark left in the class. So we
have both people fighting which means right now they’re down $.75 but the prize is $1. So the game goes on, so write down again. What you’re going to do second period. The $.75 is gone so now we’re
just looking at this game for $1. Let’s go to New York, what does New York say?
Student: I’m going to fight.
Student: Fight.
Professor Ben Polak: Fight okay. So you have to stay on the east coast to get life. There’s no point going west is there. That makes sense. So write down again what you’re going to do, and let’s go
the other way around, to New Jersey?
Student: Fight.
Student: Fight.
Professor Ben Polak: Fight again all right, so right now we’re down three $.75 whatever that is, and there’s still this prize of $1, plus perhaps a bit of pride here. So write down again. Let’s try
again, so let’s go with New York this time.
Student: I’m going to fight.
Student: Fight.
Professor Ben Polak: Fight okay, I’m guessing we could keep this going for quite a while, is that right? Now it might make a difference, by the way, if they’re allowed to talk to each other here, so
let’s see if it does. So let’s allow New Jersey and New York to talk to each other. You can’t insult each other about bridges and tunnels, just regular talk. So anything you want to say to your
friend from New Jersey here?
Student: I can’t let New Jersey win, that’s just New York pride. You guys are just worse in every realm so I’m sorry. It’s just pride.
Professor Ben Polak: Anything you want to say in reply?
Student: Well I’m going to keep fighting, so your best choice is to give up.
Professor Ben Polak: Let’s see if that works. Did they get anything out of that? So choose the strategies again. New York?
Student: I just can’t let Jersey win: fight.
Student: Bring it on: fight.
Professor Ben Polak: All right, so it’s clear that if we kept this going for a while, it would pay for my lunch, is that right? We’ll hold it here for a second. We’ll talk about it a bit, but thank
you. A round of applause for our two feistier players. So what’s going on here? So clearly we can see what can happen in this game. You can get people quitting early, and it could be that one side
quits and the other side doesn’t quit. That is also something that can happen. That can happen pretty quickly. But it’s possible–we just saw it happen–it’s possible that a fight could go on quite a
while here.
Now why? What’s going on here? I mean the prize here was what? Was $1, and the cost was $.75. I could have raised the stakes maybe on these guys and see if that made a difference, but I think $1 and
$.75 will do. And by the time they had fought the second time, they’d exhausted the possible prize of $1. So it’s true that if you won this in the first period then that’s fine because you just get a
$1 and it wouldn’t cost you anything. And even if you won in the second period you’d be okay, you’d only cost yourself $.75 for fighting in the first period, but you’re getting $1 so that’s good. But
there on–and “there on” went on for plenty of time in this case–there on you’re just accumulating losses, so what’s going on?
There are various conclusions possible here. One is that people from New York and New Jersey are crazy. That’s a possible thing. But what else is going on? Why did we get involved in this fight? What
happened here? Why do we tend to see fights like this emerging? I’m claiming this isn’t such an implausible situation. Why do we see it emerging? Let’s talk to our friend from New York, shout out.
Student: By the time she fought with me on the second round, I knew I was going to be losing money anyway so why not just keep going and then, there was no reason, I wasn’t going to win anyway, so I
might as well just keep fighting until she quit.
Professor Ben Polak: All right, I think there’s two things. There’s two parts to that answer. Part of the answer is: I have lost the money anyway, let’s hold that piece of it. And the other part of
it is what? The other part of it is I’m really determined to win this thing. So there’s two things going on there, and they’re quite different. Let’s take the second one first. It’s possible that the
reason these fights emerge and can go on for quite a while is that even though the prize is only $1 in money, it could be that the actual thing that the players care about is what? What do the
players actually care about here? Somebody just raise your hand, I’ll put you on the mike. What do people tend to care about in these situations? Winning, they care about winning, or they care about
pride. Is that right? That’s why I started with Texas, but I couldn’t find any pride in Texas, so we had to go to New York.
So people care about winning per se. It’s a pride thing. So it could be that $1 simply isn’t a good description of the true payoffs here. It could be that the payoffs are actually about winning. It
could also be that both of these guys know that they’re going to be interacting with you at other times in the class, or other times at Yale. And they want to establish a reputation, both of them, as
being the kind of guys who fight. In particular, when they got to talk about it, both of them said: “look I’m a fighter”: something which we’ve seen before with Ale and his pizza shop. Both of them
said: “I’m a fighter. You better back out.”
So both of them tried to signal the fact that they were going to fight to try and get the other side to quit. So that’s about reputation, and that reputation could extend beyond this game. It could
be that they’re going to be involved in this kind of conflict later on in life. So both of those things are around. There’s another element to this, and it’s the other part of what our friend from
New York said which is about the costs. What’s true about the costs in this game as we move from period to period? Somebody said it. Say it again. Say it loudly.
Student: Sunk cost.
Professor Ben Polak: It’s a sunk cost. So all of those costs that you accumulate as the game goes on, they’re irrelevant looking forward because they’re sunk. The fact I’ve played this game for ten
periods and hence lost ten times $.75–which even I can do, that’s $7.50–the fact that I’ve lost $7.50 is irrelevant because I’ve lost it anyway. I can’t get that back. That’s a sunk cost. So the game
ten periods through looks exactly the same as the game did at the beginning, when fighting seemed a good option. So ten periods through the game, you have the same view about fighting as you did at
the beginning.
Now that’s not quite true because at some point you’re going to run out of money, but if we ignore that, basically, those sunk costs are irrelevant. So what we’re seeing here is reasons why people
fight and some of these reasons seem to be for standard economic reasons like sunk costs, and some of them seem to be about things that are outside the game like pride or possibly reputation.
It’s certainly the case within the real world, we do see fights like this. Let’s just spell out what the key feature of this is. The key feature of this is, in these fights over a period of time,
even though you may only be losing a little piece in each period, over a period of time you could lose a lot. In fact, you could lose far more than the prize that was originally at stake. So the
losses you could accumulate–and our friend from New Jersey and New York, the losses that they accumulated vastly outweighed the prize that was at stake after a while. That’s a worry.
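With the classroom numbers (V = $1, C = $0.75), a quick back-of-the-envelope check of this point: winning after n rounds of mutual fighting nets V − nC, which goes negative by the second fought round (a sketch of my own arithmetic, not from the lecture):

```python
V, C = 1.00, 0.75  # classroom prize and per-round fighting cost

# Net payoff from winning after n rounds in which both sides fought:
for n in range(4):
    net = V - n * C
    print(f"win after {n} mutual fight(s): net ${net:+.2f}")
# The prize is exhausted by the second fight, exactly as the
# lecture observes; every round after that only deepens the loss.
```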
Chapter 2. Wars of Attrition: Real World Examples [00:17:39]
So this can occur in real life not just in the classroom. What do we call these kinds of fights; these fights where they’re holding out for this possibly small prize and incurring, possibly small–but
they accumulated to being large–costs each period. What do we call those fights? Let’s think about some examples. Let’s see if the word comes out of examples.
So one example–let’s do some examples here. One example is what happened in World War I. So in World War I, as I’m assuming most of you know, on the western front at least, the German and allies to
Germany armies faced off with the British and French and allied armies, for an extraordinarily long time, fighting over extraordinarily small patches of land, little pieces of northern France and
Belgium. You could argue that these pieces of northern France and Belgium–I don’t wish to offend anyone French or Belgian here–but you could argue that those few acres of northern France and Belgium
weren’t worth a whole lot anyway. Nevertheless, the two sides kept on fighting from 1914 to 1918, and enormous losses of life were accumulated in that period. So that was a really costly, long term
battle. Neither side would quit. Each year huge numbers of lives were lost. If you doubt that go and look at the war memorial in Yale that shows how many Yale American lives were lost, and America
was only in that war for about a year. Okay, so that’s an example.
Another example–a more business example, an example we talked about in our MBA class but not in this class so far–is examples in business where there’s a market and that market is really only going
to hold one firm. That market is only going to hold one firm. You can end up in an extremely long fight about who’s going to end up being the one firm in that market.
So a famous example–actually it’s a famous business school case–is the fight that occurred to control satellite broadcasting in Europe. So there was a fight between Sky Television and the British
Satellite Broadcasting Company that went on for a number of years. And these companies were doing things like charging zero prices, and giving away satellite dishes, and this that and the other. And
over the course of the fight they accumulated so many losses that, if you did the accounting, the entire future projected profit flow of winning this fight was vastly outweighed by the amount of
money that they’d lost during the fight. That was a fight that involved on one side Rupert Murdoch and you can argue maybe Rupert Murdoch is a little crazy and had a reputation to keep up, but still
it looks like another example of this. So that example was British Satellite Broadcasting versus Sky.
So with those two examples there, anyone think of a general term we call these fights? How do people refer to the method of fighting in World War I–or for that matter during the American Civil War?
Somebody in the back, shout it out.
Student: War of attrition.
Professor Ben Polak: It’s a war of attrition. So these are wars of attrition. These are wars of attrition. And what we know about wars of attrition is that they can go on a long time, and a lot can
be lost. A lot of life can be lost in the case of real wars. A lot of money can be lost in the case of business wars. Actually it turns out, a lot of games have this structure of a war of attrition.
Let me give you one more example. Suppose that two companies are competing for a market not in the manner of BSB and Sky by the advertising or whatever, but in the form of paying bribes.
So suppose there’s a company let’s say in France and a company let’s say in Britain, and these two companies are trying to win a contract in some country where paying bribes is a successful strategy.
And here I’m going to be careful about the film and not mention any real companies, so let’s call this imaginary country Freedonia, which comes from a Marx Brothers film. So here’s this French
company and this British company, and they both want this contract to build a bridge in Freedonia. And they start paying bribes to the general who controls Freedonia. And what happens? Well you’re
not going to get the bribe back. So both sides pay a few thousand dollars to this general, and then the general comes back and says, well you both paid $1,000. Which of you wants to build the bridge?
So they go on, and they put more money in and more money in. And you can see once again this is a war of attrition. Those bribes that they’ve paid, they’re never getting back, but once you’ve paid
them they’re a sunk cost. Once you’ve paid that bribe, good luck saying: I paid you this bribe. You didn’t let me build the bridge. Give me my money back. There isn’t a court in the world that’s
going to enforce that. So these bribery contests look a lot like this. There’s a technical name for these bribe contests, they’re sometimes called all-pay auctions. So what do we want to establish here?
We want to establish– we want to talk about why fighting occurs here and we wanted to do so in some detail. So there may be informal reasons why fighting occurs and we’ve talked about that. It could
be that one side is crazy. It could be that both sides are crazy. It could be that national or regional pride is at stake. All of these things could affect why we get fighting. But what I want to try
and establish today is that you can get long fights emerging in potential wars of attrition even if everybody is rational, even if the payoff is just that one dollar, and even if there’s no
reputation at stake.
So again, my goal today is to try and convince you that you can get huge loss of life in World War I or huge loss of money in these business contexts without having to argue something outside the
model like irrationality or reputation. Even rational players can get themselves in trouble in wars of attrition.
Chapter 3. Wars of Attrition: Analysis [00:24:04]
So for the rest of today, I want to try and analyze this. To get us started I want to look at a version of this game, at a simplified version, which only lasts for two periods. So we’ll do a two
period version of the game we played just now. Eventually, by the end of today, I want to look at the infinite version. So we’ll start small. We’ll start with a two period version. The trick in
analyzing these things is to be able to come up with a tree and to be able to come up with payoffs, and be able to apply the analysis that we know about to get us to where we want to be.
So here’s the game I claim. I claim that it has the following tree. So first of all Player A chooses and Player A can either Fight or Quit. And let me put a [1], and we’ll see what the [1] is in a
second. Then we’re going to model Player 2. But of course this is a simultaneous move game. This is a simultaneous move, so this is an information set. Let’s not call him Player 2. Let’s call him
Player B. So Player B doesn’t know what A has done that first period when B is making her choice. This is a simultaneous move. And B is choosing between fighting or quitting. Just to distinguish
them, let me use small letters for B, so once again fight or quit, and fight or quit.
Now, if both sides fight then the game continues. And everyone knows that both sides fought at that stage. So at this point on, we’re actually at a singleton information node and it’s Player A’s turn
again. So here we go again. So in the second period, once again, we’ve got A choosing whether to Fight or Quit. And this time we’ll put a [2] to indicate we’re in the second period. And after Player
2 has moved once again–after Player A has moved–once again Player B is moving. That’s a simultaneous move. And once again they’re choosing fight or quit. I’ll put [2] to indicate that we’re in the
second stage. So that’s the structure of this two period game. And let’s write down what the payoffs are, starting with the easy payoffs.
So if both people quit in the first stage, they get nothing. If A quits and B fights, then A gets nothing and B gets V. If A fights and B quits, then A gets V and B gets nothing. And if they both
fight we go into the second stage. So let’s write down the payoffs in the second stage. So in the second stage, if they both quit in the second stage then their payoffs are going to be, for A, -C,
the costs they accumulated in the first stage, plus 0. And, for B, -C + 0. If A quits and B fights in the second stage then the payoffs are -C + 0 and -C + V. If A fights and B quits then the payoffs
are -C + V and -C + 0. And if they both fight for two periods, we have a decision to make about how we’re going to end the game in this two period game, but let’s just assume that what we’ll get here
is -C -C and -C -C. We’ll assume if they both fight the game ends and no one gets the prize, just to make life simple.
So this is a two period version of the game and the only change I’ve made, other than making it two periods, is I had to put in a payoff. I had to see what happened if the game didn’t resolve. And
I’ve assumed that if the game didn’t resolve, no one got the prize.
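The payoff rules just described can be written out as a small function. A sketch (the function and the 'F'/'Q' action labels are my own; it follows the lecture's convention that a double fight in period two ends the game with no prize):

```python
V, C = 1.0, 0.75  # prize and per-round cost; the case analyzed assumes V > C

def payoffs(a1, b1, a2=None, b2=None):
    """Payoffs (to A, to B) in the two-period war of attrition.

    a1, b1 are the period-one actions ('F' or 'Q'); a2, b2 are the
    period-two actions, reached only if both fought in period one.
    """
    if (a1, b1) == ('Q', 'Q'):
        return (0.0, 0.0)
    if (a1, b1) == ('Q', 'F'):
        return (0.0, V)
    if (a1, b1) == ('F', 'Q'):
        return (V, 0.0)
    # Both fought in period one: each sinks C, then the stage repeats.
    if (a2, b2) == ('F', 'F'):
        return (-C - C, -C - C)   # fight twice: no one gets the prize
    pa = -C + (V if (a2, b2) == ('F', 'Q') else 0.0)
    pb = -C + (V if (a2, b2) == ('Q', 'F') else 0.0)
    return (pa, pb)

assert payoffs('Q', 'F') == (0.0, V)
assert payoffs('F', 'F', 'F', 'Q') == (-C + V, -C)
assert payoffs('F', 'F', 'F', 'F') == (-2 * C, -2 * C)
```

Notice how the sunk −C shows up in every terminal payoff after a first-period fight, which is exactly the observation made next.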
Now there’s another assumption I’m going to have to make before we analyze this, there are two possible cases to consider here. There’s the case when V > C which is the case we just played in the
class; and there’s also the converse case when C > V. So today we’ll focus on the case V > C which is the case we just played in class. I’m going to leave you to analyze the other case, the case when
the cost is bigger than the prize, as a homework assignment. So V > C is our assumption. So everyone okay with the tree? This tree I hope describes the game, at least a two period version of the game.
Now the first thing I want to point out here is, if we look at the payoffs that are incurred at the end of the second period game, we notice that they all contain a –C. There’s a -C everywhere. What
is that -C? It’s the cost that you accumulated from having fought in the first stage. But the observation I want to make straight away is that this cost–so here it is, here it is, here it is, here it
is, here it is, here it is, and here it is, and here it is–this cost is sunk. These objects here are sunk costs. There’s nothing you can do once you’re in the second period of the game to get these
sunk costs back. They’re just gone. They’re there, but the fact that they enter everywhere is going to make them strategically irrelevant. Okay, so what we want to do here is we want to find all of
the sub-game perfect equilibria of this little game. Let me get rid of the rules. We all know the rules by now. So our goal here is to use our solution concept, which is sub-game perfect equilibrium,
to try and analyze this game. So how are we going to analyze this game in terms of sub-game perfect equilibria? How are we going to start that discussion? We’ve got a lot of work to do here, where
are we going to start in finding sub-game perfect equilibria? What’s the first thing we should do?
Well, I claim the first thing we should do is just figure out what the sub-games are. Let’s start with that. So having just pushed it far away I’m going to need to use the pointer. I claim that the
obvious sub-game to analyze first is this sub-game. It’s the sub-game if you should end up in period [2]. And notice that it is a sub-game: it starts from a singleton node; it doesn’t break up any
information set; and it contains all of the descendants of the node from which it starts. So that is genuinely a sub-game. So we’re going to start our analysis by considering the second sub-game.
So let’s write down the matrix that corresponds to that second sub-game. And I’m going to write it down in the following way. So I claim, in this second sub-game, each player has two choices, they
can fight or quit. And I’m going to write the payoffs in a particular way. I’m going to write the payoffs as -C plus this thing. So rather than keep that -C in all the boxes, which it’s going to get
boring after a while, I’m going to pull that -C out and just put it in front. Is that okay? So we had this sunk cost box everywhere and I’m going to pull out this sunk cost box and put it in front.
So here it is. If you get into the second period of the game you’ve incurred this sunk cost.
And your payoffs in this game, if you both fight then you incur -C for the second time. If A fights and B quits, then A is going to win the prize so they’ll get V and Player B will get nothing. If B
fights and A quits, then conversely, A gets nothing and Player B gets the prize. And if they both quit they just get nothing. So just notice what I did here, I could have written out the box with
-C-C here; -C-C here; -C+V here; -C+0 here; -C+0 here, etc.. But I just pulled out that -C because it’s just distracting everything. So I pulled out that -C and put it in the front. Okay, so now we
can analyze this little game, and let’s start off by talking about pure strategy equilibria in this game.
So again, our goal is to find sub-game perfect equilibria, so the way in which we find sub-game perfect equilibria is what? We start at the last sub-games, we look for Nash equilibria in those last
sub-games, and then eventually we’re going to roll those back. So there’s our last sub-game. There’s the matrix for it. Let’s just find the Nash equilibria. So if Player B is fighting then Player A’s
best response is to quit and if Player A is fighting then Player B’s best response is to quit. Conversely, if Player B is quitting, Player A’s best response is to fight, and if B is fighting–sorry:
if A is quitting then Player B’s best response is to fight. I didn’t say that right. Let me try again. So if A is fighting, if the other side is fighting you want to quit. If the other side is
quitting you want to fight. Is that clear?
So there are actually two–let’s be careful here–pure strategy equilibria, there are two pure strategy Nash equilibria in this sub-game. What are they? They are (Fight,quit) and (Quit, fight). So if
we get into the second sub-game and if we know we’re going to play a pure strategy in the second sub-game, this is what’s going to happen, that’s our claim. Now notice that it didn’t matter, the sunk
cost didn’t matter there. I could have included the sunk cost in the payoffs, but I would have found exactly the same thing with or without the sunk costs. So, as we’d expect, the sunk cost is
irrelevant. So we’ve got both the equilibria in the sub-game. Let’s roll these back into the first stage of the game. The payoffs associated with this one are V and 0, and the payoff associated with
this one is 0 and V. Everyone okay with that?
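The best-response reasoning above can be checked by brute force over the 2x2 stage matrix (a sketch with my own function names; the sunk −C from period one is omitted since, as noted, it is strategically irrelevant):

```python
V, C = 1.0, 0.75  # V > C, the case treated in the lecture

# Second-stage payoffs with the sunk cost factored out.
stage = {
    ('F', 'F'): (-C, -C),
    ('F', 'Q'): (V, 0.0),
    ('Q', 'F'): (0.0, V),
    ('Q', 'Q'): (0.0, 0.0),
}

def pure_nash(game):
    """Return all pure-strategy Nash equilibria of a 2x2 game."""
    eq = []
    for a, b in game:
        ua, ub = game[(a, b)]
        best_a = all(ua >= game[(a2, b)][0] for a2 in 'FQ')
        best_b = all(ub >= game[(a, b2)][1] for b2 in 'FQ')
        if best_a and best_b:
            eq.append((a, b))
    return eq

# Exactly the two equilibria found in the lecture:
assert sorted(pure_nash(stage)) == [('F', 'Q'), ('Q', 'F')]
```

Running the same check with the −C added back into every cell returns the same two equilibria, confirming that the sunk cost drops out.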
Okay, let’s revisit the first stage of this game. Now, things get a little bit more complicated, so what I’m going to do is I’m going to redraw the first stage of this game. So here it is. A can
fight or quit just as before. And, following this, B can fight or quit just as before. So this is a picture of the first stage of the game, but I’m going to chop off the second stage. Let’s put the
payoffs in. So the payoffs down here are the same as they were before: (0, 0); working up, (0, V); if A fights and B quits then its V,0. But what about the payoff if they both fight? So what I want
to do now is I want to look at the payoff when they both fight by considering what they would get, if they both fight, in the second period of the game.
So our idea is find the Nash equilibrium in the second period of the game and roll back these possible payoffs. So here the payoffs are going to be -C plus stage [2] Nash equilibrium payoffs for
Player A; and -C plus the same thing, stage [2] Nash equilibrium payoffs for Player B. So just to understand what I’ve written here then, I put the same payoffs as before but I’ve replaced that
enormous thing that was the second stage of the game, just with the payoffs that we know that we’re going to get in the second stage of the game, if we play Nash equilibrium in the second stage. So
these objects have a name, and the name is continuation payoffs.
These objects are the continuation payoffs. They are the payoffs I’m going to get tomorrow–and possibly forward in a more complicated case–if, in this case, if we both fight in the first period. Now
what we want to do is we want to draw up the matrix that corresponds to this first stage game, and we’re going to have to do so twice. We’re going to have to do so once for the case where the
continuation payoffs are (V, 0), where the equilibrium we’re playing tomorrow is (Fight, quit). And we’re going to have to do again for the case where the continuation payoffs tomorrow are (0, V),
namely the continuation play is (Quit, fight). So we have to do it twice.
[So, you’ve got the continuation payoffs down so let’s delete this to give ourselves a little room.]
So the matrix is going to look as follows: a nice big matrix, 2x2. Player A is choosing Fight or Quit, and Player B is choosing fight or quit. That much is easy. It’s what we put in here that
matters. So let’s do all the easy cases first. So (Quit, quit) is (0, 0); (Quit, fight) is (0, V); (Fight, quit) is (V, 0); and in here it’s going to depend which of these two games–which of these two
equilibria is being played tomorrow. So what we’re going to do here is we’re going to do the case for the equilibrium (Fight, quit) in stage two. We’re going to write out the matrix for the case
where we’re going to play (Fight, quit) tomorrow.
[-Thank you, try and keep me consistent today, because it is very easy to slip up. So once again I’m going to use capital letters for Player A and small letters for Player B. ]
So what happens if we both fight? We both incur costs of C from fighting, and tomorrow we’re going to get the payoffs from the equilibrium (Fight, quit). That’s this equilibrium, so we’re going to
get V and 0 tomorrow. So let’s add those in. So this will be +V and this will be +0. And let’s just put some chalk around here to indicate that these are going to be continuation payoffs. So this is
the matrix we’re going to use to analyze the first stage of the game in the case where the equilibrium we’re playing tomorrow is (Fight, quit).
And as I promised, we have to do this twice. So the other case, of course, is if we’re in the other equilibrium tomorrow. So let’s just do that. So once again we have Fight [1], Quit [1]; fight [1],
quit [1]; and the payoffs are–same as we had–(0, V); (0, 0); (V, 0); and then here, this time, we’re going to have -C + 0 and -C + V. And the reason for the change is that we’re now looking at the
continuation game, the continuation play where it’s Player A who quits in the second stage. So this is for the case (Quit [2], fight [2]) in period 2.
So let’s just pause, let’s make sure everyone’s got that down, everyone okay? So what we’ve done here is we started off by analyzing what’s going to happen in period 2, and that really wasn’t very
hard, is that right? That was a pretty simple game to analyze, pretty easy to find the equilibria. Then what we did was we rolled back the equilibrium payoffs from period 2 and we plunked them on top
of the relevant payoffs in period 1. So in particular, if you both fight and you know you’re going to play the (Fight, quit) equilibrium tomorrow, then your payoffs will be -C + V and -C + 0. If
you both fight and you know you’re going to play the (Quit, fight) equilibrium tomorrow then your payoffs will be -C + 0 and -C + V.
And just to emphasize once again, these four boxes we created correspond to the stage 2 Nash equilibrium payoffs, so the continuation payoffs of the game. Okay, so now we’re ready to analyze each of
these games. So this isn’t going to be too hard. Let’s try and find out the Nash equilibrium of this game. So let’s start with the left hand one. This is the case where Player A is going to fight and
win in period 2. So if Player B is going to quit in period 1 then, if Player A fights, she gets V; if she quits, she gets 0: so she’s going to want to fight. Everyone okay with that? If Player B
fights in period 2 [error; 1] then, if Player A fights, she gets -C + V and if she quits she gets 0, and here’s where our assumption is going to help us. We’ve assumed, what did we assume? We assumed
V is bigger than C, just like we played in class. So because V is bigger than C this is going to be the best response: again fighting is going to be the best response.
So we know that, in fact, Player A here has a dominant strategy, the dominant strategy is to fight in period 1 in this analysis of the game. And since A is fighting, not surprisingly, we’re going to
find that B is going to quit. So B’s best response is to quit. So here is our Nash equilibrium in this sub-game. This game has a Nash equilibrium, it only has one, and the equilibrium is (Fight [1],
quit [1]).
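The best-response check just walked through–drop the stage-2 continuation payoffs into the (Fight, fight) cell, then look for profiles where neither player wants to deviate–can be sketched as a short script. This is an illustrative sketch, not part of the lecture: V = 1 and C = 0.75 echo the $1 prize and $0.75 cost mentioned in class, and the continuation payoffs assume the (Fight, quit) equilibrium is played in stage 2.

```python
def pure_nash(payoffs):
    """Pure-strategy Nash equilibria of a 2-player game.

    `payoffs` maps (A's strategy, B's strategy) -> (A's payoff, B's payoff).
    A profile is an equilibrium if neither player gains by deviating.
    """
    rows = {r for r, _ in payoffs}
    cols = {c for _, c in payoffs}
    equilibria = []
    for (r, c), (a, b) in payoffs.items():
        a_best = all(a >= payoffs[(r2, c)][0] for r2 in rows)
        b_best = all(b >= payoffs[(r, c2)][1] for c2 in cols)
        if a_best and b_best:
            equilibria.append((r, c))
    return equilibria

V, C = 1.0, 0.75   # the in-class numbers: $1 prize, $0.75 per-period cost
# Stage-1 matrix with (V, 0) rolled back as the continuation payoffs,
# i.e. assuming (Fight, quit) is played in stage 2.
stage1 = {
    ("Fight", "fight"): (-C + V, -C + 0.0),
    ("Fight", "quit"):  (V, 0.0),
    ("Quit",  "fight"): (0.0, V),
    ("Quit",  "quit"):  (0.0, 0.0),
}
print(pure_nash(stage1))  # [('Fight', 'quit')]: A fights and wins in period 1
```

Because V > C, fighting is dominant for A in this matrix, so the only pure equilibrium is (Fight, quit), matching the argument above.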
Now let’s just talk about it intuitively for a second. Intuitively, if I know, if I’m playing Jake, and I know that Jake is going to fight tomorrow–sorry, other way round–I know that Jake’s going to
quit tomorrow and I’m going to fight tomorrow–I know that tomorrow I’m going to win. So that prize is there for me tomorrow, so why would I want to quit today? I’m going to get $1 tomorrow if I just
fight in this period. So why would I want to quit today when, at worst case scenario, I’m only going to lose $.75 today. So if I know Jake is quitting tomorrow I’m going to stay on and fight now. And
conversely, if Jake knows he’s quitting tomorrow, hence, he knows I’m going to fight now, he may as well just quit. So what we’re learning here is in this particular example, if we know that tomorrow
I’m going to win the war, I’m actually going to win it today. Say it again, if we know that tomorrow I’m going to win the war, I’m actually going to win it today.
The converse is true for the case where I’m the quitter and Jake’s the fighter tomorrow. So once again it’s pretty quick to see that from Jake’s point of view if I’m going to fight he’s going to want
to fight. If I’m going to quit, he’s going to want to fight. So in either case he’s going to want to fight. So I’m going to want to quit. So the Nash equilibrium in this game is (Quit [1], fight [1]).
Chapter 4. Wars of Attrition: Discussion of SPEs [00:47:53]
So at this stage, we found all of the pure strategy sub-game perfect equilibria in the game. Let’s describe them before I write them up. One pure strategy Nash equilibrium has me fighting in period 1
and Jake quitting in period 1; and if we got to period 2–which in fact we won’t–then I fight again and he quits again. So let’s write that equilibrium up and we’ll do it here. We’ll do it on the top
board actually. So let me get it right, when I write it up as a whole equilibrium. So I claim that I’ve now found all of the pure strategy SPE in this game. One of them involves my fighting in the
first period and fighting in the second period, and Jake quitting in the first period and quitting in the second period. The other one just flips it around, I quit in the first period, and if I got
there I would quit in the second period and Jake fights in the first period and, if he got there, he would also fight in the second period.
So these are perfectly natural equilibria to think about. If you want to get it intuitively, each of these equilibria involves a fighter and a quitter. The fighter always fights, the quitter always
quits. If I know that I’m playing a quitter, I’m always going to fight, so that’s a best response. If I know I’m facing a fighter, I’m going to want to quit, so that’s a best response and those are
two very simple equilibria. That’s the good news. What’s the bad news here? The bad news is we haven’t achieved our goal. Our goal was to argue that rational players might get involved in a fight,
and notice that in each of these two pure strategy sub-game perfect equilibria, in each of them, no real fight occurs. Is that right?
In each of them one person fights for the first period, but the other person just runs away. That isn’t much of a fight. Let me say it again. In each of these equilibria, one side is willing to
fight, but the other side isn’t, so no fight occurs. In particular, no costs are incurred in either of these equilibria. But I claimed at the beginning, I wanted to explain how we could have costs
occur in equilibrium. Rational players are going to incur costs. So what am I missing here? What should I do to try and find a more costly equilibrium? I claim I’m still missing some equilibria here.
What kind of equilibria am I missing? I’m missing the mixed strategy equilibria.
So far, all we’ve done is solve out the pure-strategy equilibria but we need to go back and re-analyze the whole game looking now for mixed strategy equilibria. So we’re going to do the entire–take a
deep breath, because we’re going to take the whole analysis we just did, we’re going to repeat the entire analysis we just did, but this time we’re going to look at mixed strategy equilibria.
Everyone happy with what we’re doing? So first of all, we’re going to go back to the second sub-game. Here’s the second sub-game, and we already found the pure strategy equilibria, so let me get rid
of them, and in your notes you probably want to rewrite this matrix. But I’m not going to rewrite it here because we’re a little bit short of time.
This is exactly the same payoff matrix we saw before, but now I want to look for a mixed strategy equilibrium in this game. How do I go about finding–it’s good review this–how do I go about finding a
mixed strategy equilibrium in a game like this? What’s the trick for finding mixed strategy equilibria? Should we try our guys from New Jersey and New York? Let’s try our guys from New Jersey and New
York. Where’s my New Yorker? We’ll have the true battle here between New York and New Jersey, how do we find a mixed strategy equilibrium?
Student: You use the P’s and Q’s and set them equal to one another. That’s a very crude explanation.
Professor Ben Polak: That’s a crude thing okay. So the answer was we find the P’s and Q’s and “set them equal to one another.” What is it we’re actually setting equal to what? Let’s try and get some
response on this. Did our New Jersey guy flee? Where’s my New Jersey person? They fled. We could give our Texans another chance. Where’s our Texan? There was a Texan down here somewhere, what is it
they set equal to what?
Student: I guess the chances that one would quit and the other would fight.
Professor Ben Polak: Not quite. The remark about using P’s and Q’s was right. This is good review for the final. What is it I’m going to do with those P’s and Q’s? Shout it out.
Student: You use the other player’s payoffs.
Professor Ben Polak: Use the other player’s payoffs and?
Student: Make them indifferent between their strategies.
Professor Ben Polak: Good. I’m going to choose Player B’s mix in such a way as to make Player A indifferent between choosing fight and quit. So as to make it plausible that A is actually mixing. So
again the intuition is for A to be mixing they must be indifferent between fight and quit. So I’m going to choose the mix of B to make A indifferent. So that’s good review. Let’s do that. So here
I’ve usually used the letter Q but to avoid confusion here, let me use the letter P. We’re going to choose P to make Player A indifferent.
So if A fights then their payoff is what? Let’s have a look. It’s -C with probability P, and V with probability of 1 - P. This should be coming back now. This is before the mid-term, but you guys
were alive before the mid-term so you should remember this. So if they fight, they get -C x P + V [1 - P]. If they quit then they get 0 with probability P, and 0 again with probability 1 - P. So we know
that if A is mixing, B must be mixing in such a way as to make these two numbers equal. So we know these two must be equal to one another. Since they’re equal we can now solve for P, so what’s that
going to give us? It’s going to give us V [1 - P] = P x C and that I think is P = V / [V + C]. Is that right? Someone just check my algebra.
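One quick way to check that algebra is with exact rational arithmetic. The numbers below (V = $1, C = $0.75) are the in-class figures; everything else is an illustrative sketch, not course material.

```python
from fractions import Fraction

# In-class numbers: prize V = $1, per-period fighting cost C = $0.75.
V = Fraction(1)
C = Fraction(3, 4)

# B fights with probability P chosen to make A indifferent:
# payoff(fight) = -C*P + V*(1 - P)  must equal  payoff(quit) = 0.
P = V / (V + C)
fight = -C * P + V * (1 - P)

assert fight == 0   # A is exactly indifferent, so A is willing to mix
print(P)  # 4/7
```

With these numbers, P = V / (V + C) = 4/7, and the expected payoff from fighting comes out to exactly 0, the same as quitting.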
If you remember the game of Hawk Dove that we saw just before the mid-term–it was a game we looked at when we looked at evolution–this is essentially the same game more or less as that game, and that
you’ll notice it’s the same kind of mixture we’ve got here. So P = V / [V + C] which means 1 - P = C / [V + C]. I’m leaving it up there for a bit hoping that one of the T.A.’s is just going to do my
algebra for me. I think that’s right though. So this game is symmetric so we could do the same for B but we’ll find the same thing: it’s a symmetric game. So the mixed strategy equilibrium, the mixed
Nash equilibrium in this game has both mix, both fight with probability equal to V / [V + C]. Now this is good news, because at least we are getting some fighting going on, but we need to do more.
We need to take this Nash equilibrium we’ve just found, which is a Nash equilibrium in the second sub-game. It’s a Nash equilibrium in the sub-game way up here. And we need to roll back the payoffs
from this sub-game, the equilibrium payoffs from this sub-game into the first stage. That’s our method. How are we going to do that? Well we better first of all figure out what those payoffs are. So
what are the payoffs in this equilibrium? The payoffs in this mixed Nash equilibrium are what? Anyone see what the payoffs are going to be if they’re both playing this mix? Well presumably the payoff
from fight and the payoff from quit must be the same, is that right? So we may as well choose the easier one.
So I claim that the payoff from quit is 0 x P + 0 x [1 - P], which is 0 x V / [V + C] + 0 x C / [V + C], but that’s equal to what? 0, okay good. So it’s got to be the case (kind of conveniently) that if
they do play this mixed strategy equilibrium in stage 2, the payoff they’ll get from playing it is 0. That’s going to make life a little bit easier later on. That’s our new equilibrium in the second
sub-game. Now let’s roll that back to the first game.
Here’s our first game again, and everything about this is correct except what’s down here. So let’s get rid of what’s down here. Our analysis from before is more or less still intact. It’s still the
case, that if they both quit they’ll get 0. If they (Quit, fight) they’ll get (0, V); or (V, 0) and it’s still the case if they both fight they’ll both incur costs of C and they’ll both then get
stage 2 continuation Nash payoffs. Is that right?
But now instead of those continuation Nash payoffs being (V, 0) or (0, V), those continuation Nash payoffs are going to be what? They’re going to be 0. So what we’re going to do here is we’re going
to backward induct, or roll back, those zero payoffs and come up with the corresponding matrix to describe the first stage of the game. Here it is. Fight, Quit–I’ll try to get it right without Jake
having to correct me this time–little f, little q. This is A. This is B. And the payoffs here are (0, 0) here; (0, V); (V, 0) just as before; and, in this box now, we’ve got -C + 0 and -C + 0. So
it’s exactly the same box we saw before, but now the continuation payoffs are just 0.
Again, what is this? This is for the Nash equilibrium–let’s just say for the mixed Nash equilibrium in period 2. Now what I want to do is I want to find the mixed equilibrium in period 1. We found
the mixed equilibrium in period 2. Now I want to find the mixed equilibrium in period 1. So what I could do here is I could spend a lot of time. I could put in a P and a 1 - P and I could work out
what mix of Player B will make A indifferent. I could work out what mix of Player A would make B indifferent. But has anybody noticed something about this matrix? What do you notice about this
matrix? Somebody help me out? Somebody’s got to help me out here. Tell me something about this matrix. What’s true about this matrix?
Student: It’s the same as the one above.
Professor Ben Polak: It’s the same as the one above. The matrix I just drew, when I rolled back the payoffs is exactly the same matrix that I had here. It’s exactly the same matrix. So we already
know what the mixed strategy equilibrium is in this. The mixed Nash equilibrium in this matrix is both fight with probability P = V / [V + C]. So now we’re ready to show our new sub-game perfect
equilibrium, let’s drag it down.
Here’s our whole game, we found the pure SPEs but now we’re ready to find the mixed SPE. The mixed sub-game perfect equilibrium has Player A–before I do that let me just give this P a name. Let me
call this P, P*. So V / [V + C], let’s call it P*. So the mixed sub-game perfect equilibrium has Player 1 mixing, fighting with probability of P* in the first stage; and in the second stage, again
mixing, fighting with probability of P*. So this is Player 1 and Player 2 does exactly the same thing. While we’re here, what’s the expected payoff for each player if they’re playing this mixed
sub-game perfect equilibrium? It’s 0–the payoff from this–the expected payoff is 0.
So now we’re actually getting somewhere, now we’re really getting somewhere. So let’s just take a deep breath and see where we are. We broke this game down, this complicated game we played in class,
that conceivably–for example, when New York is playing New Jersey–conceivably it could go on all night. Apparently not when the Texans are playing each other or the football team is playing the
baseball team, but when we have New York and New Jersey it could go on all night. We curtailed it to a two period game, but in a minute, we’re going to go back to the infinite game. In this two
period game, I tried to argue–I’m trying to convince you–that you could get fighting occurring just in equilibrium with absolutely standard rational players: nothing to do with pride, nothing to do
with reputation, nothing to do with the fact that these guys are crazy guys who drunk the water in New York and New Jersey, God help them.
You can get fighting with ordinary people in equilibrium. What we’ve shown is the way in which you can get fighting is in a mixed strategy equilibrium. In each period of the game, people fight with
probability P. That’s just enough fight to give the other side an incentive to quit and just enough probability of the other side quitting to give the other side an incentive to fight; just exactly
enough. If they play that equilibrium in every period, there’s some chance of the game ending but with probability P, the game goes forward to the next period. So you could potentially have fights
for two periods.
By the way, with what probability would there be a fight in both periods? That’s a good homework question, I won’t answer it here, you can work it out at home. Should we do it here? Anybody want to
tell me? So okay, with what probability do we get a fight in the first period; a real fight, a fight involving both players? We need both players to fight, each are fighting with probability of P, so
the probability of both of them fighting is what? P². So to get a fight in the first period, the probability is P². To get a fight in both periods is what then? P⁴. But we get a fight with
probability of P⁴ going through. We get fighting in equilibrium. Moreover, we get some very intuitive things that we already learned in the Hawk-Dove game. The probability of fight–so in this
equilibrium–the probability of fights occurring goes up as V goes up. So the prize gets bigger, you’re more likely to see fights occur: that seems right. It goes down in C. So the probability of
fights occurring goes up in the size of the prize–that seems intuitively right–and down in the cost of fighting.
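Those comparative statics are easy to confirm numerically using the P* = V / (V + C) mix derived above. The function names below are illustrative choices, not anything from the course.

```python
def p_star(V, C):
    # Equilibrium fight probability derived in the lecture: P* = V / (V + C)
    return V / (V + C)

def prob_fight_both_periods(V, C):
    # A real fight needs BOTH players fighting: probability P*^2 in one
    # period, hence P*^4 for fights occurring in both periods.
    return p_star(V, C) ** 4

# A bigger prize V makes fights more likely...
assert prob_fight_both_periods(2, 1) > prob_fight_both_periods(1, 1)
# ...and a bigger cost C makes them less likely.
assert prob_fight_both_periods(1, 2) < prob_fight_both_periods(1, 1)

print(round(prob_fight_both_periods(1, 0.75), 3))  # about 0.107 for V=$1, C=$0.75
```

So with the in-class numbers, roughly one game in ten would see real fighting in both periods.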
Chapter 5. Wars of Attrition: Generalization [01:06:54]
Now, okay, that’s reasonable, but I claimed I could show you this not in a two period game, but in an infinite period game. So let me spend the last five minutes taking us to infinite period games.
So everybody take a deep breath. We’re now going to consider something, we’ve never done before. We’re going to consider a game that could go on forever. It could go on forever. The way we’re going
to do that is to use the following idea, the following picture. So I can’t really draw a true tree for the infinite period game. The reason I can’t draw a true tree for the infinite period game is:
(1) I would run out of chalk; and (2) I’d run into lunch time. But you can imagine what it looks like. It looks like this, roughly speaking.
The infinite period game looks something like this. And then it goes again, and then it goes again, and so on, and it would go right the way through the board and work right the way across whatever
street that is, right across campus. That’s what the infinite tree would look like, so I clearly can’t really analyze that object. But what I want to show you is that we can still solve this game
even though it’s an infinite period game. How are we going to do that? Let’s look at a particular stage. Let’s call the stage Stage 4,503 whatever that number was: 4,503, whatever it was. So here is
the stage, this arbitrary stage, and the tree for this arbitrary stage looks like this.
What I’m going to do is: this is Stage 4,503, and what I’m going to add to this is that before you get into this stage, you’re going to incur sunk costs. If you go on playing after this stage then
you’re going to get continuation values. So going into the beginning of the game, you’ve incurred some sunk costs and if you come out on the other side and go on playing, you’re going to play some
equilibrium and get continuation values. But otherwise everything else is the same. We still have (0, 0) here, we still have (0, V) here, we still have (V, 0) here and here we still have -C plus
continuation values and -C plus continuation values. This is something we’ve seen before. This little box is something we’ve seen before.
Essentially we’ve got sunk costs in front, but they’re irrelevant. We’ve got continuation values at the end but we know how to handle them, we just put them into the payoffs. So suppose now that in
the continuation game, people play the mixed strategy that we just found. Suppose that in the continuation game people mixed with probability P: so they fight with P* and quit with probability 1 -
P*. Suppose in the continuation game they’re playing a mixed strategy. In that case, what is the continuation value of the game? What is it? It’s 0 right.
If they’re mixing in the future, they always have the option to quit so it must be that the continuation value is 0. So if they mix in the future then the continuation value is (0, 0). So now let’s
go back to this board. To make this board equivalent to the board above, all I need to do is one thing. I need to add on some sunk costs at the front. I’ve got sunk costs at the front. I’m going to
play the game. And then I’m going to get, instead of stage 2 values, I’m going to get stage–what was it?–4,503 and all stages in the future values in here and here, but otherwise it’s the same thing.
And what’s convenient is: all of those are 0 anyway. Since they’re all 0 anyway, this matrix is still correct, the continuation values are 0 and 0, these are now the continuation values. And so if I
look for a mixed-strategy equilibrium in this game, it’s something I’ve solved already. What’s the mixed strategy equilibrium in this game? Anybody? It’s exactly what we found before. Just as
before, I’m going to mix with probability of V / [V + C].
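That fixed-point logic–each stage looks the same because the continuation value is 0–can also be checked by brute force: simulate the stationary mixed play until someone quits and confirm the average payoff is near 0. A rough sketch, again using the in-class numbers V = 1 and C = 0.75 (the rest is illustrative):

```python
import random

def war_payoff_to_A(V, C, rng, max_rounds=10_000):
    """One simulated war of attrition with both players mixing at P*.

    Each period both players independently fight with probability
    P* = V / (V + C); the game ends as soon as either player quits.
    Returns player A's realized payoff.
    """
    p = V / (V + C)
    payoff = 0.0
    for _ in range(max_rounds):
        a_fights = rng.random() < p
        b_fights = rng.random() < p
        if a_fights and b_fights:
            payoff -= C          # both fought: pay the cost, play another round
        elif a_fights:
            return payoff + V    # B quit while A fought: A takes the prize
        else:
            return payoff        # A quit: A gets nothing more
    return payoff                # essentially never reached

rng = random.Random(42)
n = 200_000
mean = sum(war_payoff_to_A(1.0, 0.75, rng) for _ in range(n)) / n
# The equilibrium continuation value is exactly 0, so the sample
# mean payoff should come out near 0.
assert abs(mean) < 0.02
print(round(mean, 3))
```

The simulation never needs to represent the infinite tree: because the continuation value after any history of fighting is 0, each period is strategically identical, which is exactly the point of the lecture's argument.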
Let’s summarize, we did something today–just now–that we’ve never done before. We’ve looked at an infinite or at least potentially infinite period game: a game that could go on forever. The way in
which we handled the game that could go on forever was what? We noticed two things. We noticed that part of the game that comes before, that part of the game that’s passed already, anything that
happened there is just a sunk cost. It’s irrelevant. It hurts if it’s a cost. It’s nice if it’s a gain. But it’s sunk, you can’t affect it now. Anything in the future can be summarized by the value,
the payoff I’m going to get in the future by playing the equilibrium in the future. In this case, the future meant mixing. Mixing gave me the value of 0. So here I am getting 0 from the future.
Then I can just analyze the game in the stage in which I’m in, just as if it was an ordinary, bog-standard, simultaneous move game. When we did so, in this particular example–we’re going to see more
examples like this after the break–but in this particular example, we found out something quite surprising. This thing we found out was, in these war of attrition settings, there is an equilibrium
with rational players–more than that, common knowledge of rationality: everybody’s rational, everyone knows everyone else is rational–there are equilibria in which, not only people fight but they
could fight forever. In every period they fight with some probability and we got an extra prediction out of it, a prediction that we weren’t expecting.
Let me just give you that prediction, and then we’ll leave the class with that. The extra prediction is this, if we look at these wars of attrition, and we keep track of the time in which the–hang on
guys don’t rush to the back yet–one more thing. If we look at the time in which the games have gone on and keep track of the probability that a war will end at that time. So imagine this is World War
I. You could imagine World War I going for one year, or two years, or three years, or 20 years or whatever. The probability distribution in this war of attrition is going to look like this. In every
period, the probability of continuing is just P*². So, in every period, the chance that–as you get further into the future there’s a greater chance the war will end. You can get very long, very
costly wars; that’s the bad news. The good news is it doesn’t happen very often. I guess we’re all involved in a rather large and costly war right now, so I’ll leave you with that pleasant thought
over Thanksgiving. Have a good break and we’ll see you afterwards.
[end of transcript]
Finitely many delta-interactions with supports on concentric spheres
Using the theory of self-adjoint extensions of symmetric operators, we give the precise mathematical definition of the quantum Hamiltonian describing a finite number of delta interactions with supports on concentric spheres. We also derive its resolvent, describe its spectral properties, and show how this Hamiltonian can be obtained as a norm resolvent limit of a family of local scaled short-range Hamiltonians.
Pub Date: December 1986
Keywords: Delta Function; Hamiltonian Functions; Concentric Spheres; Adjoints; Operators (Mathematics); Quantum Mechanics; Eigenvalues; Thermodynamics and Statistical Physics
Geweke (1992) proposed a convergence diagnostic for Markov chains. This diagnostic is based on a test for equality of the means of the first and last part of a Markov chain (by default the first 10%
and the last 50%). If the samples are drawn from a stationary distribution of the chain, then the two means are equal and Geweke's statistic has an asymptotically standard normal distribution.
The test statistic is a standard Z-score: the difference between the two sample means divided by its estimated standard error. The standard error is estimated from the spectral density at zero, and
so takes into account any autocorrelation.
The Z-score is calculated under the assumption that the two parts of the chain are asymptotically independent.
The Geweke.Diagnostic is a univariate diagnostic that is usually applied to each marginal posterior distribution. A multivariate form is not included. By chance alone due to multiple independent
tests, 5% of the marginal posterior distributions should appear non-stationary when stationarity exists. Assessing multivariate convergence is difficult.
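As a rough sketch of the statistic described above (not the LaplacesDemon implementation): the version below estimates each mean's standard error naively under an i.i.d. assumption, whereas Geweke's actual estimator uses the spectral density at frequency zero, which corrects for autocorrelation. The function name and defaults here are illustrative.

```python
import math
import random
from statistics import fmean, variance

def geweke_z(chain, first=0.1, last=0.5):
    """Z-score comparing the mean of the first `first` fraction of the
    chain with the mean of the last `last` fraction.

    Simplification: the standard error is the naive i.i.d. estimate
    (sample variance over n); Geweke's statistic instead uses the
    spectral density at zero to account for autocorrelation.
    """
    n = len(chain)
    head = chain[: int(first * n)]      # first 10% by default
    tail = chain[n - int(last * n):]    # last 50% by default
    se2 = variance(head) / len(head) + variance(tail) / len(tail)
    return (fmean(head) - fmean(tail)) / math.sqrt(se2)

rng = random.Random(0)
# Draws from a fixed distribution: early and late means agree, |Z| is small.
stationary = [rng.gauss(0, 1) for _ in range(20_000)]
# A drifting chain: the early and late means differ, |Z| is large.
drifting = [5 * i / 20_000 + rng.gauss(0, 1) for i in range(20_000)]

print(round(geweke_z(stationary), 2), round(geweke_z(drifting), 2))
```

For the stationary chain the score behaves roughly like a standard normal draw, while the drifting chain produces a Z-score far outside any plausible normal range, flagging non-stationarity.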
The torch package contains data structures for multi-dimensional tensors and defines mathematical operations over these tensors. Additionally, it provides many utilities for efficient serialization
of Tensors and arbitrary types, and other useful utilities.
It has a CUDA counterpart, that enables you to run your tensor computations on an NVIDIA GPU with compute capability >= 3.0.
is_tensor Returns True if obj is a PyTorch tensor.
is_storage Returns True if obj is a PyTorch storage object.
is_complex Returns True if the data type of input is a complex data type, i.e., one of torch.complex64 and torch.complex128.
is_conj Returns True if the input is a conjugated tensor, i.e. its conjugate bit is set to True.
is_floating_point Returns True if the data type of input is a floating point data type i.e., one of torch.float64, torch.float32, torch.float16, and torch.bfloat16.
is_nonzero Returns True if the input is a single element tensor which is not equal to zero after type conversions.
set_default_dtype Sets the default floating point dtype to d.
get_default_dtype Get the current default floating point torch.dtype.
set_default_device Sets the default torch.Tensor to be allocated on device.
get_default_device Gets the default torch.Tensor to be allocated on device
numel Returns the total number of elements in the input tensor.
set_printoptions Set options for printing.
set_flush_denormal Disables denormal floating numbers on CPU.
Creation Ops¶
tensor Constructs a tensor with no autograd history (also known as a "leaf tensor", see Autograd mechanics) by copying data.
sparse_coo_tensor Constructs a sparse tensor in COO(rdinate) format with specified values at the given indices.
sparse_csr_tensor Constructs a sparse tensor in CSR (Compressed Sparse Row) with specified values at the given crow_indices and col_indices.
sparse_csc_tensor Constructs a sparse tensor in CSC (Compressed Sparse Column) with specified values at the given ccol_indices and row_indices.
sparse_bsr_tensor Constructs a sparse tensor in BSR (Block Compressed Sparse Row)) with specified 2-dimensional blocks at the given crow_indices and col_indices.
sparse_bsc_tensor Constructs a sparse tensor in BSC (Block Compressed Sparse Column)) with specified 2-dimensional blocks at the given ccol_indices and row_indices.
asarray Converts obj to a tensor.
as_tensor Converts data into a tensor, sharing data and preserving autograd history if possible.
as_strided Create a view of an existing torch.Tensor input with specified size, stride and storage_offset.
from_file Creates a CPU tensor with a storage backed by a memory-mapped file.
from_numpy Creates a Tensor from a numpy.ndarray.
from_dlpack Converts a tensor from an external library into a torch.Tensor.
frombuffer Creates a 1-dimensional Tensor from an object that implements the Python buffer protocol.
zeros Returns a tensor filled with the scalar value 0, with the shape defined by the variable argument size.
zeros_like Returns a tensor filled with the scalar value 0, with the same size as input.
ones Returns a tensor filled with the scalar value 1, with the shape defined by the variable argument size.
ones_like Returns a tensor filled with the scalar value 1, with the same size as input.
arange Returns a 1-D tensor of size $\left\lceil \frac{\text{end} - \text{start}}{\text{step}} \right\rceil$ with values from the interval [start, end) taken with common difference step beginning from start.
range Returns a 1-D tensor of size $\left\lfloor \frac{\text{end} - \text{start}}{\text{step}} \right\rfloor + 1$ with values from start to end with step step.
linspace Creates a one-dimensional tensor of size steps whose values are evenly spaced from start to end, inclusive.
logspace Creates a one-dimensional tensor of size steps whose values are evenly spaced from ${\text{base}}^{\text{start}}$ to ${\text{base}}^{\text{end}}$, inclusive, on a logarithmic scale with base base.
eye Returns a 2-D tensor with ones on the diagonal and zeros elsewhere.
empty Returns a tensor filled with uninitialized data.
empty_like Returns an uninitialized tensor with the same size as input.
empty_strided Creates a tensor with the specified size and stride and filled with undefined data.
full Creates a tensor of size size filled with fill_value.
full_like Returns a tensor with the same size as input filled with fill_value.
quantize_per_tensor Converts a float tensor to a quantized tensor with given scale and zero point.
quantize_per_channel Converts a float tensor to a per-channel quantized tensor with given scales and zero points.
dequantize Returns an fp32 Tensor by dequantizing a quantized Tensor.
complex Constructs a complex tensor with its real part equal to real and its imaginary part equal to imag.
polar Constructs a complex tensor whose elements are Cartesian coordinates corresponding to the polar coordinates with absolute value abs and angle angle.
heaviside Computes the Heaviside step function for each element in input.
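A few of the creation ops above in action (a quick sketch; the shapes and values follow directly from the descriptions):

```python
import torch

a = torch.tensor([[1., 2.], [3., 4.]])       # copies data, no autograd history
z = torch.zeros_like(a)                      # same shape as a, filled with 0
r = torch.arange(0, 10, 2)                   # tensor([0, 2, 4, 6, 8])
l = torch.linspace(0, 1, steps=5)            # 5 evenly spaced values, inclusive
e = torch.eye(3)                             # 3x3 identity matrix
c = torch.complex(torch.tensor(3.), torch.tensor(4.))  # the complex scalar 3+4j
print(r)
print(c.abs())                               # tensor(5.)
```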
Indexing, Slicing, Joining, Mutating Ops¶
adjoint Returns a view of the tensor conjugated and with the last two dimensions transposed.
argwhere Returns a tensor containing the indices of all non-zero elements of input.
cat Concatenates the given sequence of tensors in tensors in the given dimension.
concat Alias of torch.cat().
concatenate Alias of torch.cat().
conj Returns a view of input with a flipped conjugate bit.
chunk Attempts to split a tensor into the specified number of chunks.
dsplit Splits input, a tensor with three or more dimensions, into multiple tensors depthwise according to indices_or_sections.
column_stack Creates a new tensor by horizontally stacking the tensors in tensors.
dstack Stack tensors in sequence depthwise (along third axis).
gather Gathers values along an axis specified by dim.
hsplit Splits input, a tensor with one or more dimensions, into multiple tensors horizontally according to indices_or_sections.
hstack Stack tensors in sequence horizontally (column wise).
index_add See index_add_() for function description.
index_copy See index_add_() for function description.
index_reduce See index_reduce_() for function description.
index_select Returns a new tensor which indexes the input tensor along dimension dim using the entries in index which is a LongTensor.
masked_select Returns a new 1-D tensor which indexes the input tensor according to the boolean mask mask which is a BoolTensor.
movedim Moves the dimension(s) of input at the position(s) in source to the position(s) in destination.
moveaxis Alias for torch.movedim().
narrow Returns a new tensor that is a narrowed version of input tensor.
narrow_copy Same as Tensor.narrow() except this returns a copy rather than shared storage.
permute Returns a view of the original tensor input with its dimensions permuted.
reshape Returns a tensor with the same data and number of elements as input, but with the specified shape.
row_stack Alias of torch.vstack().
select Slices the input tensor along the selected dimension at the given index.
scatter Out-of-place version of torch.Tensor.scatter_()
diagonal_scatter Embeds the values of the src tensor into input along the diagonal elements of input, with respect to dim1 and dim2.
select_scatter Embeds the values of the src tensor into input at the given index.
slice_scatter Embeds the values of the src tensor into input at the given dimension.
scatter_add Out-of-place version of torch.Tensor.scatter_add_()
scatter_reduce Out-of-place version of torch.Tensor.scatter_reduce_()
split Splits the tensor into chunks.
squeeze Returns a tensor with all specified dimensions of input of size 1 removed.
stack Concatenates a sequence of tensors along a new dimension.
swapaxes Alias for torch.transpose().
swapdims Alias for torch.transpose().
t Expects input to be <= 2-D tensor and transposes dimensions 0 and 1.
take Returns a new tensor with the elements of input at the given indices.
take_along_dim Selects values from input at the 1-dimensional indices from indices along the given dim.
tensor_split Splits a tensor into multiple sub-tensors, all of which are views of input, along dimension dim according to the indices or number of sections specified by indices_or_sections.
tile Constructs a tensor by repeating the elements of input.
transpose Returns a tensor that is a transposed version of input.
unbind Removes a tensor dimension.
unravel_index Converts a tensor of flat indices into a tuple of coordinate tensors that index into an arbitrary tensor of the specified shape.
unsqueeze Returns a new tensor with a dimension of size one inserted at the specified position.
vsplit Splits input, a tensor with two or more dimensions, into multiple tensors vertically according to indices_or_sections.
vstack Stack tensors in sequence vertically (row wise).
where Return a tensor of elements selected from either input or other, depending on condition.
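A minimal sketch of the most common indexing, joining, and mutating ops listed above:

```python
import torch

x = torch.arange(6).reshape(2, 3)            # tensor([[0, 1, 2], [3, 4, 5]])
print(torch.cat([x, x], dim=0).shape)        # torch.Size([4, 3]) -- existing dim
print(torch.stack([x, x], dim=0).shape)      # torch.Size([2, 2, 3]) -- new dim
print(torch.where(x > 2, x, torch.zeros_like(x)))  # keeps values > 2, else 0
print(torch.squeeze(torch.ones(1, 3, 1)).shape)    # torch.Size([3])
print(torch.index_select(x, 1, torch.tensor([0, 2])))  # columns 0 and 2
```

Note the difference between cat (concatenates along an existing dimension) and stack (inserts a new dimension first).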
Accelerators¶
Within the PyTorch repo, we define an “Accelerator” as a torch.device that is used alongside a CPU to speed up computation. These devices use an asynchronous execution scheme, with torch.Stream and torch.Event as their main way to perform synchronization. We also assume that only one such accelerator is available at once on a given host, which lets us use the current accelerator as the default device for relevant concepts such as pinned memory, Stream device_type, FSDP, etc.
As of today, the accelerator devices are (in no particular order) “CUDA”, “MTIA”, “XPU”, and PrivateUse1 (covering many devices not in the PyTorch repo itself).
Stream An in-order queue that executes its tasks asynchronously in first-in-first-out (FIFO) order.
Event Query and record Stream status to identify or control dependencies across Stream and measure timing.
Generator Creates and returns a generator object that manages the state of the algorithm which produces pseudo random numbers.
Random sampling¶
seed Sets the seed for generating random numbers to a non-deterministic random number on all devices.
manual_seed Sets the seed for generating random numbers on all devices.
initial_seed Returns the initial seed for generating random numbers as a Python long.
get_rng_state Returns the random number generator state as a torch.ByteTensor.
set_rng_state Sets the random number generator state.
torch.default_generator Returns the default CPU torch.Generator.
bernoulli Draws binary random numbers (0 or 1) from a Bernoulli distribution.
multinomial Returns a tensor where each row contains num_samples indices sampled from the multinomial (a stricter definition would be multivariate; refer to torch.distributions.multinomial.Multinomial for more details) probability distribution located in the corresponding row of tensor input.
normal Returns a tensor of random numbers drawn from separate normal distributions whose mean and standard deviation are given.
poisson Returns a tensor of the same size as input with each element sampled from a Poisson distribution with rate parameter given by the corresponding element in input, i.e., $\text{out}_i \sim \text{Poisson}(\text{input}_i)$.
rand Returns a tensor filled with random numbers from a uniform distribution on the interval $[0, 1)$
rand_like Returns a tensor with the same size as input that is filled with random numbers from a uniform distribution on the interval $[0, 1)$.
randint Returns a tensor filled with random integers generated uniformly between low (inclusive) and high (exclusive).
randint_like Returns a tensor with the same shape as Tensor input filled with random integers generated uniformly between low (inclusive) and high (exclusive).
randn Returns a tensor filled with random numbers from a normal distribution with mean 0 and variance 1 (also called the standard normal distribution).
randn_like Returns a tensor with the same size as input that is filled with random numbers from a normal distribution with mean 0 and variance 1.
randperm Returns a random permutation of integers from 0 to n - 1.
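The sampling functions above can be made reproducible by seeding first; a quick sketch:

```python
import torch

torch.manual_seed(0)                         # reproducible draws on all devices
u = torch.rand(3)                            # uniform on [0, 1)
n = torch.randn(2, 2)                        # standard normal, mean 0 / variance 1
k = torch.randint(0, 10, (4,))               # integers in [0, 10)
p = torch.randperm(5)                        # a permutation of 0..4
b = torch.bernoulli(torch.full((3,), 0.5))   # 0/1 coin flips with p = 0.5
print(u, k, p, sep="\n")
```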
In-place random sampling¶
There are a few more in-place random sampling functions defined on Tensors as well; refer to their documentation for details.
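For illustration, the in-place variants are trailing-underscore Tensor methods that overwrite an existing tensor's contents:

```python
import torch

t = torch.empty(4)
t.uniform_(0, 1)          # fill in place with draws from U[0, 1)
t.normal_(mean=0, std=1)  # overwrite with standard normal draws
t.random_(0, 10)          # overwrite with integers in [0, 10) (stored as floats here)
t.bernoulli_(0.5)         # overwrite with 0/1 coin flips
print(t)
```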
Quasi-random sampling¶
quasirandom.SobolEngine The torch.quasirandom.SobolEngine is an engine for generating (scrambled) Sobol sequences.
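A minimal SobolEngine sketch: draw(n) returns an (n, dimension) tensor of quasi-random points in the unit hypercube:

```python
import torch

engine = torch.quasirandom.SobolEngine(dimension=2, scramble=True, seed=0)
points = engine.draw(8)          # 8 scrambled Sobol points in [0, 1)^2
print(points.shape)              # torch.Size([8, 2])
```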
Serialization¶
save Saves an object to a disk file.
load Loads an object saved with torch.save() from a file.
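A save/load round trip, sketched with a temporary file (loading a plain tensor works with the default settings; the weights_only default varies by PyTorch version):

```python
import os
import tempfile

import torch

t = torch.arange(5)
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "t.pt")
    torch.save(t, path)               # serializes the tensor to disk
    loaded = torch.load(path)         # restores an equal tensor
print(torch.equal(t, loaded))
```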
Parallelism¶
get_num_threads Returns the number of threads used for parallelizing CPU operations.
set_num_threads Sets the number of threads used for intraop parallelism on CPU.
get_num_interop_threads Returns the number of threads used for inter-op parallelism on CPU (e.g. in JIT interpreter).
set_num_interop_threads Sets the number of threads used for interop parallelism (e.g. in JIT interpreter) on CPU.
Locally disabling gradient computation¶
The context managers torch.no_grad(), torch.enable_grad(), and torch.set_grad_enabled() are helpful for locally disabling and enabling gradient computation. See Locally disabling gradient computation
for more details on their usage. These context managers are thread local, so they won’t work if you send work to another thread using the threading module, etc.
>>> x = torch.zeros(1, requires_grad=True)
>>> with torch.no_grad():
...     y = x * 2
>>> y.requires_grad
False
>>> is_train = False
>>> with torch.set_grad_enabled(is_train):
...     y = x * 2
>>> y.requires_grad
False
>>> torch.set_grad_enabled(True)  # this can also be used as a function
>>> y = x * 2
>>> y.requires_grad
True
>>> torch.set_grad_enabled(False)
>>> y = x * 2
>>> y.requires_grad
False
no_grad Context-manager that disables gradient calculation.
enable_grad Context-manager that enables gradient calculation.
autograd.grad_mode.set_grad_enabled Context-manager that sets gradient calculation on or off.
is_grad_enabled Returns True if grad mode is currently enabled.
autograd.grad_mode.inference_mode Context-manager that enables or disables inference mode.
is_inference_mode_enabled Returns True if inference mode is currently enabled.
Math operations¶
inf A floating-point positive infinity. Alias for math.inf.
nan A floating-point “not a number” value. This value is not a legal number. Alias for math.nan.
Pointwise Ops¶
abs Computes the absolute value of each element in input.
absolute Alias for torch.abs()
acos Computes the inverse cosine of each element in input.
arccos Alias for torch.acos().
acosh Returns a new tensor with the inverse hyperbolic cosine of the elements of input.
arccosh Alias for torch.acosh().
add Adds other, scaled by alpha, to input.
addcdiv Performs the element-wise division of tensor1 by tensor2, multiplies the result by the scalar value and adds it to input.
addcmul Performs the element-wise multiplication of tensor1 by tensor2, multiplies the result by the scalar value and adds it to input.
angle Computes the element-wise angle (in radians) of the given input tensor.
asin Returns a new tensor with the arcsine of the elements of input.
arcsin Alias for torch.asin().
asinh Returns a new tensor with the inverse hyperbolic sine of the elements of input.
arcsinh Alias for torch.asinh().
atan Returns a new tensor with the arctangent of the elements of input.
arctan Alias for torch.atan().
atanh Returns a new tensor with the inverse hyperbolic tangent of the elements of input.
arctanh Alias for torch.atanh().
atan2 Element-wise arctangent of $\text{input}_{i} / \text{other}_{i}$ with consideration of the quadrant.
arctan2 Alias for torch.atan2().
bitwise_not Computes the bitwise NOT of the given input tensor.
bitwise_and Computes the bitwise AND of input and other.
bitwise_or Computes the bitwise OR of input and other.
bitwise_xor Computes the bitwise XOR of input and other.
bitwise_left_shift Computes the left arithmetic shift of input by other bits.
bitwise_right_shift Computes the right arithmetic shift of input by other bits.
ceil Returns a new tensor with the ceil of the elements of input, the smallest integer greater than or equal to each element.
clamp Clamps all elements in input into the range [ min, max ].
clip Alias for torch.clamp().
conj_physical Computes the element-wise conjugate of the given input tensor.
copysign Create a new floating-point tensor with the magnitude of input and the sign of other, elementwise.
cos Returns a new tensor with the cosine of the elements of input.
cosh Returns a new tensor with the hyperbolic cosine of the elements of input.
deg2rad Returns a new tensor with each of the elements of input converted from angles in degrees to radians.
div Divides each element of the input input by the corresponding element of other.
divide Alias for torch.div().
digamma Alias for torch.special.digamma().
erf Alias for torch.special.erf().
erfc Alias for torch.special.erfc().
erfinv Alias for torch.special.erfinv().
exp Returns a new tensor with the exponential of the elements of the input tensor input.
exp2 Alias for torch.special.exp2().
expm1 Alias for torch.special.expm1().
fake_quantize_per_channel_affine Returns a new tensor with the data in input fake quantized per channel using scale, zero_point, quant_min and quant_max, across the channel specified by axis.
fake_quantize_per_tensor_affine Returns a new tensor with the data in input fake quantized using scale, zero_point, quant_min and quant_max.
fix Alias for torch.trunc()
float_power Raises input to the power of exponent, elementwise, in double precision.
floor Returns a new tensor with the floor of the elements of input, the largest integer less than or equal to each element.
fmod Applies C++'s std::fmod entrywise.
frac Computes the fractional portion of each element in input.
frexp Decomposes input into mantissa and exponent tensors such that $\text{input} = \text{mantissa} \times 2^{\text{exponent}}$.
gradient Estimates the gradient of a function $g : \mathbb{R}^n \rightarrow \mathbb{R}$ in one or more dimensions using the second-order accurate central differences method and either first- or second-order estimates at the boundaries.
imag Returns a new tensor containing imaginary values of the self tensor.
ldexp Multiplies input by 2 ** other.
lerp Does a linear interpolation of two tensors start (given by input) and end based on a scalar or tensor weight and returns the resulting out tensor.
lgamma Computes the natural logarithm of the absolute value of the gamma function on input.
log Returns a new tensor with the natural logarithm of the elements of input.
log10 Returns a new tensor with the logarithm to the base 10 of the elements of input.
log1p Returns a new tensor with the natural logarithm of (1 + input).
log2 Returns a new tensor with the logarithm to the base 2 of the elements of input.
logaddexp Logarithm of the sum of exponentiations of the inputs.
logaddexp2 Logarithm of the sum of exponentiations of the inputs in base-2.
logical_and Computes the element-wise logical AND of the given input tensors.
logical_not Computes the element-wise logical NOT of the given input tensor.
logical_or Computes the element-wise logical OR of the given input tensors.
logical_xor Computes the element-wise logical XOR of the given input tensors.
logit Alias for torch.special.logit().
hypot Given the legs of a right triangle, return its hypotenuse.
i0 Alias for torch.special.i0().
igamma Alias for torch.special.gammainc().
igammac Alias for torch.special.gammaincc().
mul Multiplies input by other.
multiply Alias for torch.mul().
mvlgamma Alias for torch.special.multigammaln().
nan_to_num Replaces NaN, positive infinity, and negative infinity values in input with the values specified by nan, posinf, and neginf, respectively.
neg Returns a new tensor with the negative of the elements of input.
negative Alias for torch.neg()
nextafter Return the next floating-point value after input towards other, elementwise.
polygamma Alias for torch.special.polygamma().
positive Returns input.
pow Takes the power of each element in input with exponent and returns a tensor with the result.
quantized_batch_norm Applies batch normalization on a 4D (NCHW) quantized tensor.
quantized_max_pool1d Applies a 1D max pooling over an input quantized tensor composed of several input planes.
quantized_max_pool2d Applies a 2D max pooling over an input quantized tensor composed of several input planes.
rad2deg Returns a new tensor with each of the elements of input converted from angles in radians to degrees.
real Returns a new tensor containing real values of the self tensor.
reciprocal Returns a new tensor with the reciprocal of the elements of input
remainder Computes Python's modulus operation entrywise.
round Rounds elements of input to the nearest integer.
rsqrt Returns a new tensor with the reciprocal of the square-root of each of the elements of input.
sigmoid Alias for torch.special.expit().
sign Returns a new tensor with the signs of the elements of input.
sgn This function is an extension of torch.sign() to complex tensors.
signbit Tests if each element of input has its sign bit set or not.
sin Returns a new tensor with the sine of the elements of input.
sinc Alias for torch.special.sinc().
sinh Returns a new tensor with the hyperbolic sine of the elements of input.
softmax Alias for torch.nn.functional.softmax().
sqrt Returns a new tensor with the square-root of the elements of input.
square Returns a new tensor with the square of the elements of input.
sub Subtracts other, scaled by alpha, from input.
subtract Alias for torch.sub().
tan Returns a new tensor with the tangent of the elements of input.
tanh Returns a new tensor with the hyperbolic tangent of the elements of input.
true_divide Alias for torch.div() with rounding_mode=None.
trunc Returns a new tensor with the truncated integer values of the elements of input.
xlogy Alias for torch.special.xlogy().
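A sketch of a few pointwise ops from the table above; each returns a new tensor the same shape as its input:

```python
import torch

x = torch.tensor([-1.5, 0.0, 2.5, float('nan')])
print(torch.clamp(x, min=-1, max=1))         # values forced into [-1, 1]
print(torch.nan_to_num(x, nan=0.0))          # NaN replaced by 0
s = torch.add(torch.ones(3), torch.ones(3), alpha=10)  # 1 + 10*1 = 11
print(s)
print(torch.floor(x[:3]), torch.ceil(x[:3]))
```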
Reduction Ops¶
argmax Returns the indices of the maximum value of all elements in the input tensor.
argmin Returns the indices of the minimum value(s) of the flattened tensor or along a dimension.
amax Returns the maximum value of each slice of the input tensor in the given dimension(s) dim.
amin Returns the minimum value of each slice of the input tensor in the given dimension(s) dim.
aminmax Computes the minimum and maximum values of the input tensor.
all Tests if all elements in input evaluate to True.
any Tests if any element in input evaluates to True.
max Returns the maximum value of all elements in the input tensor.
min Returns the minimum value of all elements in the input tensor.
dist Returns the p-norm of (input - other)
logsumexp Returns the log of summed exponentials of each row of the input tensor in the given dimension dim.
mean Returns the mean value of all elements in the input tensor.
nanmean Computes the mean of all non-NaN elements along the specified dimensions.
median Returns the median of the values in input.
nanmedian Returns the median of the values in input, ignoring NaN values.
mode Returns a namedtuple (values, indices) where values is the mode value of each row of the input tensor in the given dimension dim, i.e. a value which appears most often in that row, and indices is the index location of each mode value found.
norm Returns the matrix norm or vector norm of a given tensor.
nansum Returns the sum of all elements, treating Not a Numbers (NaNs) as zero.
prod Returns the product of all elements in the input tensor.
quantile Computes the q-th quantiles of each row of the input tensor along the dimension dim.
nanquantile This is a variant of torch.quantile() that "ignores" NaN values, computing the quantiles q as if NaN values in input did not exist.
std Calculates the standard deviation over the dimensions specified by dim.
std_mean Calculates the standard deviation and mean over the dimensions specified by dim.
sum Returns the sum of all elements in the input tensor.
unique Returns the unique elements of the input tensor.
unique_consecutive Eliminates all but the first element from every consecutive group of equivalent elements.
var Calculates the variance over the dimensions specified by dim.
var_mean Calculates the variance and mean over the dimensions specified by dim.
count_nonzero Counts the number of non-zero values in the tensor input along the given dim.
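Reduction ops collapse one or more dimensions; a quick sketch (passing dim reduces only along that dimension):

```python
import torch

x = torch.tensor([[1., 2., 3.], [4., 5., 6.]])
print(x.sum())                    # tensor(21.) -- reduces over all elements
print(torch.mean(x, dim=0))       # column means: tensor([2.5, 3.5, 4.5])
print(torch.amax(x, dim=1))       # row maxima: tensor([3., 6.])
vals, idx = torch.median(x, dim=1)  # per-row median values and their indices
print(vals, idx)
print(torch.count_nonzero(torch.tensor([0, 1, 0, 2])))  # tensor(2)
```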
Comparison Ops¶
allclose This function checks if input and other satisfy the condition:
argsort Returns the indices that sort a tensor along a given dimension in ascending order by value.
eq Computes element-wise equality
equal True if two tensors have the same size and elements, False otherwise.
ge Computes $\text{input} \geq \text{other}$ element-wise.
greater_equal Alias for torch.ge().
gt Computes $\text{input} > \text{other}$ element-wise.
greater Alias for torch.gt().
isclose Returns a new tensor with boolean elements representing if each element of input is "close" to the corresponding element of other.
isfinite Returns a new tensor with boolean elements representing if each element is finite or not.
isin Tests if each element of elements is in test_elements.
isinf Tests if each element of input is infinite (positive or negative infinity) or not.
isposinf Tests if each element of input is positive infinity or not.
isneginf Tests if each element of input is negative infinity or not.
isnan Returns a new tensor with boolean elements representing if each element of input is NaN or not.
isreal Returns a new tensor with boolean elements representing if each element of input is real-valued or not.
kthvalue Returns a namedtuple (values, indices) where values is the k th smallest element of each row of the input tensor in the given dimension dim.
le Computes $\text{input} \leq \text{other}$ element-wise.
less_equal Alias for torch.le().
lt Computes $\text{input} < \text{other}$ element-wise.
less Alias for torch.lt().
maximum Computes the element-wise maximum of input and other.
minimum Computes the element-wise minimum of input and other.
fmax Computes the element-wise maximum of input and other.
fmin Computes the element-wise minimum of input and other.
ne Computes $\text{input} \neq \text{other}$ element-wise.
not_equal Alias for torch.ne().
sort Sorts the elements of the input tensor along a given dimension in ascending order by value.
topk Returns the k largest elements of the given input tensor along a given dimension.
msort Sorts the elements of the input tensor along its first dimension in ascending order by value.
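A sketch of the comparison ops, including the distinction between eq (element-wise) and equal (whole-tensor):

```python
import torch

a = torch.tensor([1.0, 2.0, 3.0])
b = torch.tensor([1.0, 2.0001, 4.0])
print(torch.isclose(a, b, atol=1e-3))        # tensor([True, True, False])
vals, idx = torch.topk(torch.tensor([5, 1, 9, 3]), k=2)
print(vals, idx)                             # tensor([9, 5]) tensor([2, 0])
print(torch.eq(a, b))                        # element-wise boolean tensor
print(torch.equal(a, a.clone()))             # True: same size and values
```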
Spectral Ops¶
stft Short-time Fourier transform (STFT).
istft Inverse short time Fourier Transform.
bartlett_window Bartlett window function.
blackman_window Blackman window function.
hamming_window Hamming window function.
hann_window Hann window function.
kaiser_window Computes the Kaiser window with window length window_length and shape parameter beta.
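An STFT/ISTFT round trip, sketched with illustrative parameter choices (n_fft=256, 75% overlap with a Hann window, which satisfies the overlap-add condition for inversion):

```python
import torch

x = torch.randn(1024)                        # a 1-D test signal
win = torch.hann_window(256)
S = torch.stft(x, n_fft=256, hop_length=64, window=win, return_complex=True)
print(S.shape)                               # (n_fft//2 + 1, n_frames) = (129, 17)
x_rec = torch.istft(S, n_fft=256, hop_length=64, window=win, length=1024)
print(torch.allclose(x, x_rec, atol=1e-4))   # round trip recovers the signal
```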
Other Operations¶
atleast_1d Returns a 1-dimensional view of each input tensor with zero dimensions.
atleast_2d Returns a 2-dimensional view of each input tensor with zero dimensions.
atleast_3d Returns a 3-dimensional view of each input tensor with zero dimensions.
bincount Count the frequency of each value in an array of non-negative ints.
block_diag Create a block diagonal matrix from provided tensors.
broadcast_tensors Broadcasts the given tensors according to Broadcasting semantics.
broadcast_to Broadcasts input to the shape shape.
broadcast_shapes Similar to broadcast_tensors() but for shapes.
bucketize Returns the indices of the buckets to which each value in the input belongs, where the boundaries of the buckets are set by boundaries.
cartesian_prod Do cartesian product of the given sequence of tensors.
cdist Computes batched the p-norm distance between each pair of the two collections of row vectors.
clone Returns a copy of input.
combinations Compute combinations of length $r$ of the given tensor.
corrcoef Estimates the Pearson product-moment correlation coefficient matrix of the variables given by the input matrix, where rows are the variables and columns are the observations.
cov Estimates the covariance matrix of the variables given by the input matrix, where rows are the variables and columns are the observations.
cross Returns the cross product of vectors in dimension dim of input and other.
cummax Returns a namedtuple (values, indices) where values is the cumulative maximum of elements of input in the dimension dim.
cummin Returns a namedtuple (values, indices) where values is the cumulative minimum of elements of input in the dimension dim.
cumprod Returns the cumulative product of elements of input in the dimension dim.
cumsum Returns the cumulative sum of elements of input in the dimension dim.
diag If input is a vector (1-D tensor), returns a 2-D square tensor with the elements of input as the diagonal; if input is a matrix (2-D tensor), returns a 1-D tensor with the diagonal elements of input.
diag_embed Creates a tensor whose diagonals of certain 2D planes (specified by dim1 and dim2) are filled by input.
diagflat If input is a vector (1-D tensor), returns a 2-D square tensor with the elements of input as the diagonal; otherwise, returns a 2-D tensor whose diagonal is the flattened input.
diagonal Returns a partial view of input with its diagonal elements with respect to dim1 and dim2 appended as a dimension at the end of the shape.
diff Computes the n-th forward difference along the given dimension.
einsum Sums the product of the elements of the input operands along dimensions specified using a notation based on the Einstein summation convention.
flatten Flattens input by reshaping it into a one-dimensional tensor.
flip Reverse the order of an n-D tensor along given axis in dims.
fliplr Flip tensor in the left/right direction, returning a new tensor.
flipud Flip tensor in the up/down direction, returning a new tensor.
kron Computes the Kronecker product, denoted by $\otimes$, of input and other.
rot90 Rotate an n-D tensor by 90 degrees in the plane specified by dims axis.
gcd Computes the element-wise greatest common divisor (GCD) of input and other.
histc Computes the histogram of a tensor.
histogram Computes a histogram of the values in a tensor.
histogramdd Computes a multi-dimensional histogram of the values in a tensor.
meshgrid Creates grids of coordinates specified by the 1D inputs in tensors.
lcm Computes the element-wise least common multiple (LCM) of input and other.
logcumsumexp Returns the logarithm of the cumulative summation of the exponentiation of elements of input in the dimension dim.
ravel Return a contiguous flattened tensor.
renorm Returns a tensor where each sub-tensor of input along dimension dim is normalized such that the p-norm of the sub-tensor is lower than the value maxnorm.
repeat_interleave Repeat elements of a tensor.
roll Roll the tensor input along the given dimension(s).
searchsorted Finds the indices from the innermost dimension of sorted_sequence such that, if the corresponding values in values were inserted before the indices, when sorted, the order of the corresponding innermost dimension within sorted_sequence would be preserved.
tensordot Returns a contraction of a and b over multiple dimensions.
trace Returns the sum of the elements of the diagonal of the input 2-D matrix.
tril Returns the lower triangular part of the matrix (2-D tensor) or batch of matrices input, the other elements of the result tensor out are set to 0.
tril_indices Returns the indices of the lower triangular part of a row-by-col matrix in a 2-by-N Tensor, where the first row contains row coordinates of all indices and the second row contains column coordinates.
triu Returns the upper triangular part of a matrix (2-D tensor) or batch of matrices input, the other elements of the result tensor out are set to 0.
triu_indices Returns the indices of the upper triangular part of a row-by-col matrix in a 2-by-N Tensor, where the first row contains row coordinates of all indices and the second row contains column coordinates.
unflatten Expands a dimension of the input tensor over multiple dimensions.
vander Generates a Vandermonde matrix.
view_as_real Returns a view of input as a real tensor.
view_as_complex Returns a view of input as a complex tensor.
resolve_conj Returns a new tensor with materialized conjugation if input's conjugate bit is set to True, else returns input.
resolve_neg Returns a new tensor with materialized negation if input's negative bit is set to True, else returns input.
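A sampling of the other operations above, sketched on small tensors:

```python
import torch

print(torch.bincount(torch.tensor([0, 1, 1, 3])))     # tensor([1, 2, 0, 1])
print(torch.bucketize(torch.tensor([1, 5, 9]),
                      torch.tensor([3, 6])))          # bucket indices: tensor([0, 1, 2])
print(torch.cumsum(torch.tensor([1, 2, 3]), dim=0))   # running sum: tensor([1, 3, 6])
print(torch.tril(torch.ones(3, 3)))                   # keep only the lower triangle
gx, gy = torch.meshgrid(torch.arange(2), torch.arange(3), indexing="ij")
print(gx.shape)                                        # torch.Size([2, 3])
```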
BLAS and LAPACK Operations¶
addbmm Performs a batch matrix-matrix product of matrices stored in batch1 and batch2, with a reduced add step (all matrix multiplications get accumulated along the first dimension).
addmm Performs a matrix multiplication of the matrices mat1 and mat2.
addmv Performs a matrix-vector product of the matrix mat and the vector vec.
addr Performs the outer-product of vectors vec1 and vec2 and adds it to the matrix input.
baddbmm Performs a batch matrix-matrix product of matrices in batch1 and batch2.
bmm Performs a batch matrix-matrix product of matrices stored in input and mat2.
chain_matmul Returns the matrix product of the $N$ 2-D tensors.
cholesky Computes the Cholesky decomposition of a symmetric positive-definite matrix $A$ or for batches of symmetric positive-definite matrices.
cholesky_inverse Computes the inverse of a complex Hermitian or real symmetric positive-definite matrix given its Cholesky decomposition.
cholesky_solve Computes the solution of a system of linear equations with complex Hermitian or real symmetric positive-definite lhs given its Cholesky decomposition.
dot Computes the dot product of two 1D tensors.
geqrf This is a low-level function for calling LAPACK's geqrf directly.
ger Alias of torch.outer().
inner Computes the dot product for 1D tensors.
inverse Alias for torch.linalg.inv()
det Alias for torch.linalg.det()
logdet Calculates log determinant of a square matrix or batches of square matrices.
slogdet Alias for torch.linalg.slogdet()
lu Computes the LU factorization of a matrix or batches of matrices A.
lu_solve Returns the LU solve of the linear system $Ax = b$ using the partially pivoted LU factorization of A from lu_factor().
lu_unpack Unpacks the LU decomposition returned by lu_factor() into the P, L, U matrices.
matmul Matrix product of two tensors.
matrix_power Alias for torch.linalg.matrix_power()
matrix_exp Alias for torch.linalg.matrix_exp().
mm Performs a matrix multiplication of the matrices input and mat2.
mv Performs a matrix-vector product of the matrix input and the vector vec.
orgqr Alias for torch.linalg.householder_product().
ormqr Computes the matrix-matrix multiplication of a product of Householder matrices with a general matrix.
outer Outer product of input and vec2.
pinverse Alias for torch.linalg.pinv()
qr Computes the QR decomposition of a matrix or a batch of matrices input, and returns a namedtuple (Q, R) of tensors such that $\text{input} = Q R$ with $Q$ being an orthogonal matrix or batch of orthogonal matrices and $R$ being an upper triangular matrix or batch of upper triangular matrices.
svd Computes the singular value decomposition of either a matrix or batch of matrices input.
svd_lowrank Return the singular value decomposition (U, S, V) of a matrix, batches of matrices, or a sparse matrix $A$ such that $A \approx U \operatorname{diag}(S) V^{\text{H}}$.
pca_lowrank Performs linear Principal Component Analysis (PCA) on a low-rank matrix, batches of such matrices, or sparse matrix.
lobpcg Find the k largest (or smallest) eigenvalues and the corresponding eigenvectors of a symmetric positive definite generalized eigenvalue problem using matrix-free LOBPCG methods.
trapz Alias for torch.trapezoid().
trapezoid Computes the trapezoidal rule along dim.
cumulative_trapezoid Cumulatively computes the trapezoidal rule along dim.
triangular_solve Solves a system of equations with a square upper or lower triangular invertible matrix $A$ and multiple right-hand sides $b$.
vdot Computes the dot product of two 1D vectors along a dimension.
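A few of the operations in the table above can be sketched directly. This is a minimal illustration assuming a recent PyTorch build; the tensor values are made up:

```python
import torch

a = torch.tensor([1.0, 2.0])
b = torch.tensor([3.0, 4.0])

# dot: inner product of two 1D tensors -> 0-dim scalar tensor
d = torch.dot(a, b)            # 1*3 + 2*4 = 11

# outer: outer product of two 1D tensors -> 2x2 matrix
o = torch.outer(a, b)          # [[3, 4], [6, 8]]

# matmul: general matrix product (here 2x2 @ 2x2 identity)
m = torch.matmul(o, torch.eye(2))

# qr: returns Q (orthogonal) and R (upper triangular) with A = Q @ R
A = torch.tensor([[2.0, 0.0], [1.0, 1.0]])
Q, R = torch.linalg.qr(A)
```

Here `torch.linalg.qr` is used in place of the legacy `torch.qr` interface listed above; both compute the same factorization.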
Foreach Operations¶
This API is in beta and subject to future changes. Forward-mode AD is not supported.
_foreach_abs Apply torch.abs() to each Tensor of the input list.
_foreach_abs_ Apply torch.abs() to each Tensor of the input list.
_foreach_acos Apply torch.acos() to each Tensor of the input list.
_foreach_acos_ Apply torch.acos() to each Tensor of the input list.
_foreach_asin Apply torch.asin() to each Tensor of the input list.
_foreach_asin_ Apply torch.asin() to each Tensor of the input list.
_foreach_atan Apply torch.atan() to each Tensor of the input list.
_foreach_atan_ Apply torch.atan() to each Tensor of the input list.
_foreach_ceil Apply torch.ceil() to each Tensor of the input list.
_foreach_ceil_ Apply torch.ceil() to each Tensor of the input list.
_foreach_cos Apply torch.cos() to each Tensor of the input list.
_foreach_cos_ Apply torch.cos() to each Tensor of the input list.
_foreach_cosh Apply torch.cosh() to each Tensor of the input list.
_foreach_cosh_ Apply torch.cosh() to each Tensor of the input list.
_foreach_erf Apply torch.erf() to each Tensor of the input list.
_foreach_erf_ Apply torch.erf() to each Tensor of the input list.
_foreach_erfc Apply torch.erfc() to each Tensor of the input list.
_foreach_erfc_ Apply torch.erfc() to each Tensor of the input list.
_foreach_exp Apply torch.exp() to each Tensor of the input list.
_foreach_exp_ Apply torch.exp() to each Tensor of the input list.
_foreach_expm1 Apply torch.expm1() to each Tensor of the input list.
_foreach_expm1_ Apply torch.expm1() to each Tensor of the input list.
_foreach_floor Apply torch.floor() to each Tensor of the input list.
_foreach_floor_ Apply torch.floor() to each Tensor of the input list.
_foreach_log Apply torch.log() to each Tensor of the input list.
_foreach_log_ Apply torch.log() to each Tensor of the input list.
_foreach_log10 Apply torch.log10() to each Tensor of the input list.
_foreach_log10_ Apply torch.log10() to each Tensor of the input list.
_foreach_log1p Apply torch.log1p() to each Tensor of the input list.
_foreach_log1p_ Apply torch.log1p() to each Tensor of the input list.
_foreach_log2 Apply torch.log2() to each Tensor of the input list.
_foreach_log2_ Apply torch.log2() to each Tensor of the input list.
_foreach_neg Apply torch.neg() to each Tensor of the input list.
_foreach_neg_ Apply torch.neg() to each Tensor of the input list.
_foreach_tan Apply torch.tan() to each Tensor of the input list.
_foreach_tan_ Apply torch.tan() to each Tensor of the input list.
_foreach_sin Apply torch.sin() to each Tensor of the input list.
_foreach_sin_ Apply torch.sin() to each Tensor of the input list.
_foreach_sinh Apply torch.sinh() to each Tensor of the input list.
_foreach_sinh_ Apply torch.sinh() to each Tensor of the input list.
_foreach_round Apply torch.round() to each Tensor of the input list.
_foreach_round_ Apply torch.round() to each Tensor of the input list.
_foreach_sqrt Apply torch.sqrt() to each Tensor of the input list.
_foreach_sqrt_ Apply torch.sqrt() to each Tensor of the input list.
_foreach_lgamma Apply torch.lgamma() to each Tensor of the input list.
_foreach_lgamma_ Apply torch.lgamma() to each Tensor of the input list.
_foreach_frac Apply torch.frac() to each Tensor of the input list.
_foreach_frac_ Apply torch.frac() to each Tensor of the input list.
_foreach_reciprocal Apply torch.reciprocal() to each Tensor of the input list.
_foreach_reciprocal_ Apply torch.reciprocal() to each Tensor of the input list.
_foreach_sigmoid Apply torch.sigmoid() to each Tensor of the input list.
_foreach_sigmoid_ Apply torch.sigmoid() to each Tensor of the input list.
_foreach_trunc Apply torch.trunc() to each Tensor of the input list.
_foreach_trunc_ Apply torch.trunc() to each Tensor of the input list.
_foreach_zero_ Apply torch.zero() to each Tensor of the input list.
compiled_with_cxx11_abi Returns whether PyTorch was built with _GLIBCXX_USE_CXX11_ABI=1
result_type Returns the torch.dtype that would result from performing an arithmetic operation on the provided input tensors.
can_cast Determines if a type conversion is allowed under PyTorch casting rules described in the type promotion documentation.
promote_types Returns the torch.dtype with the smallest size and scalar kind that is not smaller nor of lower kind than either type1 or type2.
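The three type-introspection helpers just listed can be sketched as follows (assuming PyTorch's standard type-promotion rules):

```python
import torch

# result_type: dtype produced by an arithmetic op on the given tensors
rt = torch.result_type(torch.tensor([1], dtype=torch.int32),
                       torch.tensor([1.0], dtype=torch.float32))

# promote_types: smallest dtype not smaller nor of lower kind than either input
pt = torch.promote_types(torch.int64, torch.float32)

# can_cast: whether a conversion is allowed under PyTorch casting rules
ok = torch.can_cast(torch.int32, torch.float64)   # widening to float: allowed
no = torch.can_cast(torch.float32, torch.int32)   # float -> int: not allowed
```

Note that promotion prefers the floating-point *kind* over integer width, which is why int64 combined with float32 yields float32.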
use_deterministic_algorithms Sets whether PyTorch operations must use "deterministic" algorithms.
are_deterministic_algorithms_enabled Returns True if the global deterministic flag is turned on.
is_deterministic_algorithms_warn_only_enabled Returns True if the global deterministic flag is set to warn only.
set_deterministic_debug_mode Sets the debug mode for deterministic operations.
get_deterministic_debug_mode Returns the current value of the debug mode for deterministic operations.
set_float32_matmul_precision Sets the internal precision of float32 matrix multiplications.
get_float32_matmul_precision Returns the current value of float32 matrix multiplication precision.
set_warn_always When this flag is False (default) then some PyTorch warnings may only appear once per process.
get_device_module Returns the module associated with a given device (e.g., torch.device('cuda'), "mtia:0", "xpu", ...).
is_warn_always_enabled Returns True if the global warn_always flag is turned on.
vmap vmap is the vectorizing map; vmap(func) returns a new function that maps func over some dimension of the inputs.
_assert A wrapper around Python's assert which is symbolically traceable.
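As a minimal vmap sketch (assuming PyTorch ≥ 2.0, where torch.vmap is exposed at the top level): vmap lifts a function written for single samples to operate over a batch dimension.

```python
import torch

def pairwise_dot(x, y):
    # written for single 1D vectors
    return torch.dot(x, y)

xs = torch.tensor([[1.0, 0.0], [0.0, 2.0]])   # batch of 2 vectors
ys = torch.tensor([[3.0, 4.0], [5.0, 6.0]])

# vmap maps pairwise_dot over dim 0 of both inputs
batched = torch.vmap(pairwise_dot)(xs, ys)
```

The result is one dot product per row of the batch, without writing an explicit loop.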
Symbolic Numbers¶
class torch.SymInt(node)[source]¶
Like an int (including magic methods), but redirects all operations on the wrapped node. This is used in particular to symbolically record operations in the symbolic shape workflow.
Represent this int as an exact integer ratio. Return type: Tuple[SymInt, int].
class torch.SymFloat(node)[source]¶
Like a float (including magic methods), but redirects all operations on the wrapped node. This is used in particular to symbolically record operations in the symbolic shape workflow.
class torch.SymBool(node)[source]¶
Like a bool (including magic methods), but redirects all operations on the wrapped node. This is used in particular to symbolically record operations in the symbolic shape workflow.
Unlike regular bools, regular boolean operators will force extra guards instead of symbolically evaluating. Use the bitwise operators instead to handle this.
sym_float SymInt-aware utility for float casting.
sym_int SymInt-aware utility for int casting.
sym_max SymInt-aware utility for max which avoids branching on a < b.
sym_min SymInt-aware utility for min().
sym_not SymInt-aware utility for logical negation.
sym_sum N-ary add which is faster to compute for long lists than iterated binary addition.
Export Path¶
This feature is a prototype and may have compatibility breaking changes in the future.
export
Control Flow¶
This feature is a prototype and may have compatibility breaking changes in the future.
cond Conditionally applies true_fn or false_fn.
compile Optimizes given model/function using TorchDynamo and specified backend.
Workshop Dedicated to Thomas A. Henzinger for His 60th Birthday
Invited speakers
Thomas Henzinger: Two Successes, Two Failures, and Two Predictions
Rajeev Alur: Time for Temporal Logic
Tom Henzinger’s early work developed theoretical foundations of real-time temporal logics. One of his first papers, co-authored with me and published at FOCS 1989, was titled “A really temporal
logic”. Research on real-time temporal logics remains vibrant even today motivated by applications to robotics and cyber-physical systems and the new opportunities to translate informal requirements
to formal specifications. In this talk, I will first reminisce about our joyful and productive collaboration on real-time temporal logics. I will then give an overview of current research in my group
which explores why and how temporal logic can be used to specify task requirements in reinforcement learning.
Christel Baier: From classical to non-classical stochastic shortest path problems
The classical stochastic shortest path (SSP) problem asks to find a policy for traversing a weighted stochastic graph until reaching a distinguished goal state that minimizes the expected
accumulated weight. The underlying graph model is a finite-state Markov decision process (MDP) with integer weights for its state-action pairs. Prominent results are the existence of optimal
memoryless deterministic policies together with linear programming techniques and value and policy iteration to compute such policies and their values. These results rely on the assumption that the
minimum under all proper policies that reach the goal state almost surely exists. Early work on the SSP problem goes back to the 1960s-1990s and makes additional assumptions. Complete algorithms
that only require the existence of proper policies combine these techniques with a pre-analysis of end components, an elegant graph-theoretic concept for MDPs that has been introduced by de Alfaro in
the late 1990s. The talk will start with a summary of these results. The second part of the talk presents more recent results for variants of the classical SSP. The conditional and partial SSP drop
the assumption that the goal state must be reached almost surely and ask to minimize the expected accumulated weight under the condition that the goal will be reached (conditional SSP) resp. assign
value 0 to all paths that do not reach the goal state (partial SSP). Other variants take into account aspects of risk-awareness, e.g., by studying the conditional value-at-risk or the
variance-penalized expected accumulated weight. While the classical SSP problem is solvable in polynomial time, such non-classical SSP problems are computationally much harder. For the general case,
the decidability status of such non-classical SSP problems is unknown, but they have been shown to be at least as hard as the Skolem problem (and even as the positivity problem). However, for
non-positive weights, the conditional, partial and variance-penalized SSP problems are solvable in exponential time, with a PSPACE lower bound for acyclic MDPs.
Javier Esparza: Back to the Future: A Fresh Look at Linear Temporal Logic
In the late 1970s, Amir Pnueli introduced Linear Temporal Logic (LTL) into computer science as a framework for specifying and verifying concurrent programs. During the 1980s, in collaboration with
Zohar Manna and others, he designed a deductive verification framework for LTL, based on a key Normalization Theorem for LTL.
In 1985 Vardi and Wolper introduced an automata-theoretic approach to model checking LTL. Its success took the logical approach of Manna and Pnueli out of the limelight, and “demoted” LTL to a
specification language, a sort of syntax for automata.
In the last years, together with Jan Kretinsky, Salomon Sickert and Ruben Rubio, we have shown that logic techniques, and in particular a new normalization algorithm, help to translate LTL into
smaller and better automata for modern applications, like probabilistic verification and automatic program synthesis. The talk will present some highlights of this work.
Edward Lee: Generalizing Logical Execution Time
In the Logical Execution Time (LET) principle, concurrent software components interact deterministically, reading their inputs atomically at the start of a task and producing outputs atomically after
a fixed elapsed logical time. In addition to deterministic concurrency, LET programs yield more deterministic timing when they interact with their physical environment through sensors and actuators.
This talk shows through a series of examples that the LET principle can be realized flexibly and generalized using the Lingua Franca coordination language.
Joel Ouaknine: What’s Decidable about Discrete Linear Dynamical Systems?
Discrete linear dynamical systems are an important and fundamental class of infinite-state systems and are ubiquitous in mathematics, physics, engineering, and computer science. They play a key role,
among others, in the formal analysis and verification of program loops, probabilistic and weighted automata, control systems, etc. In this talk, I will present an overview and survey of the state of
the art on the algorithmic analysis of discrete linear dynamical systems, focussing in particular on reachability and model-checking problems.
Shaz Qadeer: A discipline of verified program construction
I have worked on program verification for many years. In this time, I invented many techniques and built a variety of tools, running the gamut from static to dynamic verification. I also had the
privilege of doing my work in the setting of the software industry, which allowed me to make credible attempts to convince software engineers to use these tools in their everyday programming.
Automatic program verification, as broadly understood by the research community, is the goal of analyzing software artifacts automatically with no involvement from the programmer. Over time, my
assessment of the theory of program verification and the practice of software construction led me to a devastating conclusion: progress towards automatic program verification, by itself, is unlikely
to enable productive deployment of verification methods at scale. This conclusion forced me to consider a different goal—verified program construction.
I will talk about the basis for my conclusion and the difference between verified program construction and program verification. I will illustrate the discipline of verified program construction by
discussing a few research efforts whose goals are similar in spirit. Finally, I will talk about Civl, a verifier that could help with the construction of verified concurrent and distributed systems.
Melanie Schmidt - MIT Rising Stars
Melanie Schmidt
Carnegie Mellon University
Position: PostDoc
Rising Stars year of participation: 2015
Melanie Schmidt obtained a master's degree with distinction in computer science (with a minor in mathematics) from TU Dortmund University in 2009. In her undergraduate studies, she focused on network flow theory, a topic that lies in the intersection between theoretical computer science and discrete mathematics. During her PhD, her main focus became clustering algorithms, in particular for large data sets. In 2012, Melanie Schmidt received the Google Anita Borg Memorial Scholarship, which supports women who excel in technology. She graduated with distinction with her PhD thesis on 'Coresets and streaming algorithms for the k-means problem and related clustering objectives' in 2014. She was then awarded a merit scholarship by the German Academic Exchange Service (DAAD) to spend a year as a visiting PostDoc at Carnegie Mellon University in Pittsburgh, where she is visiting Anupam Gupta.
Algorithmic techniques for solving the k-means problem on big data sets
Algorithm theory consists of designing and analyzing methods to solve computational problems. The k-means problem is a computational problem from geometry. The input consists of points from the
d-dimensional Euclidean space, i.e. vectors. The goal is to group these into k groups and to find a representative point for each group. Clustering is a major tool in machine learning: Imagine that
the vectors represent songs in a music collection or handwritten letters. The clustering can show which objects are similar, and the representatives can be used to classify newly arriving objects.
There are many clustering objectives and the k-means objective might be the most popular among them. It is based on the Euclidean distance. The representative of a group is the centroid, i.e. the sum
of the points in the group divided by their number. A grouping is evaluated by computing the squared Euclidean distance of every point to its representative and summing these up. The k-means problem
consists of finding a grouping into k groups that minimizes this cost function.
The algorithmic challenges connected to the k-means problem are numerous. The problem is NP-hard, but it can be solved approximately up to a constant factor. What is the best possible approximation
factor? Can we prove lower bounds? A different approach is to fix a parameter to lower the complexity. If the number of groups k is fixed, then the problem can be approximated to an arbitrary
precision. This assumption also allows us to approximately solve the problem by algorithms that only read the input data once and in a given order — a main tool to deal with big data. How small can
we make the memory need of such a streaming algorithm, and will the algorithm be efficient in practice? We see different answers to this question.
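The k-means objective described above can be sketched in a few lines of NumPy. This is a hedged illustration of Lloyd's classical heuristic with made-up data and k, not the coreset or streaming algorithms discussed in the abstract:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Lloyd's heuristic for the k-means objective described above."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center (squared Euclidean distance)
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # move each center to the centroid (mean) of its group
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    # the k-means cost: sum of squared distances to assigned centers
    cost = ((points - centers[labels]) ** 2).sum()
    return centers, labels, cost

# two well-separated blobs -> the two groups should be recovered
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
centers, labels, cost = kmeans(pts, k=2)
```

Lloyd's heuristic only finds a local optimum, which is one reason the approximation and streaming questions raised above are interesting.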
The Classification of the Finite Simple Groups, Number 2
The Classification of the Finite Simple Groups, Number 2
Hardcover ISBN: 978-0-8218-0390-5
Product Code: SURV/40.2
List Price: $129.00
MAA Member Price: $116.10
AMS Member Price: $103.20
eBook ISBN: 978-0-8218-3376-6
Product Code: SURV/40.2.E
List Price: $125.00
MAA Member Price: $112.50
AMS Member Price: $100.00
Hardcover ISBN: 978-0-8218-0390-5
eBook: ISBN: 978-0-8218-3376-6
Product Code: SURV/40.2.B
List Price: $254.00 $191.50
MAA Member Price: $228.60 $172.35
AMS Member Price: $203.20 $153.20
• Mathematical Surveys and Monographs
Volume: 40; 1996; 218 pp
MSC: Primary 20
The Classification Theorem is one of the main achievements of 20th century mathematics, but its proof has not yet been completely extricated from the journal literature in which it first
appeared. This is the second volume in a series devoted to the presentation of a reorganized and simplified proof of the classification of the finite simple groups. The authors present (with
either proof or reference to a proof) those theorems of abstract finite group theory, which are fundamental to the analysis in later volumes in the series. This volume provides a relatively
concise and readable access to the key ideas and theorems underlying the study of finite simple groups and their important subgroups.
The sections on semisimple subgroups and subgroups of parabolic type give detailed treatments of these important subgroups, including some results not available until now or available only in
journal literature. The signalizer section provides an extensive development of both the Bender Method and the Signalizer Functor Method, which play a central role in the proof of the
Classification Theorem.
This book would be a valuable companion text for a graduate group theory course.
Advanced graduate students and mathematicians who specialize in finite group theory.
• Part I
• G. General group theory
• A model of clarity and precision ... contains many gems of exposition and new proofs ... makes clear the very questions that could revolutionize the proof.
Mathematical Reviews
• Apart from readers studying the classification theorem in some detail, this volume is also of interest to someone who just wants to get acquainted with some of the principal methods of the classification theorem but does not want to follow the long proof itself.
Zentralblatt MATH
• The authors are among the best expositors in finite group theory. Their treatment of all these topics is clear and elegant ... gives an attractive treatment of local group theory ... should be in the library of all finite group theorists.
Bulletin of the AMS
Python matplotlib – 2
In this article:
1. Plotting more sets of data in the same graph
Let's plot two functions:
y1 = x^2 + 1, y2 = x^3 + 2x
The x values are between 0 and 10 (in the code below, x is named t).
Code which solve the problem:
import numpy as np
import matplotlib.pyplot as myplot

t = np.arange(0., 10., 0.25)
myplot.plot(t, t**2+1, 'r>')
myplot.plot(t, t**3+2*t, 'b*')
myplot.show()
Code explanation:
t = np.arange(0., 10., 0.25)
this generates the x coordinate values. The first argument, 0., is the start value.
The second argument, 10., is the end value (which is excluded). The third argument is the step between values.
This means the values generated by np.arange are 0., 0.25, 0.50, 0.75, 1., 1.25, and so on.
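These values can be checked directly — a small verification snippet, not part of the original post:

```python
import numpy as np

t = np.arange(0., 10., 0.25)
# 40 values: 0.0, 0.25, 0.5, ..., 9.75 -- the end value 10.0 is excluded
print(len(t), t[0], t[1], t[-1])
```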
The plot function has three arguments this time; the first two were explained in the previous post.
The third argument is the plot "format string", which describes the colour and the marker character used to draw.
Thus for myplot.plot(t, t**2+1, 'r>')
the third argument 'r>' will draw the plot in red ('r') using the triangle_right marker ('>').
Documentation about all markers supported by plot is in matplotlib.pyplot.plot, section "Format Strings".
It is important to note that in a plot format string the first character is the plot colour and the second character is the marker type.
For myplot.plot(t, t**3+2*t, 'b*') the third argument 'b*' will draw the plot in blue ('b') with the '*' marker.
The output of the code is shown in the image below:
2. A graph with categorical variables
Sample problem for this case: we have a population
spread over four age groups:
years_18_30, medium income is 2800
years_31_45, medium income is 4100
years_46_60, medium income is 4500
years_60_plus, medium income is 3800
We need to plot categorical variable age/income for this.
Code is:
import matplotlib.pyplot as myplot
ages=['year_18_30', 'year_31_45', 'year_46_60', 'year_60_plus']
income = [2800, 4100, 4500, 3800]
myplot.figure(figsize=(5, 5))
myplot.bar(ages, income)
This time the graph is drawn in a figure, created by
myplot.figure(figsize=(5, 5))
The size of the figure is 5 inches wide by 5 inches high.
The graph is a bar graph, drawn with
myplot.bar(ages, income)
In the code above we used a figure; similar output is obtained without it:
import matplotlib.pyplot as myplot
ages=['year_18_30', 'year_31_45', 'year_46_60', 'year_60_plus']
income = [2800, 4100, 4500, 3800]
myplot.bar(ages, income)
myplot.show()
American Mathematical Society
Powers' binary shifts on the hyperfinite factor of type $\mathrm{II}_1$
by Masatoshi Enomoto and Yasuo Watatani
Proc. Amer. Math. Soc. 105 (1989), 371-374
DOI: https://doi.org/10.1090/S0002-9939-1989-0938911-6
A unit preserving $*$-endomorphism $\sigma$ on the hyperfinite $\mathrm{II}_1$ factor $R$ is called a shift if $\bigcap_{n=0}^{\infty} \sigma^n(R) = \{\lambda 1 ; \lambda \in \mathbb{C}\}$. A shift $\sigma$ is called a Powers' binary shift if there is a self-adjoint unitary $u$ such that $R = \{\sigma^n(u) ; n \in \mathbb{N} \cup \{0\}\}''$ and $\sigma^k(u)u = \pm u\sigma^k(u)$ for $k \in \mathbb{N} \cup \{0\}$. Let $q(\sigma)$ be the number $\min\{k \in \mathbb{N} ; \sigma^k(R)' \cap R \ne \mathbb{C}1\}$. It is shown that the number $q(\sigma)$ is not the complete outer conjugacy invariant for Powers' binary shifts.

References

D. Bures and H. S. Yin, Shifts on the hyperfinite factor of type $\mathrm{II}_1$ (preprint, 1987).
M. Enomoto, M. Choda and Y. Watatani, Uncountably many non-binary shifts on the hyperfinite $\mathrm{II}_1$-factor (preprint, 1987).
Bibliographic Information
• © Copyright 1989 American Mathematical Society
• Journal: Proc. Amer. Math. Soc. 105 (1989), 371-374
• MSC: Primary 46L10; Secondary 46L35, 46L55
• DOI: https://doi.org/10.1090/S0002-9939-1989-0938911-6
• MathSciNet review: 938911
Important Properties of Direct Common Tangents
We will discuss here three important properties of direct common tangents.
I. The two direct common tangents drawn to two circles are equal in length.
Given: WX and YZ are the two direct common tangents drawn to the two given circles with centres O and P.
To prove: WX = YZ.
Construction: Produce WX and YZ show that they meet at Q.
Statement Reason
1. WQ = YQ 1. The two tangents drawn to a circle from an external point are equal in length.
2. XQ = ZQ 2. As in statement 1.
3. WQ – XQ = YQ – ZQ, i.e., WX = YZ 3. Subtracting statement 2 from statement 1.
Hence WX = YZ. (Proved)
II. The length of a direct common tangent to two circles is \(\sqrt{d^{2} – (r_{1} – r_{2})^{2}}\), where d is the distance between the centres of the circles, and r\(_{1}\) and r\(_{2}\) are the
radii of the given circles.
Let two circles be given with centres O and P, and radii r\(_{1}\) and r\(_{2}\) respectively. Let WX be a direct common tangent.
Therefore, OW = r\(_{1}\) and PX = r\(_{2}\).
Also, r\(_{1}\) > r\(_{2}\).
Let the distance between the centres of the circles, OP = d.
Draw PT ⊥ OW.
Now, OW ⊥ WX and PX ⊥ WX, because a tangent is perpendicular to the radius drawn through the point of contact.
Therefore, WXPT is a rectangle.
So, WT = XP = r\(_{2}\) and WX = PT, since the opposite sides of a rectangle are equal.
OT = OW – WT = r\(_{1}\) - r\(_{2}\).
In the right-angled triangle OPT,
We have \(PT^{2} = OP^{2} - OT^{2}\) [by the Pythagorean theorem]
⟹ \(PT^{2} = d^{2} - (r_{1} - r_{2})^{2}\)
⟹ \(PT = \sqrt{d^{2} - (r_{1} - r_{2})^{2}}\)
⟹ \(WX = \sqrt{d^{2} - (r_{1} - r_{2})^{2}}\); [as PT = WX]
Note: This formula remains true even when the circles touch or intersect each other.
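The formula can be checked numerically — a small sketch with made-up radii and centre distance:

```python
import math

def direct_common_tangent_length(d, r1, r2):
    """Length of a direct common tangent: sqrt(d^2 - (r1 - r2)^2)."""
    return math.sqrt(d**2 - (r1 - r2)**2)

# circles with radii 4 and 1 whose centres are 10 apart
length = direct_common_tangent_length(10, 4, 1)   # sqrt(100 - 9) = sqrt(91)
```

For equal radii the tangent is parallel to the line of centres, so its length equals d, as the formula confirms.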
III. The point of intersection of the direct common tangents and the centres of the circles are collinear.
Given: Two circles with centres O and P, and there direct common tangents WX and YZ, which intersect at Q.
To prove: Q, P and O lie on the same straight line.
Statement Reason
1. PQ bisects ∠XQZ 1. The tangents drawn to a circle from an external point are equally inclined to the line joining the point to the centre of the circle.
2. OQ bisects ∠WQY 2. As in statement 1.
3. PQ and OQ lie along the same straight line 3. As ∠XQZ and ∠WQY are the same angle, their bisectors must be the same straight line.
Hence Q, P and O are collinear. (Proved)
Generate ROC Curve Charts for Print and Interactive Use
Command line basic usage
I start by creating an example data set. There are 2 markers, one that is moderately predictive and one that is not as predictive.
library(plotROC)
## Loading required package: ggplot2
D.ex <- rbinom(200, size = 1, prob = .5)
M1 <- rnorm(200, mean = D.ex, sd = .65)
M2 <- rnorm(200, mean = D.ex, sd = 1.5)
test <- data.frame(D = D.ex, D.str = c("Healthy", "Ill")[D.ex + 1],
M1 = M1, M2 = M2, stringsAsFactors = FALSE)
The Roc Geom
Next I use the ggplot function to define the aesthetics, and the geom_roc function to add an ROC curve layer. The geom_roc function requires the aesthetics d for disease status, and m for marker. The
disease status need not be coded as 0/1, but if it is not, stat_roc assumes (with a warning) that the lowest value in sort order signifies disease-free status. stat_roc and geom_roc are linked by
default, with the stat doing the underlying computation of the empirical ROC curve, and the geom consisting of the ROC curve layer.
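For readers curious what the stat's "underlying computation of the empirical ROC curve" amounts to, here is a minimal sketch (in Python rather than R; the function name is illustrative and this is not the package's actual implementation):

```python
def empirical_roc(d, m):
    """Empirical ROC points for binary labels d (0/1) and marker values m.
    At each cutoff c, classify m >= c as diseased; return (FPF, TPF) pairs
    running from the (0, 0) corner up to (1, 1)."""
    n_pos = sum(d)
    n_neg = len(d) - n_pos
    cutoffs = sorted(set(m), reverse=True)
    points = [(0.0, 0.0)]
    for c in cutoffs:
        tpf = sum(1 for di, mi in zip(d, m) if di == 1 and mi >= c) / n_pos
        fpf = sum(1 for di, mi in zip(d, m) if di == 0 and mi >= c) / n_neg
        points.append((fpf, tpf))
    return points

# Perfectly separating marker: the curve goes straight up, then across.
print(empirical_roc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))
# -> [(0.0, 0.0), (0.0, 0.5), (0.0, 1.0), (0.5, 1.0), (1.0, 1.0)]
```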
The disease status aesthetic can be specified as a string or factor, but with a warning.
## Warning in verify_d(data$d): D not labeled 0/1, assuming Healthy = 0 and Ill =
## 1!
The geom_roc layer includes the ROC curve line combined with points and labels to display the values of the biomarker at the different cutpoints. It accepts the argument n.cuts to define the number
of cutpoints to display along the curve. Labels can be suppressed by using n.cuts = 0 or labels = FALSE. The size of the labels and the number of significant digits can be adjusted with labelsize and
labelround, respectively.
We provide a function style_roc that can be added to a ggplot that contains an ROC curve layer. This adds a diagonal guideline, sets the axis labels, and adjusts the major and minor grid lines. The
direct_label function operates on a ggplot object, adding a direct label to the plot. It attempts to intelligently select an appropriate location for the label, but the location can be adjusted with
nudge_x, nudge_y and label.angle. If the labels argument is NULL, it will take the name from the mapped aesthetic.
Confidence regions and the Rocci Geom
It is common to compute confidence regions for points on the ROC curve using the Clopper and Pearson (1934) exact method. Briefly, exact confidence intervals are calculated for the \(FPF\) and \(TPF
\) separately, each at level \(1 - \sqrt{1 - \alpha}\). Based on result 2.4 from Pepe (2003), the cross-product of these intervals yields a \(100 * (1 - \alpha)\) percent rectangular confidence
region for the pair.
This is implemented in the stat_rocci and displayed as a geom_rocci layer. These both require the same aesthetics as the ROC geom, d for disease status and m for marker. By default, a set of 3 evenly
spaced points along the curve are chosen to display confidence regions. You can select points by passing a vector of values in the range of m to the ci.at argument. By default, the significance level
\(\alpha\) is set to 0.05, this can be changed using the sig.level option.
Interactive Plots
Ggplot objects that contain a GeomRoc layer can be used to create an interactive plot and display it in the Rstudio viewer or default web browser by passing it to the plot_interactive_roc, or
export_interactive_roc function. The style_roc function is applied by default. Give the function an optional path to an html file as an argument called file to save the interactive plot as a complete
web page. By default, any existing Rocci layers are removed and replaced with a dense layer of confidence regions so that the user can click anywhere for a confidence region. This can be suppressed
by add.cis = FALSE. Furthermore, the points layer of the Roc geom can be hidden by using the hide.points option.
Hovering over the display shows the cutoff value at the point nearest to the cursor. Clicking makes the cutoff label stick until the next click, and if confidence regions are available, clicks will
also display those as grey rectangles. The confidence regions are automatically detected. When the user clicks on the ROC curve, the confidence region for the TPF and FPF is overlaid using a grey
rectangle. The label and region stick until the next click.
An interactive ROC plot can be exported by using the export_interactive_roc function, which returns a character string containing the necessary HTML and JavaScript. The character string can be
copy-pasted into an html document, or better yet, incorporated directly into a dynamic document using knitr (knitr homepage).
In a knitr document, it is necessary to use the cat function on the results and use the chunk options results = 'asis' and fig.keep='none' so that the interactive plot is displayed correctly. For
documents that contain multiple interactive plots, it is necessary to assign each plot a unique name using the prefix argument of export_interactive_roc. This is necessary to ensure that the
JavaScript code manipulates the correct svg elements. The next code block shows an example knitr chunk that can be used in an .Rmd document to display an interactive plot.
```{r int-no, fig.keep='none', results = 'asis'}
# basicplot is the ggplot ROC object built earlier in the vignette
cat(export_interactive_roc(basicplot, prefix = "a"))
```
The result is shown below:
Click for confidence regions.
Multiple ROC curves
If you have grouping factors in your dataset, or you have multiple markers measured on the same subjects, you may wish to plot multiple ROC curves on the same plot. plotROC fully supports faceting
and grouping done by ggplot2. In our example dataset, we have 2 markers measured in a paired manner:
## D D.str M1 M2
## 1 1 Ill 1.48117155 -2.50636605
## 2 1 Ill 0.61994478 1.46861033
## 3 0 Healthy 0.57613345 0.07532573
## 4 1 Ill 0.85433197 2.41997703
## 5 0 Healthy 0.05258342 0.01863718
## 6 1 Ill 0.66703989 0.24732453
These data are in wide format, with the 2 markers going across 2 columns. ggplot requires long format, with the marker result in a single column, and a third variable identifying the marker. We
provide the function melt_roc to perform this transformation. The arguments are the data frame, a name or index identifying the disease status column, and a vector of names or indices identifying the markers. Optionally, the names argument gives a vector of names to assign to the markers, replacing their column names. The result is a data frame in long format.
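The reshaping melt_roc performs can be sketched independently of R (Python here; a toy stand-in, not the package's implementation):

```python
def melt_roc(rows, d_key, m_keys):
    """Reshape wide marker columns into long format: one row per
    (marker, observation) pair, mirroring what plotROC's melt_roc does."""
    return [
        {"D": row[d_key], "M": row[m_key], "name": m_key}
        for m_key in m_keys
        for row in rows
    ]

wide = [{"D": 1, "M1": 1.48, "M2": -2.51}, {"D": 0, "M1": 0.58, "M2": 0.08}]
long_rows = melt_roc(wide, "D", ["M1", "M2"])
print(long_rows[0])  # -> {'D': 1, 'M': 1.48, 'name': 'M1'}
```

With 2 markers and 2 observations, the long table has 4 rows, one per marker-observation pair.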
## D M name
## M11 1 1.48117155 M1
## M12 1 0.61994478 M1
## M13 0 0.57613345 M1
## M14 1 0.85433197 M1
## M15 0 0.05258342 M1
## M16 1 0.66703989 M1
Then, the dataset can be passed to the ggplot function, with the marker name given as a grouping or faceting variable.
pairplot <- ggplot(longtest, aes(d = D, m = M, color = name)) +
geom_roc(show.legend = FALSE) + style_roc()
Interactive versions of the plots are fully supported.
## Scale for x is already present.
## Adding another scale for x, which will replace the existing scale.
## Scale for y is already present.
## Adding another scale for y, which will replace the existing scale.
Formal Logic vs. Material Logic?
Formal Logic vs. Material Logic?
In a purely logical argument, even if the premises aren’t in any way (semantically) connected to the conclusion, the argument may still be both valid and sound.
Professor Edwin D. Mares displays what he sees as a problem with purely formal logic when he offers us the following example of a valid argument:
The sky is blue.
∴ there is no integer n greater than or equal to 3 such that for any non-zero integers x, y, z, x^n = y^n + z^n.
Edwin Mares says that the above “is valid, in fact sound, on the classical logician’s definition”. It’s the argument that is valid; whereas the premise and conclusion are sound (i.e., true). In more
detail, the
“premise cannot be true in any possible circumstance in which the conclusion is false”.
Clearly the content of the premise isn’t semantically — or otherwise — connected to the content of the conclusion. However, the argument is still valid and sound.
That said, it’s not clear from Edwin Mares’ symbolic expression above if he meant this: “If P, therefore Q. P. Therefore Q.” That is, perhaps the premise “The sky is blue” with a line under it,
followed by the mathematical statement, is used as symbolic shorthand for an example of modus ponens which doesn't have a semantic connection between P and Q. In other words, Mares' "P, therefore Q"
isn’t (really) an argument at all. However, if both P and Q are true, then, logically, they can exist together without any semantic connection and without needing to be read as shorthand for an
example of modus ponens.
Whatever the case is, what’s the point of the “The Sky is blue” example above?
Perhaps no logician would state it for real. He would only do so, as Mares himself does, to prove a point about logical validity. However, can’t we now ask why it’s valid even though the premise and
conclusion are true?
Perhaps showing the bare bones of the “The sky is blue” example will help. Thus:
P
∴ Q
Does that look any better? Even though we aren’t given any semantic content, both P and Q must have a truth-value. (In this case, both P and Q are true.) It is saying: P is true. Therefore Q is true.
The above isn’t saying: Q is a consequence of P.(Or: P entails Q.) Basically, we’re being told that two true and unrelated statements can (as it were) exist together — as long as they don’t
contradict each another. (Or on the aforementioned alternative reading: “If P is true; then Q is true. P is true. Therefore Q is true.”)
So there are cases in which the premises of an argument are all true, and the conclusion is also true; and yet as Professor Stephen Read puts it:
“[T]here is an obvious sense in which the truth of the premises does not guarantee that of the conclusion.”
Ordinarily the truth of the premises is meant to “guarantee” the truth of the conclusion. So let’s look at Read’s own example:
i) All cats are animals
ii) Some animals have tails
iii) Therefore some cats have tails.
Clearly, premises i) and ii) are true. Indeed iii) is also true. (Not all cats have tails. And, indeed, according to some logicians, “some” also implies “all”.)
So why is the argument above invalid?
It’s invalid not because of the assigned truth-values of the premises and the conclusion; but for another reason. The reason is that the sets used in the argument are (as it were) mixed up. Thus we
have the distinct sets [animals], [cats] and [animals which have tails].
It doesn’t logically follow from “some animals have tails” that “some cats have tails”. If some animals have tails it might have been the case that cats are animals which don’t have tails. Thus iii)
doesn’t necessarily follow from ii). (iii) doesn’t follow from i) either.) ii) can be taken as an existential quantification over animals. iii), on the other hand, is an existential quantification
over cats. Thus:
ii) (∃x)(Ax)
iii) (∃x)(Cx)
Clearly, Ax and Cx are quantifications over different sets. It doesn’t follow, then, that what’s true of animals is also generally true of cats; even though cats are members of the set [animals].
Thus iii) doesn’t follow from ii).
To repeat: even though the premises and the conclusion are all true, the above still isn’t a valid argument. Read himself helps to show this by displaying an argument-form with mutually-exclusive
sets — namely, [cats] and [dogs]. Thus:
i) All cats are animals
ii) Some animals are dogs
iii) Therefore some cats are dogs.
This time, however, the conclusion is false; whereas i) and ii) are true. It’s the case that the subset [dogs] belongs to the set [animals]. Some animals are indeed dogs. However, because some
animals are dogs, it doesn’t follow that “some cats are dogs”. In other words, because dogs are members of the set [animals], that doesn’t mean that they’re also members of the subclass [cats] simply
because cats themselves are also members of the set [animals]. Cats and dogs share animalhood; though they’re different subsets of the set [animal]. In other words, what’s true of dogs isn’t
automatically true of cats.
The importance of sets, and their relation to subsets, may be expressed in terms of brackets. Thus:
[animals [cats [cats with tails]]]
not-[animals [cats [dogs]]]
Material Validity and Formal Validity
Stephen Read makes a distinction between formal validity and material validity. He does so by using this example:
i) Iain is a bachelor
ii) So Iain is unmarried.
(One doesn’t usually find an argument with only a single premise.)
The above is materially valid because there’s enough semantic material in i) to make the conclusion acceptable. After all, if x is a bachelor, he must also be unmarried. Despite that, it’s still
formally invalid because there isn’t enough content in the premise to bring about the conclusion. That is, one can only move from i) to ii) if one already knows that all bachelors are unmarried. We
either recognise the shared semantic content or we know that the term “unmarried man” is a synonym of “bachelor”. Thus we have to add semantic content to i) in order to get ii). And it’s because of
this that the overall argument is said to be formally invalid. Nonetheless, because of what’s already been said, it is indeed still materially valid.
The material validity of the above can also be shown by its inversion:
i) Iain is unmarried
ii) So Iain is a bachelor.
Read makes a distinction by saying that its
“validity depends not on any form it exhibits, but on the content of certain expressions in it”.
Thus, in terms of logical form, it's invalid. In terms of content (or the expressions used), it's valid. This means that the following wouldn't work as either a materially or a formally valid argument:
i) Iain is a bachelor.
ii) So Iain is a footballer.
There’s no semantic content in the word “bachelor” that can be directly tied to the content of the word “footballer”. Iain may well be a footballer; though the necessary consequence of him being a
footballer doesn’t follow from his being a bachelor. As it is, the conclusion is false even though the premise is true.
Another way of explaining the material (i.e., not formal) validity of the argument above is in terms of what logicians call a suppressed premise (or a hidden premise). This is more explicit than talk
of synonyms or shared content. In this case, what the suppressed premise does is show the semantic connection between i) and ii). The actual suppressed premise for the above is the following:
All bachelors are unmarried.
Thus we should actually have the following argument:
i) Iain is a bachelor.
ii) All bachelors are unmarried.
iii) Therefore Iain is unmarried.
It may now be seen more clearly that
i) Iain is unmarried.
ii) So Iain is a bachelor.
doesn’t work formally; though it does work materially.
What about this? -
i) All bachelors are unmarried.
ii) So Iain is unmarried.
To state the obvious, this is clearly a bad argument. (It’s called an enthymeme.) Indeed it can’t really be said to be an argument at all. Nonetheless, this too can be seen to have a suppressed (or
hidden) premise. Thus:
i) All bachelors are unmarried.
ii) [Suppressed premise: Iain is a bachelor.]
iii) So Iain is unmarried.
Now let’s take the classic case of modus ponens:
A, if A then B / Therefore B
That means:
If A is the case, then B is the case. A is the case. Therefore B must also be the case.
The obvious question here is: What connects A to B (or B to A)? In terms of this debate, is the connection material or formal? Clearly, if the content of both A and B isn’t given, then it’s
impossible to answer this question.
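By contrast, the formal validity of the modus ponens schema itself can be confirmed by brute force over truth assignments, reading the conditional truth-functionally (a sketch, not part of the original essay):

```python
from itertools import product

# Modus ponens: from A and (A -> B), infer B. The schema is formally valid
# iff no assignment makes both premises true and the conclusion false.
counterexamples = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if a and ((not a) or b) and not b   # premises true, conclusion false
]
print(counterexamples)  # -> [] : no counterexample, so the form is valid
```

The check is purely formal: nothing about the content of A or B is used, only their truth-values.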
We can treat the example of modus ponens above as having the aforesaid suppressed premise. Thus:
i) [Suppressed premise: Britain’s leading politician is the Prime Minister.]
ii) Boris Johnson is Britain’s leading politician.
iii) Therefore Boris Johnson is Britain’s Prime Minister.
In this instance, premises and conclusion are true. Yet i) is only contingently (i.e., not necessarily) connected to ii) and iii).
Finally, Stephen Read puts the formalist position on logic very clearly when he states the following:
“Logic is now seen — now redefined — as the study of formal consequence, those validities resulting not from the matter and content of the constituent expressions, but from the formal structure.”
We can now ask:
What is the point of a logic without material or semantic content?
If logic were purely formal, then wouldn’t all the premise and predicate symbols — not the logical symbols — simply be autonyms? (That is, all the p’s, q’s, x’s, F’s, G’s etc. would be purely
self-referential.) So what would be left of logic if that were the case? Clearly we could no longer say that logic is about argumentation — or could we? Not really. The fact is that we can still
learn about argumentation from schemas (or argument-forms) which are purely formal in nature. And that basically means that the dots don’t always — or necessarily — need to be filled in.
Diagonal 5555 - math word problem (5555)
The pool has a diamond shape with a side 20m long. One diagonal is 32 meters. What is the second diagonal?
Correct answer: The second diagonal is 24 m. (The diagonals of a rhombus bisect each other at right angles, so d2 = 2 · sqrt(20^2 - 16^2) = 2 · 12 = 24 m.)
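Assuming the standard fact that the diagonals of a rhombus bisect each other at right angles, the half-diagonals and the side form a right triangle, and the computation can be sketched as follows (the helper name is mine):

```python
import math

def second_diagonal(side, d1):
    """Second diagonal of a rhombus with the given side and one diagonal d1.
    The diagonals bisect each other at right angles, so
    (d1/2)^2 + (d2/2)^2 = side^2."""
    return 2 * math.sqrt(side**2 - (d1 / 2)**2)

# Pool: side 20 m, one diagonal 32 m -> half-diagonal sqrt(400 - 256) = 12 m
print(second_diagonal(20, 32))  # -> 24.0
```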
OpenStax College Physics for AP® Courses, Chapter 5, Problem 4 (Problems & Exercises)
Suppose you have a 120-kg wooden crate resting on a wood floor. (a) What maximum force can you exert horizontally on the crate without moving it? (b) If you continue to exert this force once the
crate starts to slip, what will the magnitude of its acceleration then be?
Question by OpenStax is licensed under CC BY 4.0.
Final Answer
a. $6\times 10^{2}\textrm{ N}$
b. $2 \textrm{ m/s}^2$
Solution video
Video Transcript
This is College Physics Answers with Shaun Dychko. A crate made out of wood is resting on a floor also made out of wood, and that tells us that when we look in Table [5.1], we need to look at this
wood-on-wood section to find the coefficient of static friction which is 0.5 and the coefficient of kinetic friction when it's sliding of 0.3. So we write those two pieces of information down. We are
told the mass is 120 kilograms and then in part (a), we are asked what maximum force could we exert horizontally, what applied force could we apply such that the crate does not move? Now there's a
normal force upwards exerted by the floor which is important because that is responsible in part for the friction force because the static friction force is this coefficient of static friction
multiplied by this normal force and we also have the friction force opposing the applied force and we have gravity downwards. Now the friction force when it's in the static context is less than or
equal to the coefficient of static friction multiplied by the normal force which means at its maximum, it's going to be equal to this product here. So we know that the normal force upwards is going
to equal gravity downwards because there's no vertical acceleration and we also know that the applied force to the right is gonna equal the static friction force to the left because likewise there's
no motion horizontally. So then we can make substitutions up here: we'll replace this maximum static friction force with the applied force—which is what we want to find—and we'll replace the normal force with mg—the force of gravity—and so then we have 0.5—coefficient of static friction—multiplied by 120 kilograms times 9.80 meters per second squared which, with one significant figure—since we have only one significant figure in our static friction coefficient—is 6 times 10 to the 2 newtons; that would be the maximum applied force you could apply without the crate moving. Part (b) says suppose
it does start to slide anyway and you maintained the same applied force, what would the acceleration be? And it's gonna have acceleration because the friction force will change to the kinetic
friction force now which is always less than the static friction force. So we have the force to the right— which is the applied force— minus this kinetic friction force to the left equals mass times
acceleration—this is Newton's second law— and we divide both sides by m and we get acceleration then is this difference in these two forces divided by the mass. Now the kinetic friction force is the
coefficient of kinetic friction multiplied by the same normal force as before—that being equal to gravity— and so then we plug in numbers here. So we have the acceleration then is 588 newtons which
is the answer to part (a) but with no rounding because we don't want to have intermediate rounding error; we don't wanna round until we get to a final answer so I'm using this intermediate number of
588 newtons minus 0.3 which is the coefficient of kinetic friction times 120 kilograms times 9.80 meters per second squared divided by the mass giving us 2 meters per second squared acceleration.
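The transcript's arithmetic can be reproduced directly (a sketch; the variable names are mine):

```python
# Reproducing the transcript's numbers for the 120 kg wooden crate on a wood floor.
mu_s, mu_k = 0.5, 0.3   # static / kinetic friction coefficients, wood on wood
m, g = 120.0, 9.80      # mass (kg), gravitational acceleration (m/s^2)

f_applied_max = mu_s * m * g            # largest horizontal force before slipping
a = (f_applied_max - mu_k * m * g) / m  # acceleration once the crate slides

print(f_applied_max)  # -> 588.0 (newtons; about 6 x 10^2 N to one sig fig)
print(round(a, 2))    # -> 1.96 (m/s^2; about 2 m/s^2)
```

Note the mass cancels in the acceleration: a = (mu_s - mu_k) g, which is why only the coefficients and g matter in part (b).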
6.6: Normal Random Variables (6 of 6)
Learning Objectives
• Use a normal probability distribution to estimate probabilities and identify unusual events.
Now we use the simulation and the standard normal curve to find the probabilities associated with any normal density curve.
Length of Human Pregnancy
The length (in days) of a randomly chosen human pregnancy is a normal random variable with μ = 266, σ = 16. So X = length of pregnancy (in days).
(a) What is the probability that a randomly chosen pregnancy will last less than 246 days?
We want P(X < 246). To find this probability, we first convert X = 246 to a z-score: Z = (246 − 266)/16 = −1.25.
Now we can use the simulation to find P(Z < −1.25). This is the area under the normal probability curve to the left of Z = −1.25.
The probability that a randomly chosen pregnancy lasts less than 246 days is 0.1056. In other words, there is an 11% chance that a randomly selected pregnancy will last less than 246 days.
(b) Suppose a pregnant woman’s husband has scheduled his business trips so that he will be in town between the 235th and 295th days of her pregnancy. What is the probability that the birth will take
place during that time?
Compute the z-scores for each of these x-values: Z = (235 − 266)/16 ≈ −1.94 and Z = (295 − 266)/16 ≈ 1.81.
Use the simulation to find the area under the standard normal curve between these two z-scores.
So the desired probability is 0.9387.
There is about a 94% probability that he will be home for the birth. Looks like he planned well.
The previous examples all followed the same general form: Given values of a normal random variable, we found an associated probability. The two basic steps in the solution process were as follows:
1. Convert x-value to a z-score.
2. Use the simulation to find associated probability.
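Both steps can also be carried out without the simulation, using the closed form Φ(z) = ½(1 + erf(z/√2)) for the standard normal CDF (a sketch; the function name is mine):

```python
import math

def normal_cdf(x, mu, sigma):
    """P(X < x) for X ~ Normal(mu, sigma): step 1 converts x to a z-score,
    step 2 evaluates the standard normal CDF via the error function."""
    z = (x - mu) / sigma
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Pregnancy example: mu = 266 days, sigma = 16 days.
print(round(normal_cdf(246, 266, 16), 4))                             # -> 0.1056
print(round(normal_cdf(295, 266, 16) - normal_cdf(235, 266, 16), 4))  # -> 0.9387
```

Both values match the probabilities obtained from the simulation above.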
The next example is a different type of problem: Given a probability, we will find the associated value of the normal random variable. The solution process will go in reverse order.
1. Use a new simulation to convert statements about probabilities to statements about z-scores.
2. Convert z-scores to x-values.
These types of problems are informally called “work-backwards” problems. We will use a new simulation for these types of problems. The new simulation requires us to enter a probability and then gives
us the associated z-score. This is backwards from the simulation we worked with previously where we entered a z-score to find a probability. We will use this simulation in the next example.
Work Backwards to Find X
Foot length (in inches) of a randomly chosen adult male is a normal random variable with a mean of 11 and standard deviation of 1.5. So X = foot length (inches).
(a) Suppose that an XL sock is designed to fit the largest 30% of men’s feet. What is the smallest foot length that fits an XL sock?
Step 1: Use the simulation to convert the probability to a statement about z-scores.
We want to mark off the largest 30% of the distribution, so the probability to the right of the z-score is 30%. This means that 70% of the area is to the left of the z-score.
From the simulation, we can see that the corresponding z-score is 0.52.
Step 2: Now we need to convert this z-score to a foot length.
Before we calculate the length, note that the z-score is about 0.5, so the x-value will be about 0.5 standard deviations above the mean: X ≈ 11 + 0.5(1.5) = 11.75 inches.
Conclusion: A foot length of 11.75 inches is the shortest foot for an XL sock.
(b) What is the first quartile for the men’s foot lengths?
Step 1: Use the simulation to convert this probability into a statement about z-scores.
We want to mark off the smallest 25% of the distribution, so the probability to the left of the z-score is 25%.
From the simulation, we can see that the corresponding z-score is −0.67.
Step 2: Convert this z-score to a foot length. If X is the foot length we seek, then X is 0.67 standard deviations below the mean. That is, X = 11 − 0.67(1.5) = 9.995 inches.
Conclusion: The first quartile mark is 9.995 inches, so about 25% of the men’s feet are shorter than 10 inches.
In the preceding example (specifically step 2), we found the x-value by reasoning about the meaning of the z-score. We can also develop a formula for this process.
Recall the definition of z-score. In words, the z-score of an x-value is the number of standard deviations X is away from the mean. As a formula, this is
$Z=\frac{x-\mathrm{μ}}{\mathrm{σ}}$
We can solve this equation for X as follows:
$\begin{array}{l}\frac{x-\mathrm{μ}}{\mathrm{σ}}=Z\\ x-\mathrm{μ}=Z⋅\mathrm{σ}\\ x=\mathrm{μ}+Z⋅\mathrm{σ}\end{array}$
This gives us a formula for finding X from Z. You can use this formula in step 2 of a work-backwards problem.
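The work-backwards recipe can likewise be sketched in code: invert Φ numerically (bisection here, standing in for the simulation), then apply x = μ + Zσ. The exact z-values differ slightly from the lesson's rounded 0.52 and −0.67, so the x-values below are a few hundredths off the lesson's 11.75 and 9.995:

```python
import math

def normal_quantile(p, mu, sigma):
    """Work-backwards step: find x with P(X < x) = p for X ~ Normal(mu, sigma).
    Bisect the standard normal CDF on the z scale, then use x = mu + z * sigma."""
    phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    lo, hi = -10.0, 10.0
    for _ in range(80):  # bisection; far more precision than the rounding below
        mid = (lo + hi) / 2
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return mu + ((lo + hi) / 2) * sigma

# Foot-length examples: mu = 11 in, sigma = 1.5 in.
print(round(normal_quantile(0.70, 11, 1.5), 2))  # -> 11.79 (XL sock cutoff)
print(round(normal_quantile(0.25, 11, 1.5), 2))  # -> 9.99  (first quartile)
```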
Let’s Summarize
• In “Continuous Random Variables,” we made the transition from discrete to continuous random variables. A continuous random variable is not limited to distinct values. It is a measurement such as
foot length. We cannot display the probability distribution for a continuous random variable with a table or histogram. We use a density curve to assign probabilities to intervals of x-values. We
use the area under the density curve to find probabilities.
• We use a normal density curve to model the probability distribution for many variables, such as weight, shoe sizes, foot lengths, and other human physical characteristics. Normal curves are
mathematical models. We use µ to represent the mean of a normal curve and σ to represent the standard deviation of a normal curve. We use Greek letters to remind us that the normal curve is not a
distribution of real data. It is a mathematical model based on a mathematical equation. We use this mathematical model to represent the perfect bell-shaped distribution.
• For a normal curve, the empirical rule tells us that 68% of the observations fall within 1 standard deviation of the mean, 95% within 2 standard deviations, and 99.7% within 3 standard deviations.
• To compare x-values from different distributions, we standardize the values by finding a z-score: $z=\frac{x-\mu }{\sigma }$
• A z-score measures how far X is from the mean in standard deviations. In other words, the z-score is the number of standard deviations X is from the mean of the distribution. For example, Z = 1
means the x-value is 1 standard deviation above the mean.
• If we convert the x-values into z-scores, the distribution of z-scores is also a normal density curve. This curve is called the standard normal distribution. We use a simulation with the standard
normal curve to find probabilities for any normal distribution.
• We can also work backwards and find the x-value for a given probability. We used a different simulation to work backwards from probabilities to x-values. With this simulation, we found x-values
corresponding to quartiles and percentiles.
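The empirical rule in the summary can be verified numerically. This short sketch (Python standard library only) evaluates the standard normal CDF at ±1, ±2, and ±3 standard deviations:

```python
from statistics import NormalDist

Z = NormalDist()  # standard normal: mean 0, standard deviation 1

# P(-k < Z < k): the area within k standard deviations of the mean
for k in (1, 2, 3):
    area = Z.cdf(k) - Z.cdf(-k)
    print(k, round(area, 3))
```

The computed areas (0.683, 0.954, 0.997) round to the familiar 68%, 95%, and 99.7% figures.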
Are You Ready for the Checkpoint?
If you completed all of the exercises in this module, you should be ready for the Checkpoint. To make sure that you are ready for the Checkpoint, use the My Response link below to evaluate your
understanding of the learning outcomes for this module and to submit questions that you may have.
2024 Course Materials
This is the website for a summer course on algorithms and programming for high schoolers, in Kingston, Jamaica from July 1st to 27th, 2024. Here you will find lecture notes and lab assignments. You
can follow using Google Colab or, alternatively, by downloading the notebooks and using Jupyter notebook. We are using Python 3.
For reference, morning exercises are marked by “A”, afternoon by “B”, and challenge by “C”.
Course Content
Week 1
Day 1: Course intro, then types, variables, printing and operations and errors. Exercises A, B, C.
Day 2: No Lab/Lecture due to Hurricane Beryl.
Day 3: No Lab/Lecture due to Hurricane Beryl.
Day 4: No Lab/Lecture due to Hurricane Beryl.
Day 5: No Lab/Lecture due to Hurricane Beryl.
Week 2
Day 1: Course and Professor Intro then Variables, Printing, Lists, Functions, Loops. Exercises A, B, C.
Day 2: Loops, More loops and functions. Exercises A, B, C.
Day 3: Functions, then List Functions. Exercises A, B.
Day 4: String Functions, More Functions and Lists of Lists. Exercises A, C.
Day 5: Nested For Loops, Recursion Teaser. Exercises A, C.
Week 3
Week 4
Day 1: Merge Sort, then Recursion Practice. Exercises A, B, C.
Day 2: Dictionaries, then Sorting and Time Complexity. Exercises A, B, C.
Day 3: Graphs Intro, then Graph Traversals. Exercises A, C, Review 1, Review 2, Review 3.
Day 4: Final exam, then TA lightning talks, then Q&A with Prof. Daniel Coore.
Day 5: Graduation ceremony.
Searching for codes credited to 'Garrison, Lehman H.'
➥ Tip! Refine or expand your search. Authors are sometimes listed as 'Smith, J. K.' instead of 'Smith, John', so it is useful to search for last names only. Note this is currently a simple phrase search.
[ascl:1812.011] GRAND-HOD: GeneRalized ANd Differentiable Halo Occupation Distribution
GRAND-HOD (GeneRalized ANd Differentiable Halo Occupation Distribution) takes a generalized Halo Occupation Distribution (HOD) prescription as input and outputs the corresponding mock galaxy catalogs
in binary files. The code is differentiable and incorporates various generalizations to the standard HOD. It is written for the Abacus simulations, but the main functionalities can be easily adapted
for other halo catalogs with the appropriate properties.
[ascl:2403.009] pycorr: Two-point correlation function estimation
pycorr wraps two-point counter engines such as Corrfunc (ascl:1703.003) to estimate the correlation function. It supports theta (angular), s, s-mu, rp-pi binning schemes, analytical two-point counts
with periodic boundary conditions, and inverse bitwise weights (in any integer format) and (angular) upweighting. It also provides MPI parallelization and jackknife estimate of the correlation
function covariance matrix.
The ratio of income to expenditure of Mr. A is 11 : 7 and that of Mr. B is 13 : 9. Find the ratio of their savings.
1. Answer:
Let the income of A = 11x and the expenditure of A = 7x.
Then the saving of A = income − expenditure = 11x − 7x = 4x.
Let the income of B = 13x and the expenditure of B = 9x.
Then the saving of B = 13x − 9x = 4x.
Ratio of the savings of A and B = 4x : 4x = 1 : 1.
A and B have the same amount of saving.
Step-by-step explanation: each saving is found by subtracting the expenditure from the corresponding income; the common multiplier x cancels, leaving a ratio of 1 : 1.
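The cancellation of the common multiplier x can be confirmed with a short Python check (a sketch; the ratio parts 11, 7 and 13, 9 are taken from the worked answer):

```python
from fractions import Fraction

# Any positive common multiplier x cancels in the ratio of savings.
for x in (1, 5, 100):
    saving_a = 11 * x - 7 * x   # income - expenditure for A
    saving_b = 13 * x - 9 * x   # income - expenditure for B
    assert Fraction(saving_a, saving_b) == 1  # ratio 1 : 1

print("savings ratio is 1 : 1 for any x")
```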
What is the bulk modulus of air?
Selected values:
• Water: 2.2 GPa (0.32 Mpsi) (value increases at higher pressures)
• Methanol: 823 MPa (at 20 °C and 1 atm)
• Air: 142 kPa (adiabatic [isentropic] bulk modulus)
• Air: 101 kPa (isothermal bulk modulus)
• Solid helium: 50 MPa (approximate)
What is the effect of bulk modulus?
Bulk modulus is a property that indicates the compressibility of a fluid. With many of today's hydraulic systems operating at pressures of 5000 psi and higher, ignoring bulk modulus can compromise the response time of a system. Applied pressure should directly affect the action of the system rather than compress the fluid.
What is bulk modulus B?
Sometimes referred to as the incompressibility, the bulk modulus is a measure of the ability of a substance to withstand changes in volume when under compression on all sides. It is equal to the
quotient of the applied pressure divided by the relative deformation. Related Topics: compressibility elastic modulus.
Is bulk modulus applicable to gases?
The volume of a gas changes when the pressure applied to it is varied. The bulk modulus of a gas is defined as the ratio of volumetric stress to volumetric strain, i.e., B = −Δp / (ΔV/V), where Δp is the change in pressure and ΔV is the change in volume.
What is the Young’s modulus of air?
Table 1
Parameter: Value/preset function in COMSOL
Young’s modulus Y [GPa]: 170
Poisson ratio σ: 0.28
Density ρ_Si [kg/m³]: 2329
Relative permittivity: 11.7
(Note: these are material parameters for silicon, not for air; as discussed below, for a fluid such as air only the bulk modulus is meaningful.)
Do liquids and gases have bulk modulus?
Of solids, liquids, and gases, only solids have all three moduli of elasticity (Young’s, shear, and bulk), because only solids can sustain shear and tensile stress. Liquids and gases have only the bulk modulus of elasticity.
How does Young’s modulus change with rise in temperature?
With a rise in temperature, the length of the material increases, which decreases its stiffness. Thus, Young’s modulus decreases as temperature rises.
What is the difference between isothermal and adiabatic bulk modulus?
The inverse of the bulk modulus gives a substance’s compressibility. Generally the bulk modulus is defined at constant temperature as the isothermal bulk modulus, but it can also be defined at constant entropy as the adiabatic bulk modulus; other variations are possible. Such distinctions are especially relevant for gases.
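For an ideal gas these two moduli have simple closed forms: the isothermal bulk modulus equals the pressure itself, and the adiabatic (isentropic) bulk modulus equals γ times the pressure. A minimal sketch, assuming standard atmospheric pressure and γ = 1.4 for air:

```python
# Ideal-gas bulk moduli: K_T = p (isothermal), K_S = gamma * p (adiabatic).
p_atm = 101_325.0    # Pa, standard atmospheric pressure (assumed)
gamma_air = 1.4      # ratio of specific heats for diatomic air

k_isothermal = p_atm               # ~101 kPa, matching the table above
k_adiabatic = gamma_air * p_atm    # ~142 kPa, matching the table above

print(round(k_isothermal / 1e3), "kPa isothermal")
print(round(k_adiabatic / 1e3), "kPa adiabatic")
```

These reproduce the 101 kPa and 142 kPa entries in the list of selected values.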
What is the reciprocal of the bulk modulus at fixed temperature?
For a complex anisotropic solid such as wood or paper, these three moduli do not contain enough information to describe its behaviour, and one must use the full generalized Hooke’s law. The
reciprocal of the bulk modulus at fixed temperature is called the isothermal compressibility .
What is the difference between bulk modulus and Young’s modulus?
Other moduli describe the material’s response ( strain) to other kinds of stress: the shear modulus describes the response to shear stress, and Young’s modulus describes the response to normal
stress. For a fluid, only the bulk modulus is meaningful.
What is the inverse of the bulk modulus of a substance?
The inverse of the bulk modulus gives a substance’s compressibility. Generally the bulk modulus is defined at constant temperature as the isothermal bulk modulus, but can also be defined at constant
entropy as the adiabatic bulk modulus.
Triangle distribution and equation of state for classical rigid disks
The triangle distribution function f^(3) for three mutual near neighbors in the plane describes basic aspects of short-range order and statistical thermodynamics in two-dimensional many-particle
systems. This paper examines prospects for constructing a self-consistent calculation for the rigid-disk-system f^(3). We present several identities obeyed by f^(3). A rudimentary closure suggested
by scaled-particle theory is introduced. In conjunction with three of the basic identities, this closure leads to a unique f^(3) over the entire density range. The pressure equation of state
exhibits qualitatively correct behaviors in both the low-density and the close-packed limits, but no intervening phase transition appears. We discuss extensions to improved disk closures, and to the
three-dimensional rigid-sphere system.
All Science Journal Classification (ASJC) codes
• Statistical and Nonlinear Physics
• Mathematical Physics
• Freezing transition
• Neighbor triangles
• Packing
• Rigid disks
Dive into the research topics of 'Triangle distribution and equation of state for classical rigid disks'. Together they form a unique fingerprint.
Some tests are required to be repeated for a series of different input parameters. One way to achieve this is to manually register a test case for each parameter. You can also invoke a test function with all parameters manually from within your test case, like this:
void single_test( int i )
{
  BOOST_TEST( /* test assertion */ );
}

void combined_test()
{
  int params[] = { 1, 2, 3, 4, 5 };
  std::for_each( params, params+5, &single_test );
}
The approach above has several drawbacks:
• the logic for running the tests is inside a test itself: single_test in the above example is run from the test case combined_test, while its execution would be better handled by the Unit Test Framework,
• in case of fatal failure for one of the values in param array above (say a failure in BOOST_TEST_REQUIRE), the test combined_test is aborted and the next test-case in the test tree is executed.
• in case of failure, the reporting is not accurate enough: the test should certainly be rerun during debugging sessions by a human, or additional logic for reporting should be implemented in the test itself.
In some circumstances, one would like to run a parametrized test over an arbitrarily large set of values. Enumerating the parameters by hand is not a solution that scales well, especially when these parameters can be described by another function that generates them. However, this solution also has limitations:
• Generating functions: suppose we have a function func(float f), where f is any number in [0, 1]. We are not interested that much in the exact value, but we would like to test func. What about,
instead of writing the f for which func will be tested against, we choose randomly f in [0, 1]? And also what about instead of having only one value for f, we run the test on arbitrarily many
numbers? We easily understand from this small example that tests requiring parameters are more powerful when, instead of writing down constant values in the test, a generating function is
• Scalability: suppose we have a test case on func1, on which we test N values written as constants in the test file. What does the test ensure? We have the guarantee that func1 is working on these N values. Yet in this setting N is necessarily finite and usually small. How would we extend or scale N easily? One solution is to be able to generate new values, and to be able to define a test on the class of possible inputs for func1 on which the function should have a defined behavior. To some extent, the N constants written down in the test are just an excerpt of the possible inputs of func1, and working on the class of inputs gives more flexibility and power to the test.
• Composition: suppose we already have test cases for two functions func1 and func2, taking as argument the types T1 and T2 respectively. Now we would like to test a new function func3 that takes as argument a type T3 containing T1 and T2, and calls func1 and func2 through a known algorithm. An example of such a setting would be
// Returns the log of x
// Precondition: x strictly positive.
double fast_log(double x);
// Returns 1/(x-1)
// Precondition: x != 1
double fast_inv(double x);
struct dummy {
  unsigned int field1;
  unsigned int field2;
};

double func3(dummy value)
{
  return 0.5 * (exp(fast_log(value.field1))/value.field1 + value.field2/fast_inv(value.field2));
}
In this example,
□ func3 inherits from the preconditions of fast_log and fast_inv: it is defined in (0, +infinity) and in [-C, +C] - {1} for field1 and field2 respectively (C being a constant arbitrarily big).
□ as defined above, func3 should be close to 1 everywhere on its definition domain.
□ we would like to reuse the properties of fast_log and fast_inv in the compound function func3 and assert that func3 is well defined over an arbitrary large definition domain.
Having parametrized tests on func3 hardly tells us about the possible numerical properties or instabilities close to the point {field1 = 0, field2 = 1}. Indeed, the parametrized test may test for
some points around (0,1), but will fail to provide an asymptotic behavior of the function close to this point.
The facilities provided by the Unit Test Framework addressed the issues described above:
• the notion of datasets eases the description of the class of inputs for test cases. The datasets also implement several operations that enable combining them to create new, more complex datasets,
• a single macro, BOOST_DATA_TEST_CASE, is used for the declaration and registration of a test case over a collection of values (samples),
• each test case, associated to a unique value, is executed independently from the others. These tests are guarded in the same way regular test cases are, which makes the execution of the tests over each sample of a dataset isolated, robust, and repeatable, and eases debugging,
• several dataset-generating functions are provided by the Unit Test Framework.
The remainder of this section covers the notions and features provided by the Unit Test Framework for data-driven test cases, in particular:
Flow through Convergent Nozzle Equations and Calculator
Related Resources: calculators
Flow through Convergent Nozzle Equations and Calculator
Isentropic Flow through Convergent Nozzle Equations and Calculator
Figure 1 shows isentropic flow of a compressible fluid from a large tank through a convergent nozzle. The pressure, mass density, and temperature (p[1], ρ[1], and T[1]) are given at a point within the tank; since the tank is "large," the velocity there is assumed to be near zero. Also indicated in Fig. 1 are the same parameters, together with the velocity of flow and the area of the nozzle (p[2], ρ[2], T[2], v[2], and A[2]) at the exit of the nozzle, as well as p[2]', the pressure outside the tank.
Figure 1, Convergent nozzle.
Preview Isentropic Flow through Convergent Nozzle Calculator
In a convergent nozzle, flow through the nozzle's throat will be either sonic or subsonic. If flow is sonic, the Mach number is equal to unity, and the ratio p[2]/p[1] must be equal to the "critical
pressure ratio" as defined by
Eq. 1

${\left(\frac{{p}_{2}}{{p}_{1}}\right)}_{c}={\left(\frac{2}{k+1}\right)}^{k/\left(k-1\right)}$

where

(p[2]/p[1])[c] = critical pressure ratio

k = specific heat ratio
If flow through the throat is subsonic, the pressure ratio p'[2]/p[1] will be larger than the critical pressure ratio (p[2]/p[1])[c].
Obviously, in order to have appreciable flow from the tank through the nozzle out of the tank, pressure inside the tank must be greater than pressure outside the tank (that is, p[1] > p'[2]). If the
pressure drop is small [(p[2]/p[1]) >(p[2]/p[1])[c]], flow through the nozzle will be subsonic and the pressure at the exit of the nozzle will be the same as the pressure outside the tank (p[2] = p'
[2]). In this case the weight flow rate can be determined from the equation
Eq. 2
$G={A}_{2}\sqrt{\frac{2gk}{k-1}{p}_{1}{\gamma }_{1}\left[{\left(\frac{{p}_{2}}{{p}_{1}}\right)}^{2/k}-{\left(\frac{{p}_{2}}{{p}_{1}}\right)}^{\left(k+1\right)/k}\right]}$
G = weight flow rate, lbs/sec, (N/sec)
A[2] = throat area, ft^2, (m^2)
g = acceleration of gravity, ft/sec^2 (m/sec^2)
k = specific heat ratio
p[1] = pressure inside the tank, lbs/ft^2, (N/m^2)
p[2] = pressure at the exit of the nozzle, lbs/ft^2, (N/m^2)
γ[1] = specific weight of fluid inside the tank, lbs/ft^3, (N/m^3)
R = Gas constant, ft/°R, (m/K)
If the pressure drop increases (either by increasing p[1] or decreasing p'[2], or both), flow through the nozzle will remain subsonic until the point is reached where the ratio p'[2]/p[1] is equal to
the critical pressure ratio (p[2]/p[1])[c], At this point, flow through the nozzle will be sonic and the pressure at the exit of the nozzle will be the same as the pressure outside the tank (p[2] =
p'[2]). In this case, the weight flow rate can be determined from the equation
Eq. 3

$G={A}_{2}{p}_{1}\sqrt{\frac{gk}{R{T}_{1}}{\left(\frac{2}{k+1}\right)}^{\left(k+1\right)/\left(k-1\right)}}$

where T[1] is the absolute temperature of the fluid inside the tank, R is the gas constant, and other terms are as defined above for equation 2. (Equation 3 follows from equation 2 by substituting the critical pressure ratio of equation 1 and γ[1] = p[1]/(R T[1]).)
If the pressure drop increases further [beyond the point where the ratio p'[2]/p[1] is equal to the critical pressure ratio (p'[2]/p[1])[c] , flow through the nozzle will remain sonic and the
pressure at the exit of the nozzle will be greater than the pressure outside the tank (p[2] > p'[2]). However, the weight flow rate will
not increase. Thus, no matter how much p[1] is increased or p'[2] is decreased, if the ratio p'[2]/p[1] is less than the critical pressure ratio (p[2]/p[1])[c] the weight flow rate will be the same
as that where the ratio p'[2]/p[1] is equal to the critical pressure ratio. In this case the weight flow rate can be determined from
equation 3 provided the value substituted for p[1] is the pressure that makes the ratio p'[2]/p[1] equal to the critical pressure ratio (p[2]/p[1])[c].
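The regime test described above is easy to automate. The following Python sketch (function and variable names are illustrative, not from the source) computes the critical pressure ratio of Eq. 1 and classifies flow through the throat as sonic or subsonic:

```python
def critical_pressure_ratio(k):
    """Critical pressure ratio (p2/p1)_c = (2/(k+1))**(k/(k-1)), Eq. 1."""
    return (2.0 / (k + 1.0)) ** (k / (k - 1.0))

def throat_regime(p_outside, p_tank, k=1.4):
    """Classify flow through the throat of a convergent nozzle.

    Subsonic when p2'/p1 exceeds the critical ratio; sonic (choked) otherwise.
    """
    if p_outside / p_tank > critical_pressure_ratio(k):
        return "subsonic"
    return "sonic"

print(round(critical_pressure_ratio(1.4), 4))  # ~0.5283 for air
print(throat_regime(90.0, 100.0))   # small pressure drop -> subsonic
print(throat_regime(30.0, 100.0))   # large pressure drop -> sonic (choked)
```

With k = 1.4 the critical ratio is about 0.528, which is why appreciable pressure drops through air nozzles are usually choked.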
Schaum's Outline of Fluid Mechanics and Hydraulics
Estimating Chisquare Parameters with TidyDensity
[This article was first published on Steve's Data Tips and Tricks, and kindly contributed to R-bloggers.]
Hello R users! Today, let’s explore the latest addition to the TidyDensity package: util_chisquare_param_estimate(). This function is designed to estimate parameters for a Chi-square distribution
from your data, providing valuable insights into the underlying distribution characteristics.
Understanding the Purpose
The util_chisquare_param_estimate() function is a powerful tool for analyzing data that conforms to a Chi-square distribution. It utilizes maximum likelihood estimation (MLE) to infer the degrees of
freedom (dof) and non-centrality parameter (ncp) of the Chi-square distribution based on your input vector.
Getting Started
To begin, let’s generate a dataset that conforms to a Chi-square distribution:
# Generate Chi-square distributed data
data <- rchisq(250, 10, 2)
# Call util_chisquare_param_estimate()
result <- util_chisquare_param_estimate(data)
By default, the function will automatically generate empirical distribution data if .auto_gen_empirical is set to TRUE. This means you’ll not only get the Chi-square parameters but also a combined
table of empirical and Chi-square distribution data.
Exploring the Output
Let’s unpack what the function returns:
• dist_type: Identifies the type of distribution, which will be “Chisquare” for this analysis.
• samp_size: Indicates the sample size, i.e., the number of data points in your vector .x.
• min, max, mean: Basic statistics summarizing your data.
• dof: The estimated degrees of freedom for the Chi-square distribution.
• ncp: The estimated non-centrality parameter for the Chi-square distribution.
This comprehensive output allows you to gain deeper insights into your data’s distribution characteristics, particularly when the Chi-square distribution is a potential model.
Let’s now take a look at the output itself.
result$combined_data_tbl |>
  head(5) |>
  glimpse()
Rows: 5
Columns: 8
$ sim_number <fct> 1, 1, 1, 1, 1
$ x <int> 1, 2, 3, 4, 5
$ y <dbl> 12.716908, 17.334453, 11.913559, 15.252845, 7.208524
$ dx <dbl> -2.100590, -1.952295, -1.803999, -1.655704, -1.507408
$ dy <dbl> 2.741444e-05, 3.676673e-05, 4.930757e-05, 6.515313e-05, 8.6…
$ p <dbl> 0.640, 0.848, 0.576, 0.744, 0.204
$ q <dbl> 2.765968, 3.205658, 3.297085, 3.567437, 3.869764
$ dist_type <fct> "Empirical", "Empirical", "Empirical", "Empirical", "Empiri…
result$combined_data_tbl |>
  tidy_distribution_summary_tbl(dist_type) |>
  glimpse()
Rows: 2
Columns: 13
$ dist_type <fct> "Empirical", "Chisquare c(9.961, 1.979)"
$ mean_val <dbl> 11.95263, 12.04686
$ median_val <dbl> 10.79615, 11.48777
$ std_val <dbl> 5.438087, 5.349567
$ min_val <dbl> 2.765968, 1.922223
$ max_val <dbl> 29.95844, 30.43480
$ skewness <dbl> 0.9344797, 0.6903444
$ kurtosis <dbl> 3.790972, 3.243122
$ range <dbl> 27.19248, 28.51258
$ iqr <dbl> 7.469292, 7.282262
$ variance <dbl> 29.57279, 28.61787
$ ci_low <dbl> 4.010739, 3.997601
$ ci_high <dbl> 26.33689, 23.60014
Behind the Scenes: MLE Optimization
Under the hood, the function leverages MLE through the optim() function to estimate the Chi-square parameters. It minimizes the negative log-likelihood function to obtain the best-fitting degrees of
freedom (dof) and non-centrality parameter (ncp) for your data.
Initial values for the optimization are intelligently set based on your data’s sample variance and mean, ensuring a robust estimation process.
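The initial-value idea can be illustrated outside R as well. This Python sketch uses the method of moments (not the full MLE that util_chisquare_param_estimate() performs) to recover starting values for dof and ncp from the sample mean and variance, via the identities mean = dof + ncp and variance = 2(dof + 2·ncp) for a noncentral Chi-square distribution:

```python
import random
import statistics

def chisq_moment_start(x):
    """Method-of-moments starting values for (dof, ncp).

    For X ~ noncentral chi-square(dof, ncp):
      E[X] = dof + ncp,  Var[X] = 2 * (dof + 2 * ncp).
    """
    m = statistics.fmean(x)
    v = statistics.variance(x)
    ncp = (v - 2.0 * m) / 2.0
    dof = m - ncp
    return dof, ncp

# Simulate chi-square(dof=10, ncp=2) as a sum of 10 squared shifted normals,
# with the squared means summing to 2.
random.seed(42)
mu = (2.0 / 10.0) ** 0.5
data = [sum(random.gauss(mu, 1.0) ** 2 for _ in range(10)) for _ in range(5000)]

dof0, ncp0 = chisq_moment_start(data)
print(round(dof0, 1), round(ncp0, 1))  # roughly dof ~ 10, ncp ~ 2
```

Such moment-based values make sensible starting points for a numerical optimizer, which is the role the blog post describes for the mean- and variance-based initial values passed to optim().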
Visualizing the Results
One of the strengths of TidyDensity is its seamless integration with visualization tools like ggplot2. With the combined output from util_chisquare_param_estimate(), you can easily create insightful
plots that compare the empirical distribution with the estimated Chi-square distribution.
result$combined_data_tbl |>
  tidy_combined_autoplot()

This example demonstrates how you can visualize the empirical data overlaid with the fitted Chi-square distribution, providing a clear representation of your dataset's fit to the model.
In summary, util_chisquare_param_estimate() from TidyDensity is a versatile tool for estimating Chi-square distribution parameters from your data. Whether you’re exploring the underlying distribution
of your dataset or conducting statistical inference, this function equips you with the necessary tools to gain valuable insights.
If you haven’t already, give it a try and let us know how you’re using TidyDensity to enhance your data analysis workflows! Stay tuned for more updates and insights from the world of R programming.
Happy coding!
251 research outputs found
The discovery of Kaluza-Klein (KK) gravitons is a smoking gun of extra dimensions. Other scenarios, however, could give rise to spin-two resonances of a new strongly-coupled sector and act as
impostors. In this paper we prove that a spin-two resonance does not couple to the Standard Model through dimension-four operators. We then show that the massive graviton and its impostor both couple
to the Standard Model through the same dimension-five operators. Therefore the spin determination is identical. Nevertheless, we also show that one can use the ratio of branching ratios to photons
and to jets for distinguishing between KK gravitons and their impostors. The capacity to distinguish between KK gravitons and impostors is a manifestation of the breakdown of the duality between AdS
and strongly-coupled theories. Comment: 14 pages, 3 figures, 1 table. References added, typos corrected.
Models of dynamical electroweak symmetry breaking usually include new spin-1 resonances, whose couplings and masses have to satisfy electroweak precision tests. We propose to use dilepton searches to
probe the underlying structure responsible for satisfying these. Using the invariant mass spectrum and charge asymmetry, we can determine the number, parity, and isospin of these resonances. We pick
three models of strong/warped symmetry breaking, and show that each model produces specific features that reflect this underlying structure of electroweak symmetry breaking and cancellations. Comment: Added missing reference.
Modern extra-dimensional Higgsless scenarios rely on a mass-matching between fermionic and bosonic KK resonances to evade constraints from precision electroweak measurements. After analyzing all of
the Tevatron and LEP bounds on these so-called Cured Higgsless scenarios, we study their LHC signatures and explore how to identify the mass-matching mechanism, the key to their viability. We find
singly and pair produced fermionic resonances show up as clean signals with 2 or 4 leptons and 2 hard jets, while neutral and charged bosonic resonances are visible in the dilepton and leptonic WZ
channels, respectively. A measurement of the resonance masses from these channels shows the matching necessary to achieve $S\simeq 0$. Moreover, a large single production of KK-fermion resonances is
a clear indication of compositeness of SM quarks. Discovery reach is below 10 fb$^{-1}$ of luminosity for resonances in the 700 GeV range. Comment: 28 pages, 18 figures.
We reconsider the low-energy effective theory for Higgs-less electroweak symmetry breaking: we study the anomaly-matching in the situation where all Goldstone fields disappear from the spectrum as a
result of the Higgs mechanism. We find that the global SU(2)_L x SU(2)_R x U(1)_{B-L} symmetry of the underlying theory, which is spontaneously broken to SU(2)_{L+R} x U(1)_{B-L} has to be
anomaly-free. For the sake of generality, we include the possibility of light spin-1/2 bound states resulting from the dynamics of the strongly-interacting symmetry-breaking sector, in addition to
the Goldstone bosons. Such composite fermions may have non-standard couplings at the leading order, and an arbitrary total B-L charge. In order to perform the anomaly-matching in that case, we
generalize the construction of the Wess-Zumino effective lagrangian. Composite fermions beyond the three known generations are theoretically allowed, and there are no restrictions from the
anomaly-matching on their couplings nor on their U(1)_{B-L} charge. Absence of global anomalies for the composite sector as a whole does not preclude anomalous triple gauge boson couplings arising
from composite fermion triangular diagrams. On the other hand, the trace of B-L over elementary fermions must vanish if all Goldstone modes are to disappear from the spectrum.Comment: Keywords:
Anomalies in Field and String Theories, Spontaneous Symmetry Breaking, Beyond the Standard Model, Chiral Lagrangians. 33 pages, 7 figures.
We perform a chiral extrapolation of lattice data on the scalar K pi form factor and the ratio of the kaon and pion decay constants within Chiral Perturbation Theory to two loops. We determine the
value of the scalar form factor at zero momentum transfer, at the Callan-Treiman point and at its soft kaon analog as well as its slope. Results are in good agreement with their determination from
experiment using the standard couplings of quarks to the W boson. The slope is however rather large. A study of the convergence of the chiral expansion is also performed. Comment: few minor changes.
In PRD78(2008)055005 [arXiv:0805.1503 [hep-ph]] and PRD79(2009)075004 [arXiv:0809.1324 [hep-ph]], we constructed a holographic description of walking technicolour theories using both a hard- and a
soft-wall model. Here, we show that the dilaton field becomes phenomenologically irrelevant for the spectrum of spin-one resonances once a term is included in the Lagrangian that mixes the Goldstone
bosons and the longitudinal components of the axial vector mesons. We show how this mixing affects our previous results and we make predictions about how this description of technicolour can be
tested. Comment: 7 pages, no figures
Xylan is primarily found in the secondary cell wall of plants providing strength and integrity. To take advantage of the reinforcing effect of xylan in papermaking, it is crucial to understand its
role in pulp fibers, as it undergoes substantial changes during pulping. However, the contributions of xylan that is added afterwards (extrinsic) and xylan present after pulping (intrinsic) remain
largely unexplored. Here, we partially degraded xylan from refined bleached softwood kraft pulp (BSKP) and adsorbed xylan onto BSKP. Enzymatic degradation of 1 % xylan resulted in an open hand sheet
structure, while adsorption of 3 % xylan created a denser fiber network. The mechanical properties improved with adsorbed xylan, but decreased more significantly after enzymatic treatment. We propose
that the enhancement in mechanical properties by adsorbed extrinsic xylan is due to increased fiber-fiber bonds and sheet density, while the deterioration in mechanical properties of the enzyme
treated pulp is caused by the opposite effect. These findings suggest that xylan is decisive for fiber network strength. However, intrinsic xylan is more critical, and the same properties cannot be
achieved by readsorbing xylan onto the fibers. Therefore, pulping parameters should be selected to preserve intrinsic xylan within the fibers to maintain paper strength.
During explosive eruptions, volcanic plumes inject ash into the atmosphere and may severely affect air traffic, as illustrated by the 2010 Eyjafjallajökull eruption. Quantitative estimates of ash
injection can be deduced from the height reached by the volcanic plume on the basis of scaling laws inferred from models of powerful Plinian plumes. In less explosive basaltic eruptions, there is a
partitioning of the magma influx between the atmospheric plume and an effusive lava flow on the ground. We link the height reached by the volcanic plume with the rate of ash injection in the
atmosphere via a refined plume model that (1) includes a recently developed variable entrainment law and (2) accounts for mass partitioning between ground flow and plume. We compute the time
evolution of the rate of injection of ash into the atmosphere for the Eyjafjallajökull eruption on the basis of satellite thermal images and plume heights and use the dispersion model of the Volcanic
Ash Advisory Center of Toulouse to translate these numbers into hazard maps. The classical Plinian model would have overestimated ash injection by about 20% relative to the refined estimate, which
does not jeopardize risk assessment. This small error was linked to effective fragmentation by intense interactions of magma with water derived from melting of ice and hence strong mass partitioning
into the plume. For a less well fragmented basaltic dry eruption, the error may reach 1 order of magnitude and hence undermine the prediction of ash dispersion, which demonstrates the need to monitor
both plume heights and ground flows during an explosive eruption.
|
{"url":"https://core.ac.uk/search/?q=author%3A(Hirn%2C%20B.)","timestamp":"2024-11-05T06:46:31Z","content_type":"text/html","content_length":"260166","record_id":"<urn:uuid:a484a8fb-300f-4699-8304-080cb40574c9>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00279.warc.gz"}
|
What Color Is Math Blue Or Red? Not An Object!
Math does not have an assigned color, as it is a branch of study and not an object. Generally, math is associated with colors such as black, white, and gray, as these can represent simple equations
and formulas.
The association between math and the colors blue or red is subjective and varies depending on cultural and individual perspectives.
For example:
Blue: Often linked with calm, clarity, and focus, which are beneficial for logical thinking and problem-solving in mathematics.
Red: Sometimes associated with alertness and urgency, which can be relevant in situations that require quick calculations or attention to detail.
While blue is commonly connected with the analytical nature of math, red may be favored in educational settings to capture students’ attention and signify the importance of mathematical concepts.
Key Takeaway
Mathematics doesn’t inherently possess any color, neither blue nor red. It’s an abstract discipline consisting of numbers, structures, and concepts.
However, due to the phenomenon of synesthesia, which triggers association of letters or numbers with certain colors, some people might associate certain colors with math.
Numerical-color synesthesia is a type that often causes people to perceive numbers and sometimes mathematical operations in certain colors.
This perception varies from person to person. What might be red for one person could be blue for another.
The Psychology of Color Perception
One might argue that the psychology of color perception plays a significant role in determining the associations between mathematical concepts and specific colors.
• Research has shown that color can impact cognitive processes, including memory and attention.
• When it comes to math, certain colors may evoke emotions or mental states that can influence problem-solving and comprehension.
• For example, blue has been associated with calmness and stability, which could make it a suitable color for mathematical concepts that require logic and precision.
• On the other hand, red is often linked to urgency and intensity, making it potentially suitable for concepts involving quick calculations or high energy.
Understanding the psychological effects of color perception in relation to math can provide insight into how to optimize learning environments and materials for improved comprehension and retention.
Historical and Cultural Influences
Historical and cultural influences have shaped the color associations with mathematical concepts, reflecting diverse societal perspectives and traditions.
• In some cultures, the color red symbolizes good luck, prosperity, and happiness, while in others, it signifies danger, power, or revolution.
• Similarly, blue has been linked to tranquility, stability, and depth in some societies, whereas in different contexts, it represents coldness, sadness, or masculinity.
• These historical and cultural nuances have seeped into the way mathematical concepts are visually represented and taught, leading to varying color associations in different parts of the world.
• For instance, the use of red and blue in representing positive and negative values in mathematical graphs can be traced back to cultural interpretations of these colors.
Understanding these influences is crucial in appreciating the diverse perspectives on the color associations with math.
Mathematical Concepts and Color Association
Color perception plays a significant role in mathematical concepts. Different colors often symbolize various mathematical ideas.
• The symbolism of math colors can shed light on the associations between specific colors and mathematical principles.
• This offers insights into how individuals perceive and understand mathematical concepts.
Exploring the relationship between color and mathematical symbolism provides valuable perspectives.
It helps us understand the intersection of visual perception and abstract mathematical reasoning.
Color Perception in Math
Mathematics is often associated with and perceived through the lens of color, reflecting the intricate relationship between abstract concepts and sensory perception.
The color perception in math is a fascinating area where mathematical concepts are linked with color associations.
These associations can vary among individuals and cultures, adding a layer of complexity to the understanding of math.
The table below illustrates some common color associations with mathematical concepts, showcasing how color can be used to represent and understand abstract mathematical ideas:
Mathematical Concept Color Association
Number theory Blue
Geometry Green
Algebra Red
Calculus Yellow
Statistics Purple
Symbolism of Math Colors
The symbolism of colors in mathematical concepts provides a unique framework for understanding and interpreting abstract ideas, adding depth and richness to mathematical comprehension.
The association of specific colors with mathematical concepts can evoke emotions and enhance the understanding of complex theories.
This association can vary among individuals and cultures, but some common examples include:
• Blue: Often associated with logic, reasoning, and stability in mathematical contexts.
• Red: Symbolizes passion, energy, and dynamism, which can be linked to the excitement and creativity in problem-solving.
• Green: Represents growth, harmony, and balance, reflecting the development and equilibrium of mathematical ideas.
• Purple: Associated with creativity, imagination, and transformation, reflecting the innovative and transformative nature of mathematical discoveries.
Synesthesia and Mathematical Visualization
Synesthesia, the neurological phenomenon where stimulation of one sensory or cognitive pathway leads to automatic, involuntary experiences in a second pathway, has been a subject of interest in the
context of mathematical visualization.
The association between math and colors in synesthetic experiences offers a unique perspective on how individuals perceive and comprehend mathematical concepts.
Exploring synesthetic experiences in math may provide insights into enhancing mathematical comprehension and pedagogy.
Math and Color Association
Color association in mathematics, particularly through synesthesia and mathematical visualization, has been a subject of growing interest in cognitive science and psychology.
This phenomenon raises intriguing questions about the relationship between mathematical thinking and sensory perception.
Some key points to consider include:
• Synesthesia: Some individuals experience synesthesia, a neurological condition where stimulation of one sensory pathway leads to automatic, involuntary experiences in a second sensory pathway.
• Mathematical Visualization: Certain people have the ability to visualize mathematical concepts in specific colors, shapes, or spatial arrangements, aiding in problem-solving and comprehension.
• Cross-Modal Associations: The cross-wiring of sensory pathways suggests that color associations in mathematics may stem from the brain’s tendency to establish connections between different types
of information.
• Cognitive Implications: Exploring math and color association can provide valuable insights into the nature of mathematical cognition and sensory processing.
Synesthetic Experiences in Math
Mathematicians with synesthetic experiences often perceive mathematical concepts with vivid sensory associations, such as colors, shapes, or spatial arrangements.
• This phenomenon, known as synesthesia, allows individuals to experience multiple senses simultaneously, leading to unique perceptions of mathematical ideas.
• For some, numbers may evoke specific colors, equations may manifest as distinct geometric patterns, and functions may be visualized in intricate spatial forms.
• These sensory associations can provide alternative perspectives and aid in problem-solving and mathematical reasoning.
• While synesthetic experiences in math are subjective and vary among individuals, they offer valuable insights into the interconnected nature of sensory perception and mathematical cognition.
Exploring these diverse experiences can enrich our understanding of mathematical visualization and its potential impact on learning and creativity in the realm of mathematics.
Enhancing Math Comprehension
When considering the enhancement of math comprehension, the phenomenon of synesthesia and its influence on mathematical visualization warrants careful examination.
Synesthesia, a neurological condition where stimulation of one sensory or cognitive pathway leads to automatic, involuntary experiences in a second pathway, can provide unique insights into
mathematical concepts.
In the context of enhancing math comprehension, synesthesia offers potential benefits such as:
• Enhanced Memory Retention: Synesthetes may use their sensory associations to remember mathematical formulas and concepts more effectively.
• Multisensory Learning: Synesthetic experiences can enable individuals to perceive mathematical relationships through multiple senses, fostering a deeper understanding.
• Creative Problem-Solving: Synesthesia may encourage innovative approaches to mathematical problem-solving through the blending of sensory experiences.
• Personalized Learning Strategies: Understanding an individual’s synesthetic experiences can inform tailored teaching methods to optimize mathematical comprehension.
Impact on Learning and Understanding
The impact of color on learning and understanding in mathematics is a topic of significant interest and debate among educators and researchers.
• Studies have shown that color can affect cognitive processes, memory retention, and overall comprehension.
• In the context of mathematics, the use of specific colors in educational materials and classroom environments has been linked to varying levels of impact on students’ learning experiences.
• For example, some research suggests that certain colors may enhance focus and information processing, while others may lead to distractions or confusion.
• Understanding the individual differences in how color influences learning is crucial for educators to create inclusive and effective teaching strategies.
Additionally, considering the potential impact of color on mathematical understanding can provide valuable insights into optimizing learning environments and instructional materials for diverse
student populations.
The Debate Continues: Blue Vs. Red
The ongoing debate between proponents of using blue and red in mathematical materials and environments underscores the significance of color choice in shaping students’ learning experiences and outcomes.
This debate continues to spark discussions and research on the potential impact of color on mathematical cognition.
The following points highlight the key aspects of the blue vs. red debate:
• Cultural Influences: The preference for blue or red may vary across different cultures, impacting the effectiveness of color choices in mathematical settings.
• Cognitive Effects: Studies explore how blue and red may affect cognitive processes such as problem-solving and concentration in mathematical tasks.
• Accessibility: Considerations arise regarding color blindness and the accessibility of content for all students.
• Personalization: The potential benefits of allowing students to choose their preferred color for mathematical materials and resources.
This ongoing debate emphasizes the need for thoughtful consideration of color selection in educational materials.
The debate over whether math is associated with the color blue or red continues to provoke discussion and intrigue.
While historical and cultural influences play a role, the psychology of color perception and synesthesia also contribute to the perception of mathematical concepts.
Ultimately, the impact of color on learning and understanding remains a fascinating area for further exploration.
The expression ‘color me intrigued’ adds a touch of playfulness to the discussion.
|
{"url":"https://colorvisit.com/what-color-is-math-blue-or-red/","timestamp":"2024-11-06T02:41:51Z","content_type":"text/html","content_length":"139014","record_id":"<urn:uuid:e705e03c-b13c-40a2-88b2-36ba80dc9a7f>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00422.warc.gz"}
|
Samacheer Kalvi 3rd Standard Maths Guide Term 3 Chapter 5 Money
Samacheer Kalvi 3rd Standard Maths Book Solutions Term 3 Chapter 5 Money
Rupees and Paise (Text Book Page No. 36):
Question 1.
Convert the following rupees into paise.
Add the following (Text Book Page No. 37):
Question 1.
Question 2.
Question 3.
Question 4.
Question 5.
Question 6.
Subtract the following (Text Book Page No. 37):
Question 1.
Question 2.
Question 3.
Question 4.
Question 5.
Question 6.
Exercise (Text Book Page No. 38):
Question 1.
Sengothai bought a school bag for ₹ 210.30 and a sports shoe for ₹ 260.20. Find the amount to be returned by the shopkeeper if she has paid five hundred rupees to the shopkeeper.
Adding rupees:
School bag = 210.30 (+)
Sports shoe = 260.20
Total cost = 470.50
Subtracting rupees:
Sengothai paid = 500.00
Total cost = (-) 470.50
Shopkeeper has to return = 29.50
Question 2.
Kumaran’s father asked him to get change for ₹ 200 from his uncle. If his uncle gave him a hundred-rupee note and a fifty-rupee note, how much more does his uncle have to give him?
Adding rupees:
1 hundred rupees = 100.00
1 fifty rupees = (+) 50.00
Total = 150.00
Subtracting rupees:
Father gave = 200.00
Uncle returned = 150.00
More uncle need to give him = 50.00
Rate Charts and Simple Bills:
Question 1.
The following are the items eaten by Raju and his family. Fill in the blanks using the given bill.
i. Name of the Restaurant _________
Hotel foods
ii. Bill number _________
iii. Date of the bill ________
iv. Total number of items eaten _________
v. Total amount of money to be paid _________
Question 2.
Complete the given bill and find the total amount to be paid.
Question 3.
Prepare Bills for the items purchased using the given rate chart.
i. Ramya bought two pens, three erasers, and a sketch packet. Prepare a bill for her purchase.
ii. Ravi bought an eraser, a sharpener, and two pens. Prepare a bill for his purchase.
3rd Standard Maths Guide Money Additional Questions and Answers
Question 1.
Question 2.
Question 3.
Question 4.
Question 5.
Complete the given table and find the total amount to be paid.
|
{"url":"https://samacheerkalviguru.com/samacheer-kalvi-3rd-standard-maths-guide-term-3-chapter-5/","timestamp":"2024-11-01T19:17:46Z","content_type":"text/html","content_length":"64303","record_id":"<urn:uuid:ec4ec311-2363-48f9-94ed-6ee2ffeff584>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00649.warc.gz"}
|
[Solved] What is the probability that exactly two | SolutionInn
What is the probability that exactly two of the two offspring will be affected in a family with dominant disease with complete penetrance?
A dominantly inherited genetic disease is identified over several generations of a large family. However, about half the families have dominant disease with complete penetrance, whereby if a parent
is affected there is a 50% probability that any one offspring will be affected. Similarly, about half the families have dominant disease with reduced penetrance, whereby if a parent is affected there
is a 25% probability that any one offspring will be affected.
Suppose in a particular family one parent and two of the two offspring are affected.
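As a sketch of how the numbers work out (our reading of the setup, not the textbook's worked solution), treat the two offspring as independent trials. Under complete penetrance each offspring is affected with probability 1/2, so the probability that exactly two of two are affected is (1/2)^2 = 1/4. The two-family-type setup also invites a Bayes step:

```python
from fractions import Fraction

# Per-offspring probability of being affected under each penetrance model.
p_complete = Fraction(1, 2)   # complete penetrance: 50% per offspring
p_reduced = Fraction(1, 4)    # reduced penetrance: 25% per offspring

# Offspring outcomes are assumed independent, so "both affected" multiplies.
both_complete = p_complete ** 2   # 1/4
both_reduced = p_reduced ** 2     # 1/16

# Posterior probability the family is the complete-penetrance type, given
# one affected parent and both offspring affected (each family type has
# prior probability 1/2, as stated in the problem).
prior = Fraction(1, 2)
posterior_complete = (prior * both_complete) / (
    prior * both_complete + prior * both_reduced
)

print(both_complete)       # 1/4
print(posterior_complete)  # 4/5
```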
|
{"url":"https://www.solutioninn.com/study-help/fundamentals-of-biostatistics/what-is-the-probability-that-exactly-two-of-the-two","timestamp":"2024-11-13T14:44:42Z","content_type":"text/html","content_length":"82009","record_id":"<urn:uuid:825cab1c-0292-4ef1-9596-f81535e0b87a>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00682.warc.gz"}
|
Before introducing the cryptographic tools we need, just a brief recap of some of my previous posts. We can express an instance of a problem as three vectors of polynomials, and a solution to that
computation as a vector of values.
With a series of transformations (“reductions”), we can convert a computation that we want to prove execution of to this polynomial format. Knowing all the values of the solution vector w actually
means knowing the values of each variable used during our computation. If a polynomial H(X) exists, as defined in the QAP post, it means that all the variables were calculated as defined by the
computations, i.e. that all constraints were met. Nothing so far is “zero knowledge”: we need some technique to hide information.
The first ingredient is, not unexpectedly, a one-way function: we can call it hh(X). hh(X) is like a hashing function, but it also supports addition and multiplication, i.e. it maps X to a “space”
where the operations of addition and multiplication are defined, and in there it preserves the structure of linear combinations of the inputs. So, hh(a*X+b*Z) = a*hh(X)+b*hh(Z).
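A toy way to see this property is modular exponentiation, hh(x) = g^x mod p. Note this is written multiplicatively, so the linear structure of hh shows up in the exponents; the prime and base below are illustrative and far too small to be secure, and real constructions use elliptic-curve groups:

```python
# Toy "hiding" hh(x) = g^x mod p. In multiplicative notation the preserved
# linear structure reads: hh(a*x + b*z) = hh(x)^a * hh(z)^b (mod p).
p = 1000003   # a prime modulus (illustrative only, NOT secure)
g = 5         # an arbitrary base (illustrative only)

def hh(x):
    return pow(g, x, p)

a, b, x, z = 3, 7, 11, 13
lhs = hh(a * x + b * z)
rhs = (pow(hh(x), a, p) * pow(hh(z), b, p)) % p
print(lhs == rhs)  # True: linear combinations survive the hiding
```

The point is only that linear operations on hidden values remain checkable, while recovering x from hh(x) is hard (the discrete-log problem, at realistic sizes).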
That’s cool, but how are we going to use this one-way function? Well, we’ll use it to prove that we know the H(X) polynomial we calculated in the QAP post. But we’ll do that without exposing the
coefficients of H(X): instead, we’ll disclose the value of H(X) evaluated in a randomly chosen secret point P and mapped into this new space. This is critical: to make sure that the prover cannot
make up the proof, we need to be sure that he does not know the value of the secret point P: P must be secret, calculated once by someone, deleted and totally forgotten, unknown to the world!
… I can pretty much guess what you’re thinking: how can we evaluate H(P) and map it to this hidden space, if nobody knows P? We can do that because multiplications of terms for their coefficients,
and their sums, maintain the same “meaning” in the new space, so we can calculate the polynomial in the new space by applying the same linear combination expressed by the original polynomial to the
hidings hh(1), hh(P), hh(P^2), hh(P^3), …
Of course to do that, the prover needs to know the hidings hh(1), hh(P), hh(P^2), hh(P^3), … they will have to be provided, and that explains the need, in our zk-Snark, of a “trusted setup”, I’ll
write a specific post about that when I have time, I guess towards the end of the protocol presentation.
Before the prover proves anything then, someone must have randomly sampled a point P, calculated the elements hh(1), hh(P), hh(P^2), hh(P^3), … (as many as the number of constraints - 1), made them
publicly available, deleted and cancelled from memory the point P. These hidings are made available as part of the CRS, the common reference string, or “structured reference string” as it’s now
called (as promised, more on this to come).
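The evaluation-from-hidings step can be sketched with the same toy hiding as above. The secret point appears in this script only to build and check the example; in a real setup it would be destroyed after the hidings are published:

```python
# Toy: the prover evaluates H at the secret point P using only the
# published hidings hh(P^0), hh(P^1), hh(P^2), hh(P^3), where
# hh(x) = g^x mod p (same illustrative, insecure parameters as before).
p, g = 1000003, 5
P_secret = 123  # sampled at setup; shown here only so we can sanity-check

hidings = [pow(g, P_secret ** i, p) for i in range(4)]  # the published CRS part

# The prover's polynomial H(X) = 2 + 3X + X^3, known via its coefficients.
coeffs = [2, 3, 0, 1]

# hh(H(P)) = product over i of hh(P^i)^coeff_i  -- no knowledge of P needed.
hiding_of_H_at_P = 1
for c, h in zip(coeffs, hidings):
    hiding_of_H_at_P = (hiding_of_H_at_P * pow(h, c, p)) % p

# Sanity check, possible only in this toy because we still know P_secret:
H_at_P = sum(c * P_secret ** i for i, c in enumerate(coeffs))
assert hiding_of_H_at_P == pow(g, H_at_P, p)
```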
Putting things together, to give proof that we performed a given computation, we’ll prove that we know the polynomials A(X), B(X), C(X), and H(X) by mapping those polynomials, calculated in a secret
point P, into the hiding space. The verifier will verify the proof by checking that the hidings of [A(P)*B(P)-C(P)], divided by the hidings of Z(P), is equal to the hiding of H(P). To verify this,
without revealing secrets, we’ll need a new ingredient, called “pairing”, which will have its own dedicated post.
One last thing before we move on: why does the point P need to be secret? Well, if it’s not secret, a malicious prover could make up other polynomials A1'(X), A2'(X), A3'(X), … B1'(X), … C1'(X), … and make up a vector w' such that A'(X)*B'(X) - C'(X) = H'(X)*Z(X) holds only in that point P. That wouldn’t be too difficult to achieve, so the prover could in fact make up false proofs. By keeping P secret, we are (statistically) proving that the polynomials hold that equation in ANY point, not just in that point P.
|
{"url":"http://www.zeroknowledgeblog.com/index.php/the-pinocchio-protocol/hiding","timestamp":"2024-11-09T18:59:24Z","content_type":"text/html","content_length":"20683","record_id":"<urn:uuid:ca5db343-3f9e-4d19-867e-78096ec94ccc>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00334.warc.gz"}
|
How to embed a 3D matplotlib figure in the notebook interface?
I'd like to embed a matplotlib figure in the notebook interface of SageMath, is this possible? Presently I am using this SageMath code:
# Next we define the parameters
# The Lorenz equations
# Time and initial conditions
# Plot the result
from mpl_toolkits.mplot3d import axes3d
from matplotlib import pyplot as plt
def plot():
fig = plt.figure(1)
ax = fig.add_subplot(111, projection='3d')
ax.plot_wireframe(X, Y, Z, rstride=10, cstride=10)
to solve and plot the solution to the Lorenz equations. What modification to the plot() function do I have to make in order to embed the matplotlib plot in the notebook interface.
1 Answer
First off, you probably don't need a separate function for that, and secondly you definitely shouldn't clobber the normal Sage plot() function with yours - maybe name it plot1()?
But the real thing you need is to create a file for sagenb to display, with something like
It would be nice to use the temporary filename creation capabilities in Sage but those create filenames in the whole tree; you could do something more annoying like
Anyway, I get a nice Lorenz plot so I guess everything works okay! Nice code. I cannot say if this would work with other savefig outputs though as I don't use mpl directly.
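To complement the answer, here is a self-contained sketch that integrates the Lorenz system with a hand-rolled RK4 step and then saves the figure to a file as the answer suggests. The parameter values, step size, and filename are our own illustrative choices, not taken from the original post, and the matplotlib calls are shown in comments since they need a display backend:

```python
# Pure-Python RK4 integration of the Lorenz system, followed by the
# savefig approach described in the answer for embedding in sagenb.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0  # classic Lorenz parameters

def lorenz(state):
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt):
    def shift(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = lorenz(state)
    k2 = lorenz(shift(state, k1, dt / 2))
    k3 = lorenz(shift(state, k2, dt / 2))
    k4 = lorenz(shift(state, k3, dt))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

dt, n = 0.01, 3000
state = (1.0, 1.0, 1.0)
traj = [state]
for _ in range(n):
    state = rk4_step(state, dt)
    traj.append(state)

# The trajectory settles onto the bounded Lorenz attractor:
assert all(abs(v) < 100 for point in traj for v in point)

# In the notebook, plot the trajectory as a curve (not a wireframe) and
# save it to a file so sagenb can display it, e.g.:
#   from mpl_toolkits.mplot3d import axes3d
#   from matplotlib import pyplot as plt
#   fig = plt.figure()
#   ax = fig.add_subplot(111, projection='3d')
#   ax.plot(*zip(*traj))
#   plt.savefig('lorenz.png')
```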
|
{"url":"https://ask.sagemath.org/question/32079/how-to-embed-a-3d-matplotlib-figure-in-the-notebook-interface/","timestamp":"2024-11-10T22:35:20Z","content_type":"application/xhtml+xml","content_length":"54200","record_id":"<urn:uuid:c0949c90-46e2-4bc1-a6a9-09a4a52bb859>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00192.warc.gz"}
|
Mathematics Archives
Explain the differences between the arithmetic return and the geometric return. How do you feel about the following statement: “Avoiding risks is the biggest risk of all”? That statement may seem
contradictory at first, but it makes more sense if you apply the concept to various situations in life and business. For example, many people … Read more
Characterizing uncertainty
Characterizing uncertainty Many business activities generate data that can be thought of as random. An example described in the textbook is the servicing of cars at an oil change shop. Each car
entering the shop can be considered an experiment with random outcomes. A variable of interest in this experiment could be the amount of … Read more
Discussion and reply :QUALICOPC
Discussion and reply:QUALICOPC You have been assigned a topic according to the first letter of your last name. Please identify the topic assigned to you below. For each topic, find a health science
example of it in a published research study or news article. Summarize how your topic is used in the example. Do you … Read more
Assignment: Examining Mean, Median and Mode
Assignment: Examining Mean, Median and Mode Unit outcomes: Differentiate between mean, median, and mode. Interpret distribution shapes, especially with respect to skewness and outliers. Explain the
concept of bell curve. Course outcome practiced in this Assignment: HS311-3: Examine summary statistics of health data in terms of central tendency. Calculate the median, mean and mode of the … Read
Discuss Cross-Curricular Lesson Effectiveness
Discuss Cross-Curricular Lesson Effectiveness Math is used throughout our days and in many contexts. Integrating other content areas into your math lessons or math into other content area lessons
helps students see how we use math in our everyday lives in a variety of situations. Observe a cross‐curricular math Kindergarten lesson taught by your mentor … Read more
Discuss the Development of Numerical Operations
Discuss the Development of Numerical Operations When early number and operation concepts are taught well, children move from understanding “10” as ten ones to understanding “10” as one unit of ten.
This knowledge can be applied in many mathematical operations, including addition and subtraction. Select at least three number and operations in base ten standards … Read more
Explain the Assessment Tools in Mathematics.
Explain the Assessment Tools in Mathematics. High‐quality math teaching and learning begins with the end in mind, including identification of how learning will be assessed during and after learning
activities are completed. For this assignment, you will create developmentally appropriate pre‐assessments for the group learning activities you designed in Topic 2. For each of the … Read more
Explain how accounting information System (AIS) add value to an organization
Explain how accounting information System (AIS) add value to an organization 1- Explain how accounting information System (AIS) add value to the organization using examples of Saudi Companies 2- As a
student of BBA imagine you have decided to become an entrepreneur. You came up with great ideas in your business plan to start your own … Read more
Discuss the Reflection of Financial Decisions
Discuss the Reflection of Financial Decisions Discuss the Reflection of Financial Decisions as you answer the questions that follows:Do you think credit cards are a wise way to pay for things? Do you
think credit cards are necessary?What would keep you from paying extra on the credit card on Financial Decisions ?What would it mean to … Read more
Critically review zone and field models which are used for compartment fire modelling.
Critically review zone and field models which are used for compartment fire modelling. Critically review zone and field models which are used for compartment fire modelling. Include examples and
critically analyse the main assumptions, limitations, advantages and disadvantages of these models. Fire modelling contains two distinct models; zone and field models Field Models Field … Read more
|
{"url":"https://www.essaycounter.com/downloads/category/mathematics/","timestamp":"2024-11-13T16:40:35Z","content_type":"text/html","content_length":"72898","record_id":"<urn:uuid:3ce7c44e-6e50-4889-a926-499c23cd2147>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00512.warc.gz"}
|
How to Use Operators in Excel? (All You Need to Know) - ExcelDemy
Operators specify the type of calculation in an Excel formula. For example, arithmetic operators indicate addition, subtraction, multiplication, or division of formula elements.
In this article, we will discuss 4 commonly used types of operators in Excel: Arithmetic, Comparison/Logical, Reference, and Concatenation operators. We’ll discuss the orders in which operators are
used and how to change those orders.
The below overview image explains the uses of different operators in Excel. We used the IF function to demonstrate their usage, to get a more logical output.
What Are Operators in Excel?
Operators are characters or symbols used to perform mathematical calculations in Excel. We use Excel operators for calculations and various formulas.
What Types of Operators Are There in Excel?
There are 4 types of operators in Excel:
• Arithmetic Operators (+,–,*,/,%,^): Used for basic mathematical calculations.
• Comparison Operators (=,<>,>,<,>=,<=): Used in conditional formatting and other complex formulas.
• Reference Operators (“:”,“,”,“ ”): Refer to a specific range or cell link within formulas.
• Concatenation Operator: The ampersand symbol (&) is the only concatenation operator that joins two or multiple strings together.
Section 1 – Arithmetic Operators and How to Use Them
There are 6 arithmetic operators: Plus (+) sign for addition, Minus (-) sign for subtraction, Asterisk (*) sign for Multiplication, Forward Slash (/) for Division, Percent (%) sign for percentage,
and Caret (^) sign for Exponential operation.
All the arithmetic operators and their summary is given in the following table:
Operators Condition Name Formula Description
% Percent Sign =25%*B10 Converts a numeric value to a percentage.
^ Caret/Exponential =B10^C10 The value of the first cell is raised to the power of the value in the second cell.
* Asterisk =B10*C10 Returns the multiplied value of two cells.
/ Forward Slash =B10/C10 Divides the first cell value by the second one and gives the result.
+ Plus Sign =B10+C10 Adds the two numeric values and returns the result.
– Minus Sign =B10-C10 Subtracts the second cell value from the first and returns the result.
Note: Before using the arithmetic operators you should be aware of their precedence. The Percent sign has the highest precedence. Then, Exponential or Caret symbol. After that, Multiplication and
Division, followed by Addition and Subtraction.
The following data table has 2 values in each row. We will see the result with each operator.
• Enter the following formula in E5 and press ENTER to get the added value:
• For the exponential operator, enter the following formula in E9 and press ENTER:
The exponentiated value 25 is the result.
• Similarly, use the other arithmetic operators to get the rest of the results.
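For readers comparing Excel with a programming language, the same six operations can be sketched in Python. The values 5 and 2 below are hypothetical stand-ins for cells B10 and C10 (chosen so the exponent result matches the article's 25); note that Excel's postfix `%` becomes an explicit division by 100, and the caret `^` becomes `**`.

```python
# Python analogues of Excel's arithmetic operators, using hypothetical
# stand-ins for cells B10 and C10.
b10, c10 = 5, 2

addition       = b10 + c10        # Excel: =B10+C10
subtraction    = b10 - c10        # Excel: =B10-C10
multiplication = b10 * c10        # Excel: =B10*C10
division       = b10 / c10        # Excel: =B10/C10
exponent       = b10 ** c10       # Excel: =B10^C10 (caret maps to **)
percent        = 25 / 100 * b10   # Excel: =25%*B10 (% divides by 100)

print(addition, subtraction, multiplication, division, exponent, percent)
```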
Section 2 – When to Use Comparison/Logical Operators in Excel
Use the comparison/logical operators to compare data between two cells. There are 6 comparison operators in Excel:
Operators Condition Name Formula Description
= Equal to =IF(C5=D5, “True”, “False”) Checks if two cell values are equal.
< Less than =IF(C5<D5, “True”, “False”) Checks if the first cell value is smaller than the second cell value.
> Greater than =IF(C5>D5, “True”, “False”) Checks if the first cell value is greater than the second cell value.
<> Not equal to =IF(C5<>D5, “True”, “False”) Checks if the two cells are not equal.
<= Less than or equal to =IF(C5<=D5, “True”, “False”) Checks if the first cell value is smaller than or equal to the second cell value.
>= Greater than or equal to =IF(C5>=D5, “True”, “False”) Checks if the first cell value is greater than or equal to the second cell value.
We will use these logical operators in an IF formula, which checks whether a condition is TRUE or FALSE. The syntax of the IF function is: =IF(logical_test, value_if_true, value_if_false).
• Enter the following formula in cell E5 and press ENTER:
This formula checks whether the logic is TRUE or FALSE. As cells C5 and D5 are not equal, it returns FALSE.
Again, cells C8 and D8 are not equal, so the Not equal to operator returns TRUE in cell E8.
Similarly, check the other operators.
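The IF-plus-comparison pattern above can be mimicked in Python. This is only a sketch: the helper `excel_if` and the cell values 50 and 65 are assumptions, and Python spells "equal" and "not equal" as `==` and `!=` rather than Excel's `=` and `<>`.

```python
# A rough Python analogue of Excel's =IF(C5=D5, "True", "False"):
# each comparison operator yields a condition that selects one of two strings.
def excel_if(condition, value_if_true, value_if_false):
    return value_if_true if condition else value_if_false

c5, d5 = 50, 65   # hypothetical cell values; C5 and D5 differ

print(excel_if(c5 == d5, "True", "False"))  # Equal to         -> False
print(excel_if(c5 != d5, "True", "False"))  # Not equal to     -> True
print(excel_if(c5 < d5,  "True", "False"))  # Less than        -> True
print(excel_if(c5 >= d5, "True", "False"))  # Greater or equal -> False
```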
Section 3 – Reference Operators
Colon (:), Comma (,), and Space ” “ are the reference operators in Excel. They are also known as Range, Union, and Intersection operators, and are used to indicate a data range.
Operators Condition Name Formula Description
: Range =SUM(C5:E5) Indicates a data range between the first cell and the second cell.
, Union =SUM(C5,D5,E5) Indicates separate cell values.
“ ” Intersection =C9:E9 D5:D12 Returns the intersection of cell values.
In the dataset below, we have different student names and their marks. We will find Total Marks using different reference operators.
3.1 – Using Range Operators in Excel
Generally, we can perform addition with the SUM function in Excel, and reference data with the Range (:) operator.
The SUM function adds the values in its arguments; its syntax is =SUM(number1, [number2], …).
• Use the following formula in F5 to find the Total Marks:
• Hold and drag the Fill Handle from cell F5 downwards to find the Total Marks for all the students.
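The range operator's job, summing every cell between two endpoints, corresponds to summing a contiguous slice of values in code. The marks below are hypothetical stand-ins for cells C5:E5.

```python
# =SUM(C5:E5) adds every cell in a contiguous range. In Python the same
# idea is summing one row of values (hypothetical marks for C5, D5, E5).
row5 = {"C": 70, "D": 80, "E": 75}

total_marks = sum(row5.values())   # analogue of =SUM(C5:E5)
print(total_marks)
```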
3.2 – Using Union Operators
We can also reference the data by the Union (,) operator in the SUM Function.
• Enter the following formula in cell F5 and press ENTER:
• Copy the formula to the other cells to get the Total Marks for all the other students.
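The union operator lists each reference individually rather than as a range; in code that is summing an explicit tuple. The marks are the same hypothetical values as before.

```python
# =SUM(C5,D5,E5) lists each cell separately; the union operator (,)
# separates individual references. In Python that is an explicit tuple
# of values (hypothetical marks for C5, D5, E5).
c5, d5, e5 = 70, 80, 75
print(sum((c5, d5, e5)))   # same total as the range version
```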
3.3 – Using Intersect Operators in Excel
To find a specific cell value in a data table, use the Intersect (“ ”) operator. In the dataset below, we want to find the Chemistry marks of the student with ID number S005.
• Use the following formula in D16 and press ENTER:
The formula returns a Chemistry mark of 75.
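The intersection operator returns whichever cell(s) two ranges have in common. A minimal sketch of that set logic, modeling cells as (column, row) coordinates; the ranges mirror the formula =C9:E9 D5:D12, whose only common cell is D9.

```python
# The intersection operator (a space) returns the cell(s) common to two
# ranges: =C9:E9 D5:D12 picks out the cell in row 9 of column D.
range1 = {(col, 9) for col in "CDE"}           # C9:E9
range2 = {("D", row) for row in range(5, 13)}  # D5:D12

common = range1 & range2
print(common)   # the single intersecting cell, D9
```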
Section 4 – The Concatenation Operator and How to Use it
The ampersand (&) sign is known as the concatenation operator. We can use it to join two or more text strings.
For example, suppose we want to concatenate the First Name and Last Name from the following data set:
• As we have to include a space between the First Name and Last Name, enter the following formula in cell D5 and press ENTER:
• Copy the formula to all the other cells, and all the other names will also be concatenated, as shown in the following image:
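In code, string concatenation plays the role of Excel's & operator. The names below are hypothetical; the key point is inserting the space between the two parts, just as the Excel formula does.

```python
# Excel's ampersand, e.g. =C5&" "&D5, joins First Name, a space, and
# Last Name. Python's + on strings plays the role of Excel's &.
first_name, last_name = "Jane", "Doe"   # hypothetical cell values

full_name = first_name + " " + last_name
print(full_name)   # Jane Doe
```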
What Is the Order of Excel Operators and How Can We Change It?
In an Excel formula, there could be many operators. The Excel operators have predefined precedence. The system will first work with the highest precedent operator, then move to the next highest
precedent operator, and so on. The order of operators is given in the following table:
Operators Description
Colon (:); Comma (,); Space ( ) Reference Operators
% Percentage
^ Exponential Operator
*, / Multiplication and Division Operators
+ , – Addition and Subtraction Operators
& Concatenating Operator
=, <, >, <=, >=, <> Comparison Operators
You can change the operator order by adding parentheses. For example, the formula =8+20/4 returns 13: 20/4 is calculated first and then added to 8, because the Division operator (/) has higher precedence than the Addition operator (+).
Now add parentheses and modify the formula to =(8+20)/4.
The formula returns 7, since we are now telling Excel to calculate 8+20 first before dividing by 4.
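The same precedence rules hold in most programming languages, so the two results are easy to confirm in code:

```python
# Operator precedence: division binds tighter than addition, and
# parentheses override the default grouping.
print(8 + 20 / 4)     # 13.0 - 20/4 is evaluated first
print((8 + 20) / 4)   # 7.0  - parentheses force the addition first
```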
Things To Remember
• Multiplications and Divisions are performed before Addition and Subtraction.
• Make sure to use the proper data type. When you are concatenating, the cell value must be in Text format. For arithmetic operators, use the Number format.
• Be careful while referencing, especially with absolute and relative referencing.
Frequently Asked Questions
1. What is the difference between Operators and Functions?
Answer: Excel functions take values as input and return meaningful results. Operators, by contrast, are the symbols used inside formulas and function arguments to combine or compare those values.
2. How can I use operators to create complex formulas?
Answer: You can create complex formulas with Excel operators. To separate different criteria, use the comma (,) as a delimiter. To apply different logical conditions, use logical operators. Here is a
complex formula =IF(AND(A1 > 60, B1 < 50), “Pass”, “Fail”) which returns Pass and Fail using a condition.
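A rough Python rendering of that example formula, with hypothetical cell values: AND becomes Python's `and`, and IF becomes a conditional expression.

```python
# Python sketch of =IF(AND(A1 > 60, B1 < 50), "Pass", "Fail").
a1, b1 = 72, 40   # hypothetical cell values

result = "Pass" if a1 > 60 and b1 < 50 else "Fail"
print(result)   # Pass
```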
3. Which operator is used to compare if two values are equal?
Answer: Use the equal operator (=). For example, =A1=B1 returns TRUE if the two cell values are equal and FALSE otherwise. (Unlike many programming languages, which use ==, Excel uses a single equal sign for comparison.) It works on numeric, Boolean, and text values.
|
{"url":"https://www.exceldemy.com/learn-excel/formula/operators/","timestamp":"2024-11-08T04:24:38Z","content_type":"text/html","content_length":"212288","record_id":"<urn:uuid:dd777102-7be7-42c9-a0ca-b8f8dcdc78fb>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00281.warc.gz"}
|
Irrotational axisymmetric flow about a prolate spheroid in cylindrical duct
A solution to the problem of potential flow about a prolate spheroid placed axially symmetric in a circular duct has been derived. The solution is in the form of a distribution of vortex rings over
the surface of the spheroid. The vortex strength is expressed in terms of an infinite series of Legendre polynomials and the analysis yields an infinite set of equations for determining the
coefficients of this series. An expression for the velocity distribution on the surface of the spheroid as well as the longitudinal added mass coefficients of the spheroid are derived in terms of the
coefficients of the Neumann series expansion of the vortex sheet strength. Numerical results are presented for various spheroids and different blockages. Also given is a comparison between the
present method and few available approximate methods.
Journal of Engineering Mathematics
Pub Date:
October 1974
□ Axisymmetric Flow;
□ Ducted Flow;
□ Flow Velocity;
□ Potential Flow;
□ Prolate Spheroids;
□ Velocity Distribution;
□ Entire Functions;
□ Flow Equations;
□ Flow Theory;
□ Inviscid Flow;
□ Legendre Functions;
□ Polynomials;
□ Vortex Rings;
□ Fluid Mechanics and Heat Transfer;
□ Vortex;
□ Vortex Ring;
□ Prolate;
□ Approximate Method;
□ Legendre Polynomial
|
{"url":"https://ui.adsabs.harvard.edu/abs/1974JEnMa...8..315M/abstract","timestamp":"2024-11-13T18:32:15Z","content_type":"text/html","content_length":"38356","record_id":"<urn:uuid:1f6d488a-6127-4b81-9f01-e11d5992fd79>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00447.warc.gz"}
|
Investigation Starting Points (Number 19)
lnvestigating Stamps
The post office only has 1p, 2p, 4p, 8p, 16p, 32p stamps. If you can only buy 1 stamp for each amount, how many different postage totals can you make?
The cheapest rate is 1p!
[See also Aunt Sophie's Post Office]
Investigate further.....
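One computational way to explore the investigation (a sketch, not a substitute for working it by hand) is to enumerate every way of buying at most one stamp of each value and collect the distinct non-empty totals:

```python
# Enumerate every subset of {1, 2, 4, 8, 16, 32} (at most one stamp per
# value) and collect the distinct postage totals that can be made.
from itertools import combinations

stamps = [1, 2, 4, 8, 16, 32]
totals = set()
for r in range(1, len(stamps) + 1):
    for combo in combinations(stamps, r):
        totals.add(sum(combo))

print(len(totals))               # number of distinct postage totals
print(min(totals), max(totals))  # cheapest and dearest totals
```

Varying the list of stamp values is a natural "what if...?" extension.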
A mathematical investigation is quite different to other mathematical activities. The best investigations are open ended and allow students to choose the way they work and how they record their
findings. It is one of the few occasions when 'going off on a tangent' is not only acceptable but actively encouraged (within reason).
Students may ask for 'the answers' but this supposes that the activity is closed. Investigations can always be extended by varying the initial instructions or asking the question 'what if...?'.
Sometimes students point out that the instructions are ambiguous and can be interpreted in different ways. This is fine and the students are encouraged to explain how they interpreted the
instructions in their report.
Some students may benefit from a writing frame when producing the reports of their investigations. Teachers may suggest sections or headings such as Introduction, Interpretation, Research, Working
and Conclusion or something similar.
|
{"url":"https://www.transum.org/Software/Investigations/StartingPoints.asp?ID=19","timestamp":"2024-11-07T22:49:20Z","content_type":"text/html","content_length":"23642","record_id":"<urn:uuid:67eb9bbf-d07d-4c28-9864-042555883f47>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00372.warc.gz"}
|
Lesson 3
Partial Products in Algorithms
Warm-up: Which One Doesn't Belong: Multiplying Large Numbers (10 minutes)
This warm-up prompts students to compare four representations of multiplication. Students compare diagrams and equations that represent multi-digit multiplication. This prepares them for the work of
the lesson where they compare different ways to represent products as sums of partial products.
• Groups of 2
• Display the image.
• “Pick one that doesn’t belong. Be ready to share why it doesn’t belong.”
• 1 minute: quiet think time
• “Discuss your thinking with your partner.”
• 2–3 minutes: partner discussion
• Share and record responses.
Student Facing
Which one doesn't belong?
Activity Synthesis
• “Why doesn't B belong?” (It's not a diagram. It's an expression.)
• “Does the value of expression B match the value represented in any of the diagrams?" (Yes, diagrams A and C both represent the product \(4 \times 5,\!342\) and that's the same as B.)
Activity 1: Partial Products Everywhere (20 minutes)
The goal of this activity is for students to examine different ways to write the product of a three-digit number and a two-digit number as a sum of partial products. Students match sets of partial
products which can be put together to make the full product. Students are provided blank diagrams, familiar from the previous lesson, that they may choose to use to support their reasoning. In the
activity synthesis, students relate the expressions and diagrams to equations to prepare them to analyze symbolic notation for partial products in the next activity.
When students relate partial products and diagrams to the product \(245 \times 35\) they look for and identify structure (MP7).
MLR8 Discussion Supports. Display the following sentence frame to support small-group discussion: “I noticed _____ , so I matched . . . .” Encourage students to challenge each other when they disagree.
Advances: Speaking, Conversing
Required Materials
Materials to Copy
• Partial Product Expressions
Required Preparation
• Create a set of cards from the blackline master for each group of 2.
• Groups of 2
• Display first image from student book.
• “What product does this rectangle represent?” (\(245 \times 35\))
• “Today, you are going to take turns with your partner picking expressions that can be added together to give the product \(245 \times 35\). You can use the diagrams to explain your reasoning, if
they are helpful.”
• 10 minutes: partner work time
• Monitor for students who:
□ use the diagram to determine which expressions they will use.
□ look at the expressions and think about how they could be used to find the full product.
□ compute the full product in different ways.
Student Facing
1. Take turns picking out a set of expressions that are equal to \(245 \times 35\) when added together. Use the diagrams if they are helpful.
2. Explain how you know the sum of your expressions is equal to \(245 \times 35\).
3. What is the value of \(245 \times 35\)? Explain or show your reasoning.
Advancing Student Thinking
If students do not choose correct expressions to represent a sum that is equal to \(245 \times 35\), refer to one of the empty boxes in the diagram and ask, “Which multiplication expression
represents this partial product?”
Activity Synthesis
• Invite previously selected students to share their strategies. As students share, record their reasoning with equations.
• Display: \(245 \times 30 + 245 \times 5 = 245 \times 35\)
• “How do you know this equation is true?” (I can put the 30 and 5 together since they are both multiplied by 245. I see that \(245 \times 30\) is the top part of the diagram and \(245 \times 5\)
is the bottom part. Together that’s the whole diagram.)
Activity 2: Record Partial Products (15 minutes)
The purpose of this activity is for students to consider 2 different ways of recording partial products in an algorithm that they worked with in a previous course. The numbers are the same as in the
previous activity to allow students to make connections between the diagram and the written strategies. Students examine two different ways to list the partial products in vertical calculations,
corresponding to working from left to right and from right to left. Regardless of the order, the key idea behind the algorithm is to multiply the values of each digit in one factor by the values of
each digit in the other factor.
Action and Expression: Develop Expression and Communication. Provide access to a variety of tools. Provide access to colored pencils or highlighters they can use to identify the partial products.
Supports accessibility for: Visual-Spatial Processing, Conceptual Processing
• Groups of 2
• “We’re going to look at two ways students recorded partial products for multiplying 245 by 35.”
• Display the image of Andre’s and Clare’s calculations.
• “How does this relate to what you just did?” (You can see they split it up into different partial products and listed the results to add them up.)
• 3 minutes: independent work time
• 5 minutes: partner work time
• Monitor for students who identify a pattern for how Andre and Clare list the partial products
Student Facing
1. How are Andre’s and Clare’s strategies the same? How are they different?
2. Create a list of equations to match the partial products Andre and Clare found.
Advancing Student Thinking
If students do not write the correct equations, refer to the individual partial products and ask, “Where is this partial product represented in the multiplication expression \(245 \times 35\)?”
Activity Synthesis
• “Both of these strategies use an algorithm that lists the partial products. An algorithm is a set of steps that works every time as long as the steps are carried out correctly.”
• “How are both the approaches the same?” (They both multiply ones and tens by hundreds, tens, and ones.)
• “How are the approaches different?” (One starts with the hundreds and the other starts with the ones. One goes from left to right and the other goes from right to left.)
• “Why is it important to list the products in an organized way?” (That way I know I found all the partial products. I did not leave some out or take some twice.)
• Display:
\(245 \times 35\)
• Display student work to show the list of equations from the second problem or use the list in the student responses.
• “How does each expression relate to the product \(245 \times 35\)?” (\(30 \times 200\) is the product of the 3 in the tens place of 35 and the 2 in the hundreds place of 245.)
Lesson Synthesis
“Today we found products of two-digit and three-digit numbers using partial products. We saw how diagrams can help us make sure we found all the partial products. We also saw we could list partial
products using an algorithm.”
“How do you know that all the different ways to find the product give the same answer?” (You’re always adding up the same partial products, just calculating them and putting them together in
different ways.)
“What is helpful to remember when you are using partial products to determine a full product?” (You have to make sure to find all of the partial products. You have to make sure you add them.
Sometimes I can add them mentally and then don't need to list all of them.)
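For teachers who want to check students' partial-product lists quickly, the algorithm in the lesson can be sketched in code: split each factor by place value, multiply every pair of parts, and add. This is an illustrative sketch, not part of the curriculum materials.

```python
# Partial products: decompose each factor by place value, multiply every
# pair of parts, then sum the partial products.
def partial_products(a, b):
    def place_parts(n):   # e.g. 245 -> [200, 40, 5]
        digits = str(n)
        return [int(d) * 10 ** (len(digits) - i - 1)
                for i, d in enumerate(digits) if d != "0"]
    return [x * y for x in place_parts(a) for y in place_parts(b)]

parts = partial_products(245, 35)
print(parts)       # [6000, 1000, 1200, 200, 150, 25]
print(sum(parts))  # 8575, the same as 245 * 35
```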
Cool-down: Using Partial Products (5 minutes)
|
{"url":"https://im.kendallhunt.com/k5/teachers/grade-5/unit-4/lesson-3/lesson.html","timestamp":"2024-11-12T09:07:58Z","content_type":"text/html","content_length":"108894","record_id":"<urn:uuid:352af64f-6bb4-43c1-8f95-687505f6d245>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00641.warc.gz"}
|
Representation of positive operators and alternating sequences
We give a representation for a positive L_p-operator, 1 < p < ∞, in terms of a pair of positive operators (U, V): an L_1-operator U and an L_∞-operator V. This representation is obtained by an extension of the methods used in the construction of dilations of positive L_p-contractions to positive invertible L_p-isometries. A positive L_p-operator T and a positive L_r-operator H, 1 < p, r < ∞, are called associated operators if they can be represented by the same pair. If {T_n} is a sequence of positive L_p-contractions and {S_n} a sequence of positive L_r-contractions, 1 < p, r < ∞, and if S_n and T*_n are associated for each n, then we show that the sequence S_1 ⋯ S_n (T_n ⋯ T_1 f)^{p/r} converges a.e. for each nonnegative L_p-function f. This result includes Rota's "Alternierende Verfahren" theorem and its subsequent generalizations and covers new cases.
Dive into the research topics of 'Representation of positive operators and alternating sequences'. Together they form a unique fingerprint.
|
{"url":"https://experts.umn.edu/en/publications/representation-of-positive-operators-and-alternating-sequences","timestamp":"2024-11-12T16:27:13Z","content_type":"text/html","content_length":"49903","record_id":"<urn:uuid:e7374783-2144-4639-ba26-ffdd178b6eff>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00646.warc.gz"}
|
Equal-Area Projection -- from Wolfram MathWorld
A map projection in which areas on a sphere, and the areas of any features contained on it, are mapped to the plane in such a way that the two are related by a constant scaling factor. No projection can
be both equal-area and conformal, and projections which are neither equal-area nor conformal are sometimes called aphylactic (Snyder 1987, p. 4). Equal-area projections are also called equivalent,
homolographic, homalographic, authalic, or equiareal (Lee 1944; Snyder 1987, p. 4).
|
{"url":"https://mathworld.wolfram.com/Equal-AreaProjection.html","timestamp":"2024-11-11T03:08:12Z","content_type":"text/html","content_length":"52630","record_id":"<urn:uuid:8c457997-7dd5-42ec-b15a-08448912709e>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00844.warc.gz"}
|
Corporate Finance • Formulas CFA® Level 1 – 365 Financial Analyst
Need an all-in-one list with the Corporate Finance formulas included in the CFA® Level 1 Exam? We have compiled them for you here. The relevant formulas have been organized and presented by chapter.
In this section, we will cover the following topics — Capital Budgeting, Cost of Capital, Measures of Leverage, and Working Capital Management.
1. Capital Budgeting
Net present value (NPV)
NPV = \sum_{t=0}^n \frac {CF{_t}}{(1+r){^t}}
CF{_t} = After-tax cash flow at time t
r = Required rate of return for the investment
Internal Rate of Return (IRR)
\sum_{t=0}^N \frac {CF{_t}}{(1+IRR){^t}} = 0
Average Accounting Rate of Return (AAR)
AAR = \frac {Average~net~income}{Average~book~value}
Profitability Index (PI)
PI = \frac {PV~of~future~cash~flows}{Initial~Investment} = 1+ \frac {NPV}{Initial~Investment}
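A small sketch of the three formulas above in code. The cash flows and the 10% required return are hypothetical, and IRR is found by simple bisection on NPV = 0 (which assumes the cash-flow stream has a single sign change).

```python
# NPV, IRR (by bisection), and the profitability index for a sample
# project: a 1,000 outlay followed by three 400 inflows.
def npv(rate, cashflows):                 # cashflows[0] is CF_0 at t = 0
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    while hi - lo > tol:                  # NPV is decreasing in the rate here
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

cfs = [-1000, 400, 400, 400]
print(round(npv(0.10, cfs), 2))           # NPV at a 10% required return
pi = 1 + npv(0.10, cfs) / 1000            # PI = 1 + NPV / initial investment
print(round(pi, 4))
print(round(irr(cfs), 4))                 # rate at which NPV = 0
```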
2. Cost of Capital
Weighted Average Cost of Capital (WACC)
WACC = w{_d}r{_d} (1 - t) + w{_p}r{_p} + w{_e}r{_e}
w{_d} = Proportion of debt that the company uses when it raises new funds
r{_d} = Before-tax marginal cost of debt
t = Company’s marginal tax rate
w{_p} = Proportion of preferred stock the company uses when it raises new funds
r{_p} = The marginal cost of preferred stock
w{_e} = Proportion of equity that the company uses when it raises new funds
r{_e} = Marginal cost of equity
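The WACC formula is a straight weighted sum, with the tax shield applied only to debt. A sketch with hypothetical inputs (40% debt at 6% pre-tax, 10% preferred at 8%, 50% equity at 12%, 25% tax rate):

```python
# WACC = w_d * r_d * (1 - t) + w_p * r_p + w_e * r_e
w_d, r_d = 0.40, 0.06   # debt weight and pre-tax cost of debt
w_p, r_p = 0.10, 0.08   # preferred stock weight and cost
w_e, r_e = 0.50, 0.12   # equity weight and cost
t = 0.25                # marginal tax rate; only debt is tax-deductible

wacc = w_d * r_d * (1 - t) + w_p * r_p + w_e * r_e
print(round(wacc, 4))   # 0.086, i.e. 8.6%
```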
Tax Shield
Tax~shield = Deduction × Tax~rate
Cost of Preferred Stock
r{_p} = \frac {D{_p}}{P{_p}}
P{_p} = Current preferred stock price per share
D{_p} = Preferred stock dividend per share
r{_P} = Cost of preferred stock
Cost of Equity (Dividend discount model approach)
r{_e} = \frac {D{_1}}{P{_0}}+g
P{_0} = Current market value of the equity market index
D{_1} = Dividends expected next period on the index
r{_e} = Required rate of return on the market
g = Expected growth rate of dividends
Growth Rate
g = \bigg( 1- \frac {D}{EPS} \bigg) \times ROE
\frac {D}{EPS} = Assumed stable dividend payout ratio
ROE = Historical return on equity
Cost of Equity (Bond yield plus risk premium)
r{_e} = r{_d} + Risk~Premium
Risk~premium = the additional yield on a company’s stock relative to its bonds
Capital Asset Pricing Model (CAPM)
E (R{_i}) = R{_F} + β{_i} [E (R{_M}) - R{_F}]
β{_i} = The return sensitivity of stock i to changes in the market return
E(R{_M}) = The expected return on the market
E(R{_M}) − R{_F} = The expected market risk premium
R{_F} = Risk-free rate of interest
Beta of a Stock
β{_i} = \frac {Cov (R{_i}, R{_M})}{Var (R{_M})}
R{_M} = Average expected rate of return on the market
R{_i} = Expected return on an asset i
Cov = Covariance
Var = Variance
Pure-play Method Project Beta (De-lever)
β{_{Unlevered(Comparable)}} = \frac {β{_{Levered,~Comparable}}}{\bigg[1+\bigg((1-t{_{Comparable}}) \frac {D{_{Comparable}}}{E{_{Comparable}}}\bigg)\bigg]}
t = Tax rate
D = Debt
E = Equity
Pure-play Method for Subject Firm (Re-lever)
β{_{Levered,~Project}} = {β{_{Unlevered,~Comparable}}}{\bigg[1+\bigg((1-t{_{Project}}) \frac {D{_{Project}}}{E{_{Project}}}\bigg)\bigg]}
Adjusted CAPM (for country risk premium)
E(R{_i}) = R{_F} + β{_i} [E (R{_M}) - R{_F} + Country~risk~premium]
Country Risk Premium
CRP = Sovereign~yield~spread \times \Big(\frac {\sigma~of~equity~index~of~the~developing~country}{\sigma~of~sovereign~bond~market~in~terms~of~the~developed~market~currency}\Big)
σ = Standard deviation
Break Point
Break~point = \frac {Amount~of~capital~at~which~the~source’s~cost~of~capital~changes} {Proportion~of~new~capital~raised~from~the~source}
3. Measures of Leverage
Degree of Operating Leverage
Degree~of~Operating~Leverage = \frac {Percentage~change~in~operating~income}{Percentage~change~in~units ~sold}
Degree of Financial Leverage
Degree~of~Financial~Leverage = \frac {Percentage~change~in~Net~Income}{Percentage~change~in~EBIT}
Degree of Total Leverage
Degree~of~Total~Leverage = \frac {Percentage~change~in~Net~Income}{Percentage~change~in~number~of~Units~Sold}
Return on Equity (ROE)
Return~on~Equity = \frac {Net~Income}{Shareholders’~Equity}
The Breakeven Quantity of Sales
Q{_{Breakeven}}= \frac {F + C}{P - V}
P = Price per unit
V = Variable cost per unit
F = Fixed operating costs
C = Fixed financial cost
Q = Quantity of units produced and sold
Operating Breakeven Quantity of Sales
Q{_{Operating~Breakeven}}= \frac {F}{P - V}
P = Price per unit
V = Variable cost per unit
F = Fixed operating costs
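The two breakeven formulas differ only in whether fixed financial costs are included in the numerator. A sketch with hypothetical inputs:

```python
# Breakeven quantities: price 10, variable cost 6 per unit,
# fixed operating costs 2,000, fixed financial costs 1,000.
P, V, F, C = 10, 6, 2000, 1000

q_breakeven = (F + C) / (P - V)   # covers operating AND financing costs
q_operating = F / (P - V)         # covers operating costs only
print(q_breakeven, q_operating)   # 750.0 500.0
```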
4. Working Capital Management
Current Ratio
Current~Ratio = \frac {Current~assets}{Current~liabilities}
Quick Ratio
Quick~Ratio = \frac {Cash + Receivables + Short–term~marketable~investments}{Current~liabilities}
Accounts Receivable Turnover
Accounts~Receivable~Turnover= \frac {Credit~sales}{Average~receivables}
Number of Days of Receivables
Number~of~days~of~receivables = \frac {365}{Accounts~receivable~turnover}
Inventory Turnover
Inventory~Turnover = \frac {Cost~of~goods~sold}{Average~Inventory}
Number of Days of Inventory
Number~of~days~of~Inventory = \frac {365}{Inventory~turnover}
Payables Turnover
Payables~Turnover~Ratio = \frac {Purchases}{Average~accounts~payables}
Number of Days of Payables
Number~of~days~of~Payables = \frac {365}{Payables~turnover~ratio}
Net Operating Cycle
Net~operating~cycle = Number~of~days~of~inventory+ Number~of~days~of~receivables - Number~of~days~of~payables
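The activity ratios above chain together into the net operating cycle. A sketch with hypothetical inputs, writing each days measure as 365 × average balance / flow (equivalent to 365 / turnover):

```python
# Net operating cycle = days of inventory + days of receivables
#                       - days of payables (all inputs hypothetical).
cogs, avg_inventory = 73000, 10000
credit_sales, avg_receivables = 146000, 12000
purchases, avg_payables = 73000, 8000

days_inventory   = 365 * avg_inventory / cogs             # = 365 / inventory turnover
days_receivables = 365 * avg_receivables / credit_sales   # = 365 / receivables turnover
days_payables    = 365 * avg_payables / purchases         # = 365 / payables turnover

net_operating_cycle = days_inventory + days_receivables - days_payables
print(net_operating_cycle)   # 50.0 + 30.0 - 40.0 = 40.0
```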
Yield on a Bank Discount Basis (BDY)
r{_{BD}} = \frac {D}{F} \times \frac {360}{t}
D = Dollar discount, which is equal to the difference between the face value of the bill (F) and its purchase price (P0)
F = Face value of the T-bill
t = Actual number of days remaining to maturity
r{_{BD}} = Annualized yield on a bank discount basis
Effective Annual Yield (EAY)
EAY = ( 1 + HPR){^{\frac {365}{t}}} - 1
Holding Period Return
HPR = \frac {(Ending~value - Beginning~value + Cash~flow~received)}{Beginning~value}
Cost of Trade Credit
Cost~of~trade~credit = \Bigg( 1+\frac {\%Discount}{1 - \%Discount} \Bigg){^{\frac {365}{Number~of~days~past~discount}}} - 1
Cost of Borrowing
Cost~of~borrowing = \frac {Interest + Dealer’s~commission + Other~costs}{Loan~amount - Interest}
Follow the links to find more formulas on Quantitative Methods, Economics, Alternative Investments, Financial Reporting and Analysis, Portfolio Management, Equity Investments, Fixed-Income
Investments, and Derivatives, included in the CFA® Level 1 Exam.
|
{"url":"https://365financialanalyst.com/templates-and-models/corporate-finance-formulas-cfa-level-1/","timestamp":"2024-11-01T19:58:39Z","content_type":"text/html","content_length":"73114","record_id":"<urn:uuid:eb53f8c3-a756-4dd7-ae27-48d8aef5668c>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00693.warc.gz"}
|
Division as Fractions
5th Grade
Alabama Course of Study Standards: 11
Solve word problems involving division of whole numbers leading to answers in the form of fractions or mixed numbers.
1. Model and interpret a fraction as division of the numerator by the denominator (a/b = a ÷ b)
2. Use visual fraction models, drawings, or equations to represent word problems involving division of whole numbers leading to answers in the form of fractions or mixed numbers
Arizona Academic Standards: 5.NF.B.3
Interpret a fraction as the number that results from dividing the whole number numerator by the whole number denominator (a/b = a ÷ b). Solve word problems involving division of whole numbers
leading to answers in the form of fractions or mixed numbers. For example, interpret 3/4 as the result of dividing 3 by 4, noting that 3/4 multiplied by 4 equals 3, and that when 3 wholes are shared
equally among 4 people, each person has a share of size 3/4. If 9 people want to share a 50-pound sack of rice equally by weight, how many pounds of rice should each person get? Between what two
whole numbers does your answer lie?
Common Core State Standards: Math.5.NF.3 or 5.NF.B.3
Interpret a fraction as division of the numerator by the denominator (a/b = a ÷ b). Solve word problems involving division of whole numbers leading to answers in the form of fractions or mixed
numbers, e.g., by using visual fraction models or equations to represent the problem. For example, interpret 3/4 as the result of dividing 3 by 4, noting that 3/4 multiplied by 4 equals 3, and that
when 3 wholes are shared equally among 4 people each person has a share of size 3/4. If 9 people want to share a 50-pound sack of rice equally by weight, how many pounds of rice should each person
get? Between what two whole numbers does your answer lie?
Georgia Standards of Excellence (GSE): 5.NR.3.1
Explain the meaning of a fraction as division of the numerator by the denominator (a/b = a ÷ b). Solve problems involving division of whole numbers leading to answers in the form of fractions or
mixed numbers.
North Carolina - Standard Course of Study: 5.NF.3
Use fractions to model and solve division problems.
• Interpret a fraction as an equal sharing context, where a quantity is divided into equal parts.
• Model and interpret a fraction as the division of the numerator by the denominator.
• Solve one-step word problems involving division of whole numbers leading to answers in the form of fractions and mixed numbers, with denominators of 2, 3, 4, 5, 6, 8, 10, and 12, using area,
length, and set models or equations.
New York State Next Generation Learning Standards: 5.NF.3
Interpret a fraction as division of the numerator by the denominator (a/b = a ÷ b).
e.g., Interpret 3/4 as the result of dividing 3 by 4, noting that 3/4 multiplied by 4 equals 3, and that when 3 wholes are shared equally among 4 people each person has a share of size 3/4.
Solve word problems involving division of whole numbers leading to answers in the form of fractions or mixed numbers.
e.g., using visual fraction models or equations to represent the problem
e.g., If 9 people want to share a 50-pound sack of rice equally by weight, how many pounds of rice should each person get? Between what two whole numbers does your answer lie?
Tennessee Academic Standards: 5.NF.B.3
Interpret a fraction as division of the numerator by the denominator (a/b = a ÷ b). Solve contextual problems involving division of whole numbers leading to answers in the form of fractions or mixed
numbers by using visual fraction models or equations to represent the problem. For example, if 8 people want to share 49 sheets of construction paper equally, how many sheets will each person
receive? Between what two whole numbers does your answer lie?
Wisconsin Academic Standards: 5.NF.B.3
Interpret a fraction as an equal sharing division situation, where a quantity (the numerator) is divided into equal parts (the denominator). Solve word problems involving division of whole numbers
leading to answers in the form of fractions or mixed numbers, by using visual fraction models (e.g., tape diagrams or area models) or equations to represent the problem.
For example, when 3 wholes are shared equally among 4 people each person has a share of size 3/4. If 9 people want to share a 50-pound sack of rice equally by weight, how many pounds of rice should
each person get? Between what two whole numbers does your answer lie?
Pennsylvania Core Standards: CC.2.1.5.C.2
Apply and extend previous understandings of multiplication and division to multiply and divide fractions.
Pennsylvania Core Standards: M05.A-F.2.1.1
Solve word problems involving division of whole numbers leading to answers in the form of fractions (including mixed numbers).
Florida - Benchmarks for Excellent Student Thinking: MA.5.FR.1.1
Given a mathematical or real-world problem, represent the division of two whole numbers as a fraction.
Georgia Standards of Excellence (GSE): 5.NR.3.1
Explain the meaning of a fraction as division of the numerator by the denominator (a/b = a ÷ b). Solve problems involving division of whole numbers leading to answers in the form of fractions or
mixed numbers.
Arkansas Academic Standards: 5.CAR.6
Interpret and solve fractions as division problems, (a/b = a ÷ b), where a and b are natural numbers.
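All of these standards hinge on the same computation: interpreting a/b as a ÷ b and locating the quotient between two whole numbers. A few lines of Python make the rice-sack example concrete (the function name `share_equally` is my own, for illustration only):

```python
def share_equally(wholes, people):
    """Interpret a/b as a divided by b: each person gets whole + remainder/people."""
    whole = wholes // people      # full units per person (integer quotient)
    remainder = wholes % people   # leftover units, split as remainder/people each
    return whole, remainder, people

# The rice-sack problem from the standards: 50 pounds shared among 9 people.
whole, num, den = share_equally(50, 9)
print(f"Each person gets {whole} {num}/{den} pounds")      # 5 5/9 pounds
print(f"The answer lies between {whole} and {whole + 1}")  # between 5 and 6

# The 3-wholes-among-4-people example: 3/4 each, between 0 and 1.
print(share_equally(3, 4))  # (0, 3, 4)
```

The same call answers the construction-paper variant: `share_equally(49, 8)` gives 6 1/8 sheets, between 6 and 7.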
|
{"url":"https://www.learningfarm.com/web/practicePassThrough.cfm?TopicID=5770","timestamp":"2024-11-07T03:04:02Z","content_type":"application/xhtml+xml","content_length":"37755","record_id":"<urn:uuid:ef11a129-80be-4b3b-84cf-6cf5cf7f23d5>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00660.warc.gz"}
|
Grade 4 Improper Fractions To Mixed Numbers Worksheet
Grade 4 Improper Fractions To Mixed Numbers Worksheet function as fundamental tools in the realm of mathematics, supplying a structured yet versatile way for students to explore and master mathematical concepts. These worksheets offer an organized approach to understanding numbers, nurturing a strong foundation upon which mathematical proficiency thrives. From the simplest counting exercises to the complexities of advanced calculations, Grade 4 Improper Fractions To Mixed Numbers Worksheet cater to learners of varied ages and skill levels.
Introducing the Essence of Grade 4 Improper Fractions To Mixed Numbers Worksheet
Grade 4 Improper Fractions To Mixed Numbers Worksheet
Convert improper fractions to mixed numbers (Grade 4 Fractions Worksheet; sample problem columns omitted)
Changing improper fractions to mixed numbers This math worksheet gives your child practice converting improper fractions to mixed numbers and vice versa MATH GRADE 4th 5th
At their core, Grade 4 Improper Fractions To Mixed Numbers Worksheet are vehicles for conceptual understanding. They encompass a wide range of mathematical concepts, guiding students through the labyrinth of numbers with a collection of engaging and purposeful exercises. These worksheets go beyond rote learning, encouraging active engagement and fostering an intuitive grasp of numerical relationships.
Nurturing Number Sense and Reasoning
Convert Mixed Numbers Into Improper Fractions Denominators Not Exceeding Tenths Grade 4 Math
Convert improper fractions to mixed numbers (Grade 4 Fractions Worksheet; problem-and-answer columns omitted)
The heart of Grade 4 Improper Fractions To Mixed Numbers Worksheet lies in cultivating number sense: a deep understanding of what numbers mean and how they relate. They encourage exploration, inviting learners to investigate arithmetic operations, recognize patterns, and unlock the logic of sequences. Through thought-provoking challenges and logical puzzles, these worksheets become gateways to sharpening reasoning abilities, nurturing the analytical minds of budding mathematicians.
From Theory to Real-World Application
Improper Fraction Worksheets
Changing from improper fractions to mixed numbers (Grade 4 Mathematics worksheet; interactive worksheet-site metadata omitted)
Sort by: Improper Fractions Worksheet; Math Review Part 2: Let's Soar in Grade 4 Worksheet; Make Mixed Numbers 1 Worksheet; Feed the Kramsters: Mixed Number Review Worksheet; Adding Mixed Numbers and Improper Fractions on a Number Line Worksheet; Make Mixed Numbers 2 Worksheet; Mixed Number Storm Worksheet
Grade 4 Improper Fractions To Mixed Numbers Worksheet serve as conduits bridging theoretical abstractions with the tangible realities of daily life. By weaving practical scenarios into mathematical exercises, learners witness the relevance of numbers in their surroundings. From budgeting and measurement conversions to understanding statistical data, these worksheets empower students to apply their mathematical knowledge beyond the confines of the classroom.
Diverse Tools and Techniques
Flexibility is inherent in Grade 4 Improper Fractions To Mixed Numbers Worksheet, which draw on a range of pedagogical tools to accommodate different learning styles. Visual aids such as number lines, manipulatives, and digital resources serve as companions in visualizing abstract concepts. This varied approach ensures inclusivity, accommodating learners with different preferences, strengths, and cognitive styles.
Inclusivity and Cultural Relevance
In an increasingly diverse world, Grade 4 Improper Fractions To Mixed Numbers Worksheet embrace inclusivity. They transcend cultural boundaries, incorporating examples and problems that resonate with students from diverse backgrounds. By including culturally relevant contexts, these worksheets foster an environment where every learner feels represented and valued, strengthening their connection with mathematical ideas.
Crafting a Path to Mathematical Mastery
Grade 4 Improper Fractions To Mixed Numbers Worksheet chart a course towards mathematical fluency. They instill perseverance, critical thinking, and problem-solving skills, traits essential not only in mathematics but in many aspects of life. These worksheets empower learners to navigate the complex terrain of numbers, nurturing a deep appreciation for the beauty and logic inherent in mathematics.
Embracing the Future of Education
In an era marked by technological advancement, Grade 4 Improper Fractions To Mixed Numbers Worksheet adapt seamlessly to digital platforms. Interactive interfaces and digital resources enhance traditional learning, offering immersive experiences that transcend spatial and temporal limits. This fusion of conventional methodologies with technological innovation ushers in a promising era in education, fostering a more vibrant and engaging learning environment.
Conclusion: Embracing the Magic of Numbers
Grade 4 Improper Fractions To Mixed Numbers Worksheet illustrate the magic inherent in mathematics: an enchanting journey of exploration, discovery, and mastery. They go beyond traditional pedagogy, acting as catalysts that ignite curiosity and inquiry. Through Grade 4 Improper Fractions To Mixed Numbers Worksheet, students embark on an odyssey, unlocking the enigmatic world of numbers, one problem and one solution at a time.
Improper Fraction Worksheets
Improper Fractions To Mixed Numbers Worksheet
Check more of Grade 4 Improper Fractions To Mixed Numbers Worksheet below
Converting Improper To Mixed Fractions Worksheet
Improper Fraction Worksheets
Improper fractions
Convert Between Improper Fractions And Mixed Numbers Worksheets
33 Converting Improper Fractions To Mixed Numbers Worksheet Support Worksheet
Mixed Numbers And Improper Fractions solutions Examples Worksheets Videos Games Activities
Changing Improper Fractions To Mixed Numbers 4th Grade 5th Grade
Changing improper fractions to mixed numbers This math worksheet gives your child practice converting improper fractions to mixed numbers and vice versa MATH GRADE 4th 5th
Convert Improper Fractions And Mixed Numbers Worksheets
Converting Improper Fractions to Mixed Numbers Experience some theoretical conversion practice with printable 4th grade worksheets. Divide the numerator by the denominator, write the quotient as the whole part, and write the remainder over the denominator as the fractional part.
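The procedure just described can be sketched in a few lines of Python; `fractions.Fraction` handles reduction to lowest terms automatically, and the name `to_mixed` is illustrative, not taken from any of the worksheets:

```python
from fractions import Fraction

def to_mixed(numerator, denominator):
    """Convert an improper fraction to a mixed number: (whole part, fractional part)."""
    frac = Fraction(numerator, denominator)          # reduces to lowest terms
    whole, rest = divmod(frac.numerator, frac.denominator)
    return whole, Fraction(rest, frac.denominator)   # remainder over the denominator

whole, rest = to_mixed(13, 8)
print(f"13/8 = {whole} {rest}")   # 13/8 = 1 5/8
whole, rest = to_mixed(22, 6)
print(f"22/6 = {whole} {rest}")   # 22/6 = 3 2/3 (already reduced)
```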
Mixed Numbers To Improper Fractions TMK Education
Heather s Show And Tell Mixed Numbers And Improper Fractions
Fractions Worksheet Convert Improper Fractions To Mixed Numbers K5 Learning Grade 4 Math
|
{"url":"https://szukarka.net/grade-4-improper-fractions-to-mixed-numbers-worksheet","timestamp":"2024-11-08T08:03:20Z","content_type":"text/html","content_length":"28845","record_id":"<urn:uuid:eec33678-adac-4c14-8a38-3baeaf8ac046>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00777.warc.gz"}
|
8. Relating linear-scaling and plane-wave methods
Next: 8.1 Wave-functions from density-matrices Up: thesis Previous: 7.7 Practical details   Contents
8. Relating linear-scaling and plane-wave methods
For a number of reasons, it is useful to be able to convert the Kohn-Sham orbitals generated by traditional plane-wave codes into a set of support functions and a density-kernel which can be used as
input in a linear-scaling code. One such reason is the need for careful density-matrix initialisation, discussed in section 8.3. For analysis it is also useful to be able to perform the reverse
operation of extracting the Kohn-Sham orbitals and occupation numbers from the density-matrix. In this chapter we describe methods for performing both of these operations.
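Both operations can be illustrated on a toy single-particle density matrix. The NumPy sketch below is purely illustrative (the variable names and system size are mine, not from the thesis or any linear-scaling code): the forward direction builds the density matrix from orthonormal orbitals and occupation numbers, and the reverse direction recovers the occupations (and the orbitals, up to ordering and phase) by diagonalisation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "Kohn-Sham orbitals": orthonormal columns from a QR factorisation.
n_basis, n_orb = 6, 3
q, _ = np.linalg.qr(rng.standard_normal((n_basis, n_basis)))
orbitals = q[:, :n_orb]                   # psi_n as columns
occupations = np.array([2.0, 2.0, 1.0])  # occupation numbers f_n

# Forward: density matrix rho = sum_n f_n |psi_n><psi_n|
rho = (orbitals * occupations) @ orbitals.T

# Reverse: diagonalising rho returns the occupations as eigenvalues
# and the orbitals as eigenvectors (remaining eigenvalues are zero).
eigvals, eigvecs = np.linalg.eigh(rho)
print(np.sort(eigvals)[::-1][:n_orb])  # approximately [2., 2., 1.]
```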
Peter Haynes
|
{"url":"https://www.tcm.phy.cam.ac.uk/~pdh1001/thesis/node46.html","timestamp":"2024-11-12T12:56:20Z","content_type":"text/html","content_length":"4024","record_id":"<urn:uuid:5e3d8078-e15a-4ea9-ad96-70c259fa0208>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00891.warc.gz"}
|
Coursnap app
Coding the Collatz Conjecture
It's the second episode of Coding in the Cabana! Here I attempt to visualize the Collatz Conjecture in Processing.
Code: https://thecodingtrain.com/challenges/c2-collatz-conjecture
p5.js Web Editor Sketch: https://editor.p5js.org/codingtrain/sketches/XjLDE7gu6
All videos: https://www.youtube.com/playlist?list=PLRqwX-V7Uu6ZiZxtDDRCi6uhfTH4FilpH
References:
Collatz Conjecture Wikipedia: https://en.wikipedia.org/wiki/Collatz_conjecture
Collatz Graph: All Numbers Lead to One: https://www.jasondavies.com/collatz-graph/
Trying to visualize the Collatz conjecture: http://mathematica.stackexchange.com/questions/85718/trying-to-visualize-the-collatz-conjecture
Primitive Data Types in Java (more information about the long type): https://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html
Videos:
Collatz Conjecture in Color: https://www.youtube.com/watch?v=LqKpkdRRLZw
Golan Levin's Modulo Operator video: https://www.youtube.com/watch?v=r5Iy3v1co0A
UNCRACKABLE? The Collatz Conjecture: https://www.youtube.com/watch?v=5mFpVDpKX70
Related Coding Challenges:
#14 Recursive Fractal Trees: https://youtu.be/0jjeOYMjmDU
Timestamps:
0:00 The Collatz Conjecture
4:10 Programming in Processing
6:46 Checking The Number of Steps
9:23 Visualizing The Collatz Conjecture
20:44 Rendering to a PDF File
22:24 Conclusions and Goodbyes
Editing by Mathieu Blanchette
Animations by Jason Heglund
Music from Epidemic Sound
Website: http://thecodingtrain.com/
Share Your Creation! https://thecodingtrain.com/guides/passenger-showcase-guide
Suggest Topics: https://github.com/CodingTrain/Suggestion-Box
GitHub: https://github.com/CodingTrain
Discord: https://thecodingtrain.com/discord
Membership: http://youtube.com/thecodingtrain/join
Store: https://standard.tv/codingtrain
Twitter: https://twitter.com/thecodingtrain
Instagram: https://www.instagram.com/the.coding.train/
Coding Challenges: https://www.youtube.com/playlist?list=PLRqwX-V7Uu6ZiZxtDDRCi6uhfTH4FilpH
Intro to Programming: https://www.youtube.com/playlist?list=PLRqwX-V7Uu6Zy51Q-x9tMWIv9cueOFTFA
p5.js: https://p5js.org
p5.js Web Editor: https://editor.p5js.org/
Processing: https://processing.org
Code of Conduct: https://github.com/CodingTrain/Code-of-Conduct
This description was auto-generated. If you see a problem, please open an issue: https://github.com/CodingTrain/thecodingtrain.com/issues/new
#collatzconjecture #modulo #processing
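The algorithm the video builds in Processing (Java) can be sketched in Python instead; because Python integers have arbitrary precision, the 32-bit overflow the video hits around 670 million (and fixes with Java's 64-bit long) never arises. The function names below are mine, not the video's code:

```python
def collatz_step(n):
    """One step of the Collatz rule: halve even numbers, otherwise 3n + 1."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def collatz_steps(n):
    """Count the steps for n to reach 1 (assumes the conjecture holds for n)."""
    steps = 0
    while n != 1:
        n = collatz_step(n)
        steps += 1
    return steps

print(collatz_steps(27))  # 111, the step count discussed in the video
print(collatz_steps(5))   # 5: the sequence 5, 16, 8, 4, 2, 1
```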
{'title': 'Coding the Collatz Conjecture', 'heatmap': [{'end': 353.076, 'start': 330.318, 'weight': 0.786}, {'end': 1347.42, 'start': 1331.208, 'weight': 1}], 'summary': 'Explores the collatz
conjecture, demonstrating the process with 111 steps for the number 27, visualizes the conjecture using processing, successfully visualizes numbers up to 10,000, and discusses creating sequences,
exploring number patterns, and optimizing the visualization for controlled results.', 'chapters': [{'end': 209.215, 'segs': [{'end': 67.616, 'src': 'embed', 'start': 36.772, 'weight': 0, 'content':
[{'end': 38.493, 'text': 'And the number sequence goes like this.', 'start': 36.772, 'duration': 1.721}, {'end': 40.215, 'text': 'Take any number n.', 'start': 38.674, 'duration': 1.541}, {'end':
44.178, 'text': 'If the number is even, set it equal to itself divided by 2.', 'start': 40.215, 'duration': 3.963}, {'end': 48.942, 'text': 'If the number is odd, set it equal to itself times 3 plus
1.', 'start': 44.178, 'duration': 4.764}, {'end': 55.687, 'text': "So why is this conjecture, why is this sequence interesting, meaningful, mysterious? Let's start with a number.", 'start': 48.942,
'duration': 6.745}, {'end': 58.689, 'text': "Let's say I'm going to start with the number 5.", 'start': 55.767, 'duration': 2.922}, {'end': 60.311, 'text': 'So following this, 5 is odd.', 'start':
58.689, 'duration': 1.622}, {'end': 61.912, 'text': 'Multiply it by 3.', 'start': 60.351, 'duration': 1.561}, {'end': 63.713, 'text': '15, add 1.', 'start': 61.912, 'duration': 1.801}, {'end':
64.953, 'text': 'I get 16.', 'start': 63.713, 'duration': 1.24}, {'end': 67.616, 'text': "OK Ah, that's even divided by 2.", 'start': 64.953, 'duration': 2.663}], 'summary': 'The sequence involves
dividing even numbers by 2 and multiplying odd numbers by 3 and adding 1, leading to a pattern of numbers.', 'duration': 30.844, 'max_score': 36.772, 'thumbnail': 'https://
coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/EYLWxwo1Ed8/pics/EYLWxwo1Ed836772.jpg'}, {'end': 140.439, 'src': 'embed', 'start': 117.335, 'weight': 3, 'content': [{'end': 124.916, 'text':
'So this idea was originally suggested as a topic suggestion on August 8, 2016.', 'start': 117.335, 'duration': 7.581}, {'end': 131.198, 'text': "And there's some wonderful links here that I'll
include in this video's description for you to look at and see different code examples and visualizations of it.", 'start': 124.916, 'duration': 6.282}, {'end': 132.598, 'text': "I'm going to attempt
to do my own here.", 'start': 131.258, 'duration': 1.34}, {'end': 140.439, 'text': 'And what I am looking and hoping to create is inspired by the Numberphile video about the Collatz conjecture,',
'start': 132.978, 'duration': 7.461}], 'summary': "I'll attempt to create my own visualization of the collatz conjecture, inspired by the Numberphile video.", 'duration': 23.104, 'max_score':
117.335, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/EYLWxwo1Ed8/pics/EYLWxwo1Ed8117335.jpg'}, {'end': 209.215, 'src': 'embed', 'start': 163.927, 'weight': 1,
'content': [{'end': 168.109, 'text': "But you'll also reach 22 if you start with the number 18.", 'start': 163.927, 'duration': 4.182}, {'end': 176.034, 'text': 'kind of visualize as this branch,
this graph, this tree all converging eventually every sequence all the way down to 1.', 'start': 168.109, 'duration': 7.925}, {'end': 179.936, 'text': "But I'm here in the cabana in the sun with the
garden outside.", 'start': 176.034, 'duration': 3.902}, {'end': 189.842, 'text': "And I want to see, I'm inspired by this particular Numberphile video, which creates a visualization of the Collatz
conjecture that looks like this.", 'start': 180.696, 'duration': 9.146}, {'end': 192.103, 'text': "It's like seaweed or a plant.", 'start': 189.922, 'duration': 2.181}, {'end': 193.164, 'text': "It's
so organic.", 'start': 192.323, 'duration': 0.841}, {'end': 198.467, 'text': 'Why and how out of this very mathematical algorithm Do we get this seaweed-like pattern?', 'start': 193.344, 'duration':
5.123}, {'end': 202.03, 'text': "The visualization that I'm going to try is directly from that Numberphile video.", 'start': 198.707, 'duration': 3.323}, {'end': 204.632, 'text': 'And the rules for
it were designed by Edmund Harris.', 'start': 202.07, 'duration': 2.562}, {'end': 209.215, 'text': "A link to the Numberphile video and more about Edmund Harris' work will be in this video's
description.", 'start': 204.812, 'duration': 4.403}], 'summary': 'Visualizing the collatz conjecture with organic patterns inspired by a numberphile video.', 'duration': 45.288, 'max_score': 163.927,
'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/EYLWxwo1Ed8/pics/EYLWxwo1Ed8163927.jpg'}], 'start': 0.209, 'title': 'Collatz conjecture', 'summary': 'Explains the collatz
conjecture, a sequence where any positive number reaches 1 through a specific process, with an example showing that starting at 27 takes 111 steps, and discusses visualizing the conjecture through a
tree-like diagram inspired by a numberphile video.', 'chapters': [{'end': 140.439, 'start': 0.209, 'title': 'Coding the collatz conjecture', 'summary': 'Explains the collatz conjecture, a
mathematical sequence where any positive number will eventually reach 1 through a specific calculation process, with an example showing that starting at 27 takes 111 steps, and provides resources for
further exploration.', 'duration': 140.23, 'highlights': ['The Collatz Conjecture is a mathematical sequence where any positive number will eventually reach 1 through a specific calculation process,
exemplified by starting at 27 taking 111 steps.', 'The chapter provides resources for further exploration, including links to different code examples and visualizations of the Collatz Conjecture.',
'The Collatz Conjecture was originally suggested as a topic on August 8, 2016, and the example of starting at 27 takes 111 steps is mentioned with a visualization of the value as it takes those
steps.']}, {'end': 209.215, 'start': 140.439, 'title': 'Visualizing the collatz conjecture', 'summary': 'Discusses visualizing the collatz conjecture through a tree-like diagram, where specific numbers
eventually converge to 1, and the visualization technique inspired by a numberphile video creates a seaweed-like pattern.', 'duration': 68.776, 'highlights': ['The visualization technique represents
the Collatz conjecture as a tree, illustrating how different numbers converge to 1, such as 25 becoming 76, 38, 19, and eventually 22.', 'The discussion explores the organic, seaweed-like pattern that
emerges from the mathematical algorithm of the Collatz conjecture visualization, inspired by the Numberphile video.', "The chapter mentions that the visualization technique is directly from a
Numberphile video and the rules were designed by Edmund Harris, with a link to the video and more about Harris' work provided in the description."]}], 'duration': 209.006, 'thumbnail': 'https://
coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/EYLWxwo1Ed8/pics/EYLWxwo1Ed8209.jpg', 'highlights': ['The Collatz Conjecture is a mathematical sequence where any positive number will
eventually reach 1 through a specific calculation process, exemplified by starting at 27 taking 111 steps.', 'The visualization technique represents the Collatz conjecture as a tree, illustrating how
different numbers converge to 1, such as 25 becoming 76, 38, 19, and eventually 22.', 'The discussion explores the organic, seaweed-like pattern that emerges from the mathematical algorithm of the
Collatz conjecture visualization, inspired by the Numberphile video.', 'The chapter provides resources for further exploration, including links to different code examples and visualizations of the
Collatz Conjecture.', 'The Collatz Conjecture was originally suggested as a topic on August 8, 2016, and the example of starting at 27 takes 111 steps is mentioned with a visualization of the value
as it takes those steps.']}, {'end': 562.058, 'segs': [{'end': 279.524, 'src': 'embed', 'start': 251.718, 'weight': 1, 'content': [{'end': 256.724, 'text': "I'm going to attempt to program this
sequence in Processing, a Java-based creative coding environment.", 'start': 251.718, 'duration': 5.006}, {'end': 266.634, 'text': "One of the things I love about Processing is there's a library in
Processing that allows you to render out a drawing to a PDF that you could blow up to large scale and print as a nice poster.", 'start': 257.004, 'duration': 9.63}, {'end': 270.258, 'text': "So maybe
I'll make a nice, beautiful Collatz Conjecture coding train poster.", 'start': 266.655, 'duration': 3.603}, {'end': 274.841, 'text': "First thing I need is a function that's going to do the number
sequence.", 'start': 271.058, 'duration': 3.783}, {'end': 276.482, 'text': "So let's just call that collatz.", 'start': 275.061, 'duration': 1.421}, {'end': 279.524, 'text': "And it'll be a function
that will receive any number.", 'start': 277.243, 'duration': 2.281}], 'summary': 'Using processing to create a large-scale pdf poster of collatz conjecture sequence.', 'duration': 27.806,
'max_score': 251.718, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/EYLWxwo1Ed8/pics/EYLWxwo1Ed8251718.jpg'}, {'end': 358.12, 'src': 'heatmap', 'start': 330.318,
'weight': 0.786, 'content': [{'end': 334.302, 'text': 'If I start with the number 10, I should get 5.', 'start': 330.318, 'duration': 3.984}, {'end': 341.067, 'text': "Now let's think about putting
that into a loop, right? Let's see how long it takes for the number to get back to 1.", 'start': 334.302, 'duration': 6.765}, {'end': 345.291, 'text': 'This is like a weird scenario where maybe I
might actually need to use a do while loop.', 'start': 341.067, 'duration': 4.224}, {'end': 353.076, 'text': 'Here is a do while loop.', 'start': 351.175, 'duration': 1.901}, {'end': 358.12, 'text':
"I really think this might be the first time in my entire life that I've used a do while loop.", 'start': 353.096, 'duration': 5.024}], 'summary': 'Using a do while loop to transform 10 into 5 and
back to 1 for the first time.', 'duration': 27.802, 'max_score': 330.318, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/EYLWxwo1Ed8/pics/EYLWxwo1Ed8330318.jpg'},
{'end': 444.304, 'src': 'embed', 'start': 387.239, 'weight': 0, 'content': [{'end': 388.355, 'text': '5, 16, 8, 4, 2, 1.', 'start': 387.239, 'duration': 1.116}, {'end': 391.682, 'text': 'Perfect So
now we can test the Kolatz conjecture.', 'start': 388.36, 'duration': 3.322}, {'end': 394.904, 'text': "Let's say n equals 100.", 'start': 392.502, 'duration': 2.402}, {'end': 395.504, 'text': 'We
got 1.', 'start': 394.904, 'duration': 0.6}, {'end': 398.305, 'text': 'n equals 1, 000.', 'start': 395.504, 'duration': 2.801}, {'end': 399.507, 'text': 'We got to 1.', 'start': 398.306, 'duration':
1.201}, {'end': 401.943, 'text': 'n equals 8, 3, 5, 2, 9, 1.', 'start': 399.507, 'duration': 2.436}, {'end': 405.01, 'text': 'Oh, I need my book of random numbers to try to start with a random
number.', 'start': 401.948, 'duration': 3.062}, {'end': 408.07, 'text': 'We got to 1.', 'start': 406.65, 'duration': 1.42}, {'end': 413.171, 'text': 'Cool The Collatz conjecture seems to be true
based on my simple processing code.', 'start': 408.07, 'duration': 5.101}, {'end': 417.692, 'text': "Let's just out of curiosity see the number of steps it takes.", 'start': 414.311, 'duration':
3.381}, {'end': 419.852, 'text': 'That equals 0.', 'start': 418.492, 'duration': 1.36}, {'end': 425.273, 'text': 'And then every time I call the Collatz function, steps goes up by 1.', 'start':
419.852, 'duration': 5.421}, {'end': 431.954, 'text': "Now, rather than print out the sequence, let's just print out the number of steps.", 'start': 425.273, 'duration': 6.681}, {'end': 434.115,
'text': '175 for that particular number.', 'start': 431.974, 'duration': 2.141}, {'end': 442.622, 'text': 'If I go back to the Wikipedia page, I can test to see if my code is performing correctly by
picking.', 'start': 437.073, 'duration': 5.549}, {'end': 444.304, 'text': "let's just say, let's pick this number.", 'start': 442.622, 'duration': 1.682}], 'summary': 'Testing the collatz conjecture
with various numbers, confirming its truth with simple processing code, and comparing results with the wikipedia page.', 'duration': 57.065, 'max_score': 387.239, 'thumbnail': 'https://
coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/EYLWxwo1Ed8/pics/EYLWxwo1Ed8387239.jpg'}, {'end': 502.062, 'src': 'embed', 'start': 470.18, 'weight': 4, 'content': [{'end': 472.301, 'text':
"So I think I've gone probably way above that.", 'start': 470.18, 'duration': 2.121}, {'end': 481.045, 'text': 'So somewhere in the sequence, even though I started with a number around 670 million, I
went way above the range probably of a 32-bit integer.', 'start': 472.741, 'duration': 8.304}, {'end': 485.088, 'text': 'So I think a way that I could probably fix this is by changing this to a
long.', 'start': 481.225, 'duration': 3.863}, {'end': 490.433, 'text': 'A long is a data type in Java that also stores numbers but uses more memory than 32 bits.', 'start': 485.108, 'duration':
5.325}, {'end': 494.056, 'text': "I believe it's 64 bits, and I'll correct that if that's wrong somehow.", 'start': 490.533, 'duration': 3.523}, {'end': 502.062, 'text': 'So I need to change the
function also to return a long and to accept a long.', 'start': 494.436, 'duration': 7.626}], 'summary': 'The number exceeded the range of a 32-bit integer, so it needs to be changed to a long data
type in java.', 'duration': 31.882, 'max_score': 470.18, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/EYLWxwo1Ed8/pics/EYLWxwo1Ed8470180.jpg'}], 'start': 209.635,
'title': 'Visualization of collatz conjecture', 'summary': 'Discusses the programming and visualization of the collatz conjecture, including creating a large-scale poster using processing and the
successful visualization of the collatz conjecture for numbers up to 10,000.', 'chapters': [{'end': 270.258, 'start': 209.635, 'title': 'Visualization of collatz conjecture', 'summary': 'Discusses
the visualization and programming of the collatz conjecture, where starting with a number, the sequence involves dividing by 2 for even numbers and multiplying by 3 and adding 1 for odd numbers, to
eventually create a large-scale poster using processing.', 'duration': 60.623, 'highlights': ['The Collatz Conjecture sequence involves dividing by 2 for even numbers and multiplying by 3 and adding
1 for odd numbers, creating a visualization of the sequence using a Java-based creative coding environment, and rendering it to a PDF for large-scale printing and poster creation in Processing.',
'Starting with the number 10, the sequence progresses to 5, then 16, demonstrating the application of the Collatz Conjecture.', 'The potential to create a large-scale poster using Processing and a
library within Processing enhances the visualization of the Collatz Conjecture sequence.']}, {'end': 562.058, 'start': 271.058, 'title': 'Collatz conjecture visualization', 'summary': 'Discusses the
creation of a function to test for even or odd numbers and then using it to validate the collatz conjecture, discovering that a 32-bit integer limit is surpassed, leading to the need to switch to
using the long data type, resulting in a successful visualization of the collatz conjecture for numbers up to 10,000.', 'duration': 291, 'highlights': ["The Collatz function is created to test for
even or odd numbers and to generate the next number in the sequence, with successful testing using numbers like 5 and 10. The function 'collatz' is designed to determine if a number is even or odd
and to generate the next number in the sequence, successfully tested with numbers like 5 and 10.", 'The use of a do while loop to test the Collatz conjecture for various numbers, leading to the
conclusion that the conjecture seems to hold true based on the simple processing code. A do while loop is utilized to test the Collatz conjecture for various numbers, leading to the conclusion that
the conjecture seems to hold true based on the simple processing code.', 'The discovery of surpassing the 32-bit integer limit while testing the Collatz conjecture for larger numbers like 670
million, leading to the need to switch to using the long data type to successfully complete the visualization for numbers up to 10,000. Surpassing the 32-bit integer limit is discovered while testing
the Collatz conjecture for larger numbers like 670 million, leading to the need to switch to using the long data type to successfully complete the visualization for numbers up to 10,000.', 'The
successful completion of the visualization of the Collatz conjecture for numbers up to 10,000, allowing for the use of integers without encountering range limitations. The successful completion of
the visualization of the Collatz conjecture for numbers up to 10,000, allowing for the use of integers without encountering range limitations.']}], 'duration': 352.423, 'thumbnail': 'https://
coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/EYLWxwo1Ed8/pics/EYLWxwo1Ed8209635.jpg', 'highlights': ['The successful completion of the visualization of the Collatz conjecture for numbers up
to 10,000, allowing for the use of integers without encountering range limitations.', 'The potential to create a large-scale poster using Processing and a library within Processing enhances the
visualization of the Collatz Conjecture sequence.', 'The use of a do while loop to test the Collatz conjecture for various numbers, leading to the conclusion that the conjecture seems to hold true
based on the simple processing code.', 'The Collatz function is created to test for even or odd numbers and to generate the next number in the sequence, with successful testing using numbers like 5
and 10.', 'Surpassing the 32-bit integer limit is discovered while testing the Collatz conjecture for larger numbers like 670 million, leading to the need to switch to using the long data type to
successfully complete the visualization for numbers up to 10,000.', 'Starting with the number 10, the sequence progresses to 5, then 16, demonstrating the application of the Collatz Conjecture.',
'The Collatz Conjecture sequence involves dividing by 2 for even numbers and multiplying by 3 and adding 1 for odd numbers, creating a visualization of the sequence using a Java-based creative coding
environment, and rendering it to a PDF for large-scale printing and poster creation in Processing.']}, {'end': 855.686, 'segs': [{'end': 603.558, 'src': 'embed', 'start': 563.052, 'weight': 0,
'content': [{'end': 570.237, 'text': 'Finished. So it actually did that incredibly quickly, running through the full Collatz conjecture algorithm for every single number from 1 to 10,000.', 'start':
563.052, 'duration': 7.185}, {'end': 573.659, 'text': 'The computer can do that super quickly.', 'start': 570.237, 'duration': 3.422}, {'end': 575.36, 'text': "Let's see what happens if I start
drawing.", 'start': 573.699, 'duration': 1.661}, {'end': 580.223, 'text': 'So I think an effective way for me to do this would be used to translate function.', 'start': 575.92, 'duration': 4.303},
{'end': 582.905, 'text': 'So the translate and rotate functions.', 'start': 580.484, 'duration': 2.421}, {'end': 592.711, 'text': 'So in processing, the translate function takes an x and a y and will
move the origin point along a path according to that x and y value.', 'start': 583.385, 'duration': 9.326}, {'end': 598.335, 'text': 'So that x value, that y value, 0, 0 is translated to here
potentially.', 'start': 592.972, 'duration': 5.363}, {'end': 603.558, 'text': "So let's say what I want to do is start the visualization at the bottom right here.", 'start': 598.615, 'duration':
4.943}], 'summary': 'Rapidly executed collatz algorithm for numbers 1 to 10,000; utilizing translate and rotate functions in processing for visualization.', 'duration': 40.506, 'max_score': 563.052,
'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/EYLWxwo1Ed8/pics/EYLWxwo1Ed8563052.jpg'}, {'end': 777.025, 'src': 'embed', 'start': 708.686, 'weight': 2, 'content':
[{'end': 710.906, 'text': "I'm going to pick 30 degrees, somewhat arbitrarily.", 'start': 708.686, 'duration': 2.22}, {'end': 715.907, 'text': 'Or if pi is 180 degrees, this would be pi divided by
6.', 'start': 711.106, 'duration': 4.801}, {'end': 717.988, 'text': 'And I should also make this a variable.', 'start': 715.907, 'duration': 2.081}, {'end': 719.488, 'text': "So let's call this
angle.", 'start': 718.248, 'duration': 1.24}, {'end': 721.348, 'text': "And let's say that's pi divided by 6.", 'start': 719.788, 'duration': 1.56}, {'end': 725.329, 'text': "So we're going to rotate
by the angle in one direction.", 'start': 721.348, 'duration': 3.981}, {'end': 731.17, 'text': 'Otherwise, rotate by the angle in the negative direction.', 'start': 725.909, 'duration': 5.261},
{'end': 737.013, 'text': "For every number, I'm about to move along this path.", 'start': 731.93, 'duration': 5.083}, {'end': 738.974, 'text': "I want to check if it's even or odd.", 'start':
737.513, 'duration': 1.461}, {'end': 740.495, 'text': 'So I want to rotate this way or that way.', 'start': 739.014, 'duration': 1.481}, {'end': 742.816, 'text': 'And then I want to move in that
direction.', 'start': 740.775, 'duration': 2.041}, {'end': 744.097, 'text': 'But I need to draw something.', 'start': 742.876, 'duration': 1.221}, {'end': 749.5, 'text': "So before I translate, let's
say stroke 255.", 'start': 744.437, 'duration': 5.063}, {'end': 754.783, 'text': "And let's draw a line from wherever I am, 0, 0, to 0, common negative length.", 'start': 749.5, 'duration': 5.283},
{'end': 757.284, 'text': "And then I'm going to move to the end of this line.", 'start': 755.443, 'duration': 1.841}, {'end': 760.406, 'text': 'This is very similar to what I did in the fractal tree
coding challenge.', 'start': 757.304, 'duration': 3.102}, {'end': 764.549, 'text': "So actually I think to test this out, I've got two things going on here.", 'start': 760.726, 'duration': 3.823},
{'end': 765.93, 'text': "I've got this like outer loop.", 'start': 764.569, 'duration': 1.361}, {'end': 768.793, 'text': 'I think I want to comment out the outer loop for a second.', 'start':
766.271, 'duration': 2.522}, {'end': 774.063, 'text': 'And I just want to test this idea out by starting with any given number.', 'start': 770.68, 'duration': 3.383}, {'end': 777.025, 'text': "So
let's start with the number 500 and see what happens.", 'start': 774.343, 'duration': 2.682}], 'summary': 'Using programming, rotate and draw lines based on mathematical calculations and
conditions.', 'duration': 68.339, 'max_score': 708.686, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/EYLWxwo1Ed8/pics/EYLWxwo1Ed8708686.jpg'}, {'end': 831.647, 'src':
'embed', 'start': 803.766, 'weight': 5, 'content': [{'end': 806.568, 'text': "When I come back to the next number, I'm going to be picking up from where it left off.", 'start': 803.766, 'duration':
2.802}, {'end': 815.513, 'text': 'So one way to deal with this is actually just call this function resetMatrix, which will reset everything just back to the original orientation.', 'start': 806.808,
'duration': 8.705}, {'end': 823.238, 'text': 'And then I can put that original translation right there, n equals i.', 'start': 815.854, 'duration': 7.384}, {'end': 825.239, 'text': 'This is taking a
very long time.', 'start': 823.238, 'duration': 2.001}, {'end': 831.647, 'text': 'And I got the visualization hairball.', 'start': 828.885, 'duration': 2.762}], 'summary': 'Function resetmatrix
resets orientation, n equals i, long processing time.', 'duration': 27.881, 'max_score': 803.766, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/EYLWxwo1Ed8/pics/
EYLWxwo1Ed8803766.jpg'}], 'start': 563.052, 'title': 'Visualizing collatz conjecture and rotating numbers', 'summary': 'Covers visualizing the collatz conjecture algorithm using processing,
demonstrating translate and rotate functions, and drawing/rotating numbers based on their even or odd status. it also discusses the process of converging numbers back to 1 and experimenting with
starting numbers.', 'chapters': [{'end': 679.088, 'start': 563.052, 'title': 'Visualizing collatz conjecture in processing', 'summary': 'Discusses visualizing the collatz conjecture algorithm and
utilizing translate and rotate functions in processing to create a visualization, with a demonstration of using vector math to move the origin point and the rotate function to shift the view of the
canvas.', 'duration': 116.036, 'highlights': ['The chapter demonstrates running the full Collatz conjecture algorithm for every single number from 1 to 10,000 on a computer, highlighting its speed
and efficiency.', 'The author explains the translate function in Processing, emphasizing its ability to move the origin point along a path according to specified x and y values, with a practical
example of starting the visualization at a specific position on the canvas.', 'The author discusses using vector math to move slightly to the right or left by some angle, illustrating the use of sine
and cosine to calculate the difference in x and y, and contrasts it with the functionality of the rotate function in shifting the view of the entire canvas for similar effects.']}, {'end': 757.284,
'start': 679.448, 'title': 'Drawing and rotating numbers', 'summary': 'Discusses rotating and drawing numbers by specific angles based on their even or odd status, with a focus on rotating by 30
degrees or pi/6 and drawing lines accordingly.', 'duration': 77.836, 'highlights': ['The chapter covers rotating and drawing numbers by specific angles based on their even or odd status, with a focus
on rotating by 30 degrees or pi/6 and drawing lines accordingly.', 'The process involves checking if a number is even or odd and then rotating and drawing lines based on the result.', 'The chapter
also mentions setting a stroke color of 255 and drawing lines from the current position to a specific point before moving to the end of the line.']}, {'end': 855.686, 'start': 757.304, 'title':
'Visualizing collatz conjecture', 'summary': 'Discusses the process of visualizing the collatz conjecture using coding, experimenting with starting numbers, and addressing errors encountered, aiming
to converge all numbers back to 1.', 'duration': 98.382, 'highlights': ['Experimenting with starting numbers, such as 500, and observing the results.', 'Addressing the issue of all numbers starting
back at the same place to enable visualization.', 'Implementing the resetMatrix function to reset everything back to the original orientation for each iteration.', 'Recognizing errors in the
visualization process and acknowledging misconceptions about the Collatz Conjecture.']}], 'duration': 292.634, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/EYLWxwo1Ed8
/pics/EYLWxwo1Ed8563052.jpg', 'highlights': ['The chapter demonstrates running the full Collatz conjecture algorithm for every single number from 1 to 10,000 on a computer, highlighting its speed and
efficiency.', 'The author explains the translate function in Processing, emphasizing its ability to move the origin point along a path according to specified x and y values, with a practical example
of starting the visualization at a specific position on the canvas.', 'The chapter covers rotating and drawing numbers by specific angles based on their even or odd status, with a focus on rotating
by 30 degrees or pi/6 and drawing lines accordingly.', 'The process involves checking if a number is even or odd and then rotating and drawing lines based on the result.', 'Experimenting with
starting numbers, such as 500, and observing the results.', 'Implementing the resetMatrix function to reset everything back to the original orientation for each iteration.']}, {'end': 1073.965,
'segs': [{'end': 900.269, 'src': 'embed', 'start': 874.851, 'weight': 0, 'content': [{'end': 880.595, 'text': "A float list in processing, and I'm going to call that sequence, is just a sequence of
numbers.", 'start': 874.851, 'duration': 5.744}, {'end': 883.557, 'text': 'And I could just use a plain array or an array list.', 'start': 881.096, 'duration': 2.461}, {'end': 888.641, 'text': 'But a
float list is nice because it just works really easily with floating point numbers.', 'start': 883.617, 'duration': 5.024}, {'end': 890.082, 'text': "And it's completely resizable.", 'start':
888.861, 'duration': 1.221}, {'end': 890.782, 'text': 'And I can iterate.', 'start': 890.122, 'duration': 0.66}, {'end': 891.463, 'text': 'I can reverse it.', 'start': 890.822, 'duration': 0.641},
{'end': 892.503, 'text': 'I can do all sorts of things.', 'start': 891.503, 'duration': 1}, {'end': 894.665, 'text': 'So I want to create a float list.', 'start': 893.044, 'duration': 1.621}, {'end':
897.987, 'text': "And I'm going to take all of this drawing stuff out.", 'start': 895.325, 'duration': 2.662}, {'end': 900.269, 'text': "Let's just put it down here, comment it.", 'start': 898.328,
'duration': 1.941}], 'summary': 'Processing float list is resizable and works well with floating point numbers, allowing various operations.', 'duration': 25.418, 'max_score': 874.851, 'thumbnail':
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/EYLWxwo1Ed8/pics/EYLWxwo1Ed8874851.jpg'}, {'end': 950.386, 'src': 'embed', 'start': 920.046, 'weight': 1, 'content': [{'end': 921.106,
'text': "I don't know why I said float list.", 'start': 920.046, 'duration': 1.06}, {'end': 922.227, 'text': "It's just integers.", 'start': 921.366, 'duration': 0.861}, {'end': 924.868, 'text': 'So
it could only be integers in the Collatz sequence.', 'start': 922.247, 'duration': 2.621}, {'end': 929.718, 'text': 'Now I can visualize the list.', 'start': 926.036, 'duration': 3.682}, {'end':
935.72, 'text': 'I want to visualize the list from the end all the way up to the beginning.', 'start': 930.198, 'duration': 5.522}, {'end': 941.803, 'text': 'Sequence.reverse And then iterate through
the entire list.', 'start': 936.06, 'duration': 5.743}, {'end': 945.865, 'text': 'Int j equals 0.', 'start': 942.423, 'duration': 3.442}, {'end': 950.386, 'text': 'j is less than sequence.size j++.',
'start': 945.865, 'duration': 4.521}], 'summary': 'Visualizing a sequence of integers in reverse order.', 'duration': 30.34, 'max_score': 920.046, 'thumbnail': 'https://
coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/EYLWxwo1Ed8/pics/EYLWxwo1Ed8920046.jpg'}, {'end': 1047.413, 'src': 'embed', 'start': 1018.06, 'weight': 2, 'content': [{'end': 1020.905, 'text':
"So I'm going to try different numbers to sort of get the feeling of it.", 'start': 1018.06, 'duration': 2.845}, {'end': 1024.691, 'text': "Let's try pi divided by 12.", 'start': 1021.305,
'duration': 3.386}, {'end': 1026.534, 'text': "So I'm going to have that.", 'start': 1024.691, 'duration': 1.843}, {'end': 1030.566, 'text': "Yeah, I think it's probably a very small amount.",
'start': 1027.185, 'duration': 3.381}, {'end': 1034.928, 'text': "So we can see that's what's happening with the number 100.", 'start': 1030.945, 'duration': 3.983}, {'end': 1038.169, 'text': 'I also
might want to think about which way am I going.', 'start': 1034.928, 'duration': 3.241}, {'end': 1041.01, 'text': "It's kind of going to orient this whole pattern.", 'start': 1038.189, 'duration':
2.821}, {'end': 1047.413, 'text': "And maybe actually I'm starting in a direction that's not straight.", 'start': 1041.39, 'duration': 6.023}], 'summary': 'Exploring mathematical patterns with
different numbers and orientations.', 'duration': 29.353, 'max_score': 1018.06, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/EYLWxwo1Ed8/pics/
EYLWxwo1Ed81018060.jpg'}], 'start': 857.267, 'title': 'Creating and visualizing number sequences in processing', 'summary': 'Explains creating sequences using float and integer lists in processing,
emphasizing easy manipulation, resizing, iterating, reversing, and visualizing the lists, and exploring number patterns using rotation, including testing with values such as pi/12 and 100, and
observing results with numbers from 1 to 10,000 while encountering technical difficulties.', 'chapters': [{'end': 919.946, 'start': 857.267, 'title': 'Creating a sequence with float list in
processing', 'summary': 'Explains the process of creating a sequence using a float list in processing, which provides easy manipulation of floating point numbers and is completely resizable.',
'duration': 62.679, 'highlights': ['Creating a float list in Processing, called sequence, for easily manipulating and resizing a sequence of numbers.', 'Using a float list to append every single
value of n and then adding 1 to it at the end.']}, {'end': 979.187, 'start': 920.046, 'title': 'Iterating and reversing integer list', 'summary': 'Discusses iterating through an integer list,
reversing the sequence, and visualizing the list, emphasizing the process of reversing and iterating through the list from the end to the beginning.', 'duration': 59.141, 'highlights': ["The process
involves visualizing and iterating through the integer list from the end to the beginning, emphasizing the use of 'Sequence.reverse' and 'j++' to iterate through the entire list.", 'The speaker
considers potential naming conflicts and rethinks the naming convention for the variables in the code.', "The chapter briefly mentions the initial confusion about using 'float list' instead of
'integers' in the Collatz sequence."]}, {'end': 1073.965, 'start': 979.707, 'title': 'Exploring number patterns with
rotation, testing with various values such as pi divided by 12 and 100, and eventually observing the results with numbers from 1 to 10,000, while also encountering technical difficulties.',
'duration': 94.258, 'highlights': ['The chapter explores creating patterns based on numbers with rotation, testing with various values such as pi divided by 12 and 100, and eventually observing the
results with numbers from 1 to 10,000, while also encountering technical difficulties.', 'The speaker experimented with different numbers, such as pi divided by 12, to observe the resulting
patterns.', 'The speaker encountered technical difficulties during the process, including a camera dying and a memory card filling up.']}], 'duration': 216.698, 'thumbnail': 'https://
coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/EYLWxwo1Ed8/pics/EYLWxwo1Ed8857267.jpg', 'highlights': ['Creating a float list in Processing, called sequence, for easily manipulating and
resizing a sequence of numbers.', "The process involves visualizing and iterating through the integer list from the end to the beginning, emphasizing the use of 'Sequence.reverse' and 'j++' to
iterate through the entire list.", 'The chapter explores creating patterns based on numbers with rotation, testing with various values such as pi divided by 12 and 100, and eventually observing the
results with numbers from 1 to 10,000, while also encountering technical difficulties.']}, {'end': 1376.456, 'segs': [{'end': 1119.884, 'src': 'embed', 'start': 1093.656, 'weight': 0, 'content':
[{'end': 1104.7, 'text': 'Any time I apply the algorithm for when I have an odd number, 3n plus 1, that is always going to result in an even number.', 'start': 1093.656, 'duration': 11.044}, {'end':
1107.421, 'text': "So what's the next step? Divide by 2.", 'start': 1105, 'duration': 2.421}, {'end': 1113.062, 'text': 'So I could actually, in the Collatz conjecture code, sort of speed up the
process of getting to 1.', 'start': 1107.421, 'duration': 5.641}, {'end': 1119.884, 'text': "It's not going to compute the exact number of steps, but it could speed up the process of getting to 1 by
just taking this down here.", 'start': 1113.062, 'duration': 6.822}], 'summary': 'Applying the 3n plus 1 algorithm results in even numbers, followed by dividing by 2 to speed up the process of
reaching 1.', 'duration': 26.228, 'max_score': 1093.656, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/EYLWxwo1Ed8/pics/EYLWxwo1Ed81093656.jpg'}, {'end': 1162.756,
'src': 'embed', 'start': 1137.113, 'weight': 2, 'content': [{'end': 1144.379, 'text': "I can get something that's quite a bit more control and in a way more closer to what the result in that number
file video is.", 'start': 1137.113, 'duration': 7.266}, {'end': 1147.001, 'text': "So I think there's a bit more that I can do with this.", 'start': 1144.399, 'duration': 2.602}, {'end': 1154.527,
'text': 'Number one is let me instead of moving up, let me give myself more space to work with and move in a horizontal direction.', 'start': 1147.382, 'duration': 7.145}, {'end': 1160.973, 'text':
"So I'm actually going to start at 0.", 'start': 1155.088, 'duration': 5.885}, {'end': 1162.756, 'text': 'height divided by 2.', 'start': 1160.973, 'duration': 1.783}], 'summary': 'Exploring greater
control and precision in a horizontal direction, starting at 0 and height divided by 2.', 'duration': 25.643, 'max_score': 1137.113, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/
video-capture/EYLWxwo1Ed8/pics/EYLWxwo1Ed81137113.jpg'}, {'end': 1228.592, 'src': 'embed', 'start': 1195.93, 'weight': 3, 'content': [{'end': 1196.33, 'text': 'Look at this.', 'start': 1195.93,
'duration': 0.4}, {'end': 1198.051, 'text': 'Interesting Huh.', 'start': 1196.47, 'duration': 1.581}, {'end': 1204.915, 'text': "The other thing about what's going on here is since everything
converges to 1, ultimately there's a lot of repeating patterns.", 'start': 1198.432, 'duration': 6.483}, {'end': 1213.881, 'text': 'So I think this could have a more organic-like feel if I start to
give the line some alpha so that the repeating patterns become much brighter.', 'start': 1205.576, 'duration': 8.305}, {'end': 1216.923, 'text': 'And as it branches out, it sort of fades away.',
'start': 1214.061, 'duration': 2.862}, {'end': 1220.365, 'text': 'So let me just give everything an alpha of 50.', 'start': 1217.283, 'duration': 3.082}, {'end': 1228.592, 'text': 'What happens if I
make that angle really, really small? Maybe I should stop dividing by pi and just do something like 0.0, 0.02.', 'start': 1220.365, 'duration': 8.227}], 'summary': 'Discussion on creating
organic-like patterns with repeating elements and adjusting alpha and angle values.', 'duration': 32.662, 'max_score': 1195.93, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/
video-capture/EYLWxwo1Ed8/pics/EYLWxwo1Ed81195930.jpg'}, {'end': 1287.528, 'src': 'embed', 'start': 1245.528, 'weight': 1, 'content': [{'end': 1252.454, 'text': 'Just take a minute to ponder the fact
that this very formal, mathematical,', 'start': 1245.528, 'duration': 6.926}, {'end': 1263.357, 'text': 'highly repetitious pattern can somehow be turned into this feeling of seaweed, of growth, of
organic nature,', 'start': 1252.454, 'duration': 10.903}, {'end': 1265.577, 'text': 'like the plants that are right outside this window.', 'start': 1263.357, 'duration': 2.22}, {'end': 1270.098,
'text': 'What kind of beauty can you make out of this algorithm? I would love to see.', 'start': 1266.097, 'duration': 4.001}, {'end': 1272.099, 'text': 'Let me show you one more thing very
quickly.', 'start': 1270.118, 'duration': 1.981}, {'end': 1277.16, 'text': 'Let me have this render to a PDF so I could blow it up and make it a very big poster if I wanted to.', 'start': 1272.419,
'duration': 4.741}, {'end': 1278.08, 'text': 'Let me show you how to do that.', 'start': 1277.18, 'duration': 0.9}, {'end': 1281.803, 'text': "So I'm here on the PDF export page on the processing
website.", 'start': 1278.82, 'duration': 2.983}, {'end': 1283.464, 'text': "And there's a bunch of different ways to do it.", 'start': 1282.003, 'duration': 1.461}, {'end': 1286.167, 'text': 'This is
actually what I want to do here, single frame.', 'start': 1283.865, 'duration': 2.302}, {'end': 1287.528, 'text': 'Actually, I want to see it on screen.', 'start': 1286.207, 'duration': 1.321}],
'summary': 'Transforming a formal pattern into organic nature; showcasing pdf export process.', 'duration': 42, 'max_score': 1245.528, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/
video-capture/EYLWxwo1Ed8/pics/EYLWxwo1Ed81245528.jpg'}, {'end': 1376.456, 'src': 'heatmap', 'start': 1331.208, 'weight': 6, 'content': [{'end': 1332.909, 'text': "It's taking a while for it to
load.", 'start': 1331.208, 'duration': 1.701}, {'end': 1336.752, 'text': "But what's nice about this, it's vector graphics.", 'start': 1333.109, 'duration': 3.643}, {'end': 1341.215, 'text': 'So you
can see my machine and the Mac Preview app is having taken a while to render.', 'start': 1336.772, 'duration': 4.443}, {'end': 1345.579, 'text': "But I can blow this up very, very large, and it won't
be pixelated.", 'start': 1341.456, 'duration': 4.123}, {'end': 1347.42, 'text': 'So make a version of this.', 'start': 1345.919, 'duration': 1.501}, {'end': 1350.443, 'text': 'Make a PDF, a vector
file of it.', 'start': 1347.721, 'duration': 2.722}, {'end': 1351.224, 'text': 'Print it.', 'start': 1350.743, 'duration': 0.481}, {'end': 1357.789, 'text': "I'm going to hang one of these up back
here or in my office somewhere or in the studio at NYU or here in the cabana.", 'start': 1351.384, 'duration': 6.405}, {'end': 1362.133, 'text': 'Who knows? Thank you for watching this second episode
of Coding in the Cabana.', 'start': 1357.849, 'duration': 4.284}, {'end': 1364.015, 'text': "I'm going to go water the plants.", 'start': 1363.054, 'duration': 0.961}, {'end': 1376.376, 'text':
'Alright, thanks so much for spending your time with me and I hope to see you next time on Coding in the Cabana.', 'start': 1370.976, 'duration': 5.4}, {'end': 1376.456, 'text': 'Bye!.', 'start':
1376.396, 'duration': 0.06}], 'summary': 'Vector graphics ensures high-quality, scalable output for printing and display.', 'duration': 43.347, 'max_score': 1331.208, 'thumbnail': 'https://
coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/EYLWxwo1Ed8/pics/EYLWxwo1Ed81331208.jpg'}], 'start': 1074.485, 'title': 'Optimizing collatz conjecture visualization, algorithmic art, and
rendering pdf from processing website', 'summary': 'Discusses optimizing the collatz conjecture visualization for a more controlled and closer result, explores transforming mathematical patterns into
organic-like art, and demonstrates rendering pdfs using the processing library for large-scale printing.', 'chapters': [{'end': 1193.334, 'start': 1074.485, 'title': 'Optimizing collatz conjecture
visualization', 'summary': 'Discusses optimizing the collatz conjecture visualization by applying the algorithm to speed up the process of reaching 1, resulting in a more controlled and closer result
to the number file video, and making adjustments to the visualization to have more space and move in a horizontal direction.', 'duration': 118.849, 'highlights': ['Applying algorithm to speed up the
process of reaching 1 By applying the algorithm for odd numbers (3n+1) and dividing by 2, the process of reaching 1 can be sped up, though not computing the exact number of steps.', 'Adjusting
visualization for more control and space Making adjustments to the visualization to have more space to work with, move in a horizontal direction, and translate along the x-axis for a more controlled
result.']}, {'end': 1272.099, 'start': 1195.93, 'title': 'Algorithmic art and organic patterns', 'summary': 'Explores the process of transforming repetitious mathematical patterns into organic-like
art, pondering the beauty and potential of the algorithm, and experimenting with alpha and angle adjustments to achieve a more natural feel.', 'duration': 76.169, 'highlights': ['By adjusting the
alpha of the lines and branching out, the repeating patterns can be transformed into a more organic-like feel, potentially creating brighter and fading patterns (quantifiable data: alpha of 50).',
'Experimenting with different angles and small divisions, the author attempts to achieve a similar, organic look, acknowledging the resemblance to seaweed and organic growth (quantifiable data: angle
adjustments, division by 0.02).', 'The author reflects on the transformation of a formal, repetitious mathematical pattern into the feeling of seaweed and organic nature, highlighting the potential
beauty that can be derived from the algorithm (quantifiable data: reflection on the transformation).']}, {'end': 1376.456, 'start': 1272.419, 'title': 'Rendering pdf from processing website',
'summary': 'Demonstrates how to render a pdf using the processing library, enabling the creation of vector graphics for large-scale printing, with a mention of potential usage locations and a
sign-off for the episode.', 'duration': 104.037, 'highlights': ['Rendering PDF with vector graphics for large-scale printing The speaker discusses rendering the PDF using the processing library,
emphasizing the ability to create vector graphics suitable for large-scale printing.', 'Mention of potential usage locations for the printed PDF The speaker mentions potential locations for using the
printed PDF, including the cabana, office, and studio at NYU.', 'Sign-off for the episode The speaker concludes by thanking the audience and expressing the hope to see them in the next episode of
Coding in the Cabana.']}], 'duration': 301.971, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/EYLWxwo1Ed8/pics/EYLWxwo1Ed81074485.jpg', 'highlights': ['Applying
algorithm to speed up the process of reaching 1 By applying the algorithm for odd numbers (3n+1) and dividing by 2, the process of reaching 1 can be sped up, though not computing the exact number of
steps.', 'Rendering PDF with vector graphics for large-scale printing The speaker discusses rendering the PDF using the processing library, emphasizing the ability to create vector graphics suitable
for large-scale printing.', 'Adjusting visualization for more control and space Making adjustments to the visualization to have more space to work with, move in a horizontal direction, and translate
along the x-axis for a more controlled result.', 'By adjusting the alpha of the lines and branching out, the repeating patterns can be transformed into a more organic-like feel, potentially creating
brighter and fading patterns (quantifiable data: alpha of 50).', 'Experimenting with different angles and small divisions, the author attempts to achieve a similar, organic look, acknowledging the
resemblance to seaweed and organic growth (quantifiable data: angle adjustments, division by 0.02).', 'The author reflects on the transformation of a formal, repetitious mathematical pattern into the
feeling of seaweed and organic nature, highlighting the potential beauty that can be derived from the algorithm (quantifiable data: reflection on the transformation).', 'Mention of potential usage
locations for the printed PDF The speaker mentions potential locations for using the printed PDF, including the cabana, office, and studio at NYU.', 'Sign-off for the episode The speaker concludes by
thanking the audience and expressing the hope to see them in the next episode of Coding in the Cabana.']}], 'highlights': ['The visualization technique represents the Collatz conjecture as a tree,
illustrating how different numbers converge to 1, such as 25 becoming 76, 38, 19, and eventually 22.', 'The successful completion of the visualization of the Collatz conjecture for numbers up to
10,000, allowing for the use of integers without encountering range limitations.', 'The chapter demonstrates running the full Collatz conjecture algorithm for every single number from 1 to 10,000 on
a computer, highlighting its speed and efficiency.', 'Creating a float list in Processing, called sequence, for easily manipulating and resizing a sequence of numbers.', 'Applying algorithm to speed
up the process of reaching 1 By applying the algorithm for odd numbers (3n+1) and dividing by 2, the process of reaching 1 can be sped up, though not computing the exact number of steps.', 'Rendering
PDF with vector graphics for large-scale printing The speaker discusses rendering the PDF using the processing library, emphasizing the ability to create vector graphics suitable for large-scale
printing.', 'Adjusting visualization for more control and space Making adjustments to the visualization to have more space to work with, move in a horizontal direction, and translate along the x-axis
for a more controlled result.', 'By adjusting the alpha of the lines and branching out, the repeating patterns can be transformed into a more organic-like feel, potentially creating brighter and
fading patterns (quantifiable data: alpha of 50).', 'Experimenting with different angles and small divisions, the author attempts to achieve a similar, organic look, acknowledging the resemblance to
seaweed and organic growth (quantifiable data: angle adjustments, division by 0.02).', 'The author reflects on the transformation of a formal, repetitious mathematical pattern into the feeling of
seaweed and organic nature, highlighting the potential beauty that can be derived from the algorithm (quantifiable data: reflection on the transformation).']}
|
{"url":"https://learn.coursnap.app/staticpage/EYLWxwo1Ed8.html","timestamp":"2024-11-05T10:35:43Z","content_type":"text/html","content_length":"53458","record_id":"<urn:uuid:e0a11bff-7ed0-4346-80cf-c39c8813840b>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00847.warc.gz"}
|
Fuel in a 525 Gallon Circular Tank
The Fuel in a 525 Gallon Circular Tank calculator computes the content fluid (e.g. fuel) volume in a 525 gallon above ground circular storage tank based on the tank's dimensions and the measured depth of fluid.
INSTRUCTIONS: Choose units and enter the following:
• (F) Depth of Fluid in Tank (wet dipstick)
Volume (V): The calculator returns the volume of fuel in the tank in gallons. It also returns the volume of fuel needed to fill the tank in gallons. However, these can be automatically converted to
other volume units (e.g. liters, barrels) via the pull-down menu.
The Math / Science
This calculator answers the question, "How much is in my 525 gallon fuel tank?"
525 gallon fuel tanks have standard dimensions of 73" length and 46" diameter. Based on these dimensions, one can calculate the total volume. To compute the volume of liquid in the tank, insert a dipstick, measure the depth of the fluid (F), and enter it above.
ASTs (above-ground storage tanks) are used for home heating oil, kerosene and diesel fuel. The typical AST has a cap that is used to refuel the tank. This is the easiest place to insert a dipstick (measuring stick) to measure the depth (F) of the liquid content of the AST.
The picture shown is a hand crank that pumps 10 gallons for every 100 turns of the pump crank (see the Hand Pump Volume formula). During cold periods, the fuel supplier will treat the diesel with an anti-freeze mixture, but this is only a concern if the temperature is well below zero degrees F (e.g., -5 F or colder).
Storage Tank Calculators:
|
{"url":"https://www.vcalc.com/wiki/fuel-in-525-gallon-circular-tank","timestamp":"2024-11-12T16:25:31Z","content_type":"text/html","content_length":"57951","record_id":"<urn:uuid:a6fd336a-f454-46f0-8d91-30742854f1b9>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00469.warc.gz"}
|
How to Calculate Wheel Speed
The speed at which a body moves is one of the most fundamental parameters within physics. In terms of linear motion, speed is defined as the distance traveled divided by the time taken. Bodies that
rotate, such as wheels, use a different quantity to define rate of rotation. This is often the number of revolutions carried out per minute. It is straightforward to convert between revolutions per
minute and linear speed.
Write down the linear speed in units of miles per hour. This example will use a car travelling at 70 miles per hour.
Convert miles per hour to meters per minute. To do this, first multiply the number of miles per hour by 1,609 (the number of meters in a mile). Following the example, 70 miles per hour is equal to:
70 x 1,609 = 112,630 meters per hour.
Next, convert this figure to meters per minute. Since there are 60 minutes in an hour, divide the meters per hour by 60:
112,630 / 60 = 1,877 meters per minute.
Calculate the circumference of the wheel. Use the formula c = 2πr, where c is the circumference, r is the radius, and π can be approximated by 3.14. Following the example, if the car wheel has a radius of 0.3 meters, then the circumference is equal to:
2 x 3.14 x 0.3 = 1.88 meters.
Calculate the wheel speed in revolutions per minute. To do this, use the formula:
revolutions per minute = speed in meters per minute / circumference in meters.
Following the example, the number of revolutions per minute is equal to:
1,877 / 1.88 ≈ 998 revolutions per minute.
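The whole chain of steps above collapses into a few lines of code. A Python sketch using unrounded constants (1,609.34 meters per mile and the full value of pi); the hand-rounded intermediate figures in the worked example land within a few rpm of this:

```python
import math

def wheel_rpm(speed_mph, radius_m):
    """Wheel rotation rate (rpm) from linear speed and wheel radius."""
    meters_per_minute = speed_mph * 1609.34 / 60  # mph -> meters per minute
    circumference = 2 * math.pi * radius_m        # c = 2*pi*r
    return meters_per_minute / circumference

print(round(wheel_rpm(70, 0.3)))  # ~996 rpm
```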
About the Author
Samuel Markings has been writing for scientific publications for more than 10 years, and has published articles in journals such as "Nature." He is an expert in solid-state physics, and during the
day is a researcher at a Russell Group U.K. university.
|
{"url":"https://sciencing.com/calculate-wheel-speed-7448165.html","timestamp":"2024-11-02T02:25:08Z","content_type":"text/html","content_length":"403155","record_id":"<urn:uuid:994a73d6-f2f1-4074-b01e-8b5537eb02f7>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00776.warc.gz"}
|
Formula field
Who can use this feature?
👤 Available on all plans.
Formulas let you transform and combine existing fields into a new Formula field. For example, you can:
• Concatenate First name and Last name fields into a Full name Formula field.
• Multiply Price and Units field to get the result in the Revenue Formula field.
The field will show only in your Stacker app, and won't show up in your Airtable base or Google Sheets.
Create a formula field
1. Go to  Manage Fields and data
2. Select the table and select Fields
3. Click Add field
4. Give your field a name
5. Select the field type: Formula
6. Type in your formula and click Save
Fields in formulas
To use a field in a formula, the field needs to be in the table where you're creating the formula field. It also needs to be one of the following data types:
• Text
• Long Text
• Number
• Checkbox
• URL
• Single Select Dropdown
• Percentage
• Currency
• Rich Text
• Date
• Date and Time
• Multiple Select Dropdown
To use a field in a formula, you will need to wrap it in curly brackets. For example, {Price}. If you type in the opening curly bracket, we will suggest all available fields.
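For illustration only (this is not Stacker's actual implementation), the curly-bracket field reference described above can be modeled with a short regular expression that swaps each {Field name} for the record's value:

```python
import re

def substitute_fields(formula, record):
    """Replace each {Field name} reference with that field's value from a record."""
    return re.sub(r"\{([^}]+)\}", lambda m: str(record[m.group(1)]), formula)

print(substitute_fields("Hello {Fullname} !", {"Fullname": "Ada Lovelace"}))
# Hello Ada Lovelace !
```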
List of formulas
Functions work with any valid field data types, while operators work only on number fields. If you use an operator for a non-number field, we will try to convert it into a number. If we can't, you'll
see an error message.
Returns the sum of two or more numbers.
SUM({Sold},{Not sold})
{Sold} + {Not sold}
Returns the difference of two numbers.
{Total stock} - {Sold stock}
Returns the product of two numbers.
{Sold} * {Price}
Returns one number divided by another.
{Total sold} / {Price}
Returns the average of the values in two or more fields.
AVERAGE({Sold},{Not sold})
Check if one value is not equal to another.
1!=2 =>True (represented as a checked checkbox)
Compare if one value is equal to another value.
1=1 =>True (represented as a checked checkbox)
Compare if one value is greater than another value.
1>5 =>False (represented as an empty checkbox)
Compare if one value is greater than or equal to another value.
2>=2 =>True (represented as a checked checkbox)
Compare if one value is less than another value.
5<1 =>False (represented as an empty checkbox)
Compare if one value is less than or equal to another value.
2<=2 =>True (represented as a checked checkbox)
CONCAT( , ) or &
Concatenate two values.
CONCAT("Hello ", {Fullname}, " !")
"Hello " & {Fullname} & " !"
IF (statement, Action A, Action B)
Check whether the statement is true. If it is true, then do action A, if it is false do action B.
IF({UserId}, "https://dashboard.com/" + {UserId}, "")
AND( )
Returns true if all the arguments are true, returns false otherwise.
AND({Field 1},{Field 2})
{Field 1} AND {Field 2}
OR( )
Returns true if any one of the arguments is true.
OR({Field 1},{Field 2})
{Field 1} OR {Field 2}
NOT( )
Reverses true to false, and false to true.
NOT(a<b) Returns the same as a>=b
Returns the length of a string.
LEN("Hello World") =>11
REGEX_MATCH( )
Returns whether the input text matches a regular expression.
REGEX_MATCH("Good Morning","Good.Morning") =>True (Represented as a checked checkbox)
REGEX_REPLACE( )
Substitutes all matching substrings with a replacement string value.
REGEX_REPLACE("Good Morning","M*","") =>"Good"
REGEX_EXTRACT( )
Returns the first substring that matches a regular expression.
REGEX_EXTRACT("Good Morning","M*") =>"Morning"
TRIM( )
Removes the whitespace at the beginning and end of string.
TRIM(" Hello ") =>"Hello"
UPPER( )
Makes string uppercase.
UPPER("Hello") =>"HELLO"
LOWER( )
Makes string lowercase.
LOWER("Hello") =>"hello"
RIGHT( )
Extracts the specified number of characters from the end of the string. Accepts a string for the first argument and a number for the second.
RIGHT("Hello",2) =>"lo"
IS_SAME( )
Compares two dates up to a unit and determines whether they are identical. Returns true or false.
IS_SAME({date 1},{date 2}, 'unit')
current acceptable units are:
'exact'(matches all units)
'year', 'month', 'day'
IS_BEFORE( )
Determines if [date1] is earlier than [date2]. Returns 1 if yes, 0 if no.
IS_BEFORE({date 1},{date 2})
IS_AFTER( )
Determines if [date1] is later than [date2]. Returns 1 if yes, 0 if no.
IS_AFTER({date 1},{date 2})
YEAR( )
Returns the four-digit year of datetime.
MONTH( )
Returns the month of a datetime as a number between 1 (January) and 12 (December).
DAY( )
Returns the day of the month of a datetime as a number between 1 and 31.
HOUR( )
Returns the hour of a datetime as a number between 0 (12.00am) and 23 (11.00pm).
MINUTE( )
Returns the minute of a datetime as a number.
LEFT( )
Extracts the specified number of characters from the beginning of the string. Accepts a string for the first argument and a number for the second.
LEFT("Hello",2) =>"He"
DATEDIF( )
Returns the difference between datetimes in the unit specified.
Unit accepts:
MIN( )
Returns the item with the lowest value.
MIN(number 1, number 2, number 3...)
MAX( )
Returns the largest item between two or more parameters.
MAX(number 1, number 2, number 3...)
Returns the absolute value.
DATEADD( )
Adds a time/date interval to a date and then returns the date.
DATEADD([date],[#],"units") (accepts years, months, weeks and days as well as singular counter-parts)
Returns the current date, but not the current time.
NOW( )
Returns the current date and time.
ROUND( )
Rounds to a number of decimal places as specified by precision e.g "0", "1", etc.
ROUND({Unit Price},0)
ROUNDUP( )
Rounds the value to the number of decimal places given by "precision" always rounding up. e.g "0", "1", etc.
ROUNDUP({Unit Price},0)
Converts a text string to a number.
VALUE({Quoted Price})
INT( )
Returns the greatest integer that is less than or equal to the number.
INT({Unit Price})
Error messages
If a formula is invalid, you will see an error message that might help you understand what went wrong.
│ Error │ Meaning │
│ Missing formula function or Unexpected formula function │ Means that you were very creative, but we don't have the formula function yet. │
│ Function only works with field_type but field is a another_field_type │ Shows when using a field with an incompatible field type. │
│ In an IF function, field_name and field_name_2 must be of similar types │ Shows in an IF function when the Action A and Action B are different types. │
│ Missing keys in formula 'IF' function: else │ Means that there is a missing else (Action B) in the IF function. │
|
{"url":"https://support.stackerhq.com/hc/en-us/articles/4415076803731-Formula-field","timestamp":"2024-11-03T09:31:20Z","content_type":"text/html","content_length":"72208","record_id":"<urn:uuid:9dc8d9dd-c11c-4211-a40c-1a7bb246a0fa>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00201.warc.gz"}
|
Multiplication Table Printable 1-12
Multiplication Table Printable 1-12 - Fill in the missing numbers multiplication chart worksheets. Blank multiplication table to print for practice. These table charts are suitable for the kids from
the 1st standard to the 5th standard. You can use it as a reminder or to learn your times tables up to 12x12 multiplication. Use these colorful multiplication tables to help your child build
confidence while mastering the multiplication facts. Printable multiplication flash cards where kids can review facts from 1 to 12. Gray on white with answers grayed for copy work. Each side is an
interactive lesson. Web may 2, 2022. It consists of a process that multiplies two numbers, called factors, which pertains to the accumulation of the given numbers.
printable multiplication chart 1 12 pdf
Choose your favourite multiplication chart from our wide range of printable charts. Web get a free printable multiplication chart pdf for your class! Blank multiplication table to print for practice.
Black on white with rows and columns filled for a basic multiplication reference chart. Here is the printable multiplication chart (pdf) from the 1 time table up to the 12.
Printable Multiplication Times Table 1 12 Times Tables Worksheets
Gray on white with answers grayed for copy work. Web here you can find multiplication tables from 1 to 12. Fill in the missing numbers multiplication chart worksheets Web this multiplication table 1
to 12 is consist of 12 rows with a respective operation of multiplication, which is very beneficial to learn the basic multiplication of 1 to 12 table..
Printable Multiplication Table 112 Pdf Printable Multiplication
Choose your favourite multiplication chart from our wide range of printable charts. When you are just getting started learning the multiplication tables, these simple printable pages are great tools!
These are perfect multiplication worksheets for grade 3 to help kids slowly learn the multiplication tables starting with 1s and inching your way up to 12s. The free downloadable pdf includes.
Multiplication Chart Printable Free
More sizes of multiplication tables looking for a smaller or larger multiplication table? Choose your favourite multiplication chart from our wide range of printable charts. Web here are
multiplication tables from 1 to 12 given below: Web here you can find multiplication tables from 1 to 12. Web practice multiplication with this learning mat.
Printable Time Tables 112 Activity Shelter
Web here are multiplication tables from 1 to 12 given below: Multiplication charts & times tables [free & printable!] | prodigy education Web here you can find multiplication tables from 1 to 12.
Each side is an interactive lesson. This use of color with a purpose helps students to easily navigate the multiplication chart.
5+ Blank Multiplication Table 112 Printable Chart in PDF
The free downloadable pdf includes a black and white: How to learn your life will be a lot easier when you can simply remember the multiplication tables. Find three printable multiplication tables on
this page: To get the pdf of 1 to 12 table, click the download option and take a print of this 1 to 12 multiplication table. You.
Multiplication Charts 112 Times Table Activity Shelter
Download our free printable worksheets today! This worksheets are a very useful tool to improve students skill on printable subjects. This use of color with a purpose helps students to easily
navigate the multiplication chart. These table charts are suitable for the kids from the 1st standard to the 5th standard. Blank multiplication table to print for practice.
112 Multiplication Chart Free Download
More sizes of multiplication tables looking for a smaller or larger multiplication table? Here is the printable multiplication chart (pdf) from the 1 time table up to the 12 times table, it's a free
resource. These are perfect multiplication worksheets for grade 3 to help kids slowly learn the multiplication tables starting with 1s and inching your way up to.
112 X Times Table Chart What's the best way to learn to multiply
Web kids can test their knowledge of multiplication tables by writing in the multiples of each number to fill in a blank multiplication table. First, use the table above to. Blank multiplication table
to print for practice. They are perfect for 3rd or 4th graders or anyone learning their times tables. Download our free printable worksheets today!
Math Tables 1 to 12 Printable Multiplication Chart 1 to 12 Maths
You can use it as a reminder or to learn your times tables up to 12x12 multiplication. Black on white with rows and columns filled for a basic multiplication reference chart. Web the 12 times table
print one and put it on your wall, or paste it in an exercise book. Choose your favourite multiplication chart from our wide range.
Here is the printable multiplication chart (pdf) from the 1 time table up to the 12 times table, it's a free resource. Download your free printable multiplication chart by selecting either Each side
is an interactive lesson. Download our free printable worksheets today! For more ideas see printable paper and math drills and math problems generator. They are perfect for 3rd or 4th graders or
anyone learning their times tables. Buy a poster of multiplication table Gray on white with answers grayed for copy work. Web get a free printable multiplication chart pdf for your class! Web 3
printable multiplication tables: To get the pdf of 1 to 12 table, click the download option and take a print of this 1 to 12 multiplication table. The free downloadable pdf includes a black and
white: Perfect for practising your times tables! These table charts are suitable for the kids from the 1st standard to the 5th standard. Use these colorful multiplication tables to help your child
build confidence while mastering the multiplication facts. Black on white with rows and columns filled for a basic multiplication reference chart. You can use it as a reminder or to learn your times
tables up to 12x12 multiplication. Web the 12 times table print one and put it on your wall, or paste it in an exercise book. Simple tips and tricks for learning tables from 1 to 12 a few simple tips
and tricks to learn multiplication tables from 1 to 12 are mentioned below. Find three printable multiplication tables on this page:
Web Kids Can Test Their Knowledge Of Multiplication Tables By Writing In The Multiples Of Each Number To Fill In A Blank Multiplication Table.
Black on white with rows and columns filled for a basic multiplication reference chart. The free downloadable pdf includes a black and white: For more ideas see printable paper and math drills and
math problems generator. More sizes of multiplication tables looking for a smaller or larger multiplication table?
This Worksheets Are A Very Useful Tool To Improve Students Skill On Printable Subjects.
Feel free to color it to memorize more easily. Fill in the missing numbers multiplication chart worksheets Download your free printable multiplication chart by selecting either Web practice
multiplication with this learning mat.
For Example, 1 X 2 = 2, If You Add 1 + 1 = 2.
There is something for everyone! Web free printable multiplication charts (times tables) available in pdf format. Web get a free printable multiplication chart pdf for your class! These table charts
are suitable for the kids from the 1st standard to the 5th standard.
To Get A Multiplication Table Of 2, You Can Double The Number Of Multiples.
Chart, gray, and blank table. Use these colorful multiplication tables to help your child build confidence while mastering the multiplication facts. Simple tips and tricks for learning tables from 1
to 12 a few simple tips and tricks to learn multiplication tables from 1 to 12 are mentioned below. November 25, 2022 1 comment.
|
{"url":"https://dl-uk.apowersoft.com/en/multiplication-table-printable-1-12.html","timestamp":"2024-11-13T03:05:05Z","content_type":"text/html","content_length":"30372","record_id":"<urn:uuid:21eb87fc-27f6-417d-89cd-70934f4cec73>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00067.warc.gz"}
|
Understanding log warnings for MIP problem
While trying to solve an MIP model, I tried to look into the log file. I read the documentation https://www.gurobi.com/documentation/9.1/refman/mip_logging.html for the log file but I still don't
understand or know how to deal with the warnings I got in my log file below.
1) Warning: Model contains large matrix coefficients
Warning: Model contains large rhs
Is this caused by the fact that my constraint coefficients span a large interval? And how can I resolve this issue, other than setting the NumericFocus parameter, which slows the model down?
2) Warning: 1 variables dropped from basis
What does it mean if a variable drops from basis, and how can I know which one and avoid this from happening?
3) Warning: max constraint violation (2.8125e-01) exceeds tolerance
Warning: max bound violation (1.5259e-05) exceeds tolerance
(possibly due to large matrix coefficients)
Thread count: 12 physical cores, 24 logical processors, using up to 24 threads
Optimize a model with 30649 rows, 10033 columns and 27365 nonzeros
Model fingerprint: 0x43ceffa0
Variable types: 9925 continuous, 108 integer (108 binary)
Coefficient statistics:
Matrix range [1e+00, 1e+11]
Objective range [1e+00, 1e+00]
Bounds range [1e+00, 5e+03]
RHS range [5e+00, 1e+14]
Warning: Model contains large matrix coefficients
Warning: Model contains large rhs
Consider reformulating model or setting NumericFocus parameter
to avoid numerical issues.
Presolve removed 25473 rows and 129 columns
Presolve time: 0.03s
Presolved: 5176 rows, 9904 columns, 26808 nonzeros
Variable types: 9808 continuous, 96 integer (96 binary)
Warning: 1 variables dropped from basis
Root relaxation: objective 2.697254e+15, 6906 iterations, 0.44 seconds (0.44 work units)
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 2.6973e+15 0 44 - 2.6973e+15 - - 1s
0 0 2.6973e+15 0 11 - 2.6973e+15 - - 1s
0 0 2.6973e+15 0 8 - 2.6973e+15 - - 1s
0 0 2.6973e+15 0 8 - 2.6973e+15 - - 1s
0 0 2.6973e+15 0 7 - 2.6973e+15 - - 1s
0 0 2.6973e+15 0 7 - 2.6973e+15 - - 1s
0 0 2.6973e+15 0 7 - 2.6973e+15 - - 1s
0 0 2.6973e+15 0 7 - 2.6973e+15 - - 1s
0 0 2.6973e+15 0 7 - 2.6973e+15 - - 1s
0 0 2.6973e+15 0 7 - 2.6973e+15 - - 1s
0 2 2.6973e+15 0 7 - 2.6973e+15 - - 1s
189 221 infeasible 12 - 2.6973e+15 - 282 8s
234 300 2.6973e+15 13 11 - 2.6973e+15 - 369 10s
452 671 2.6973e+15 36 15 - 2.6973e+15 - 557 19s
709 673 2.6973e+15 24 79 - 2.6973e+15 - 562 21s
711 675 2.6973e+15 46 75 - 2.6973e+15 - 561 26s
1030 944 2.6973e+15 19 66 - 2.6973e+15 - 29.3 30s
* 1116 965 35 2.697254e+15 2.6973e+15 0.00% 42.4 31s
Explored 1171 nodes (471655 simplex iterations) in 31.16 seconds (27.94 work units)
Thread count was 24 (of 24 available processors)
Solution count 1: 2.69725e+15
Optimal solution found (tolerance 1.00e-04)
Warning: max constraint violation (2.8125e-01) exceeds tolerance
Warning: max bound violation (1.5259e-05) exceeds tolerance
(possibly due to large matrix coefficients)
Best objective 2.697253636296e+15, best bound 2.697253636296e+15, gap 0.0000%
If you have a documentation or a link for Gurobi warnings or have some insight, I would appreciate it.
Thank you.
• Hello Amal,
The large matrix and RHS coefficients in your model appear to be causing numerical issues within the solution process. The other warnings are symptoms of such numerical issues.
Ideally, all ranges should have a width of 1e4 or less, and we should definitely worry when any of them is bigger than 1e8. In your model the matrix coefficient and RHS ranges of [1e+00, 1e+11]
and [5e+00, 1e+14] are significantly larger than this rule of thumb. For more information, you can read our Guidelines for Numerical Issues. I recommend re-formulating/re-scaling your model to
avoid such large ranges and solving it again.
Best regards,
• Hi Dan,
thank you for your answer.
What does it mean if the model achieves an optimal objective value even though those warnings exist? That's what I find difficult to understand.
Also, I tried to change some parameters as mentioned in the documentation but still faced the same warnings. I then tried decreasing the big-M value in my model and no longer got the warnings, though the result seems a little confusing, as if it's no longer optimising for that value of M. Any insight into how I can maintain a small coefficient range while using a big-M value in
my model?
Thank you
• Hello Amal,
As discussed in the Guidelines for Numerical Issues (see here), Gurobi relies on floating-point numbers for representing data and performing computations. Floating-point numbers are amenable to
computation on standard computer hardware, but only store a finite approximation to each number. This necessitates the use of numerical tolerances. For example, a solution is considered feasible
even if it violates some constraints by a very small amount (which is controlled by the parameter FeasibilityTol with a default value of 1e-6).
So how could a solution that violates this tolerance be returned? Before solving a MIP, Gurobi applies a presolve routine to compute an equivalent model with reduced size and improved numerical
properties – the presolved model is then solved, and the solution is translated back to the original model. In some cases, a solution satisfies the numerical tolerances for the presolved model,
but when translated back to the original formulation, the violations may be larger, resulting in warning messages such as those you have encountered. This is often the result of numerical issues,
which can be due to large coefficients.
This behavior may be resolved by reformulating your model to avoid having a very large “big M.” To do this, you could try using your knowledge of the problem to choose an M as small as possible,
while still having a correct model. Or you may find it helpful to scale some variables and constraints by changing the units (e.g., instead of a variable representing the number of units
produced, use a variable representing thousands of units produced). Another option could be to use the Gurobi indicator constraint as an alternative to big-M.
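To see concretely why a smaller M helps, here is a back-of-the-envelope illustration in plain Python (not Gurobi code; the 1e-5 figure is Gurobi's default integrality tolerance, IntFeasTol):

```python
# In a big-M constraint x <= M*y with binary y, the solver accepts y as
# "equal to 0" whenever it is within the integrality tolerance of 0.
# That tiny slack in y is amplified by M, letting x drift far from 0.
int_feas_tol = 1e-5  # Gurobi's default IntFeasTol
for M in (1e4, 1e11, 1e14):
    x_allowed = M * int_feas_tol
    print(f"M = {M:.0e}: x can reach {x_allowed:g} while y is treated as 0")
```

With M = 1e4 the leak is only 0.1; with M = 1e14 it is on the order of a billion, which is exactly the kind of constraint violation reported in the log above.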
I hope this helps,
• Thank you Dan for your answer. I am looking into the different explanations and suggestions you provided.
Another question that is bugging me is that in my case the objective range is equal to 1. Do you have a possible explanation for that? With my limited knowledge, I think it should be an interval.
Thank you
Objective range [1e+00, 1e+00]
• Hello Amal,
Sure, this statistic reports the range of the absolute values of the nonzero coefficients.
Objective range [1e+00, 1e+00]
So the above range indicates that the objective coefficients had values of 1, 0, or -1.
• Dan Steffy, I'm having the same problem mentioned above "Warning: 1 variables dropped from basis." Can you clarify what it means?
• Hello Gabriel,
Warning: ... variables dropped from basis.
The above warning is typically a sign that the solver is experiencing numerical difficulties. It occurs when a basis encountered in the simplex algorithm is detected to be singular, and the
solver remedies this by dropping some variables from the basis and forming a different basis. This may be done by replacing some structural variables in the basis with slack variables.
If you are experiencing the above warning message, please consult the Guidelines for Numerical Issues or the suggestions earlier in this discussion for guidance on improving the numerical
properties of your model.
|
{"url":"https://support.gurobi.com/hc/en-us/community/posts/9148564677521-Understanding-log-warnings-for-MIP-problem?sort_by=votes","timestamp":"2024-11-13T01:11:40Z","content_type":"text/html","content_length":"71990","record_id":"<urn:uuid:bb9e70af-05be-47b1-be19-79698b5b0830>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00047.warc.gz"}
|
Optimal prophylactic vaccination in segregated populations: When can we improve on the equalising strategy? - Prism
Keeling, M.J. and Ross, J.V. (2015) Optimal prophylactic vaccination in segregated populations: When can we improve on the equalising strategy? Epidemics 11, 7-13. doi:10.1016/j.epidem.2015.01.002
One of the fundamental problems in public health is how to allocate a limited set of resources to have the greatest benefit on the health of the population. This often leads to difficult value
judgements about budget allocations. However, one scenario that is directly amenable to mathematical analysis is the optimal allocation of a finite stockpile of vaccine when the population is
partitioned into many relatively small cliques, often conceptualised as households. For the case of SIR (susceptible–infectious–recovered) dynamics, analysis and numerics have supported the
conjecture that an equalising strategy (which leaves equal numbers of susceptible individuals in each household) is optimal under certain conditions. However, there exists evidence that some of these
conditions may be invalid or unsuitable in many situations. Here we consider how well the equalising strategy performs in a range of other scenarios that deviate from the idealised household model.
We find that in general the equalising strategy often performs optimally, even far from the idealised case. However, when considering large subpopulation sizes, frequency-dependent transmission and
intermediate levels of vaccination, optimality is often achieved through more heterogeneous vaccination strategies.
|
{"url":"https://prism.edu.au/publications/599/","timestamp":"2024-11-08T11:41:27Z","content_type":"text/html","content_length":"30600","record_id":"<urn:uuid:4d337d05-a098-4996-8771-42be587bd079>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00608.warc.gz"}
|
Kim-Chuan Toh
1. S.L. Hu, D.F. Sun, and K.C. Toh, Quantifying low rank approximations of third order symmetric tensors,
Mathematical Programming, in print. arXiv:2307.10855
2. L. Liang, D.F. Sun, and K.C. Toh, A squared smoothing Newton method for semidefinite programming,
Mathematics of Operations Research, in print. arXiv:2303.05825
3. N.C. Xiao, X.Y. Hu, X. Liu, and K.C. Toh, CDOpt: A python package for a class of Riemannian optimization,
Mathematical Programming Computation, in print. arXiv:2212.02698
4. M.X. Lin, D.F. Sun, K.C. Toh and C.J. Wang, Estimation of sparse Gaussian graphical models with hidden clustering structure,
J. Machine Learning Research, 25 (2024), Article 256. arXiv:2004:08115
5. D. Hou, L. Liang, and K.C. Toh, A sparse smoothing Newton method for solving discrete optimal transport problems,
ACM Transactions on Mathematical Software, in print. arXiv:2311.06448
6. D. Wu, C.J. Yu, H.L. Wang, Y.Q. Bai, K.L. Teo, and K.C. Toh, Iterative Chebyshev approximation method for optimal control problems,
ISA Transactions, in print.
7. W.J. Li, W. Bian, K.C. Toh, On solving a rank regularized minimization problem via equivalent factorized column-sparse regularized models,
Mathematical Programming, in print. arXiv:2308.16690
8. L. Yang, L. Liang, H.T.M. Chu, and K.C. Toh, A corrected inexact proximal augmented Lagrangian method with a relative error criterion for a class of group-quadratic regularized optimal transport
J. Scientific Computing, 99 (2024), Article 79. arXiv:2311.01976
9. T.Y. Tang and K.C. Toh, A feasible method for general convex low-rank SDP problems,
SIAM J. Optimization, 34 (2024), pp. 2169–2200. arXiv:2312.07908
10. P. Zhou, X.Y. Xie, Z.C. Lin, K.C. Toh, and S.C. Yan, Win: Weight-Decay-Integrated Nesterov Acceleration for Faster Network Training,
J. Machine Learning Research, 25 (2024), Article 83.
11. X.Y. Hu, N.C. Xiao, X. Liu, and K.C. Toh, A constraint dissolving approach for nonsmooth optimization over the Stiefel manifold,
IMA Journal of Numerical Analysis, in print. arXiv:2205.10500
12. Y. J. Zhang, K.C. Toh, and D.F. Sun, Learning graph Laplacian with MCP,
Optimization Methods and Software, 39 (2024), pp. 569–600. arXiv:2010.11559
13. M.X. Lin, Y.C. Yuan, D.F. Sun, and K.C. Toh,
A highly efficient algorithm for solving exclusive Lasso problems,
Optimization Methods and Software, 39 (2024), pp. 489–518. arXiv:2306:14196.
The above is a revised version of “Adaptive sieving with PPDNA: Generating solution paths of exclusive Lasso models”, arXiv:2009:08719
14. T.Y. Tang, K.C. Toh, N.C. Xiao, and Y.Y. Ye, A Riemannian dimension-reduced second order method with application in sensor network localization,
SIAM J. Scientific Computing, 46 (2024), pp. A2025–A2046. arXiv:2304.10092
15. N.C. Xiao, X.Y. Hu, X. Liu, and K.C. Toh, Adam-family methods for nonsmooth optimization with convergence guarantees,
J. Machine Learning Research, 25 (2024), Article 48. arXiv:2305.03938
16. Y.J. Zhang, Y. Cui, B. Sen, and K.C. Toh, On efficient and scalable computation of the nonparametric maximum likelihood estimator in mixture models,
J. Machine Learning Research, 25 (2024), Article 8. arXiv:2208.07514
17. T.Y. Tang, and K.C. Toh, Self-adaptive ADMM for semi-strongly convex problems,
Mathematical Programming Computation, 16 (2024), pp. 113-150. arXiv:2310.00376
18. T.Y. Tang, and K.C. Toh, A feasible method for solving an SDP relaxation of the quadratic knapsack problem, Mathematics of Operations Research, 49 (2024), pp. 19-39. arXiv:2303.06599
19. T.Y. Tang, and K.C. Toh, Solving graph equipartition SDPs on an algebraic variety,
Mathematical Programming, 204 (2024), pp. 299-347. arXiv:2112.04256
20. N.C. Xiao, X. Liu, and K.C. Toh, Dissolving constraints for Riemannian optimization,
Mathematics of Operations Research, 49 (2024), pp. 366-397. arXiv:2203.10319
21. K.Y. Ding, X.Y. Lam, and K.C. Toh, On proximal augmented Lagrangian based decomposition methods for dual block-angular convex composite programming problems,
Computational Optimization and Applications, 86 (2023), pp. 117–161. arXiv:2303.06893
22. X.Y. Hu, N.C. Xiao, X. Liu, and K.C. Toh, An improved unconstrained approach for bilevel optimization,
SIAM J. Optimization, 33 (2023), pp. 2801-2829. arXiv:2208.00732
23. H.T. Chu, L. Liang, K.C. Toh, and L. Yang, An efficient implementable inexact entropic proximal point algorithm for a class of linear programming problems,
Computational Optimization and Applications, 85 (2023), pp. 107–146. arXiv:2011.14312
24. H. Yang, L. Liang, L. Carlone, and K.C. Toh, An inexact projected gradient method with rounding and lifting by nonlinear programming for solving rank-one semidefinite relaxation of polynomial optimization problems,
Mathematical Programming, 201 (2023), pp. 409–472. arXiv:2105.14033
Solver available on GitHub.
25. H.T. Chu, K.C. Toh, and Y.J. Zhang, On regularized square-root regression problems: distributionally robust interpretation and fast computations,
J. Machine Learning Research, 23 (2022), article 308. arXiv:2109.03632
26. Y.C. Yuan, T.H. Chang, D.F. Sun, and K.C. Toh, A dimension reduction technique for large-scale structured sparse optimization problems with application to convex clustering,
SIAM J. Optimization, 32 (2022), pp. 2294-2318. arXiv:2108.07462
27. L. Liang, X.D. Li, D.F. Sun, and K.C. Toh, QPPAL: A two-phase proximal augmented Lagrangian method for high dimensional convex quadratic programming problems,
ACM Transactions on Mathematical Software, 48 (2022), Article 33. arXiv:2103.13108
28. L. Yang and K.C. Toh, Bregman proximal point algorithm revisited: A new inexact version and its inertial variant,
SIAM J. Optimization, 32 (2022), pp. 1523-1554. arXiv:2105.10370
29. W.J. Li, W. Bian, K.C. Toh, DC algorithms for a class of sparse group L0 regularized optimization problems,
SIAM J. Optimization, 32 (2022), pp. 1614-1641. arXiv:2109.05251
30. M.X. Lin, D.F. Sun, and K.C. Toh, An augmented Lagrangian method with constraint generations for shape-constrained convex regression problems,
Mathematical Programming Computation, 14 (2022), pp. 223–270. Springer Nature SharedIt
arXiv:2012.04862, old version: arXiv:2002.11410
31. Y. Cui, L. Liang, D.F. Sun, and K.C. Toh,
On degenerate doubly nonnegative projection problems,
Mathematics of Operations Research, 47 (2022), pp. 2219-2239. arXiv:2009.11272, DOI
32. S.Y. Kim, M. Kojima, and K.C. Toh,
Doubly nonnegative relaxations for quadratic and polynomial optimization problems with binary and box constraints,
Mathematical Programming, 193 (2022), pp. 761–787. Optimization Online, DOI
33. T.-D. Quoc, L. Liang, K.C. Toh,
A new homotopy proximal variable-metric framework for composite convex minimization,
Mathematics of Operations Research, 47 (2022), pp. 508–539. arXiv:1812.05243, DOI
34. R. Wang, N.H. Xiu, and K.C. Toh,
Subspace quadratic regularization method for group sparse multinomial logistic regression,
Computational Optimization and Applications, 79 (2021), pp. 531–559.
35. L. Liang, D.F. Sun, and K.C. Toh,
An inexact augmented Lagrangian method for second-order cone programming with applications,
SIAM J. Optimization, 31 (2021), pp. 1748–1773. arXiv:2010.08772
36. N. Zhang, Y.J. Zhang, D.F. Sun, and K.C. Toh,
An efficient linearly convergent regularized proximal point algorithm for fused multiple graphical Lasso problems,
SIAM J. Mathematics of Data Science, 3 (2021), pp. 524–543. arXiv:1902.06952
37. L. Yang, J. Li, D.F. Sun, and K.C. Toh,
A fast globally linearly convergent algorithm for the computation of Wasserstein barycenters,
J. Machine Learning Research, 22 (2021), article 21. arXiv:1809.04249
38. X.Y. Lam, D.F Sun, and K.C. Toh,
A semi-proximal augmented Lagrangian based decomposition method for primal block angular convex composite quadratic conic programming problems,
INFORMS J. Optimization, 3 (2021), pp. 254–277. arXiv:1812.04941
39. S.Y. Kim, M. Kojima, and K.C. Toh,
A Newton-bracketing method for a simple conic optimization problem,
Optimization Methods and Software, 36 (2021), pp. 371–388. arXiv:1905.12840.
40. L. Chen, X.D. Li, D.F. Sun, and K.C. Toh,
On the equivalence of inexact proximal ALM and ADMM for a class of convex composite programming,
Mathematical Programming, 185 (2021), pp. 111–161. arXiv:1803.10803
41. D.F. Sun, K.C. Toh, and Y.C. Yuan,
Convex clustering: model, theoretical guarantee and efficient algorithm,
J. Machine Learning Research, 22 (2021), Article 9. arXiv:1810.0267
42. P.P. Tang, C.J. Wang, D.F. Sun, and K.C. Toh,
A sparse semismooth Newton based proximal majorization-minimization algorithm for nonconvex square-root-loss regression problems,
J. Machine Learning Research, 21 (2020), Article 226. arXiv:1903.11460
43. X.D. Li, D.F. Sun, and K.C. Toh,
An asymptotically superlinearly convergent semismooth Newton augmented Lagrangian method for Linear Programming,
SIAM J. Optimization, 30 (2020), pp. 2410–2440. arXiv:1903.09546.
44. Y.J. Zhang, N. Zhang, D.F. Sun, and K.C. Toh,
A proximal point dual Newton algorithm for solving group graphical Lasso problems,
SIAM J. Optimization, 30 (2020), pp. 2197–2220. arXiv:1906.04647.
45. S.Y. Kim, M. Kojima, and K.C. Toh,
A geometrical analysis of a class of nonconvex conic programs for convex conic reformulations of quadratic and polynomial optimization problems,
SIAM J. Optimization, 30 (2020), pp. 1251–1273. arXiv:1901.02179.
46. S.Y. Kim, M. Kojima, and K.C. Toh,
Doubly nonnegative relaxations are equivalent to completely positive reformulations of quadratic optimization problems with block-clique graph structures,
J. Global Optimization, 77 (2020), pp. 513–541. arXiv:1903.07325.
47. C. Ding, D.F. Sun, J. Sun, and K.C. Toh,
Spectral operators of matrices: semismoothness and characterizations of the generalized Jacobian,
SIAM J. Optimization, 30 (2020), pp. 630–659. arXiv:1810.09856.
48. X.D. Li, D.F. Sun and K.C. Toh,
On the efficient computation of a generalized Jacobian of the projector over the Birkhoff polytope,
Mathematical Programming, 179 (2020), pp. 419–446. arXiv:1702.05934. Springer Nature SharedIt.
49. Y.J. Zhang, N. Zhang, D.F. Sun and K.C. Toh,
An efficient Hessian based algorithm for solving large-scale sparse group Lasso problems,
Mathematical Programming, 179 (2020), pp. 223–263. arXiv:1712.05910. Springer Nature SharedIt.
50. D.F. Sun, K.C. Toh, Y.C. Yuan, and X.Y. Zhao,
SDPNAL+: A Matlab software for semidefinite programming with bound constraints (version 1.0),
Optimization Methods and Software, 35 (2020), 87–115. arXiv:1710.10604.
51. S.L. Hu, D.F. Sun, and K.C. Toh,
Best nonnegative rank-one approximations of tensors,
SIAM J. Matrix Analysis and Applications, 40 (2019), pp. 1527–1554. arXiv:1810.13372.
52. Y. Cui, D.F. Sun, and K.C. Toh,
Computing the best approximation over the intersection of a polyhedral set and the doubly nonnegative cone,
SIAM J. Optimization, 29 (2019), pp. 2785–2813. arXiv:1803.06566.
53. Z.Y. Lou, D.F. Sun, K.C. Toh, and N.H. Xiu,
Solving the OSCAR and SLOPE models using a semismooth Newton-based augmented Lagrangian method,
J. Machine Learning Research, 20 (2019), Article 106. arXiv:1803.10740.
54. M.X. Lin, Y.J. Liu, D.F. Sun, and K.C. Toh,
Efficient sparse semismooth Newton methods for the clustered Lasso problem,
SIAM J. Optimization, 29 (2019), pp. 2026–2052. arXiv:1808.07181.
55. L. Chen, D.F. Sun, K.C. Toh, and N. Zhang,
A unified algorithmic framework of symmetric Gauss-Seidel decomposition based proximal ADMMs for convex composite programming,
J. Computational Mathematics, 37 (2019), pp. 739–757. arXiv:1812.06579.
56. N. Ito, S. Kim, M. Kojima, A. Takeda, and K.C. Toh,
BBCPOP: A sparse doubly nonnegative relaxation of polynomial optimization problems with binary, box and complementarity constraints,
ACM Transactions on Mathematical Software, 45 (2019), Article 34.
arXiv:1804.00761. BBCPOP Matlab Software.
Valid lower bounds for large QAPs computed by Hans Mittelmann using BBCPOP.
57. N. Arima, S.Y. Kim, M. Kojima, and K.C. Toh,
Lagrangian-conic relaxations, Part II: Applications to polynomial optimization problems,
Pacific J. Optimization, 15 (2019), pp. 415–439. Optimization Online.
58. L. Chen, D.F. Sun and K.C. Toh,
Some problems on the Gauss-Seidel iteration method in degenerate cases (in Chinese)
Journal On Numerical Methods and Computer Applications, 40 (2019), pp. 98–110.
59. Y. Cui, D.F. Sun and K.C. Toh,
On the R-superlinear convergence of the KKT residuals generated by the augmented Lagrangian method for convex composite conic programming,
Mathematical Programming, 178 (2019), pp. 381–415. arXiv:1706.08800. Springer Nature SharedIt.
60. X.D. Li, D.F. Sun and K.C. Toh,
A block symmetric Gauss-Seidel decomposition theorem for convex composite quadratic programming and its applications,
Mathematical Programming, 175 (2019), pp. 395–418. arXiv:1703.06629. Springer Nature SharedIt.
61. N. Ito, S. Kim, M. Kojima, A. Takeda, and K.C. Toh,
Equivalences and differences in conic relaxations of combinatorial quadratic optimization problems,
J. Global Optimization, 72 (2018), pp. 619–653. Optimization Online. Springer Nature SharedIt.
62. X.D. Li, D.F. Sun and K.C. Toh,
On efficiently solving the subproblems of a level-set method for fused lasso problems,
SIAM J. Optimization, 28 (2018), pp. 1842–1866.
arXiv:1706.08732. Detailed computational results in the paper.
63. X.D. Li, D.F. Sun, and K.C. Toh,
QSDPNAL: A two-phase augmented Lagrangian method for convex quadratic semidefinite programming,
Mathematical Programming Computation, 10 (2018), pp. 703–743. arXiv:1512.08872. Springer Nature SharedIt.
64. K. Natarajan, D.J. Shi, and K.C. Toh,
Bounds for random binary quadratic programs,
SIAM J. Optimization, 28 (2018), pp. 671–692.
65. X.D. Li, D.F. Sun, and K.C. Toh,
A highly efficient semismooth Newton augmented Lagrangian method for solving Lasso problems,
SIAM J. Optimization, 28 (2018), pp. 433–458. arXiv:1607.05428.
66. Z.W. Li, L.F. Cheong, S.G. Yang, and K.C. Toh,
Simultaneous clustering and model selection: algorithm, theory and applications,
IEEE Transactions on Pattern Analysis and Machine Intelligence, 40 (2018), pp. 1964–1978.
67. X.Y. Lam, J.S. Marron, D.F. Sun, and K.C. Toh,
Fast algorithms for large scale generalized distance weighted discrimination,
J. Computational and Graphical Statistics, 27 (2018), pp. 368–379. arXiv:1604.05473.
R package. Matlab package
68. T. Weisser, J.B. Lasserre, and K.C. Toh,
A bounded degree SOS hierarchy for large scale polynomial optimization with sparsity,
Mathematical Programming Computation, 10 (2018), pp. 1–32. arXiv:1607.01151. Springer Nature SharedIt.
69. C. Ding, D.F. Sun, J. Sun, and K.C. Toh,
Spectral operators of matrices,
Mathematical Programming, 168 (2018), pp. 509–531. arXiv:1401.2269.
70. Ethan Fang, H. Liu, K.C. Toh, W.-X. Zhou,
Max-norm optimization for robust matrix recovery,
Mathematical Programming, 167 (2018), pp. 5–35. Optimization Online. Springer Nature SharedIt.
71. N. Arima, S.Y. Kim, M. Kojima, and K.C. Toh,
Lagrangian-conic relaxations, Part I: A unified framework and its applications to quadratic optimization problems,
Pacific J. Optimization, 14 (2018), pp.161–192. Optimization Online.
72. N. Ito, A. Takeda, and K.C. Toh,
A unified formulation and fast accelerated proximal gradient method for classification,
J. Machine Learning Research, 18 (2017), Article 16.
73. N. Arima, S.Y. Kim, M. Kojima, and K.C. Toh,
A robust Lagrangian-DNN method for a class of quadratic optimization problems,
Computational Optimization and Applications, 66 (2017), pp. 453–479. Optimization Online.
74. L. Chen, D.F. Sun, and K.C. Toh,
A note on the convergence of ADMM for linearly constrained convex optimization problems,
Computational Optimization and Applications, 66 (2017), pp. 327–343. arXiv:1507.02051
75. J.B. Lasserre, K.C. Toh, and S.G. Yang,
A bounded-SOS-hierarchy for polynomial optimization,
EURO J. Computational Optimization, 5 (2017), pp. 87–117. arXiv:1501.06126.
76. L. Chen, D.F. Sun, and K.C. Toh,
An efficient inexact symmetric Gauss-Seidel based majorized ADMM for high-dimensional convex composite conic programming,
Mathematical Programming, 161 (2017), pp. 237–270. arXiv:1506.00741. Springer Nature SharedIt.
77. D.F. Sun, K.C. Toh, and L.Q. Yang,
An efficient inexact ABCD method for least squares semidefinite programming,
SIAM J. Optimization, 26 (2016), pp. 1072–1100. arXiv:1505.04278.
Detailed computational results for over 600 problems tested in the paper.
78. Y. Cui, X.D. Li, D.F. Sun and K.C. Toh,
On the convergence properties of a majorized ADMM for linearly constrained convex optimization problems with coupled objective functions,
J. Optimization Theory and Applications, 169 (2016), pp. 1013–1041. arXiv:1502.00098. Springer Nature SharedIt
79. M. Li, D.F. Sun, and K.C. Toh,
A majorized ADMM with indefinite proximal terms for linearly constrained convex composite optimization,
SIAM J. Optimization, 26 (2016), pp. 922–950. arXiv:1412.1911.
80. S.Y. Kim, M. Kojima, and K.C. Toh,
A Lagrangian-DNN relaxation: a fast method for computing tight lower bounds for a class of quadratic optimization problems,
Mathematical Programming, 156 (2016), pp. 161–187.
81. C.H. Chen, Y.J. Liu, D.F. Sun, and K.C. Toh,
A semismooth Newton-CG dual proximal point algorithm for spectral norm approximation problems,
Mathematical Programming, 155 (2016), pp. 435–470.
82. X.D. Li, D.F. Sun and K.C. Toh,
A Schur complement based semi-proximal ADMM for convex quadratic conic programming and extensions,
Mathematical Programming, 155 (2016), pp. 333–373. arXiv:1409.2679.
83. L.Q. Yang, D.F. Sun, and K.C. Toh,
SDPNAL+: a majorized semismooth Newton-CG augmented Lagrangian method for semidefinite programming with nonnegative constraints,
Mathematical Programming Computation, 7 (2015), pp. 331-366. arXiv:1406.0942.
More recent computational results (computed in Dec 2017).
Numerical experiments on a variety of large scale SDPs with the matrix dimension n up to 9,261 and the number of equality constraints m up to 12,326,390 show that the proposed method is very
efficient on certain large SDPs. We are also able to solve the SDP problem fap36 (with n=4,110 and m=1,154,467) in the Seventh DIMACS Implementation Challenge much more efficiently (in 23 hours
in 2015) and accurately than previous attempts. The approximate optimal objective value we obtained for fap36 is 69.85, with the corresponding solution having relative primal and dual
infeasibilities, and complementarity gap ⟨X,S⟩ all less than 1e-6.
84. D.F. Sun, K.C. Toh and L.Q. Yang,
A convergent 3-block semi-proximal alternating direction method of multipliers for conic programming with 4-type constraints,
SIAM J. Optimization, 25 (2015), pp. 882–915. arXiv:1404.5378.
Detailed computational results for over 400 problems tested in the paper.
Supplementary note: more detailed comparison between the performance of our algorithm and various variants of ADMMs.
85. M. Li, D.F. Sun, and K.C. Toh,
A convergent 3-block semi-proximal ADMM for convex minimization with one strongly convex block,
Asia Pacific J. Operational Research, 32 (2015), 1550024. arXiv:1410.7933.
86. Y.X. Wang, C.M. Lee, L.F. Cheong, and K.C. Toh,
Practical matrix completion and corruption recovery using proximal alternating robust subspace minimization,
International J. of Computer Vision, 111 (2015), pp. 315–344. arXiv:1309.1539.
87. C. Tang, K.K. Phoon, and K.C. Toh,
Effect of footing width on Ny and failure envelope of eccentrically and obliquely loaded strip footings on sand,
Canadian Geotechnical Journal, 52 (2015), pp. 694–707.
88. J. Peng, T. Zhu, H. Luo, and K.C. Toh,
Semidefinite relaxation of quadratic assignment problems based on nonredundant matrix splitting,
Computational Optimization and Applications, 60 (2015), pp. 171–198.
89. K.F. Jiang, D.F. Sun, and K.C. Toh,
A partial proximal point algorithm for nuclear norm regularized matrix least squares problems,
Mathematical Programming Computation, 6 (2014), pp. 281–325.
90. Z. Gong, Z.W. Shen, and K.C. Toh,
Image restoration with mixed or unknown noises,
Multiscale Modeling and Simulation, 12 (2014), pp. 458–487.
91. B. Wu, C. Ding, D.F. Sun, and K.C. Toh,
On the Moreau-Yosida regularization of the vector k-norm related functions,
SIAM J. Optimization, 24 (2014), pp. 766–794.
92. K. Natarajan, D.J. Shi, and K.C. Toh,
A probabilistic model for minimax regret in combinatorial optimization,
Operations Research, 62 (2014), pp. 160–181.
93. C. Ding, D.F Sun and K.C. Toh,
An introduction to a class of matrix cone programming,
Mathematical Programming, 144 (2014), pp. 141–179.
94. C. Tang, K.K. Phoon, and K.C. Toh,
Lower bound limit analysis for seismic passive earth pressure on rigid walls,
International J. of Geomechanics, 14 (2014), 04014022.
95. C. Tang, K.C. Toh, and K.K. Phoon,
Axisymmetric lower bound limit analysis using finite elements and second-order cone programming,
J. of Engineering Mechanics, 140 (2014), pp. 268–278.
96. Z.Z. Zhang, G.L. Li, K.C. Toh, and W.K. Sung,
3D chromosome modeling with semi-definite programming and Hi-C data,
J. Computational Biology, 20 (2013), pp. 831–846.
97. J.F. Yang, D.F. Sun, and K.C. Toh,
A proximal point algorithm for log-determinant optimization with group Lasso regularization,
SIAM J. Optimization, 23 (2013), pp. 857–893.
98. X.V. Doan, K.C. Toh, and S. Vavasis,
A proximal point algorithm for sequential feature extraction applications,
SIAM J. Scientific Computing, 35 (2013), pp. 517–540.
99. T.H.H. Tran, K.C. Toh, and K.K. Phoon,
Preconditioned IDR(s) iterative solver for non-symmetric linear system associated with FEM analysis of shallow foundation,
International J. for Numerical and Analytical Methods in Geomechanics, 37 (2013), pp. 2972–2986.
100. K. B. Chaudhary, K.K. Phoon, and K.C. Toh,
Inexact block diagonal preconditioners to mitigate the effects of relative differences in material stiffnesses,
International J. Geomechanics, 13 (2013), pp. 273–291.
101. K. B. Chaudhary, K.K. Phoon, and K.C. Toh,
Effective block diagonal preconditioners for Biot’s consolidation equations in piled-raft foundations,
International J. Numerical and Analytical Methods in Geomechanics, 37 (2013), pp. 871–892.
102. K.F. Jiang, D.F. Sun, and K.C. Toh,
An inexact accelerated proximal gradient method for large scale linearly constrained convex SDP,
SIAM J. Optimization, 22 (2012), pp. 1042–1064.
103. Y.J. Liu, D.F. Sun, and K.C. Toh,
An implementable proximal point algorithmic framework for nuclear norm minimization,
Mathematical Programming, 133 (2012), pp. 399–436.
104. X.Y. Zhao, and K.C. Toh,
Infeasible potential reduction algorithms for semidefinite programming,
Pacific J. Optimization, 8 (2012), pp. 725–753.
105. X. Chen, K.K. Phoon, and K.C. Toh,
Performance of zero-level fill-in preconditioning techniques for iterative solutions in geotechnical applications,
International J. Geomechanics, 12 (2012), pp. 596–605.
106. Z. Shen, K.C. Toh, and S. Yun,
An accelerated proximal gradient algorithm for frame based image restoration via the balanced approach,
SIAM J. Imaging Sciences, 4 (2011), pp. 573–596.
107. S. Yun, P. Tseng, and K.C. Toh,
A block coordinate gradient descent method for regularized convex separable optimization and covariance selection,
Mathematical Programming, 129 (2011), pp. 331–355.
108. L. Li, and K.C. Toh,
A polynomial-time inexact primal-dual infeasible path-following algorithm for convex quadratic SDP,
Pacific J. Optimization, 7 (2011), pp. 43–61.
109. S. Yun, and K.C. Toh,
A coordinate gradient descent method for L1-regularized convex minimization,
Computational Optimization and Applications, 48 (2011), pp. 273–307.
Erratum: In Lemma 3.4, add Assumption 2 so that equation (22) is valid.
Before 2011
110. K.C. Toh, and S.W. Yun
An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems,
Pacific J. Optimization, 6 (2010), pp. 615–640.
Numerical results suggest that our algorithm is efficient and robust in solving large-scale random matrix completion problems. In particular, we are able to solve random matrix completion
problems with matrix dimensions up to $10^5$ each in less than 10 minutes on a modest PC.
111. Lu Li and K.C. Toh
An inexact interior point method for L1-regularized sparse covariance selection,
Mathematical Programming Computation, 2 (2010), pp. 291–315.
112. L. Li, and K.C. Toh,
A polynomial-time inexact interior-point method for convex quadratic symmetric cone programming,
J. Math-for-industry, 2 (2010), pp. 199–212.
113. X.-W. Liu, G.Y. Zhao, and K.C. Toh,
On the implementation of a log-barrier progressive hedging method for multistage stochastic programs,
J. of Computational and Applied Mathematics, 234 (2010), pp. 579–592.
114. C.J. Wang, D.F. Sun, and K.C. Toh,
Solving log-determinant optimization problems by a Newton-CG primal proximal point algorithm,
SIAM J. Optimization, 20 (2010), pp. 2994–3013.
115. X.Y. Zhao, D.F. Sun, and K.C. Toh,
A Newton-CG augmented Lagrangian method for semidefinite programming,
SIAM J. Optimization, 20 (2010), pp. 1737–1765.
Numerical experiments on a variety of large scale SDPs with the matrix dimension n up to 4,110 and the number of equality constraints m up to 2,156,544 show that the proposed method is very
efficient on certain large SDPs. We are also able to solve the SDP problem fap36 (with n = 4,110 and m = 1,154,467) in the Seventh DIMACS Implementation Challenge much more accurately than
previous attempts. The approximate optimal objective value we obtained for fap36 is 69.85, with the corresponding solution having relative primal and dual infeasibilities, and complementarity gap
(Tr(XS)) all less than 1e-6.
116. N.-H. Z. Leung and K.-C. Toh,
An SDP-based divide-and-conquer algorithm for large scale noisy anchor-free graph realization,
SIAM J. Scientific Computing, 31 (2010), pp. 4351–4372.
A movie showing how the divide-and-conquer algorithm computes the conformation of a protein molecule.
117. P. Biswas, K.C. Toh, and Y. Ye,
A distributed SDP approach for large scale noisy anchor-free graph realization with applications to molecular conformation,
SIAM J. Scientific Computing, 30 (2008), pp. 1251–1277.
118. K.C. Toh,
An inexact primal-dual path-following algorithm for convex quadratic SDP,
Mathematical Programming, 112 (2008), pp. 221–254.
119. K.C. Toh, and K.K. Phoon,
Comparison between iterative solution of symmetric and non-symmetric forms of Biot’s FEM equations using the generalized Jacobi preconditioner,
International J. for Numerical and Analytical Methods in Geomechanics, 32 (2008), pp. 1131–1146.
120. X. Chen, K.K. Phoon, and K.C. Toh,
Partitioned versus global Krylov subspace iterative methods for FE solution of 3-D Biot’s problem,
Computer Methods in Applied Mechanics and Engineering, 196 (2007), pp. 2737–2750.
121. J.S. Chai, and K.C. Toh,
Preconditioning and iterative solution of symmetric indefinite linear systems arising from interior point methods for linear programming,
Computational Optimization and Applications, 36 (2007), pp. 221–247.
122. K.C. Toh, R.H. Tutuncu, and M.J. Todd,
Inexact primal-dual path-following algorithms for a special class of convex quadratic SDP and related problems,
Pacific J. Optimization (special issue dedicated to Masakazu Kojima’s 60th birthday), 3 (2007), pp. 135–164.
123. R.M. Freund, F. Ordonez, and K.C. Toh,
Behavioral measures and their correlation with IPM iteration counts on semi-definite programming problems,
Mathematical Programming, 109 (2007), pp. 445–475.
124. Z. Cai and K.C. Toh,
Solving second order cone programming via the augmented systems,
SIAM J. Optimization, 17 (2006), pp. 711–737.
125. P. Biswas, T.C. Liang, K.C. Toh, T.C. Wang, and Y. Ye,
Semidefinite programming approaches for sensor network localization with noisy distance measurements,
IEEE Transactions on Automation Science and Engineering, regular paper, 3 (2006), pp. 360–371.
126. X. Chen, K.C. Toh, and K.K. Phoon,
A modified SSOR preconditioner for sparse symmetric indefinite linear systems of equations,
International J. Numerical Methods in Engineering, 65 (2006), pp. 785–807.
127. J.S. Chai and K.C. Toh,
Computation of condition numbers for linear programming problems using Pena’s method,
Optimization Methods and Software, 21 (2006), pp. 419–443.
128. G.L. Zhou, and K.C. Toh,
Superlinear convergence of a Newton-type algorithm for monotone equations,
J. Optimization Theory and Applications, 125 (2005), pp. 205–221.
129. G.L. Zhou, K.C. Toh, and J. Sun,
Efficient algorithms for the smallest enclosing ball problem,
Computational Optimization and Applications, 30 (2005), pp. 147–160.
130. K.K. Phoon, K.C. Toh, and X. Chen,
Block constrained versus generalized Jacobi preconditioners for iterative solution of large-scale Biot’s FEM equations,
Computers and Structures, 82 (2004), pp. 2401–2411.
131. K.C. Toh, K.K. Phoon, and S.H. Chan,
Block preconditioners for symmetric indefinite linear systems,
International J. Numerical Methods in Engineering, 60 (2004), pp. 1361–1381.
132. S. K. Chua, K. C. Toh and G. Y. Zhao,
An analytic center cutting plane method with deep cuts for semidefinite feasibility problems,
J. Optimization Theory and Applications, 123 (2004), pp. 291–318.
133. G.L. Zhou, K.C. Toh, and G.Y. Zhao,
Convergence analysis of an infeasible interior point algorithm based on a regularized central path for linear complementarity problems,
Computational Optimization and Applications, 27 (2004), pp. 269–283.
134. K. C. Toh,
Solving large scale semidefinite programs via an iterative solver on the augmented systems,
SIAM J. Optimization, 14 (2004), pp. 670–698.
135. G.L. Zhou, and K.C. Toh,
Polynomiality of an inexact infeasible interior point algorithm for semidefinite programming,
Mathematical Programming, 99 (2004), pp. 261–282.
136. K.K. Phoon, K.C.Toh, S.H. Chan, and F.H. Lee,
Fast iterative solution of large undrained soil-structure interaction problems,
International J. for Numerical and Analytical Methods in Geomechanics, 27 (2003), pp. 159–181.
137. G.L. Zhou, K.C. Toh, and D.F. Sun,
A globally and quadratically convergent algorithm for minimizing a sum of Euclidean norms,
J. Optimization Theory and Applications, 119 (2003), pp. 357–377.
138. R.H Tutuncu, K.C. Toh, and M.J. Todd,
Solving semidefinite-quadratic-linear programs using SDPT3,
Mathematical Programming, 95 (2003), pp. 189–217.
139. K.C. Toh, G.Y Zhao, and J. Sun,
A multiple-cut analytic center cutting plane method for semidefinite feasibility problems,
SIAM J. Optimization, 12 (2002), pp. 1126–1146.
140. J. Sun, K.C. Toh, and G.Y Zhao,
An analytic center cutting plane method for semidefinite feasibility problems,
Mathematics of Operations Research, 27 (2002), pp. 332–346.
141. K.C. Toh, and M. Kojima,
Solving some large scale semidefinite programs via the conjugate residual method,
SIAM J. Optimization, 12 (2002), pp. 669–691.
142. K.C. Toh,
A note on the calculation of step-lengths in interior-point methods for semidefinite programming,
Computational Optimization and Applications, 21 (2002), pp. 301–310.
143. K.K. Phoon, K.C. Toh, S.H. Chan, and F.H. Lee
An efficient diagonal preconditioner for finite element solution of Biot’s consolidation equations,
International J. Numerical Methods in Engineering, 55 (2002), pp. 377–400.
144. A. Ron, Z.W. Shen, and K.C. Toh,
Computing the Sobolev regularity of refinable functions by the Arnoldi method,
SIAM J. Matrix Analysis and Applications, 23 (2001), pp. 57–76.
145. K.C. Toh,
Some new search directions for primal-dual interior point methods in semidefinite programming,
SIAM J. Optimization, 11 (2000), pp. 223–242.
146. K.C. Toh, and L.N. Trefethen,
The Kreiss Matrix Theorem on a general complex domain,
SIAM J. Matrix Analysis and Applications, 21 (1999), pp. 145–165.
147. K.C. Toh, M.J. Todd, and R.H. Tutuncu,
SDPT3 — a Matlab software package for semidefinite programming,
Optimization Methods and Software, 11 (1999), pp. 545–581.
148. K.C. Toh,
Primal-dual path-following algorithms for determinant maximization problems with linear matrix inequalities,
Computational Optimization and Applications, 14 (1999), pp. 309–330.
149. T.A. Driscoll, K.C. Toh and L.N. Trefethen,
From potential theory to matrix iterations in six steps,
SIAM Review, 40 (1998), pp. 547-578.
150. M.J. Todd, K.C. Toh, and R.H. Tutuncu,
On the Nesterov-Todd direction in semidefinite programming,
SIAM J. of Optimization, 8 (1998), pp. 769–796.
151. K.C. Toh and L.N. Trefethen,
The Chebyshev Polynomials of a Matrix,
SIAM J. Matrix Analysis and Applications, 20 (1998), pp. 400-419.
152. K.C. Toh,
GMRES vs. ideal GMRES,
SIAM J. of Matrix Analysis and Applications, 18 (1997), pp. 30–36.
153. K.C. Toh and L.N. Trefethen,
Calculation of pseudospectra by the Arnoldi iteration,
SIAM J. of Scientific Computing, 17 (1996), pp. 1–15.
154. K.C. Toh and L.N. Trefethen,
Pseudozeros of polynomials and pseudospectra of companion matrices,
Numerische Mathematik, 68 (1994), pp. 403–425.
155. K.C. Toh and S. Mukherjee,
Hypersingular and finite part integrals in the boundary element method,
International J. of Solids and Structures, 31 (1994), pp. 2299–2312.
Refereed Conference Papers
1. Anh Duc Nguyen, Tuan Dung Nguyen, Quang Minh Nguyen, Hoang H Nguyen, Lam M. Nguyen, Kim-Chuan Toh, On Partial Optimal Transport: Revising the Infeasibility of Sinkhorn and Efficient Gradient
Method, Oral presentation, 38th AAAI Conference on Artificial Intelligence (AAAI-24), 2024. arXiv:2312.13970
2. Y.C. Yuan, D.F. Sun, and K.C. Toh, An efficient semismooth Newton based algorithm for convex clustering,
Oral presentation, International Conference on Machine Learning (ICML) 2018. arXiv:1802.07091.
3. Z.W. Li, S.G. Yang, L.-F. Cheong, and K.C. Toh, Simultaneous Clustering and Model Selection for Tensor Affinities,
Spotlight presentation, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
4. Z.Z. Zhang, G.L. Li, K.C. Toh, and W. Sung, Inference of spatial organizations of chromosomes using semidefinite embedding approach and Hi-C data,
RECOMB 2013, The 17th Annual International Conference on Research in Computational Molecular Biology, Beijing, China, April 7-10, 2013.
In “Research in Computational Molecular Biology”, Lecture Notes in Computer Science, Volume 7821, 2013, Springer, pp. 317–332.
5. Krishna B. Chaudhary, K.K. Phoon, and K.C. Toh, Fast iterative solution of large soil-structure interaction problems in varied ground conditions,
Proceedings of 14th Asian Regional Conference on Soil Mechanics and Geotechnical Engineering, Hong Kong, China, 23-27 May 2011.
6. K. B. Chaudhary, K.K. Phoon, and K.C. Toh, Comparison of MSSOR versus ILU(0) Preconditioners for Biot’s FEM Consolidation Equations,
The 12th International Conference of International Association for Computer Methods and Advances in Geomechanics (IACMAG), 1-6 October 2008, Goa, India.
7. X. Chen, K.K. Phoon, and K.C. Toh, Symmetric indefinite preconditioners for FE solution of Biot’s consolidation problem,
Geotechnical Engineering in the Information Technology Age (2006): CDROM. Reston: ASCE. (GeoCongress2006, 26 Feb – 1 Mar 2006, Atlanta, United States).
8. K.C. Toh, R.H. Tutuncu, and M.J. Todd, On the implementation of SDPT3 (version 3.1) — a Matlab software package for semidefinite-quadratic-linear programming,
IEEE Conference on Computer-Aided Control System Design, Taipei, Taiwan, 2-4 September 2004.
9. F. Ting, W.J. Heng, and K.C. Toh, Question classification for e-learning by artificial neural network,
Fourth International Conference on Information, Communications & Signal Processing and Fourth IEEE Pacific-Rim Conference On Multimedia, 15-18 December 2003, Singapore.
10. K.K. Phoon, K.C. Toh, S.H. Chan, and F.H. Lee, A generalized Jacobi preconditioner for finite element solution of large-scale consolidation problems,
in Second MIT Conference on Computational Fluid and Solid Mechanics, 17–20 June 2003, Massachusetts Institute of Technology, Cambridge, United States, Vol.1, pp. 573–577, 2003.
11. G.L. Zhou, K.C. Toh, and J. Sun, Efficient algorithms for the smallest enclosing ball problem in high dimensional space,
Novel Approaches to Hard Discrete Optimization, Proceedings of Fields Institute of Mathematics, P. Pardalos and H. Wolkowicz eds., Canadian Mathematical Society, 2002.
Book chapters and others
Another application of Hooke's law - Stumbling Robot
Another application of Hooke’s law
Assume a spring obeys Hooke’s law and has length 1 meter. If a force of 100 newtons compresses it by 0.1 meters, how many Joules are required to compress it 0.5 meters? What is the length of the
spring when 20 Joules of work have been expended?
Since the spring obeys Hooke's law and we are given that 100 newtons compresses it by 0.1 meters, we have
    k = F / x = 100 / 0.1 = 1000 N/m.
Then we compute how much work is required to compress the spring 0.5 meters,
    W = (1/2) k x^2 = (1/2)(1000)(0.5)^2 = 125 Joules.
Next, we are given an amount of work (20 Joules) and want to solve for a distance x,
    20 = (1/2)(1000) x^2, so x^2 = 0.04 and x = 0.2 m.
This is the amount the spring is compressed by 20 Joules. Since its initial length is 1 m, the spring's length when compressed is 1 m – 0.2 m = 0.8 m.
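These computations can be verified numerically; a quick sketch (variable names are mine, not from the original post):

```python
# Spring constant from Hooke's law, F = k * x.
F, x = 100.0, 0.1            # 100 N compresses the spring 0.1 m
k = F / x                    # spring constant in N/m

# Work to compress a distance d: W = integral of k*x dx from 0 to d = k * d**2 / 2.
W_half = 0.5 * k * 0.5**2    # work for d = 0.5 m

# Solve 20 = k * d**2 / 2 for the compression d, then the resulting length.
d_20J = (2 * 20.0 / k) ** 0.5
length = 1.0 - d_20J
print(k, W_half, d_20J, length)
```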
pmnewlog(1): stop and restart | Linux Man Page
pmnewlog — stop and restart archive logging for PCP performance metrics
$PCP_BINADM_DIR/pmnewlog [-NPsV?] [-a accessfile] [-c configfile] [-C saveconfig] [-n pmnsfile] [-p pid] [other pmlogger options] archive
pmnewlog may be used to stop and restart a running instance of pmlogger(1). This is most useful for managing multiple sets of Performance Co-Pilot (PCP) archive logs. These archive logs record the
history of performance metric values that may be “played back” by other PCP tools, and they form the basis of the VCR paradigm and retrospective performance analysis services common to the PCP toolkit.
In normal usage, pmnewlog would be executed by cron(1) in the wee hours to terminate one PCP archive log and start another, i.e. to perform log rotation.
Even more common, would be the execution of pmnewlog from the PCP archive management script pmlogger_daily(1). In this case, direct end-user execution of pmnewlog is most unlikely.
The mandatory argument archive is the base name for the physical files that will constitute the new archive log.
The pmlogger instance to be stopped and restarted must be running on the same system as pmnewlog and is either the primary logger (the default) or the logger with pid as specified by the -p option.
If the -n option is specified, then pmnewlog will use the namespace in the pmnsfile, rather than the default Performance Metrics Name Space (PMNS).
If no -c option is specified, pmnewlog will use pmlc(1) to connect to the running pmlogger(1) and so determine all those metrics and instances that are subject to mandatory logging or advisory on
logging, and the associated logging frequencies. This information is used to synthesize a new pmlogger(1) configuration file. If the -n option is specified, it will also be used for these
interactions with pmlc(1).
If the -c option is specified, pmlogger(1) will be restarted with configfile as the configuration file. Normally configfile would be the same configuration file used to start pmlogger(1) in the first
place, however note that since pmlogger(1) is restarted, any changes to the logging status made using pmlc(1) will be lost, unless these have also been reflected in changes to configfile.
If configfile does not exist, then a search is made in the directory $PCP_VAR_DIR/config/pmlogger for a file of the same name, and if found that file is used, e.g. if config.mumble does not exist in
the current directory and the file $PCP_VAR_DIR/config/pmlogger/config.mumble does exist, then -c config.mumble and -c $PCP_VAR_DIR/config/pmlogger/config.mumble are equivalent.
Access controls specifications for the new pmlogger(1) instance may optionally be provided via the -a option. The contents of accessfile should start with the literal token [access] and conform to
the syntax of the access controls section as described for pmlogger(1).
The -C option may be used to save the configuration file that pmnewlog passes to the newly launched pmlogger(1).
If the pmlogger(1) instance needs to be started under the control of pmsocks(1) to connect to a pmcd through a firewall, the -s option may be used.
The -V option enables verbose reporting of the activity. By default no output is generated unless some error or warning condition is encountered.
The -N option enables a “show me” mode, where the actions are echoed, but not executed, in the style of “make -n”. Using -N in conjunction with -V maximizes the diagnostic capabilities for debugging.
The other pmlogger options are as described for pmlogger(1). Note that pmnewlog does not support the following options of pmlogger(1).
The available command line options are:
-a accessfile
    Specify access controls file for the new pmlogger.
-c configfile
    Load configuration from file.
-C saveconfig
    Save the configuration of new pmlogger in file.
-n pmnsfile
    Load an alternative Performance Metrics Name Space (PMNS(5)) from the file pmnsfile.
-N
    Perform a dry run.
-p pid
    Restart non-primary logger with PID pid.
-P
    Execute as primary logger instance.
-s
    Use pmsocks(1) to connect.
-V
    Use verbose reporting.
-?
    Display usage message and exit.
The following sh(1) script could be executed by root via cron(1) to start a new set of archive logs for the primary logger each evening. A more complete version of this script may be found in
$PCP_BINADM_DIR/pmlogger_daily, and is documented in the manual page for pmlogger_daily(1).
# start new logs for PCP primary logger on this host
# standard place for logs
# each new log is named yyyymmdd.hh.mm
LOGNAME=`date "+%Y%m%d.%H.%M"`
# do it
[ ! -d $LOGDIR ] && mkdir -p $LOGDIR
cd $LOGDIR
$PCP_BINADM_DIR/pmnewlog -l $LOGDIR/pmlogger.log $LOGNAME
If no configfile is specified, the method for synthesizing a configuration file using a pmlc(1) connection to the existing pmlogger(1) is, of necessity, incomplete. In particular, for metrics with
dynamic underlying instance domains, it is not possible to identify a configuration that logs all instances of a metric all of the time, so rather the synthesized configuration file requests the
continued logging of the set of instances that exist at the time pmlogger(1) is interrogated by pmnewlog.
If this situation is a concern, a fixed configuration file should be used, and passed to pmnewlog via the -c option.
Due to the precious nature of the archive logs, pmnewlog is rather paranoid in its checking and validation, and will try very hard to ensure that an appropriately configured pmlogger(1) can be
restarted, before terminating the existing pmlogger(1).
As a consequence of this checking, pmnewlog tends to generate rather verbose error and warning messages.
metadata (metric descriptions, instance domains, etc.) for the archive log
initial volume of metrics values (subsequent volumes have suffixes 1, 2, ...)
temporal index to support rapid random access to the other files in the archive log
sample script to rotate archives for a number of loggers
if this directory exists within the directory in which the archive files will be created by a new pmlogger(1), then the log file (from pmlogger's -l argument) will be linked into the SaveLogs
directory with the name archive.log so it can be inspected at a later time. Because the cron-driven PCP archive management scripts run under the uid of the user “pcp”, SaveLogs typically needs to
be owned by the user “pcp”.
PCP Environment
Environment variables with the prefix PCP_ are used to parameterize the file and directory names used by PCP. On each installation, the file /etc/pcp.conf contains the local values for these
variables. The $PCP_CONF variable may be used to specify an alternative configuration file, as described in pcp.conf(5).
PCP Performance Co-Pilot
Toward Reducing the Possibility of False-Positive Results in Epidemiologic Studies of Traffic Crashes
Large databases, such as the U.S. Fatality Analysis Reporting System (FARS), are often used to study factors that influence traffic crashes that result in injuries or deaths, and how such events may
be affected by human activities or environmental conditions. This genre of epidemiologic research presents considerable study design challenges, since crash rates vary by season, day of the week, and
time of day.
To minimize the effects of these extraneous factors, many studies use a double-control matched design as laid out by Redelmeier and Tibshirani in the Journal of Clinical Epidemiology in 2017. This
design identifies the particular days (or portions of a day) on which the condition/factor of concern is present. The concern might relate to a national election or holiday, or Super Bowl Sunday, or
the change to/from daylight savings time (DST). For each such day of concern, and for each year for which data are available, two comparison days—seven days before and seven days after—are
identified. The notation for the counts of interest on these three days is shown in Figure 1.
The statistical methods used to convert the summed counts (C, B, A) in Figure 1 into inferential statistics typically follow those laid out by Redelmeier and Yarnell in CHANCE (2013). The
estimand—the comparative parameter to be estimated—is the rate ratio (RR), namely the ratio of the expected rate of interest for the day of concern to the expected rate of interest for the (matched)
control days. The estimator of the RR is the ratio of C to (A + B)/2 (under the assumption that the denominator is constant across all three days). Other estimands, such as the rate difference, may
also be of interest.
Figure 1 illustrates the -7/+7 study design using the FARS data set, and our notation. The day of concern is the Monday immediately after the change to Daylight Savings Time. In any given year i, we
denote by c[i], b[i], and a[i]—the relevant count on the Monday of concern, the Monday before and the Monday after. Their sums over all years studied are denoted by C, B, and A. Here, as is the case
in many reports, the counts refer to the numbers of people involved in a traffic crash with at least one fatality, but the numbers of cars involved in crashes could also be considered (as seen
The null hypothesis is that the expected count of interest per day (i.e., the rate) on days of concern does not differ from the expected counts of interest per day for the pair of “control” days
(i.e., that the RR parameter equals 1). The null hypothesis is typically tested by calculating the binomial probability that C or more of the total (B + C + A) would occur on the days of concern,
assuming an expected proportion of 1/3. A confidence interval (CI) for the RR is obtained using the same binomial model. This interval can be calculated exactly or, since C, B, and A are usually
large, as the anti-log of a z-based CI that uses the standard error (SE) of the log of the point-estimate.
The SE is computed as (1/C + 1/[A+B])^½. The same p-value and the same CI would be obtained if C and (A + B) were treated as two Poisson random variables. As shown by Clayton and Hills (1993), the SE
can be derived by fixing (conditioning on) their sum, and treating C as a binomial random variable with expected proportion RR/(2+RR), or by an unconditional argument.
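The point estimate, this Poisson/binomial-based SE, and the resulting confidence interval can be sketched as follows (the summed counts here are invented for illustration, not taken from FARS):

```python
import math

# Invented summed counts over all years (not FARS values).
C, B, A = 1300, 1200, 1250   # day of concern; 7 days before; 7 days after

# Point estimate of the rate ratio: C versus the average of the control days.
rr = C / ((A + B) / 2)

# "Rutherford" standard error of log(RR): (1/C + 1/[A + B])^(1/2).
se = math.sqrt(1 / C + 1 / (A + B))

# 95% CI: anti-log of a z-based interval on the log scale.
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)

# Normal approximation to the binomial test of H0: the expected
# proportion of the total falling on the day of concern is 1/3.
n = A + B + C
z = (C - n / 3) / math.sqrt(n * (1 / 3) * (2 / 3))
print(round(rr, 3), round(lo, 3), round(hi, 3), round(z, 2))
```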
This SE, involving just the three observed totals, C and (A + B), and the CI derived from it, is too narrow when applied to traffic-related counts of total crashes involving at least one fatality, or
of the number of individuals involved, injured, or dying in such crashes. It leads to too many empirical rate ratios being declared statistically significant, and to overly precise SEs.
Orientation: The Rutherford Standard Error
To understand why this SE is too narrow, it helps to consider an example of a Poisson process. Imagine counting the scintillations (flashes of light) produced by the decaying products from a
radioactive source in a laboratory setup resembling that used by Rutherford and colleagues in 1910. The experiment involved 42 separate occasions, each with a duration of 3 minutes.
On each occasion, the distance from the source to the target is fixed at X during the first and third minutes, and made 5% shorter during the second minute. Just as in Figure 1, the scintillation
counts are denoted in the successive minutes of each occasion i as b[i], c[i], and a[i], and their sums over all occasions as B, C, and A. Suppose that, from one occasion to another, the distance
from the source to the target may vary.
The purpose is to use these data to estimate the ratio of the radioactive intensity detected when the source-distance is 0.95X rather than X. The estimand is the Rate Ratio parameter RR, and the data
yield an estimate of RR. As Rutherford and colleagues were able to show, both theoretically and empirically, the “control” counts—b[i] and a[i]—arise from a Poisson distribution with a common (but
unknown) mean m[i], and c[i] arises from a Poisson distribution whose mean is RR times m[i]. Because of the between-occasion variation in the strength of the source, the unknown means (m[i] and RR
times m[i]) vary from occasion to occasion.
Nevertheless, each within-occasion ratio c[i] / [(a[i] + b[i])/2] can be used to estimate the common RR. Moreover, the estimator of RR is the ratio of C to (A + B)/2, and the expression
(1/C + 1/[A + B])^½ provides an appropriate SE for its log. However, this SE only holds if, on each occasion, the b[i], c[i], and a[i] counts follow the two Poisson laws just described. It would be too narrow
if, within some occasions, the strength of the source is allowed to vary from minute to minute. This SE formula can be referred to as the Rutherford SE, to emphasize that its accuracy can only be
guaranteed within these controlled laboratory conditions.
In traffic crash epidemiology, a large number of environmental and social factors cannot be controlled (or adequately controlled for) by the investigators. Thus, one would not expect the counts from
the two comparison days in the same year to arise from a Poisson distribution with the same mean even if these two days are on the same day of the week and just 14 days apart.
We used FARS data to demonstrate empirically that pairs of counts from days just 14 days apart in the same year do not share the same mean and variance. We identified 1,094 non-overlapping pairs
of days that were 14 days apart, and derived the corresponding b[i] and a[i] counts (of the individuals involved in car crashes that have at least one fatality) for these pairs (Figure 2). For each
pair, we calculated the within-pair difference b[i]–a[i] and converted it to a Z-value, assuming that for any particular pair, the two counts arose from two independent Poisson distributions with the
same expected value.
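Under that equal-mean Poisson assumption, converting a single before/after pair of counts to a Z-value is straightforward; a sketch with invented counts:

```python
import math

def pair_z(b, a):
    """Z-value for the difference of two counts assumed to be independent
    Poisson with a common mean; Var(b - a) is estimated by b + a."""
    return (b - a) / math.sqrt(b + a)

# Invented daily counts on the same day of the week, 14 days apart.
print(round(pair_z(120, 95), 2))
```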
Under this assumption, which is at the core of the Rutherford SE used by Redelmeier and Yarnell, the resulting histograms should resemble standard Gaussian distributions. However, Figure 2 provides
considerable evidence of “extra-Poisson” variation in the b[i] and a[i] counts for the two comparison days in the same year. The observed Z-values have a distribution that is two to three times
wider—flatter relative to the familiar bell shape—than Rutherford would have observed within his (b[i], a[i]) pairs (Figure 2). Thus, even though the expected ratio under the null is 1:2 in the -7/+7
design, it does not automatically follow that the observed split of the (b[i] + c[i] + a[i]) count in any one year, or in the overall (B + C + A) count, between the two types of days should follow a
binomial distribution. Instead, it will exhibit extra-binomial variation.
There is additional proof that using the Rutherford SE in double-control matched designs of traffic-related counts produces confidence intervals that are too narrow, and there also are other existing
methods (extra-binomial and bootstrap-based) for calculating the SE for the log(RR) estimate. These do not make the restrictive Poisson assumptions that would only be satisfied in a laboratory setting.
Three alternative data-based methods (jackknife, permutation, and split-halves) and two regression-based methods (quasi-Poisson and negative binomial) can be proposed.
Finally, the seven alternative SEs are more conservative and more realistic than the Rutherford SE. All of the code and data to reproduce the analysis can be found here.
Standard Errors and Their Impact on Statistical Inference of Log Rate Ratios
Eight methods may be used to compute the SE of the log(RR): the Rutherford SEs, extra-binomial, jackknife, bootstrap, permutation method, split-halves estimate, negative binomial regression, and
quasi-Poisson regression. Box 1 provides more detail for each method.
To compare the performance of these methods, we applied each method to the calculation of the SE of the log(RR) comparing a day of concern and two comparison days a week apart, for each of the 365
possible days of concern in the calendar. We used the FARS data and repeated this analysis, each time with 42 triplets (one for each year from 1975 to 2016) and excluding February 29 from all analyses.
The eight methods can also be applied to three simulated data sets. The three simulated data sets were derived from fitted values of a quasi-Poisson regression applied to the count of all individuals
involved in a crash with at least one fatality in the FARS data. This measure is conventionally used in the literature.
The model had 63 parameters: one for each year (42), month (12), and day of the week (7), plus January 1 (1) and July 4 (1). These fitted values were then used as the means for each day of the year in the simulations.
The first simulated data set has no extra-Poisson variation, the second has half the extra-Poisson variation of the FARS
data set. This resulted in a total of 11,680 (8 × [3 + 1] × 365) SEs for the log(RR): one for each of the 365 days, for the eight methods, over the FARS data and the three simulated data sets. The
distribution of SE estimates, as well as the median and interquartile range (IQR) for each method, are presented as boxplots in Figure 3.
For the FARS data, the smallest median and IQR for any of the methods tested were from the Rutherford SE: median = 0.011 and IQR = 0.001. The next-smallest median and IQR were for the extra-binomial,
bootstrap, jackknife, and split-halves methods, which all had medians of 0.025 and IQRs of 0.005. The most-conservative was the permutation method, which had a median of 0.026 and an IQR of 0.006.
On average, the IQR for the other methods was five to six times larger than for the Rutherford method, and the medians were more than double (Figure 3).
For the first simulated data set with no extra-Poisson variation, all the SE methods tested had roughly the same median and IQR (Figure 3). Notably, the median (0.011) and IQR (0.001) from the
Rutherford SE were similar in the simulated Poisson data set as in the FARS data, despite the fact that the simulated, purely Poisson data set has no extra-Poisson variation and the FARS data have a
considerable amount.
The second simulated data set, with half the extra-Poisson variation, had medians and IQRs that changed for all the methods relative to the amount of variation, except for the Rutherford method
(Figure 3). In the simulated data set with double the amount of extra-Poisson variation, all the methods again responded to the doubling in variance, with the exception of the Rutherford method
(Figure 3).
These simulations demonstrate that the Rutherford SE is systematically smaller and less variable than the seven other methods, and fails to incorporate extra-Poisson variation.
To examine the impact of these different methods for calculating SEs on statistical inference, we used the FARS count of individuals involved in car crashes to compute a Z-value for every day of the
year for each SE method.
A Z-value measures how many standard deviations an estimate is from the null, assuming the null hypothesis is true. Z-values greater than 2 or less than -2 were considered as “statistically
significant,” using a conventional threshold for alpha set at 0.05.
Figure 4 reports a representative sample of these results as heat calendars, with each day representing the Z-value estimated using 42 years of pooled data. Each graph uses the same color scale to
display the intensity of the Z-values. The positive end of the scale is shown in dark red with a maximum value of 40. The negative end of the scale is in dark green with a minimum value of -25.
The maximum and minimum were determined by taking the maximum and minimum Z-value among the 2,920 estimates and rounding them to the nearest base 5 number. In addition, Z-values that were considered
significant at alpha = 0.05 were shown in bold, while non-significant values were shown in gray to provide a visual contrast.
Not surprisingly, it is evident from the heat calendars in Figure 4 that the Rutherford calendar has more “significant” days than either of the other two calendars, and in general, the Z-values are
larger in absolute value. The calendars for the bootstrap and negative binomial methods are similar to each other and show less intensity relative to the Rutherford method. The calendars for the
other five alternative methods strongly resemble the non-Rutherford calendars in Figure 4 (additional calendars can be found here).
In total, the Rutherford SEs led to 225 days out of 365 being incompatible with the null (i.e., an absolute Z-value greater than or equal to 2), while the other methods yielded roughly half that
number (Figure 4). This is not surprising, given the results in Figure 2: The overly narrow nature of the Rutherford SEs leads to many more “statistically significant” results, largely by failing to
account for the extra-Poisson variation that is present in the numbers of individuals who are involved in a traffic crash with at least one death that occurs on the same day of the week but just
seven or 14 days apart in the same year.
Counting Individuals in Crashes vs. Number of Cars in Crashes
So far, the focus has been on the number of total individuals involved in traffic crashes with at least one fatality. Clearly, the multiplicity is one of the reasons for the greater-than-Poisson
noise. However, the greater-than-Poisson noise is still evident, even when any multiplicity is removed and only the number of cars in crashes involving at least one fatality (Figure 5) are counted.
The reason goes back to the large number of factors that affect the variation in the numbers of fatal crashes on the same day of the week, but two weeks apart, in a wide geographical area. The
count-triplets recorded by Rutherford emanate from the same amount of inanimate source material (the alpha decay of an atom), observed over 3 adjacent minutes in one secluded laboratory. The narrow
SEs derived from this ideal model are not appropriate for epidemiologic studies of fatal traffic crashes.
In Conclusion
This study compared eight ways to calculate SEs for the log(RR) that contrast rates of traffic-related events using the “double control” design, and examined their impact on statistical inference.
The results demonstrate that the currently used method (which only holds under strict Poisson assumptions) does not account for the demonstrable extra-Poisson variation in the numbers of persons or
crashes on days that are just 14 days apart but on the same day of the week. These methods lead to an excess of “statistically significant” days.
This study also demonstrates seven alternative methods for calculating standard-errors that do not suffer from this problem and should be used instead. Data-based SEs (Jackknife and Bootstrap) are
more appropriate than overly tight SEs based on statistical models that may not mimic actual variation. A switch from overly tight, good-only-in-the-laboratory, model-based SEs to empirical (robust)
SEs would lead to improved statistical inference in this field of research.
Further Reading
Clayton, D., and Hills, M. 1993. Statistical methods in epidemiology. Oxford, UK: Oxford University Press. Chapter 13.
Redelmeier, D.A., and Yarnell, C.J. 2013. Can tax deadlines cause fatal mistakes? CHANCE 26(2):8–14.
Rutherford, E., Geiger, H., and Bateman, H. 1910. LXXVI. The probability variations in the distribution of alpha particles. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of
Science. 20 (118):698–707. doi: 10.1080/14786441008636955.
Harper, S., and Palayew, A. 2019. The annual cannabis holiday and fatal traffic crashes. Injury Prevention 25(5):433–437.
Weast, R. 2017. Temporal Factors in Motor Vehicle Crashes—10 Years Later. Arlington, VA: Insurance Institute for Highway Safety.
BOX 1: Methods to Compute SE of log(RR)
1. Rutherford: Staples and Redelmeier illustrated the SE of the log(RR) (in CHANCE, 2013). The log(RR) is estimated as log{C / ([A + B] / 2)}, where C = Σc[i] is the total number of instances (of the
entity being counted) in the n days of concern; B = Σb[i] is the total number of instances on the “before” control days; A = Σa[i] is the corresponding sum of the counts in the n “after” days; and i
indexes the years over which the counts are summed. SE[log(RR)] = (1/C + 1/[A + B])^½. This is derived by assuming that C and B + A are two Poisson counts, with expectations μ × RR and 2 × μ
respectively, where μ is the expected count for a single control day. With this assumption, the single random variable obtained by conditioning on their sum, i.e., C | (C + A + B), has a binomial
distribution with parameters n′ = C + A + B and π = RR / [RR + 2].
2. Extra-binomial: This SE for the log(RR) estimator explicitly uses the n matched (by year i) pairs of counts, {c[1],[a[1] + b[1]]} to {c[n],[a[n] + b[n]]}. It allows for the strong possibility that
even if the RR were truly 1, the counts for the three days in the same year have considerably greater-than-Poisson variation, and thus that each c[i] | (c[i] + a[i] + b[i]) follows an extra-binomial
distribution. Instead of a model-based SE that assumes merely binomial variation, this SE is based on the empirical (actual) year-to-year variations in the proportions of the instances that occur on
the day of concern. It is obtained by fitting an intercept-only logistic model to the n pairs of counts, and then applying the “sandwich” (robust, “empirical”) estimator of the SE (see R code).
3. Jackknife: This SE also explicitly uses the matching. Systematically excluding each triplet in turn provides the n jackknife estimates of log(RR), denoted log(RR)[-1] to log(RR)[-n], where the
subscript denotes the deletion of the i^th data point, i = 1,…,n. Let d[i] denote the amount by which the i^th of these differs from the full-sample estimate of log(RR); the jackknife SE for log(RR)
is calculated as SE[jackknife] = {([n − 1] / n) × Σd[i]^2}^½.
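A minimal sketch of the jackknife computation (the triplet counts are invented for illustration, not FARS values):

```python
import math

def log_rr(triplets):
    """log of C / ((A + B) / 2) for a list of (b, c, a) yearly triplets."""
    B = sum(t[0] for t in triplets)
    C = sum(t[1] for t in triplets)
    A = sum(t[2] for t in triplets)
    return math.log(C / ((A + B) / 2))

def jackknife_se(triplets):
    n = len(triplets)
    full = log_rr(triplets)
    # Deviations of the leave-one-out estimates from the full estimate.
    d = [log_rr(triplets[:i] + triplets[i + 1:]) - full for i in range(n)]
    return math.sqrt((n - 1) / n * sum(di * di for di in d))

# Invented (b, c, a) counts for four "years".
data = [(90, 110, 95), (100, 105, 98), (85, 120, 90), (102, 99, 104)]
print(round(jackknife_se(data), 4))
```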
4. Bootstrap: This SE also explicitly uses the n triplets of matched counts. It creates several data sets, each one containing n triplets sampled with replacement from the indices 1 to n. The
bootstrap SE is calculated as the standard deviation of the several estimates of log(RR) obtained from these different perturbations of the original data set.
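The bootstrap resampling can be sketched as follows (the triplet counts and the number of replicates are invented for illustration):

```python
import math
import random

def log_rr(triplets):
    """log of C / ((A + B) / 2) for a list of (b, c, a) yearly triplets."""
    B = sum(t[0] for t in triplets)
    C = sum(t[1] for t in triplets)
    A = sum(t[2] for t in triplets)
    return math.log(C / ((A + B) / 2))

def bootstrap_se(triplets, reps=2000, seed=1):
    rng = random.Random(seed)
    n = len(triplets)
    # Resample whole triplets with replacement, preserving the matching.
    ests = [log_rr([triplets[rng.randrange(n)] for _ in range(n)])
            for _ in range(reps)]
    mean = sum(ests) / reps
    return math.sqrt(sum((e - mean) ** 2 for e in ests) / (reps - 1))

# Invented (b, c, a) counts for four "years".
data = [(90, 110, 95), (100, 105, 98), (85, 120, 90), (102, 99, 104)]
print(round(bootstrap_se(data), 4))
```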
5. Permutation: This SE also explicitly uses the matching. Within each triplet, one of the three days is chosen at random to serve as the day of concern, with the remaining two serving as the
“reference” days. A value of log(RR) is calculated. This process is repeated several times, and the standard deviation of the several estimates serves as the (null) SE. Under the null, there are three
choices for the possible day of concern in each of the 42 years, so there are 3 to the power of 42 (1.09e+20) possible rate ratios. Since it would take too long to generate them all, a random sample
of them is used instead.
6. Split-Halves: This SE also preserves the matching. A random n/2 of the n triplets are selected. A log(RR) is calculated from the resulting C and (A + B)/2, and subtracted from the log(RR)
obtained from the remaining half. This process is repeated several times, and the standard deviation of the several differences is computed. The SE is obtained by multiplying this standard deviation
by 1/2 if n is even, and by a slightly smaller factor if n is odd (see R code).
7. Negative Binomial: This SE is obtained by regressing the 3n observed counts on n + 1 regressors, namely an indicator variable of the day of concern, plus a factor of length n representing n
separate intercepts—one per triplet—and using the generalized linear regression function shown in the code. The output consists of the log(RR) estimate and its SE, together with the n fitted
intercepts and their SEs, along with the fitted extra-Poisson parameter θ. The smaller the θ value, the greater the extra-Poisson variation, and the larger the SE of interest. This regression can be
seen as model-based—rather than actual—matching: for triplet number 5, say, conditional on its own separate intercept μ[5], the two reference counts follow a negative binomial distribution with mean
μ[5] and variance μ[5] + μ[5]^2 / θ, and the count on the day of concern follows a negative binomial distribution with mean RR × μ[5] and variance RR × μ[5] + (RR × μ[5])^2 / θ.
8. Quasi-Poisson: The SE is also obtained by regression, but using the Quasi-Poisson family in the generalized linear regression function (see code). The output is similar, but with the extra-Poisson
variation captured by a dispersion parameter, called D. The larger the D value, the greater the extra-Poisson variation. For triplet number 5, conditional on its own separate intercept μ[5], the two
reference counts follow a quasi-Poisson distribution with mean μ[5] and variance μ[5] × D, and the index count follows a quasi-Poisson distribution with mean RR × μ[5] and variance RR × μ[5] × D.
While the choice between methods 7 and 8 depends on the application, the literature (Venables and Ripley, 2002; Ver Hoef and Boveng, 2007) tends to favor the more-flexible Quasi-Poisson model:
it can model both under- and over-dispersion, and is more conservative in its weighting of large counts.
About the Authors
Adam Palayew is a PhD student in the Department of Epidemiology at the University of Washington. He has a master’s degree in epidemiology from McGill University. His interests include infectious
diseases and the health of people who use substances, as well as reproducibility in science and epidemiologic methods.
James Hanley began his career as a biostatistician for multi-center clinical trials in oncology. Since joining McGill University in 1980, he has taught in the graduate programs in epidemiology
and biostatistics, written several expository articles, and collaborated widely. His recent research has focused on cancer screening; his side interests include the history of epidemiology and of
Sam Harper is an associate professor in the Department of Epidemiology, Biostatistics, & Occupational Health at McGill University and holds an endowed chair in public health at Erasmus Medical
Center. He received his PhD in epidemiologic science from the University of Michigan. His research focuses on understanding population health and its social distribution, with specific interests
in experimental and non-experimental evaluation of social policies and programs, social epidemiology, injuries, and reproducible research.
|
{"url":"https://chance.amstat.org/2021/04/traffic-crashes/","timestamp":"2024-11-05T03:20:14Z","content_type":"application/xhtml+xml","content_length":"69949","record_id":"<urn:uuid:f5ad8d1c-67e2-4d1f-9ff2-a3523cf8dc37>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00618.warc.gz"}
|
Push Pull Force Calculator - CalculatorsPot
Push Pull Force Calculator
When it comes to physics, understanding the forces at play in any scenario is crucial, whether you’re a student, an engineer, or just someone curious about how things work. One essential concept is
the net force acting on an object, which is especially important in situations involving both pushing and pulling forces. To simplify these calculations, the Push-Pull Force Calculator becomes an
indispensable tool. This article breaks down the workings of this calculator, providing a clear definition, the formula it uses, step-by-step examples, and a conclusion highlighting its applications.
Push-Pull Force Calculator
The Push-Pull Force Calculator is a straightforward, user-friendly tool designed to calculate the net force acting on an object when there are both pushing and pulling forces involved. It helps in
understanding how these forces interact and influence the movement or stationary state of objects.
Purpose and Functionality
The main purpose of the Push-Pull Force Calculator is to determine the resultant force, or net force, acting on an object. This is crucial for analyzing motion and predicting how an object will move
under certain conditions. It simplifies the process of calculating net force by automating the computation, thereby reducing human error and saving time.
Formula for Net Force
The formula it uses is simple yet powerful:
F_net = F_pull - F_push
where:
• F_net is the net (resultant) force acting on the object.
• F_pull is the magnitude of the pulling force applied.
• F_push is the magnitude of the pushing force applied.
Inputs Required
• Pulling Force (F_pull): The magnitude of force applied to pull the object, measured in newtons (N).
• Pushing Force (F_push): The magnitude of force applied to push the object, also measured in newtons (N).
Calculation Steps
1. Identify the Forces: Determine the magnitudes of both the pulling and pushing forces applied to the object.
2. Apply the Formula: Subtract the pushing force from the pulling force to calculate the net force acting on the object.
Example Calculation
Suppose an object is being pulled with a force of 100 N and pushed with a force of 60 N in the opposite direction.
• Pulling Force (F_pull): 100 N
• Pushing Force (F_push): 60 N
Using the formula:
F_net = 100 N - 60 N = 40 N
The net force acting on the object is 40 N in the direction of the pulling force.
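As a sketch, the calculator's arithmetic can be reproduced in a few lines of Python (the function name is mine, not the site's):

```python
# Minimal sketch of the calculator's formula F_net = F_pull - F_push.
def net_force(pull_n: float, push_n: float) -> float:
    """Return the net force in newtons; positive means the pull wins."""
    return pull_n - push_n

print(net_force(100, 60))  # 40: net 40 N in the pulling direction
```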
Relevant Information Table
Input Description Unit
F_pull Magnitude of the pulling force N
F_push Magnitude of the pushing force N
F_net Resultant (net) force acting on the object N
The Push-Pull Force Calculator is a simple yet powerful tool that plays a crucial role in physics and engineering. By calculating the net force in push-pull scenarios, it aids in analyzing motion,
designing mechanical systems, and understanding the principles that govern how objects move. Whether for educational purposes, professional projects, or personal curiosity, this calculator makes the
complex world of forces much more accessible and understandable.
|
{"url":"https://calculatorspot.online/engineering-tools/push-pull-force-calculator/","timestamp":"2024-11-03T16:50:00Z","content_type":"text/html","content_length":"102898","record_id":"<urn:uuid:efe8a3ba-3720-45ad-9b8e-77233df27e1c>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00430.warc.gz"}
|
What our customers say...
Thousands of users are using our software to conquer their algebra homework. Here are some of their experiences:
My husband has been using the software since he went back to school a few months ago. He's been out of college for over 10 years, so he was very rusty with his math skills. A teacher friend of ours
suggested the program since she uses it to teach her students fractions. Mike has been doing well in his two math classes. Thank you!
B.C., Florida
I got a A in my class. Thanks for the help!
Farley Evita, IN
Be it any equation, within seconds not only you have the answers but also the steps to refer to. This is awesome.
Allen Donland, GA
As a math teacher, I'm always looking for new ways to help my students. Algebrator not only allows me to make proficient lesson plans, it also allows my students to check their answers when I am not
Kelly Brown, NY
Search phrases used on 2007-04-09:
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among them?
• Free worksheets Maths Grammer Hindi
• converter octal TI-89
• mcdougal littell geometry"explorations and applications" chapter 4 tests
• buy homework solutions
• LU Ti-89 help
• sample KS2 math
• solving rational expressions with four variables in the denominator
• kumon free worksheet
• t-charts for grade 8 problems
• square of a binomial leeson plan, worksheets
• free ged worksheet
• lesson plan: using Alge-tiles
• two step division equation worksheets
• Log2 graphics calculator
• algebra 2 linear programing word problems "algebra 2"
• solving non-homogeneous 2nd order equations
• "ti-84 software" factor
• C- Language Highest Common Factor
• answer key to prentice hall geometry
• list of algebra 1 formulas for powers
• boolean algebra math
• circumferance formula
• algebra for 6 graders
• summation calculator download
• printable math games for 7th grade
• math class 9th circle formula
• "maths circle"
• "trig identity solver"
• coordinate graphing pictures
• ged sample exam paper
• "rearranging equations" basics
• precalculus for dummies
• why cant you have a radical at the bottom of a fraction?
• Integer Sample Questions
• free ti-84 games
• Trivia of algebra
• linear equations yr 10
• McDougal Littell vocabulary worksheet
• equation for a hyperbola y = TI 83
• activities for least common multiple
• greatest common factor of large numbers
• kumon level g test pass mark
• completing the square with Matlab
• solving equations with Varables on each side
• dr math-solving equations using fractions and decimals
• multiplacation flashcards download
• download free 1st class maths
• lesson plans first grade word problems
• college algebra problems
• algebra interactive lessons
• factoring program for TI-83
• middle school math with pizzazz book e creative publications
• O Level Mathematics paper free
• free algebra calculator download
• ti-89 + double angle formula
• mcdougal littell study guide answers
• math solver
• help on the distributive property 7th grade
• prentice hall mathematics algebra1
• solving 3rd degree equations
• www.AAAmath.com
• grade 10 maths exam paper
• Word Problems online solver
• year 10 maths test and answers
• what's the greatest common factor
• simplification calculator
• grade 5 area and perimeter test and ontario
• maths-how to solve equations
• online equation solver program multiple variables
• Glencoe/McGraw- HIll Advanced Mathematical Concepts Answer Key
• algebra: order of operations work sheet
• mode and range/maths
• download calculator fraction exponents
• factoring algebra revision year 9
• exponent online calculators
• math problem for ninth graders
• random number java determine how many times a number repeats
• adding fractions worksheet
• formula for adding ascending number
• how do i solve logarithms
• equations polinomial java
• quadriatic formula
• square root simplify
• rational equation simplifier
• Graphing Linear Equations in Three Variables
|
{"url":"https://softmath.com/algebra-help/algebra-tile-worksheet.html","timestamp":"2024-11-10T05:30:49Z","content_type":"text/html","content_length":"35038","record_id":"<urn:uuid:565fbf8b-615a-4de5-a00e-8e4d87bd8907>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00538.warc.gz"}
|
Gravitational effect creates the irreversibility
The individual objects that make up our universe have an inevitable interaction with the environment. The reversible process is an ideal process in which the object can be treated with complete
isolation from the environment. In reality, no process is absolutely reversible, and no system can operate in a completely isolated environment. Irreversibility is the most natural characteristic of
all phenomena and a natural law. Since every phenomenon is irreversible, it is of great interest to investigate the physical mechanism of the irreversibility and to answer what creates it.
Obviously, we cannot create an environment that can entirely eliminate the influence of gravitational field. An object occupies a position and takes up a certain amount of time to generate the
space-time fabric in the universe, and the gravitational force must always act on it. The position that the object takes depends on the gravitational field, which in turn depends on the position
of the object. The object at different positions has different quantized states in a gravitation field and thus, its mass related to its total internal energy is different at different energy states.
The situation can become quite complicated because the object always moves from one place to another. To do that, some work must be supplied against the gravitational force. In order to tie the
gravitational field to the irreversibility, we here make the crucial connections among the theory of relativity, quantum physics, energy conservation and thermodynamics. The energy of the universe is
always constant and the law of energy conservation works. We follow these rules to begin our analysis.
We start at position A to move an object to another position B, and then we move it back to position A in a gravitational field. The gravitational force must always have an effect on it under any
circumstances, and the object at positions A and B has different quantized states. If the principle of energy conservation works, the work done during the process from position A to position B is not
symmetrical with that done during the return trip from position B to position A. We have had to do additional work against the gravitational force, which may be written mathematically as

W = mgH = (E/c^2) g H,

where m is the mass of the object, related to its total internal energy E by m = E/c^2, c is the velocity of light, g is the free-fall acceleration, and H is the height in the gravitational field.
Figure 1 Energy diagram for possible transitions
We first think of the transitions from different conduction bands to the valence band in a quantum well (QW) diode in a gravitational field, for example, from energy state E_1 to energy state E_0,
as shown in Fig. 1. This transition happens by emitting light with a frequency of ω_{c-v}. According to Planck's formula, the energy of a photon is related to its frequency by hω_{c-v}, where h
is a fundamental physical constant named the Planck constant. During the return trip, a photon with frequency ω_{v-c} is absorbed to achieve the transition from energy state E_0 to energy state E_1.
The energy of this photon is hω_{v-c}. Since the transition occurs in a gravitational field, a falling distance H exists from energy state E_1 to energy state E_0. Therefore, the mass is given by
m_1 = E_1/c^2 at energy state E_1 and m_0 = E_0/c^2 at energy state E_0, respectively. As a result, for a stable free-fall acceleration g, the additional work that must be done is

ΔW = (m_1 - m_0) g H = ((E_1 - E_0)/c^2) g H.

Because the height H is not less than zero and energy state E_1 is higher than energy state E_0, we have

hω_{v-c} = hω_{c-v} + ΔW ≥ hω_{c-v}.

The frequency ω_{v-c} is therefore higher than the frequency ω_{c-v} in the gravitational field, suggesting that the QW diode can only detect and modulate higher-energy photons than those emitted
by itself. The heights H are normally very small, and consequently, the frequency differences between ω_{v-c} and ω_{c-v} are small.
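As a rough numerical aside (the numbers are my own, not from the text), the relative frequency shift over a laboratory-scale height is of order gH/c^2, which shows why these differences are so small:

```python
# Back-of-envelope check that the gravitational frequency shift over
# laboratory heights is tiny: delta_omega / omega ~ g * H / c^2.
g = 9.81            # m/s^2, free-fall acceleration
c = 299_792_458.0   # m/s, velocity of light
H = 1.0             # m, an assumed laboratory-scale height

print(g * H / c**2)  # about 1.1e-16, i.e. a fractional shift of order 1e-16
```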
Figure 2 Possible paths between two points in a gravitational field
Proceeding in this way, we consider an object which is moved from position A to position B and brought back to position A along the same path in a gravitational field, as shown in Fig. 2.
Positions A and B are separated by the height H in this gravitational field with the free-fall acceleration g. The object has a total internal energy E_b at position B and E_a at position A,
respectively, in which E_b is not less than E_a. So, we have done a net amount of work against the gravitational force, which is equal to

W = (m_b - m_a) g H = ((E_b - E_a)/c^2) g H.

Because the height H is not less than zero and E_b is not less than E_a, we have W ≥ 0.
Figure 3 Change in net work in an irreversible process
In thermodynamics, we use the mean molecular kinetic energy as the definition of the temperature T. Suppose that a system operates from condition A(T_a, V_a) to condition B(T_b, V_b), changing
energy states A(E_a, H_a) to B(E_b, H_b) in a gravitational field. For simplicity, E_b is not less than E_a, and H_b is not less than H_a. When the system goes from B(E_b, H_b) to A(E_a, H_a)
and returns from A(E_a, H_a) to B(E_b, H_b), some additional work will be done. Since the corresponding quantities are equal, respectively, according to the equivalence principle, the net amount of work
done is given by
For a stable temperature T_a = T_b, E_a is equal to E_b, and the net work done depends on the change of the volume.
For a stable volume V_a = V_b, H_a is equal to H_b, and the net work done depends on the change of the temperature.
Our postulations are based on the law of energy conservation and the equivalence principle: (1) The individual objects cannot be completely isolated from the environment in reality, and the
gravitational force must always act on them under any circumstances; (2) The objects at different positions have different quantized states in a gravitational field, and their masses in turn depend on
their energy states; (3) The amount of work done against the gravitational force is different when the object moves from one place to another and back to the starting position. The theory of
relativity, quantum physics, and energy conservation all fit together to connect the gravitational field with the irreversibility. We can conclude that the gravitational effect creates the
irreversibility.
|
{"url":"https://communities.springernature.com/posts/gravitational-effect-creates-the-irreversibility?channel_id=behind-the-paper","timestamp":"2024-11-06T05:52:21Z","content_type":"text/html","content_length":"134272","record_id":"<urn:uuid:6913c448-1065-4374-8d8e-ad0a73b95a76>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00008.warc.gz"}
|
Examples of Reporting or Ignoring Interactions
Lead Author(s): Jeff Martin, MD
These are guidelines for reporting or ignoring interactions. First consideration must be given to the clinical, statistical, and practical decisions.
Effect Small - P-Value Large - Ignore
If the two strata give results of 2.3 and 2.6 and a p-value for the test of heterogeneity of 0.45, what should we do with it?
• We should ignore this difference and not report the presence of interaction.
Effect Small - P-Value Small - Ignore
What if the p value is 0.001?
• This is an example of where we should still ignore it because this difference really is pretty small from a clinical or biologic perspective -- not substantively meaningful.
Effect Large - P-Value Small - Report
What if we got 2.0 in one stratum and 20 in another and a p value of 0.001.
• Here, this is worth reporting.
Effect Large - P-Value Getting Larger - Report
If we saw a difference of 2 and 20 and a p value of 0.2,
• we still might want to report or show this interaction rather than ignoring it.
As the p value gets higher, I would be less and less interested in reporting and more and more interested in just lumping the strata together.
Effect Not Big - Depends on P-Value
How about a difference between 3 and 4.5?
• This is not that big a difference in clinical or biological terms.
• Hence I would probably ignore it if the p value was 0.3 and be on the fence about it even if the p value was very small.
Qualitative Interaction - Report
Finally, how about in the presence of what appears to be qualitative interaction?
• I would have a lower threshold to report it, perhaps even up to a p value of 0.2.
Again, the p value does not have any different meaning here than in other contexts and I am not saying that a p of 0.2 is statistically significant. I am just stating that it is reasonable to report
stratum specific differences of large magnitude, even if the p value is up to 0.2. Such a report still requires dedicated confirmation in other studies, hopefully with adequate statistical power.
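One way to read the examples above is as a rough decision rule. The sketch below encodes my own reading of the thresholds (roughly a two-fold difference between stratum estimates, and a p-value cutoff near 0.2); it is illustrative, not a rule stated on this page.

```python
# A rough decision sketch of the guidance above. The thresholds are my own
# reading of the examples, not stated rules from the article.
def report_interaction(est1, est2, p_het, qualitative=False):
    """Return True if the stratum-specific interaction is worth reporting."""
    if qualitative:
        # qualitative interaction: low threshold, report up to p ~ 0.2
        return p_het <= 0.2
    ratio = max(est1, est2) / min(est1, est2)
    big_difference = ratio >= 2  # e.g. 2.0 vs 20 qualifies; 2.3 vs 2.6 does not
    return big_difference and p_het <= 0.2

print(report_interaction(2.3, 2.6, 0.001))  # False: difference too small to matter
print(report_interaction(2.0, 20.0, 0.2))   # True: large difference, borderline p
```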
|
{"url":"https://ctspedia.org/ctspedia/egreportinter","timestamp":"2024-11-11T20:47:40Z","content_type":"text/html","content_length":"7690","record_id":"<urn:uuid:b466c082-a968-44c7-b5db-b89997ebd987>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00328.warc.gz"}
|
Towards the construction of quantum field theories from a factorizing S-matrix
Lechner G (2007)
Publication Type: Book chapter / Article in edited volumes
Publication year: 2007
Publisher: Springer Basel
Edited Volumes: Rigorous Quantum Field Theory
Series: Progress in Mathematics
Book Volume: 251
Pages Range: 175-197
DOI: 10.1007/978-3-7643-7434-1_13
Starting from a given factorizing S-matrix S in two space-time dimensions, we review a novel strategy to rigorously construct quantum field theories describing particles whose interaction is governed
by S. The construction procedure is divided into two main steps: Firstly certain semi-local Wightman fields are introduced by means of Zamolodchikov’s algebra. The second step consists in proving the
existence of local observables in these models. As a new result, an intermediate step in the existence problem is taken by proving the modular compactness condition for wedge algebras.
Authors with CRIS profile
Additional Organisation(s)
Involved external institutions
How to cite
Lechner, G. (2007). Towards the construction of quantum field theories from a factorizing S-matrix. In Rigorous Quantum Field Theory. (pp. 175-197). Springer Basel.
Lechner, Gandalf. "Towards the construction of quantum field theories from a factorizing S-matrix." Rigorous Quantum Field Theory. Springer Basel, 2007. 175-197.
BibTeX: Download
|
{"url":"https://cris.fau.de/publications/264198986/","timestamp":"2024-11-08T08:29:07Z","content_type":"text/html","content_length":"8863","record_id":"<urn:uuid:29bee82e-8249-4e41-bed1-7839c53454cf>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00263.warc.gz"}
|
Aggregated Intelligence
In a previous post about guidelines for programming in C#2.0, I had written about how I did not understand one of the guidelines of using "ForEach via an Anonymous method predicate". Well now I know.
And I illustrate with some examples below. These examples show the use of the ForEach as well as the use of Anonymous methods.
using System;
using System.Collections.Generic;
//A test class to insert into a List
public class TestClass
string _string;
int _number;
public TestClass(int num, string s)
{ _string = s; _number = num;}
public int Number { get{ return _number;}}
public string String{ get {return _string;}}
public class MyClass
public static void Main()
//Finding Even Integers in List;
List<int> integers = new List<int>();
for(int i=1; i<=10; i++) integers.Add(i);
//find even numbers using anonymous methods
List<int> even = integers.FindAll(delegate(int i)
{
return i%2==0;
});
WL("Even Numbers");
//Here is an example of using ForEach - and the Anonymous methods
//The one thing i still dont understand is why this is more
//efficient than using foreach directly.
even.ForEach(delegate(int i){Console.Write(i + " ");});
//Computing Sum of Integers in List<T>
int sum=0;
integers.ForEach(delegate(int i){ sum+=i; });
WL("Sum " + sum);
//Sort TestClasss in List<T>
List<TestClass> testClass = new List<TestClass>();
testClass.Add(new TestClass(10,"Milk"));
testClass.Add(new TestClass(5,"Cheese"));
testClass.Sort(delegate(TestClass x, TestClass y){
return Comparer<int>.Default.Compare(x.Number,y.Number);
});
WL("Sorted TestClass");
testClass.ForEach(delegate(TestClass o){WL(o.Number + " " + o.String);});
#region Helper methods
private static void WL(object text, params object[] args)
Console.WriteLine(text.ToString(), args);
private static void RL()
Basic C# coding guidelines (from http://blogs.msdn.com/samar/archive/2007/03/15/basic-c-coding-guidelines.aspx) All of the following are good recommendations. The one thing that I don't understand is
the one about looping through generic collections using an "anonymous method predicate" - that's one I haven't found any information about on the internet. So if you do, please leave a comment.
• Please do NOT use + sign to concatenate strings, because it creates three instances of string (try doing that in a long or infinite loop, your program will die with OutOfMemory exception in no
time), Rather use string.Concat it keeps one instance (and hey no OOM). If concatenating many strings in a loop etc, then use StringBuilder. Alternative to string.Concat is string.Format (which
uses StringBuilder inside)
• Use generic collections instead of Hashtables and ArrayList types
• If using Generic types, then refrain from using foreach loop on the collection. Rather use ForEach method to loop through via an anonymous method predicate (much faster because doesn’t create the
Iterator). For non generic types try to use for loop instead of foreach if the data being traversed is huge
• Nullify unused objects (doesn’t collect, but marks for collection and ceases from getting promoted into next generation)
• IF conditions having just one item in if and else, should be used as ternary operator (? Sign)
• Use ‘as’ operator instead of direct typecast using parenthesis with the exception of overloaded explicit cast operator, it saves from NullReferenceException and InvalidCastException
• Refrain from XmlDocument usage for navigational purpose, please either use XmlTextReader for sequential access or XPathDocument for XPath based data retrieval
• For server side XSL transformation, Use XslCompiledTransform instead of XslTransform (Please check http://blogs.msdn.com/antosha/archive/2006/07/24/677560.aspx ). For client side transformation,
try to load Xsl file asynchronously whenever possible via XMLDOM or XsltProcessor (Geckos)
• Always join your threads in a web page, if used (otherwise the page will be rendered and workers will continue operating at the server cost, and of course the results will be unpredictable)
• Always do NULL checking before operating on an object, NullReferenceException is widely occurring exception in most applications and only a preventive programming can help us get the real error
• Handle most specific (expected) exceptions in the catch blocks before deciding towards general Exception, and please don’t use catch without an exception object if you’re not really writing P/
Invoke in C#
Recently I had to figure the number of primes below a given number n. One way is to start at 2, go all the way till n, and check if any of the numbers is a multiple of any of the numbers up to n. This
is not the most efficient solution, though. The Sieve of Eratosthenes is an elegant solution to this problem: http://mathworld.wolfram.com/SieveofEratosthenes.html Basically, it says that you need to
check, for every number from 2 to n, whether it is a multiple of any number from 2 to the floor of the square root of n. One can skip all those numbers that have already been marked as not prime, and
the crossing out of multiples can be started at 2*currentNumber, as that is the first multiple of currentNumber. (The code is self-explanatory and Wolfram has a sieve example.) Though this code makes
fewer checks and hence is faster, it is inefficient for larger values of n, as it needs to allocate an array of size n. In addition there is a function to output n primes - which uses the
traditional check for primes. Which can also used to count the number of primes below n. Here is the solution in C#:
using System;
using System.Collections.Generic;
public class SieveofEratosthenes
//based on Sieve of Eratosthenes http://mathworld.wolfram.com/SieveofEratosthenes.html
static int CountPrimes(int countTill)
bool []primes = new bool[countTill+1];//we are counting upto this number
int checkTill = (int)Math.Floor(Math.Sqrt((double)countTill));//according to Eratosthenes,
//we only need to check divisors up to the floor of the square root
for (int index = 1; index <= countTill; index++)
primes[index] = true;
for (int i = 2; i <= checkTill; i++)
{
if (!primes[i])
continue;//not a prime, so its multiples were already crossed out
for (int j = i*2; j <= countTill; j += i)
{
//j is a multiple of i - therefore not a prime!
primes[j] = false;
}
}
int count = 0;
for (int i = 2; i <= countTill; i++)
if (primes[i])
count++;
return count;
public static bool IsPrime(int currentNumberToCheck)
int currentDivisor = 2;
int checkTill = currentNumberToCheck/2;
while(currentDivisor <= checkTill)
{
if (currentNumberToCheck % currentDivisor == 0)
return false;
currentDivisor++;
}
return true;
public static void PrintNPrimes(int n)
int count = 0;
int currentNumberToCheck = 2;
while (count < n)
{
if (IsPrime(currentNumberToCheck))
{ WL(currentNumberToCheck); count++; }
currentNumberToCheck++;
}
public static void Main()
#region Helper methods
private static void WL(object text, params object[] args)
Console.WriteLine(text.ToString(), args);
private static void RL()
Here are two links to MSDN pages that talk about all the new features in .NET 2.0 and C# 2.0: What's new in .NET 2.0 http://msdn2.microsoft.com/en-us/library/t357fb32.aspx What's new in C# 2.0 http://
I recently wanted to revisit the issue of serialization in .NET and the new features added in version 2.0 of .NET. (For those of you who have worked with earlier versions, you know the problems caused
by added properties, changes in version numbers and the extra code needed to fix these issues). Here is a MSDN article that discusses the new features. In brief these new features are:
• Tolerance of extraneous or unexpected data. This enables newer versions of the type to send data to older versions.
• Tolerance of missing optional data. This enables older versions to send data to newer versions.
• Serialization callbacks. This enables intelligent default value setting in cases where data is missing.
• In addition, there is a feature for declaring when a new optional field has been added. This is the VersionAdded property of the OptionalFieldAttribute attribute.
MSDN link: http://msdn2.microsoft.com/en-us/library/ms229752.aspx
And the article summarizes with these best practices: To ensure proper versioning behavior, follow these rules when modifying a type from version to version:
• Never remove a serialized field.
• Never apply the NonSerializedAttribute attribute to a field if the attribute was not applied to the field in the previous version.
• Never change the name or the type of a serialized field.
• When adding a new serialized field, apply the OptionalFieldAttribute attribute.
• When removing a NonSerializedAttribute attribute from a field (that was not serializable in a previous version), apply the OptionalFieldAttribute attribute.
• For all optional fields, set meaningful defaults using the serialization callbacks unless 0 or null as defaults are acceptable.
To ensure that a type will be compatible with future serialization engines, follow these guidelines:
• Always set the VersionAdded property on the OptionalFieldAttribute attribute correctly.
• Avoid branched versioning.
STANFORD Magazine: March/April 2007 > Features > Mind-set Research: "According to a Stanford psychologist, you’ll reach new heights if you learn to embrace the occasional tumble." I came across this
article through Guy Kawasaki's blog. The article is a review on psychology professor Carol Dweck's book Mindset: The New Psychology of Success.
I read this one on someone else's blog: What is HijklmnO? Answer: H2O (get it?)
One of my collegues at work asked me a 9 number puzzle question that he heard on the radio show "Car Talk" The puzzle is given the numbers 1 to 9, how do you insert 2 minuses and 1 addition so that
they add up to 100. So I wrote up a programming solution for this problem: (This is not the best way to do it - one could use memoization for speed up, as well as implement it as a recursive
solution. I would also like to make it so that it can take an arbitrary number of operators and see if a solution exists). For now here is the solution in C#. The answer to the problem is: 123 - 45 -
67 + 89 = 100 The valid combination was found in 125 tries.
using System;
using System.Collections;
public class Find9NumberSolution
public const string numbers = "123456789";
public enum Operator
{ Add, Sub }
public static string OperatorToString(Operator op)
case Operator.Add:
return " + ";
case Operator.Sub:
return " - ";
return "";
public static int Operate(Operator op, int v1, int v2)
{
int val = 0;
switch (op)
{
case Operator.Add: val = v1 + v2; break;
case Operator.Sub: val = v1 - v2; break;
}
return val;
}
public static bool Calc(Operator first,int l1,
Operator second,int l2,
Operator third, int l3,
int targetValue)
int num1 = int.Parse(numbers.Substring(0,l1+1));
int num2 = int.Parse(numbers.Substring(l1+1, (l2-l1)));
int num3 = int.Parse(numbers.Substring(l2+1, (l3-l2)));
int num4 = int.Parse(numbers.Substring(l3+1));
int val = Operate(first,num1, num2);
val = Operate(second, val, num3);
val = Operate(third, val, num4);
if (val == targetValue)
{
WL(num1 + OperatorToString(first) + num2 + OperatorToString(second) + num3 + OperatorToString(third) + num4 + " = " + val);
return true;
}
return false;
public static void Main()
Operator op1 = Operator.Add;
Operator op2 = Operator.Sub;
Operator op3 = Operator.Sub;
bool targetValFnd = false;
int targetVal = 100;
int len = numbers.Length;
int i,j,k;
int count = 0;
for (i = 0; i < len - 3; i++)
{
for (j = i+1; j < len - 2; j++)
{
for (k = j+1; k < len-1; k++)
{
count++; if (targetValFnd = Calc(op1,i,op2,j,op3,k,targetVal)) break;
count++; if (targetValFnd = Calc(op2,i,op3,j,op1,k,targetVal)) break;
count++; if (targetValFnd = Calc(op2,i,op1,j,op3,k,targetVal)) break;
}
if (targetValFnd) break;
}
if (targetValFnd) break;
}
WL("Valid combination was found in " + count + " tries.");
#region Helper methods
private static void WL(object text, params object[] args)
Console.WriteLine(text.ToString(), args);
private static void RL()
Windows Vista Team Blog : DirectX10: The Next Generation in Gaming The screenshot that closes the deal for me is this pair: http://www.techeblog.com/index.php/tech-gadget/gdc-07-crysis-footage
Sean McBreen's WebLog : Improving .NET 2.0 Application Performance: "Improving .NET 2.0 Application Performance "
8 Confessions Of A Former Verizon Sales Rep - Consumerist Interesting tips to get a cheaper deal on Verizon phones and contracts
void swap( char *l, char *r )
{
    char t = *l;
    *l = *r;
    *r = t;
}
void reverse( char *inString, int inStringLen )
{
    char *front, *end;
    for( front = inString, end = inString + inStringLen - 1; front < end; front++, end-- )
        swap( front, end );
}
// call as reverse(sampleString, strlen(sampleString));
FAR 101: "Excerpts from U.S. Federal Aviation Regulations (FAR) Part 101 related to unmanned free balloons." This page lists the rules and regulations that cover the launching of unmanned free balloons.
10 Rules for a Good User Interface on Managing Automation
1. Visibility of system status: The system should always keep users informed about what is going on.
2. Match between system and the real world: The system should speak the user's language, with words, phrases, and concepts familiar to the user, rather than system-oriented terms.
3. User control and freedom: Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue.
4. Consistency and standards: Users should not have to wonder whether different words, situations, or actions mean the same thing.
5. Error prevention: Even better than good error messages is a careful design which prevents a problem from occurring in the first place.
(read more at the site: http://www.managingautomation.com/maonline/magazine/read/view/10_Rules_for_a_Good_User_Interface_15564802)
Decile wise lift chart python
We now finally create the lift chart using the above table (Python): lift_chart1.set_xlabel("Decile", fontsize=12).
February 28, 2017: Confusion Matrix; Gain and Lift Chart; Kolmogorov-Smirnov Chart; AUC-ROC; Gini. You can also plot decile-wise lift against the decile number. How to build a lift chart (a.k.a. gains chart) in Python? I understand the concept of lift, but I'm struggling to understand how to actually implement it in Python to compare different models. In the Microsoft resource you provided, it is said: "You can add multiple models to a lift chart." In lift: Compute the Top Decile Lift and Plot the Lift Curve. Description Usage Arguments Value Author(s)
Examples. Description: TopDecileLift computes the commonly used top decile lift by ordering the data by the predictions and computing the proportion of positives in the top 10%. Usage: Each column
in the decile analysis chart represents a collection of records that have been scored using the model. The height of each column represents the average of those records’ actual behavior. How the
Decile Analysis is Calculated. 1. The hold-out or validation sample is scored according to the model being tested. 2. The final model that gives us the better accuracy values is picked for now.
However, we are not done yet. We need to evaluate the model performance based on a variety of metrics. The framework contain codes that calculate cross-tab of actual vs predicted values, ROC Curve,
Deciles, KS statistic, Lift chart, Actual vs predicted chart, Gains chart. Lift vs. Decile Charts: Both embody the concept of "moving down" through the records, starting with the most probable. The decile chart does this in decile chunks of data; its Y axis shows the ratio of the decile mean to the overall mean. The lift chart shows continuous cumulative results; its Y axis shows the number of important-class records identified.
From here you could assign the deciles to the data using df['decile'] = deciles, group entries using df.groupby('decile'), and so on. The one liner for all of the above is pd.qcut(df['sales_total'],
10).values.codes .
The lift chart is synonymous with evaluating data mining model performance and the predictive power of one model against another. Often, in presentations and training sessions it is suggested that
the chart is indicative of the model's ability to accurately predict within a training population. For example, the following explanation is provided: "the lift chart…
Hence, the maximum lift at first decile could have been 543/3850 ~ 14.1%. Hence, we are quite close to perfection with this model. Let’s now plot the lift curve. Lift curve is the plot between total
lift and %population. Note that for a random model, this always stays flat at 100%. Here is the plot for the case in hand : You can also plot decile wise lift with decile number : What does this
graph tell you? The decile-wise lift curve is drawn as the decile number versus the cumulative actual output variable value, divided by the decile's mean output variable value. The bars in this chart
indicate the factor by which the Bagging Neural Network model outperforms a random assignment, one decile at a time. Refer to the validation graph below. How to create Lift Chart and decile tables in
R. Contribute to Deepesh87/Lift-Charts-in-R development by creating an account on GitHub. How to create Lift Chart and decile tables in R. Contribute to Deepesh87/Lift-Charts-in-R development by
creating an account on GitHub. Details. Lift charts are a commonly used tool in business data mining applications. They are used to assess how well a model is able to predict a desirable (from an
organization's point-of-view) response on the part of a customer compared to alternative estimated models and a benchmark model of approaching customers randomly. Cum Lift - for a given depth-of-file
- is the Cumulative Response Rate divided by the overall response rate of the file, multiplied by 100. It measures how much better one can expect to do with the model than without a model. For
example, a Cum Lift of 294 for the top decile means that when soliciting to the top 10%
The random expectation at the xth decile is x%.
This post covers Gains table/chart, Lift curves, Kolmogorov-Smirnov (K-S), Confusion In the end I also provide the Python code that generates a Gains table. The model then sorts the customers into
ten equal sub-populations, or deciles,
The chart below is one part of the decile transactions analysis showing the differences in number of transactions per year. Best Customers Identification: Our third decile
analysis uses decision trees to show the external variables that separates your top 20% from the other 80%. The y-value of the lift curve at 10% is 30 / 10 = 3. Analyzing the Charts: Cumulative gains
and lift charts are a graphical representation of the advantage of using a predictive model to choose which customers to contact. The lift chart shows how much more likely we are to receive
respondents than if we contact a random sample of customers.
Lift Charts: The lift curve is a popular technique in direct marketing. One useful way to think of a lift curve is to consider a data mining model that attempts to identify the
likely responders to a mailing by assigning each case a “probability of responding" score. Run the following code to create lift chart. The Cumulative Lift of 3.4 for top two deciles, means that when
selecting 20% of the records based on the model, one can expect 3.4 times the total number of targets (events) found by randomly selecting 20%-of-records without a model. Lift and Gain Charts are a
useful way of visualizing how good a predictive model is. In SPSS, a typical gain chart appears as follows: In today's post, we will attempt to understand the logic behind generating a gain chart and
then discuss how gain and lift charts are interpreted. The Cum Lift of 4.03 for top two deciles, means that when selecting 20% of the records based on the model, one can expect 4.03 times the total
number of targets (events) found by randomly selecting 20%-of-file without a model. Decile wise lift chart.
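The decile mechanics described above (score every record, sort by score, cut into ten equal groups, then compare each group's response rate to the overall rate) can be sketched with pandas. This is a minimal illustration rather than any particular package's implementation; the function name and column names are my own.

```python
import numpy as np
import pandas as pd

def decile_lift_table(y_true, y_score):
    """Decile-wise lift: sort records by model score, cut into 10 deciles
    (decile 1 = highest-scored 10%), and compare each decile's response
    rate to the overall response rate."""
    df = pd.DataFrame({"actual": y_true, "score": y_score})
    # Rank with ties broken arbitrarily so the ten bins are equally sized.
    ranks = df["score"].rank(method="first", ascending=False)
    df["decile"] = pd.qcut(ranks, 10, labels=range(1, 11)).astype(int)
    overall_rate = df["actual"].mean()
    table = (df.groupby("decile")["actual"]
               .agg(count="size", responders="sum", rate="mean")
               .reset_index())
    table["lift"] = table["rate"] / overall_rate
    # Cumulative lift: response rate of the top k deciles vs. the overall rate.
    table["cum_lift"] = (table["responders"].cumsum()
                         / table["count"].cumsum()) / overall_rate
    return table
```

A random model gives a lift near 1 in every decile; a useful model shows a lift well above 1 in decile 1, decaying down the table, with the cumulative lift converging to exactly 1 at decile 10.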
Self-Assessment...I Just Can't Get You Off My Mind
I am still thinking about student self-assessment. Well, self-assessment is an aspect of what has really been on my mind: students actively taking responsibility for their learning. I use an online
publisher provided software package in my courses. Students complete their homework using this software so that they can receive immediate feedback and have access to supplemental resources. Whenever
a student works a homework problem incorrectly, he or she can choose to work a similar problem. Every correctly worked “similar problem” takes the place of the previously incorrectly worked one. By
continuing to work on problems that they initially get wrong, students can get more practice and score 100% on every homework assignment. I tell my students this at the beginning of each class, and I
emphasize that homework is for their use. It is a tool to help them master the material that we are working with in the classroom. It is my hope that they will complete a homework assignment, go back
and take a look at the methods used for incorrectly worked problems, and use examples from class and the online software package to see where they made mistakes so that they can find and correct
their own errors. An important aspect of mathematics, the “real” mathematics done by mathematicians who discover new theorems, finally come up with a proof for very old ones, and publish in journals,
is making mistakes and carefully reviewing your work to find where you have gone wrong so to speak. However, this past semester it dawned on me that students were leaving homework assignments at 70%
or 80% and coming into class with lists of problems they “got wrong” or “couldn’t do” from the homework. I would in turn carefully and methodically work out solutions to these on the board, and
students would copy them down without really learning very much. Even though the software package gave them the ability to repeat incorrectly worked problems, and even though they had examples and
notes from class as well as the resources provided online, students were continuing to give up on their homework (and themselves) way too early!!!
So I began searching through folders (both on thumb drives and in cabinets) of teaching techniques and best practices I had collected over the years for ways to get students to take responsibility
for their own learning. I came across an article I had photocopied from NCTM’s Empowering Students by Promoting Active Learning in Mathematics, published way back in 1994, written by Marcia Standera.
In the article, Ms. Standera states that she doesn’t allow students to simply leave homework assignments unfinished because they couldn’t do certain problems. In Ms. Standera’s class, to complete an
assignment, students:
must do every problem or write one sentence or more of explanation for each place where they “got stuck.” [Because, m]any times, describing what they don’t understand helps them think about the
process of solving the problem, and they “unstick” themselves. Writing about where they “got stuck” also makes them more accountable for assignments. No longer can students hand in blank sheets of
paper and say, “I don’t understand.” They have to think about what they don’t understand and express it in written form.” (25)
As I begin my classes this semester, each time I give a homework assignment that counts for a grade, I am going to use this stipulation. Since I use the online software for homework, students will
know whether they worked a problem incorrectly or not after they have finished it. So, for each homework problem that a student works incorrectly or cannot finish, I will require an accompanying sentence or two of explanation. We'll see how it goes. Results TBA!
Thinking about Ms. Standera’s assignment brought to mind another assignment I used in those halcyon days of yore that I think I will revive as well in an attempt to promote self-assessment and active
learning. This is an assignment that, in a way, turns a summative assessment into a formative one. Once upon a time, I would ask students, after each major unit test to complete a test review. I
required that students first go through their test carefully and redo every problem that they got wrong (or only received partial credit for). Then, for each problem they corrected, I would ask them
to write an explanation of why they thought they got the problem wrong on the first attempt. I urged students to be honest with themselves and me. If the reason for getting it wrong was because they
had no earthly idea how to answer or work out the solution, say so. I reminded them that their teacher could only provide help or get them additional help if he knew where their weaknesses were. I
modeled a few possible responses, such as “I got question 2 wrong because I got mixed up about which side of the less than/greater than symbol was supposed to be aimed at the larger number. I guess I
didn’t study that and commit it to memory as well as I should have.” I remember this assignment adding a great deal to my teaching, and the only reason I stopped using it – I am confessing now – is
because I got lazy. To finish the assignment, I asked students to write a paragraph or so outlining a course of action for preparing for the next exam. I used (and am reviving) this assignment
because I want students to see that a successful student embraces his or her summative assessments as just another way to learn, rather than fearing the possibility of failure. Rather than getting
frustrated and angry because they didn’t make the grade hoped for, they should do a self-assessment of their skills and habits, and then make a plan for how they will change in the future. Hopefully
using these two assignments will help my students take more responsibility for their own learning.
Work Cited.
Standera, Marcia. “Listening to Students through Writing.” Empowering Students by Promoting Active Learning in Mathematics. Reston: NCTM, 1994.
Animated Contrasting Cases in Geometry
What is a geometric dilation? Why does the Pythagorean theorem work? What’s the difference in corresponding and supplementary angles? How do I find the volume of a composite figure?
These are all questions that middle school students are expected to be able to answer. Through our research and the consequent development of these materials, we hope to make the answers to these
questions a little easier to understand.
In this collection of materials, we cover four geometric topics: Angles, Transformations, Pythagorean Theorem, and Volume. These topics are discussed in scenarios of contrasting cases, where two
fictional students each present a unique method or solution strategy to the same problem. The goal is then to analyze both methods and discuss similarities and differences, strengths and weaknesses
of each. Visit the Curricular Materials page to explore each unit.
View the animated materials here!
What makes these materials unique?
Continuing the work done on contrasting cases in algebra by a research team at Harvard, this project:
• aims to digitize and animate the contrasting cases for even greater visual effect, which may lead to more in-depth understanding of the material.
• explores the realm of geometry, a subject often associated with visual learners.
• affords students the opportunity to critique the reasoning of others (in this case, two fictitious students), a skill that is an often-overlooked component of many states’ mathematical standards.
Math Unites the Celestial and the Atomic - Research & Development World
Math Unites the Celestial and the Atomic
In recent years, researchers have developed astonishing new insights into a hidden unity between the motion of objects in space and that of the smallest particles. It turns out there is an almost perfect parallel between the mathematics describing celestial mechanics and the mathematics governing some aspects of atomic physics.
[Figure caption: There is an almost perfect parallel between math describing the motion of celestial objects, like the sun (shown here in an ultraviolet image), and atomic objects. Image courtesy of NASA.]
These insights have led to new ways to
design space missions, as described in the article, “Ground Control to Niels Bohr: Exploring Outer Space with Atomic Physics” by Mason Porter and Predrag Cvitanovic, which appears in the October 2005
issue of the Notices of the American Mathematical Society. The article describes work by, among other scientists, physicist Turgay Uzer of the Georgia Institute of Technology, mathematician Jerrold
Marsden of the California Institute of Technology and engineer Shane Ross of the University of Southern California. Imagine a group of celestial bodies — say, the Sun, the Earth, and a Spacecraft
— moving along paths determined by their mutual gravitational attraction. The mathematical theory of dynamical systems describes how the bodies move in relation to one another. In such a
celestial system, the tangle of gravitational forces creates tubular “highways” in the space between the bodies. If the spacecraft enters one of the highways, it is whisked along without the need to
use very much energy. With help from mathematicians, engineers and physicists, the designers of the Genesis spacecraft mission used such highways to propel the craft to its destinations with minimal
use of fuel. In a surprising twist, it turns out that some of the same phenomena occur on the smaller, atomic scale. This can be quantified in the study of what are known as “transition states”,
which were first employed in the field of chemical dynamics. One can imagine transition states as barriers that need to be crossed in order for chemical reactions to occur (for “reactants” to be
turned into “products”). Understanding the geometry of these barriers provides insights not only into the nature of chemical reactions but also into the shape of the “highways” in celestial systems.
The connection between atomic and celestial dynamics arises because the same equations govern the movement of bodies in celestial systems and the energy levels of electrons in simple systems, and
these equations are believed to apply to more complex molecular systems as well. This similarity carries over to the problems’ transition states; the difference is that which constitutes a “reactant”
and a “product” is interpreted differently in the two applications. The presence of the same underlying mathematical description is what unifies these concepts. Because of this unifying description,
the article states, “The orbits used to design space missions thus also determine the ionization rates of atoms and chemical-reaction rates of molecules!” The mathematics that unites these two very
different kinds of problems is not only of great theoretical interest for mathematicians, physicists, and chemists, but also has practical engineering value in space mission design and chemistry.
Hex to binary conversion in Java
Related Tutorials/Questions & Answers:
hex to binary conversion in java - Java Beginners
Hi, I'm doing an application which requires sending a message to other mobiles, so I need to convert the values to hex format. So the hex format could either be a ringtone
conversion - how to convert a class to an executable file using the command prompt?
conversion - how to convert a class to an executable file:
import java.io.*;
import java.util.zip.*;
public class CreateZip {
    public static int buffer = 10240;
    protected void createZipArchive(File zipFile, File

Java read binary file
I want binary file example code... at Reading binary file into byte array in Java. Thanks. Hi, there are many more examples at Java File - Learn how to handle files in Java with Examples
BINARY TO DECIMAL - Java Beginners
Hi friend, program to convert Binary to Decimal: import... (System.in); System.out.print("Enter a binary number"); String str

JAVA: Recursion, Binary Search
I want to learn about Binary Search... implement it using a recursive implementation of Binary Search. For the cases when more than one result can be returned, modify Binary Search to return all the elements.

binary search - Java Beginners
Write a Java program to search an array by using recursive binary search.
/* To change this template, choose Tools | Templates and open the template in the editor. */
package

Binary to decimal - Java Beginners
Need help pls.. I cannot run this program... pls...
= Integer.parseInt(JOptionPane.showInputDialog("Input binary:")); String c = args[0...; String value = JOptionPane.showInputDialog("Input binary"); int len

binary
Hi, I want to write a program in Pascal that asks the user to input a decimal number and then returns its binary equivalent in the minimum number of bits required to represent the number. Thks

Binary Search!!! - Java Beginners
Hi Sir, my question is quite simple. I'm only getting "ArrayIndexOutOfBoundsException: 10" in the if statement which is inside the 1st while loop. How can I get rid of it? The if statement is btw
Java binary to decimal
This tutorial helps you to know the code for Java binary to decimal conversion... It helps you in understanding how to get a 'Java binary to decimal' conversion. For this we have

How to convert binary to decimal in Java?
How to convert binary to decimal in a Java program? In this section we are going to convert a binary number to its decimal representation... converts binary to decimal in Java. Computers are based on the binary

ModuleNotFoundError: No module named 'hex'
Hi, my Python program is throwing the following error: ModuleNotFoundError: No module named 'hex'. How to remove the ModuleNotFoundError: No module named 'hex' error?
How to write a binary file in the Java programming language
I need some help with Java programming. Please explain to me how to write a binary file in the Java programming language.

Datatype conversion
Is it possible to convert a long datatype into a String in Java? If possible, how?

How to use binary search on an array in Java?
Hi, I am a beginner in Java... The problem is how to use binary search on an array in Java. Please give any online reference showing how I can implement binary search on an array in Java.
how to convert ASCII to HEX
I want to convert the character '~' (tilde) in a number of files to NULL (hex value 00). The ASCII value of ~ is 7e... to convert it to a hex value.
class ConvertAsciiToHex { public static void... = str.toCharArray(); StringBuffer hex = new StringBuffer(); for(int

conversion of a string entered in the web interface
Hi, I am entering a string in the web interface and converting it to a particular type. In short, how do I link web interface values with my Java code? Sorry if the question is too simple; I am a newbie in Java.

binary addition, subtraction and modulus operations - Java Beginners
I want to perform binary addition, subtraction and modulus operations between two numbers of 512 bits... please send me the Java coding for these three.

Java Binary data file - Java Beginners
Hi, I have a binary data file (binfile.data) and the file has what is commonly referred to as variable-length records... in the binary file. I have to do this using Java. I tried to figure out the correct
Java Array Binary Search example
It is a method for searching an array element... This example demonstrates how to do a binary search on a Java array object... using the binary search algorithm. It returns the index of the found element.

Write a program in Java to convert a decimal number into a binary number
Hi, I have a decimal number and I want to convert it to binary. How do I write a program in Java to convert a decimal number into a binary number? Thanks
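The decimal/binary/hex conversions asked about throughout this page all follow one pattern: parse the string in its source base, then format the integer in the target base (in Java, the analogous calls are Integer.parseInt(s, radix), Integer.toBinaryString, and Integer.toHexString). A compact sketch, shown in Python for brevity; the hex_to_binary helper is illustrative, not part of any library:

```python
# Round trips between decimal, binary, and hex using built-in parsing/formatting.
n = 156
assert format(n, "b") == "10011100"   # decimal -> binary string
assert format(n, "x") == "9c"         # decimal -> hex string
assert int("10011100", 2) == n        # binary string -> decimal
assert int("9c", 16) == n             # hex string -> decimal

def hex_to_binary(hex_str):
    """Hex -> binary via each hex digit's fixed 4-bit pattern, preserving
    leading zeros (a plain int round trip would drop them)."""
    return "".join(format(int(d, 16), "04b") for d in hex_str)

print(hex_to_binary("9c"))  # -> 10011100
```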
Java Convert Octal to Binary
In this tutorial, you will learn how to convert octal to binary. Java provides different ways to change the number system of different numbers. You can convert decimal to octal, decimal to binary

Writing to and reading from a binary file in Java
I have written the following code to convert an ASCII text file to a binary file: public static... I read the binary file from another program as follows: m_dis = new DataInputStream
Send me Binary Search - Java Beginners
How do I use binary search in Java? Give me the binary search program. Thx. Hi friend, import java.io....)); } } Read for more information: http://www.roseindia.net/java/

binary tree
Can a binary tree be implemented without comparing? (See: http://www.roseindia.net/ ... binary-tree-code.shtml)

ModuleNotFoundError: No module named 'hex-grid'
Hi, my Python program is throwing the following error: ModuleNotFoundError: No module named 'hex-grid'. How to remove this error?

ModuleNotFoundError: No module named 'hex-ocr'
Hi, my Python program is throwing the following error: ModuleNotFoundError: No module named 'hex-ocr'. How to remove this error?

ModuleNotFoundError: No module named 'hex-utils'
Hi, my Python program is throwing the following error: ModuleNotFoundError: No module named 'hex-utils'. How to remove this error? ... You have to install the pandas library. You can install hex-utils for Python with the following
Reading binary file into byte array in Java
Example code for reading a binary file into a byte array in Java. This example shows you how to read a binary file into a byte array from a Java program. This type... Here is the complete code of the Java program that reads the binary file into a byte array:

how to convert JPG format to Binary format using Java code
How can I convert JPG format to Binary format using Java code? Plz help me out.

Conversion of Decimal to Binary in Java
In this section we will discuss the conversion of Decimal to Binary in Java. You know that a number having... can be converted to binary and written as 1010. Example: Convert 12 into binary.

octal to binary
I want to know how to convert an octal number to a binary number. Here is a Java example that converts octal to binary. import...
binary = Integer.toBinaryString(i); System.out.println(binary);
Java Write To File Binary
In this tutorial you will learn how to write to a binary file. A binary file is a file into which the bit patterns of most data types can be formed into bytes of 8 bits. Write to a binary file in Java.

Binary Search in Java
Binary search in Java is used to search for an element in an array. Programmers opt for binary search over linear search when it comes to large numbers. It can... otherwise the answer is "Not Found". Following is an example of binary search.

Java File Binary
In this section, you will learn how to write numeric data into a binary file. Description of code: numeric data converts more compactly and faster in a binary format than as text. In the given example, at first, we have

Binary tree
a. Construct a method to implement a binary tree using an array. b. Implement the binary tree to store numbers in sorted order.

Binary Search in Java
In this section, we are going to search for an element in an array using binary search. The advantage of a binary search over a linear search is astounding for large numbers. It can be done either recursively
What I want, is to lay A in memory in a way that is efficient, but still easy to work with. Obviously, those two dimensions have to be projected into one. But how? Should it be [1 1 1 2 | 2 3 1 3 | 1 −1 −2 −6], or [1 2 1 | 1 3 −1 | 1 1 −2 | 2 3 −6]? The answer is: it depends on how your algorithm accesses it most often. Neanderthal gives you both options. When you create any kind
of matrix, you can specify whether you want it to be column-oriented (:column, which is the default), or row-oriented (:row). In the following example, we will use CPU matrices from the native
namespace. The same options also work for functions that create GPU CUDA matrices (cuda namespace), or OpenCL's GPU and CPU matrices (opencl namespace).
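The two layouts above are just the row-major and column-major flattenings of the same 3×4 matrix; a small sketch (plain Python for illustration, since Neanderthal itself is a Clojure library):

```python
# Row- vs column-major flattening of the 3x4 matrix from the text.
A = [[1, 1, 1, 2],
     [2, 3, 1, 3],
     [1, -1, -2, -6]]
rows, cols = 3, 4

row_major = [A[i][j] for i in range(rows) for j in range(cols)]
col_major = [A[i][j] for j in range(cols) for i in range(rows)]

# Element (i, j) is found at:
#   row-major:    i * cols + j
#   column-major: j * rows + i
assert row_major[1 * cols + 2] == A[1][2]
assert col_major[2 * rows + 1] == A[1][2]
print(row_major)  # [1, 1, 1, 2, 2, 3, 1, 3, 1, -1, -2, -6]
print(col_major)  # [1, 2, 1, 1, 3, -1, 1, 1, -2, 2, 3, -6]
```

The choice only changes which traversal order touches memory contiguously; the matrix itself is the same.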
|
{"url":"https://clojurians-log.clojureverse.org/uncomplicate/2017-07-01","timestamp":"2024-11-05T20:40:44Z","content_type":"text/html","content_length":"113111","record_id":"<urn:uuid:a0cc39b5-a2c2-4d3d-896d-732c6bcb7f47>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00878.warc.gz"}
|
Hertz Contact Model: Complex Loading Paths
Problem Statement
The project file for this example may be viewed/run in PFC.[1] The data files used are shown at the end of this example.
The Hertz Model provides a nonlinear elastic force-displacement law, with viscous dashpots. The purpose of this example is to analyze the response of a single contact under complex loading paths.
Results for a ball-ball contact and a ball-wall interaction are discussed and compared.
PFC3D Models
Two simple systems are set up. The first system is composed of two identical balls of unit diameter; in the second system, the bottom ball is replaced with a circular wall.
All degrees of freedom are fixed for both bodies in contact, and the top ball is displaced in order to load the contact, with the following loading path (in the global coordinate system):
1. Normal loading for \(t\) = 0.1 [time-unit] (loading displacement along \(\mathbf{Y}\))
2. Rolling for \(t\) = 0.05 [time-unit] (rotation around \(\mathbf{Z}\))
3. Rolling for \(t\) = 0.05 [time-unit] (rotation around \(\mathbf{X}\))
4. Twisting for \(t\) = 0.05 [time-unit] (rotation around \(\mathbf{Y}\))
5. Normal unloading for \(t\) = 0.1 [time-unit] (unloading displacement along \(\mathbf{Y}\))
For both systems, the same loading path is repeated three times, with a different set of properties as follows (refer to the “Hertz Model” section for a detailed description and listing of the properties):
1. \(G\) = 1.0 × 10^9, \(\nu\) = 0.3, and \(\mu\) = 0.5.
2. Same as 1. with \(M_s\) = 1 (activate scaling down of the shear force upon normal unload).
3. Same as 2. with \(\beta_n=\beta_s= 0.5\) and \(M_d\) = 1 (activate viscous damping).
Model Results
Ball-Ball Contact
Two identical balls with unit diameter are submitted to the loading paths detailed above. Fig. 1 shows the state of the system at the end of loading stage 1a (normal loading). In this figure, the
contact is represented twice: as a line perpendicular to the contact plane, and as a disk coplanar with the contact plane.
The histories of the magnitude of the normal and shear forces (both in the springs and the dashpots) monitored during loading 1, 2, and 3 are shown in Figs. 2 to 4, respectively.
The evolution of the normal force in the spring is identical in all three cases: it increases nonlinearly during normal loading stage a), maintains a constant value during rolling and twisting loading stages b), c), and d), then vanishes nonlinearly during normal unloading stage e). Its maximal value, which corresponds to an overlap of 1.0 × 10^-3 [length-unit], agrees with the analytical solution of 3.0117 × 10^4 [force-unit] given by equations (3) and (4) in the Hertz contact model description.
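For readers who want to reproduce that number, the quoted value follows from the Hertz force-displacement law. The sketch below assumes the convention \(F_n = h_n \delta^{3/2}\) with \(h_n = 2G\sqrt{2\bar{R}}/(3(1-\nu))\) and effective radius \(\bar{R} = 2R_1R_2/(R_1+R_2)\); check these definitions against equations (3) and (4) of the Hertz model description.

```python
import math

# Hertz normal force at an overlap of 1.0e-3 [length-unit], assuming
# F_n = hn * delta**1.5 with hn = 2*G*sqrt(2*R_bar) / (3*(1 - nu))
# and R_bar = 2*R1*R2/(R1 + R2).
G, nu = 1.0e9, 0.3
R1 = R2 = 0.5                       # two balls of unit diameter
R_bar = 2.0 * R1 * R2 / (R1 + R2)   # -> 0.5
hn = 2.0 * G * math.sqrt(2.0 * R_bar) / (3.0 * (1.0 - nu))
delta = 1.0e-3                      # overlap [length-unit]
F_n = hn * delta**1.5
print(F_n)  # ~3.0117e4 [force-unit], matching the value quoted above
```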
The evolution of the shear force in the spring is identical for cases 2) and 3), but differs for case 1) due to the different value of \(M_s\). In case 1), the shear force remains null during normal loading stage a), then increases linearly during rolling loading stage b). During the latter, the slope is given by the initial tangent shear stiffness, which is constant at constant overlap (equation (11) in the Hertz contact model description). During rolling stage c), the magnitude of the shear force increases nonlinearly, although the overlap (and thus the initial tangent shear stiffness) does not change. This is because the loading direction changed, and the shear force is now made up of two components (along the \(\mathbf{X}\) and \(\mathbf{Z}\) global axes). The shear force magnitude is not affected by twisting (loading stage d)), because no shear displacement is built up during this stage. However, the direction of the shear force vector accommodates the relative twist between the balls, as shown in Fig. 5, where the absolute value of the \(x\)-component of the shear force decreases linearly, while the absolute value of the \(z\)-component of the shear force increases linearly during the twisting loading stage (mechanical time from 0.2 to 0.25 [time-unit]). Finally, because no shear displacement is built up during the normal unloading stage 1e), the shear force remains constant at the beginning of this stage (Fig. 2, mechanical time from 0.25 to \(\approx 0.28\) [time-unit]). However, as the unloading proceeds, the Coulomb slip criterion is eventually met, and the shear force vanishes with the normal force (mechanical time from \(\approx 0.28\) to 0.35 [time-unit]).
The evolution of the shear force in cases 2) and 3) is similar to that of case 1), except during the normal unloading stage e). For those cases, scaling-down of the shear force upon normal unload is
activated (\(M_s\) = 1). Thus the shear force starts decreasing linearly while the normal force is decreasing, until the Coulomb criterion is met and the shear force vanishes proportionally to the
normal force (Fig. 2, mechanical time from 0.25 to 0.35 [time-unit]). Because the shear force magnitude at the beginning of slip is therefore lower in cases 2) and 3) than in case 1), the energy
dissipated during sliding will also be lower. This can be seen by comparing Figs. 6, 7, and 8, which show the evolution of the contact energies (strain energy and slip and dashpot work) during
loading cases 1), 2), and 3), respectively. Note that viscous damping is activated only for loading case 3), which is the only case where the dashpot forces (Fig. 4) and dashpot work (Fig. 6) are nonzero.
Ball-Wall Interaction
Fig. 9 shows the state of the system at the end of loading stage 1a (normal loading), for the ball-wall interaction case. In this case, multiple contacts exist between the ball and the facets that
make up the wall. However, since full contact resolution mode is active (default setting), only one contact is deemed active at a time, although more than one contact may exhibit a positive overlap
(refer to the description of the faceted wall logic for further details).
The histories of the magnitude of the normal and shear forces (both in the springs and the dashpots) monitored during loading 1, 2, and 3 are shown in Figs. 10 to 12 respectively, and the histories
of the contact energies are shown in Figs. 13 to 15.
All quantities evolve qualitatively similarly to the ball-ball contact case. However, they are quantitatively different, although the contact shear modulus \(G\) and Poisson ratio \(\nu\) are identical, because the contact effective radius is different (equation (6) in the Hertz contact model description).
In this example, a ball-ball contact and a ball-wall interaction with the Hertz contact model are exercised under complex loading paths. The evolution of the forces and energies is discussed and compared between the two configurations.
Data Files
cmhertz-setup.dat (3D)
model large-strain on
; fname: cmhertz-setup.p3dat
; Setup shared parameters, loading path and history monitoring
; =============================================================================
model domain extent -2 2 -2 2 condition periodic
contact cmat default model hertz property hz_shear 1e9 hz_poiss 0.3 fric 0.5
model mechanical timestep fix 1e-3
fish define monitor
modeltime = mech.time.total
cnforce = 0
cnvforce = 0
csforce = 0
csvforce = 0
loop foreach local cp contact.list
hz_f_loc = contact.prop(cp,'hz_force')
cnforce = comp.x(hz_f_loc)
hz_fs_loc = hz_f_loc
comp.x(hz_fs_loc) = 0.0
csforce = math.mag(hz_fs_loc)
hz_fs_glob = contact.to.global(cp,hz_fs_loc)
cxsforce = comp.x(hz_fs_glob)
cysforce = comp.y(hz_fs_glob)
czsforce = comp.z(hz_fs_glob)
cnvforce = comp.x(contact.prop(cp,'dp_force'))
csvforce = comp.y(contact.prop(cp,'dp_force'))
endloop
end
fish callback add monitor 50.0
history interval 1
model history name '1' mechanical time-total
model history name '2' timestep
fish history name '100' cnforce
fish history name '101' cnvforce
fish history name '102' cxsforce
fish history name '103' cysforce
fish history name '104' czsforce
fish history name '105' csforce
fish history name '106' csvforce
model energy mechanical on
model history name '1000' mechanical energy energy-strain
model history name '1001' mechanical energy energy-slip
model history name '1002' mechanical energy energy-dashpot
fish define loading_path(root,n)
local fnamea = string.build('%1%2d_load%3a',root,global.dim,n)
local fnameb = string.build('%1%2d_load%3b',root,global.dim,n)
local fnamec = string.build('%1%2d_load%3c',root,global.dim,n)
local fnamed = string.build('%1%2d_load%3d',root,global.dim,n)
local fnamee = string.build('%1%2d_load%3e',root,global.dim,n)
ball attribute velocity-y -0.01 range id 1
model solve time 1e-1
model save [fnamea]
ball attribute velocity-y 0 spin-z 0.00628 range id 1
model solve time 5e-2
model save [fnameb]
ball attribute spin multiply 0.0 spin-x 0.00628 range id 1
model solve time 5e-2
model save [fnamec]
ball attribute spin multiply 0.0 spin-y 6.28 range id 1
model solve time 5e-2
model save [fnamed]
ball attribute spin multiply 0.0 velocity-y 0.01 range id 1
model solve time 1e-1
model save [fnamee]
end
program return
; =============================================================================
; eof: cmhertz-setup.p3dat
cmhertzbb.dat (3D)
; fname: cmhertzbb.p3dat
; Exercise the Hertz contact model under multiple loading paths
; =============================================================================
model new
model title 'Exercising the Hertz contact model (ball-ball contact)'
program call 'cmhertz-setup' suppress
ball create id=2 position-x=0.0 position-y=0.0 radius=0.5
ball create id=1 position-x=0.0 position-y=1.0 radius=0.5
ball attribute density 2500.0
ball fix velocity-x velocity-y velocity-z spin-x spin-y spin-z
model clean
model save 'hzbb3d_ini'
; load 1 :
; load 2 : hz_mode = 1
model restore 'hzbb3d_ini'
contact property hz_mode 1
; load 3 : hz_mode = 1 - normal and shear viscous damping
model restore 'hzbb3d_ini'
contact property hz_mode 1 ...
dp_nratio 0.5 ...
dp_sratio 0.5 ...
dp_mode 3
program return
; =============================================================================
;eof: cmhertzbb.p3dat
cmhertzbw.dat (3D)
; fname: cmhertzbw.p3dat
; Exercise the Hertz contact model under multiple loading paths
; =============================================================================
model new
model title 'Exercising the Hertz contact model (ball-wall interaction)'
program call 'cmhertz-setup' suppress
wall generate id=1 disk radius=0.5 position 0.0 0.5 0.0 dip 90 resolution 0.1
ball create id=1 position-x=0.0 position-y=1.0 radius=0.5
ball attribute density 2500.0
ball fix velocity-x velocity-y velocity-z spin-x spin-y spin-z
model clean
model save 'hzbw3d_ini'
; load 1 :
; load 2 : hz_mode = 1
model restore 'hzbw3d_ini'
contact property hz_mode 1
; load 3 : hz_mode = 1 - normal and shear viscous damping
model restore 'hzbw3d_ini'
contact property hz_mode 1 ...
dp_nratio 0.5 ...
dp_sratio 0.5 ...
dp_mode 3
program return
; =============================================================================
;eof: cmhertzbw.p3dat
Itasca Software © 2024, Itasca. Updated: Oct 31, 2024
|
{"url":"https://docs.itascacg.com/itasca900/pfc/docproject/source/manual/examples/verification/hertz_contact_model/cmhertz_single.html","timestamp":"2024-11-13T19:39:57Z","content_type":"application/xhtml+xml","content_length":"47515","record_id":"<urn:uuid:e232e482-889f-4506-af10-4b3361a894a9>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00046.warc.gz"}
|
Geometrically aware communication in random wireless networks
Some of the first routing algorithms for position-aware wireless networks used the Delaunay triangulation of the point-locations of the network's nodes as the underlying connectivity graph. Later on
these solutions were considered impractical because the Delaunay triangulation may in general contain arbitrarily long edges and because calculating the Delaunay triangulation may require a global
view of the network. Many other algorithms were then suggested for geometric routing, often assuming random placement of network nodes for analysis or simulation [27, 5, 28, 15]. But as we show, when
the nodes are uniformly placed in the unit disk the Delaunay triangulation does not contain long edges, it is easy to compute locally and it is in many ways optimal for geometric routing and
flooding. In particular, we prove that with high probability the maximal length of an edge in Del(P), the Delaunay triangulation of a set P of n nodes uniformly placed in the unit disk, is O(∛(log n / n)), and that the expected sum of squares of all the edges in Del(P) is O(1). These geometric results imply that for wireless networks, randomly distributed in a unit disk (1) computing the Delaunay
triangulation locally is asymptotically easy; (2) simple "face routing" through the Delaunay triangulation optimizes, up to poly-logarithmic factors, the energy load on the nodes, and (3) flooding
the network, an operation quite common in sensor nets, is with high probability optimal up to a constant factor. The last property is particularly important for geocasting because the Delaunay
triangulation is known to be a spanner.
Conference Proceedings of the 23rd Annual ACM Symposium on Principles of Distributed Computing
Country/Territory Canada
City St. John's, Nfld.
Period 25/07/04 → 28/07/04
• Delaunay triangulation
• Geometric flooding
• Geometric routing
|
{"url":"https://cris.biu.ac.il/en/publications/geometrically-aware-communication-in-random-wireless-networks-2","timestamp":"2024-11-02T09:34:34Z","content_type":"text/html","content_length":"58413","record_id":"<urn:uuid:32f8339b-2fd8-4446-8ae8-8d9de35930e9>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00322.warc.gz"}
|
Applications of General Relativity in Modern Physics and Astrophysics
Author: admintanbourit
General Relativity, a fundamental theory of Einstein’s, has revolutionized the field of modern physics and astrophysics since its development in the early 20th century. It is a cornerstone theory
that describes gravity as the curvature of spacetime caused by the presence of matter and energy. While its impact on our understanding of the universe is vast, in this article, we will focus on the
applications of General Relativity in modern physics and astrophysics.
One of the most critical applications of General Relativity is its role in explaining the motion of celestial bodies and the structure of the universe. The theory predicts that massive objects such
as planets, stars, and galaxies exert a gravitational pull on the fabric of spacetime. This curvature in spacetime then determines the paths of other celestial bodies around them. For example, the
gravitational pull of the Sun keeps the planets of our solar system in orbit. This application of General Relativity is fundamental in understanding the motions and trajectories of objects in space.
Another crucial application of General Relativity is in the study of black holes, one of the most fascinating objects predicted by the theory. A black hole is created when a massive star collapses
under its gravity, creating a region of space with an incredibly strong gravitational pull. According to General Relativity, the curvature of spacetime near a black hole is so extreme that nothing,
including light, can escape its grasp. This concept has been supported by numerous observations, such as the detection of gravitational waves from merging black holes.
General Relativity also plays a significant role in our understanding of the birth and evolution of the universe. The theory provides the framework for an expanding universe, first observed by Edwin Hubble, who found that galaxies are receding from us at speeds proportional to their distance. Later observations showed that this expansion is accelerating, an effect attributed to dark energy, a mysterious component that counteracts gravity and is believed to make up about 70% of the universe. Understanding the properties of dark energy is one of the most significant challenges in modern physics, and General Relativity is a crucial tool in studying it.
Apart from its role in astrophysics, General Relativity has also found numerous applications in modern technology. The Global Positioning System (GPS), a system that provides precise location and time information, relies on Einstein’s theory. The atomic clocks aboard GPS satellites run faster than clocks on Earth because they sit higher in Earth's gravitational potential, an effect that outweighs the special-relativistic slowing caused by their orbital speed. Without accounting for these relativistic effects, the GPS system would give erroneous results.
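The size of the effect is easy to estimate with textbook numbers (the constants below are rounded illustrative values, not official GPS specifications):

```python
import math

# Rough GPS relativistic clock-rate estimate: gravitational blueshift
# minus velocity time dilation, in seconds gained per day by the
# satellite clock relative to a ground clock.
GM = 3.986004e14        # Earth's gravitational parameter [m^3/s^2]
c = 2.99792458e8        # speed of light [m/s]
R_earth = 6.371e6       # mean Earth radius [m]
r_orbit = 2.6571e7      # GPS orbital radius [m] (~20,200 km altitude)

grav = GM / c**2 * (1.0 / R_earth - 1.0 / r_orbit)  # clock runs faster
vel = GM / r_orbit / (2.0 * c**2)                   # clock runs slower
net_per_day = (grav - vel) * 86400.0
print(net_per_day * 1e6)  # roughly 38 microseconds gained per day
```

Left uncorrected, a drift of tens of microseconds per day would translate into kilometers of positioning error, since light travels about 300 m per microsecond.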
Moreover, General Relativity has also played a critical role in the development of gravitational lensing, a phenomenon that is used to study distant objects in space. The theory predicts that massive
objects can bend light, much like a lens, thus magnifying and distorting the light from objects behind them. This has allowed scientists to observe and study distant galaxies that would have been
impossible to see otherwise.
In modern physics, General Relativity has also spurred the development of theories such as Quantum Gravity and String Theory, which aim to unify the fundamental forces of the universe. Although not yet proven, these theories have the potential to unlock a deeper understanding of the laws of the universe.
In conclusion, General Relativity has far-reaching applications in modern physics and astrophysics. Its role in explaining the motion of celestial bodies, the structure of the universe, and the
existence of black holes has been supported by numerous observations. Its impact goes beyond theoretical concepts and has practical applications in modern technology, such as the GPS system. As we
continue to unravel the mysteries of the universe, General Relativity remains an essential cornerstone theory that continues to shape our understanding of the world around us.
|
{"url":"https://tanbourit.com/applications-of-general-relativity-in-modern-physics-and-astrophysics/","timestamp":"2024-11-11T13:42:51Z","content_type":"text/html","content_length":"113886","record_id":"<urn:uuid:94ed8d95-8ff6-4ff7-a033-890ed99bb1c1>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00428.warc.gz"}
|
C : Roots of Bhaskaras formula based on 3 floating numbers
C Exercises: Print the roots of Bhaskara’s formula from the given three floating numbers
C Basic Declarations and Expressions: Exercise-20 with Solution
Write a C program to print the roots of Bhaskara’s formula from the given three floating numbers. Display a message if it is not possible to find the roots.
Pictorial Presentation:
C Code:
#include <stdio.h>
#include <math.h> // Include math.h header file for mathematical functions

int main() {
    double a, b, c, pr1; // Declare variables for coefficients and discriminant

    // Prompt user for coefficients 'a', 'b', and 'c'
    printf("\nInput the first number(a): ");
    scanf("%lf", &a);
    printf("\nInput the second number(b): ");
    scanf("%lf", &b);
    printf("\nInput the third number(c): ");
    scanf("%lf", &c);

    pr1 = (b*b) - (4*a*c); // Calculate discriminant

    if(pr1 > 0 && a != 0) { // Check conditions for real roots
        double x, y;
        pr1 = sqrt(pr1); // Calculate square root of discriminant
        x = (-b + pr1)/(2*a); // Calculate first root
        y = (-b - pr1)/(2*a); // Calculate second root
        printf("Root1 = %.5lf\n", x); // Print first root
        printf("Root2 = %.5lf\n", y); // Print second root
    }
    else {
        printf("\nImpossible to find the roots.\n"); // Print message for no real roots
    }
    return 0;
}
Sample Output:
Input the first number(a): 25
Input the second number(b): 35
Input the third number(c): 12
Root1 = -0.60000
Root2 = -0.80000
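As a quick cross-check of the sample run, the same discriminant arithmetic can be replayed in a few lines (Python used here purely for verification):

```python
import math

# Cross-check of the sample run above: a=25, b=35, c=12.
a, b, c = 25.0, 35.0, 12.0
disc = b*b - 4*a*c                  # 1225 - 1200 = 25
r1 = (-b + math.sqrt(disc)) / (2*a)
r2 = (-b - math.sqrt(disc)) / (2*a)
print(r1, r2)  # -0.6 -0.8, matching Root1 and Root2 above
```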
C Programming Code Editor:
Previous: Write a C program that accepts 4 integers p, q, r, s from the user where q, r and s are positive and p is even. If q is greater than r and s is greater than p and if the sum of r and s is
greater than the sum of p and q print "Correct values", otherwise print "Wrong values".
Next: Write a C program that reads an integer and check the specified range where it belongs. Print an error message if the number is negative and greater than 80.
Specified Range: [0, 20], [21, 40], [41, 60], [61, 80]
|
{"url":"https://www.w3resource.com/c-programming-exercises/basic-declarations-and-expressions/c-programming-basic-exercises-20.php","timestamp":"2024-11-09T22:27:06Z","content_type":"text/html","content_length":"138935","record_id":"<urn:uuid:13060666-3732-475e-b16e-735a7e30a500>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00269.warc.gz"}
|
[Solved] I think the Power Distance measure in Hof | SolutionInn
I think the Power Distance measure in Hofstede's model (Hofstede Insights, n.d.) is particularly interesting. I led divisions in the U.S., New Zealand, and Thailand. Those three countries represented
a broad range of power distance scores:
New Zealand: 22
U.S.: 40
Thailand: 64 (Hofstede Insights, n.d.)
My first assignment overseas was to New Zealand. I was told that I was the expert on papermaking operations and that the New Zealand staff needed me to whip them into shape. I listened to that
advice, unfortunately, and began my time in New Zealand, throwing my weight around and acting like the big boss. Big mistake, more like! With a Power Distance of 22, workers in New Zealand expect to
collaborate with managers on an equal footing. Only after I gained some metaphorical bruises did I sort that out.
Imagine then the adjustment I had to make when I joined another company and shipped off to Thailand. There the Power Distance is very high, especially after adjusting to leading in New Zealand. In
Thailand, whatever I said was taken as perfectly correct and the absolutely right thing to do. I had to learn not to state my opinion until I had received detailed written analyses from my staff.
(Written because in person my staff would attempt to read my face and body language and tell me what they thought I needed to know).
If you were to be assigned to lead a team in New Zealand or Thailand, what adjustments would you need to make? Provide reference.
There are 3 Steps involved in it
Step: 1
New Zealand Power Distance 22 In New Zealand with a low Power Distance score there are several adjustments you should consider making 1 Equal Footing ...
|
{"url":"https://www.solutioninn.com/study-help/questions/i-think-the-power-distance-measure-in-hofstedes-model-hofstede-934906","timestamp":"2024-11-10T04:32:33Z","content_type":"text/html","content_length":"114963","record_id":"<urn:uuid:d18fb869-75e7-4a18-a120-66271b647079>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00595.warc.gz"}
|
Saunders MacLane
Saunders MacLane (born 4 August 1909) is a US mathematician. He is one of the founders of category theory.
MacLane was born in Taftville, Connecticut. He studied at Yale University from 1926 to 1930 before going to the University of Chicago on a one-year fellowship. He was in Göttingen 1931-1933, returning to a one-year position at Yale. He then became an instructor at Harvard University, followed by a one-year post at Cornell in 1936-1937. He was again at Chicago in 1937-1938, and took a position at Harvard University as assistant professor in 1938. In 1941 he published A Survey of Modern Algebra, written with Garrett Birkhoff.
After working in applied mathematics during the war years, he was professor at Chicago from 1947.
His first works were in field theory and valuation theory. He wrote on valuation rings and Witt vectors, and separability in infinite field extensions.
He started writing on group extensions in 1942, and collaborated with Samuel Eilenberg from 1943, on what are now called Eilenberg-MacLane spaces K(π,n), having a single non-trivial homotopy group
(π, in dimension n). This work opened the way to group cohomology in general.
After the introduction via the Eilenberg-Steenrod axioms of the abstract approach to homology theory, he became one of the developers of category theory. He is known particularly for his work on
coherence theorems, and his textbooks.
|
{"url":"http://www.fact-index.com/s/sa/saunders_maclane.html","timestamp":"2024-11-15T03:29:45Z","content_type":"text/html","content_length":"5587","record_id":"<urn:uuid:81dc5d3a-4d95-4f8b-b317-70161d37ea9a>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00046.warc.gz"}
|
Applied Math
The Applied Math pages contain textbook examples and code covered by Dr. Nguyen in two courses: Mathematical Modeling and Numerical Analysis. Some of her projects with undergraduate students are also
presented here as an illustration of how different techniques in Math can be applied to solve real-world problems. The links to several useful tutorials are also available for reference. For any
questions, please feel free to contact Dr. Nguyen (hnguyen5@trinity.edu).
|
{"url":"http://tutorial.math.trinity.edu/content/applied-math","timestamp":"2024-11-10T21:59:28Z","content_type":"application/xhtml+xml","content_length":"10488","record_id":"<urn:uuid:c1c874da-f8fe-4ed5-ba97-de911b5572c9>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00465.warc.gz"}
|
Two Planes Intersect To Form A
3 Intersecting Planes (example 1) GeoGebra
Two Planes Intersect To Form A. When two planes intersect, the intersection is a line. When two planes intersect, a line in space is the result.
This is a logical extension of the idea. When two planes intersect, a line in space is the result. The equation of a plane is ax + by + cz + d = 0, where (a, b, c) is the plane's normal, and d is the distance to the origin. The line of intersection between two planes: n̂_1 · r = h_1. Two planes can intersect each other (unless, of course, they are parallel). Let's consider an example of finding the line of intersection between two planes. Write the vector and scalar equations of a plane through a given point with a given normal. Two planes specified in Hessian normal form are parallel iff |n̂_1 · n̂_2| = 1, or equivalently n̂_1 × n̂_2 = 0 (Gellert et al.). To find the line of intersection of two planes we calculate the vector product (cross product) of the two planes' normals.
The intersection of two planes forms a? Click here to see all problems on triangles. In three dimensions (which we are implicitly working with here), what is the intersection of two planes? This means that every point (x, y, z) that... This gives us the direction vector of the line. Typically two planes intersect along some line. To find the line of intersection of two planes we calculate the vector product (cross product) of the two planes' normals: x − 4y + 3z − 4 = 0, 2x + 2y − ... Formulation [edit]: the line of intersection between two planes π_1: Let's consider an example of finding the line of intersection between two planes. The equation of the plane is ax + by + cz + d = 0, where (a, b, c) is the plane's normal, and d is the distance to the origin.
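That cross-product recipe can be sketched in a few lines. The second plane in the snippet above is truncated, so the coefficients used for it below are purely hypothetical; the first plane is taken from the text.

```python
def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

n1 = (1, -4, 3)    # normal of x - 4y + 3z - 4 = 0 (from the text)
n2 = (2, 2, -1)    # hypothetical second plane 2x + 2y - z + 1 = 0
d = cross(n1, n2)  # direction vector of the line of intersection
print(d)           # (-2, 7, 10)

# Sanity check: the direction is perpendicular to both normals,
# so it lies in both planes.
assert sum(a*b for a, b in zip(n1, d)) == 0
assert sum(a*b for a, b in zip(n2, d)) == 0
```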
|
{"url":"https://form.uame.edu.mx/two-planes-intersect-to-form-a.html","timestamp":"2024-11-15T04:59:15Z","content_type":"text/html","content_length":"20800","record_id":"<urn:uuid:a517c88f-b58f-4a2d-b561-f6f153484edf>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00294.warc.gz"}
|
On a certain map, the scale is 1 inch=10 miles. Aurora Springs and gandale are 3 inches apart on the map. What is the actual distance between Aurora Springs and Glendale?
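Since the scale is a direct proportion, the arithmetic is one multiplication; a minimal sketch:

```python
# Scale: 1 inch = 10 miles; the towns are 3 inches apart on the map.
miles_per_inch = 10
map_inches = 3
actual_miles = map_inches * miles_per_inch
print(actual_miles)  # 30
```

So the actual distance between Aurora Springs and Glendale is 30 miles.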
|
{"url":"https://cpep.org/mathematics/531633-on-a-certain-map-the-scale-is-1-inch10-miles-aurora-springs-and-gandal.html","timestamp":"2024-11-12T02:07:06Z","content_type":"text/html","content_length":"23189","record_id":"<urn:uuid:20184fc9-27ee-4bae-a790-3176403a1e5a>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00137.warc.gz"}
|
Quantitative Archives - Page 4 of 4 - GRE and Grad School Admissions Blog
GRE word problems sometimes use “real-life settings,” says ETS, to test your quantitative problem solving skills. Talk of salary ranges, fabric purchases, population densities, or similar topics will
prompt you to do some algebra or other standard GRE math. Figuring out the math can be tough, given that word problems can be a bit convoluted. […]
|
{"url":"https://blog.powerscore.com/gre/category/quantitative/page/4/","timestamp":"2024-11-14T17:23:09Z","content_type":"text/html","content_length":"34301","record_id":"<urn:uuid:d6eb8363-20fc-45d9-8f23-78557d74f8f1>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00144.warc.gz"}
|
Physics Mcqs For Class 11 with Answers PDF Download Chapter No 02
Top 11th Class Physics Subject Mcqs Pdf Download
Chapter 2
Vectors and Equilibrium
Encircle the most appropriate answer among the following options
1. A scalar quantity can be completely described by:
(a) Magnitude
(b) Unit
(c) Magnitude and unit
(d) Number
2. A vector quantity can be described by magnitude, unit and choose which one is correct:
(a) Direction
(b) Rotation
(c) Dimension
(d) Unit vector
3. Choose one Which among of the following is a vector quantity choose which one is correct:
(a) Energy
(b) Power
(c) Work
(d) Momentum
4. Choose Which among of the following is a scalar quantity choose which one is correct:
(a) Mass
(b) Displacement
(c) Force
(d) Torque
`5. Two lines are drawn at right angle to each other are known as choose which one is correct:
(a) Coordinate axis
(b) xy-axis
(c) Components
(d) Cartesian axis
6. A vector which gives the direction of a given vector is called choose which one is correct:
(a) Unit vector
(b) Position vector
(c) Null vector
(d) Negative vector
7. Choose one among the following When a vector is divided by its magnitude we get choose which one is correct:
(a) Null vector
(b) Unit vector
(c) Zero vector
(d) Position vector
8. Pick out the scalar quantity among the following choose which one is correct:
(a) Force
(b) Torque
(c) Time
(d) Velocity
9. Pick out the vector quantity among the following choose which one is correct:
(a) Power
(b) Energy
(c) Force
(d) Mass
10. The magnitude of a null vector is choose which one is correct:
(a) One
(b) Zero
(c) Double
(d) Negative
Il. Null vector is a vector having zero magnitude and choose which one is correct:
(a) Arbitrary direction
(b) No direction
(c) Specific direction
(d) Opposite direction
12. The unit vector of a vector A describes the:
(a) Direction of the given vector
(b) Magnitude of the given vector
(c) Shape of the given vector
(d) All of the above
13. The unit vector of a vector is determined:
(a) By multiplying the vector by its own magnitude
(b) By dividing the vector by its own magnitude
(c) Both (a) and (b)
(d) None of these
14. A unit vector is used to specify the:
(a) Direction of a vector
(b) Position of a vector
(c) Magnitude of a vector
(d) Dimension of a vector
15. An example of a scalar quantity is:
(a) Displacement
(b) Acceleration
(c) Force
(d) Speed
16. An example of a vector quantity is:
(a) Speed
(b) Work
(c) Acceleration
(d) Mass
17. A vector which has magnitude one is called a:
(a) Null vector
(b) Unit vector
(c) Resultant vector
(d) Position vector
18. A vector which has zero magnitude is called a:
(a) Null vector
(b) Unit vector
(c) Resultant vector
(d) Position vector
19. The sum of two or more vectors is equal to a single vector which is called the:
(a) Component of a vector
(b) Product vector
(c) Null vector
(d) Resultant vector
20. The resultant of two vectors of magnitudes 24 and 7 is 25. The angle between them is:
(a) 90°
(b) 180°
(c) 360°
(d) 270°
21. When a vector A is multiplied by a negative number, its direction:
(a) Remains the same
(b) Is changed by 180°
(c) Does not change
(d) None of these
22. When a vector A is multiplied by a positive number, its direction:
(a) Remains the same
(b) Is changed by 180°
(c) Does not change
(d) None of these
23. The splitting up of a vector into its components is called:
(a) Sum of vectors
(b) Subtraction of a vector
(c) Resolution of a vector
(d) None of these
24. The angle between two rectangular components is:
(a) 60°
(b) 90°
(c) 180°
(d) 270°
25. The resultant of two anti-parallel vectors A and B is:
(a) A + B
(b) A – B
(c) Zero
(d) None of these
26. Two vectors having the same magnitude and direction are called:
(a) Equal vectors
(b) Unequal vectors
(c) Null vectors
(d) None of these
27. The sum of two equal and opposite vectors is a vector called a:
(a) Equal vector
(b) Null vector
(c) Position vector
(d) Unit vector
28. The resultant of three vectors whose magnitudes are 3 units east, 12 units north and 4 units vertically upwards is:
(b) 13
(d) 19
29. What is the resultant of 3 N and 4 N forces acting at right angles to each other?
(a) 90 N
(b) 5 N
(c) 7 N
(d) 1 N
30. If a force of 10 N makes an angle of 30° with the x-axis, its x-component is given by:
(a) 86.6 N
(b) 0.866 N
(c) 8.66 N
(d) None of these
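Questions 29 and 30 above can be checked with a couple of lines of arithmetic; this snippet is ours, not part of the quiz:

```python
from math import cos, radians, hypot

# Q29: resultant of perpendicular 3 N and 4 N forces
R = hypot(3, 4)             # 5.0 N

# Q30: x-component of a 10 N force at 30° to the x-axis
Fx = 10 * cos(radians(30))  # about 8.66 N
print(R, round(Fx, 2))      # → 5.0 8.66
```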
31. Two forces of 10 N and 7 N respectively are acting on an object. The minimum value of their resultant is:
(a) 0 N
(b) 10 N
(c) 7 N
(d) 3 N
32. Two forces act together on a body; the magnitude of their resultant is greatest when the angle between the forces is:
(a) 45°
(b) 60°
(c) 0°
(d) 180°
33. The position vector in the xy-plane is written as:
(a) r = xî + yĵ
(b) r = yĵ + zk̂
(c) r = xî + zk̂
(d) None of these
34. The position vector in the xz-plane is written as:
(a) r = xî + yĵ
(b) r = yĵ + zk̂
(c) r = xî + zk̂
(d) None of these
35. The position vector in the yz-plane is given by:
(a) r = xî + zk̂
(b) r = xî + yĵ
(c) r = yĵ + zk̂
(d) r = xî + yĵ + zk̂
36. If a force of 50 N is acting along the x-axis, then its component along the y-axis will be:
(a) The same
(b) Zero
(c) Half the magnitude
(d) None of these
37. A force of 10 N is acting along the z-axis; its components along the x-axis and y-axis are:
(a) 5 N, 8 N
(c) 5 N each
(d) Zero
38. If two vectors of magnitudes F₁ and F₂ act on a body at an angle θ, the magnitude of their resultant is:
39. The magnitude of a vector A = A_xî + A_yĵ + A_zk̂ is given by:
(a) A_x + A_y + A_z
(b) A_x cos
(d) None of these
40. If a vector A makes an angle θ with the x-axis, the magnitude of its x-component is:
(a) A_y = A sin θ
(b) A_x = A cos θ
(c) Both (a) and (b)
(d) None of these
41. If a vector A makes an angle θ with the x-axis, the magnitude of its y-component is:
(a) A_y = A sin θ
(b) A_x = A cos θ
(c) Both (a) and (b)
(d) None of these
42. The reverse process of vector addition is called:
(a) Subtraction of a vector
(b) Addition of a vector
(c) Negative of a vector
(d) Resolution of a vector
43. The expression r = a + b is for a:
(a) Unit vector
(b) Position vector
(c) Null vector
(d) Negative vector
44. The direction of a resultant vector R is given by:
(a) θ = tan⁻¹(R_y/R_x)
(b) θ = tan⁻¹(R_x/R_y)
(c) θ = sin⁻¹(R_y/R_x)
(d) None of these
45. If both the components of a vector are negative, then the vector is in the:
(a) 1st quadrant
(b) 2nd quadrant
(c) 3rd quadrant
(d) 4th quadrant
46. The scalar product is also known as the:
(a) Vector product
(b) Dot product
(c) Vector sum
(d) Scalar sum
47. The scalar product of A and B is given by:
(a) A × B
(b) A · B
(c) A – B
(d) AB
48. The projection of vector A on B is given by:
(c) AB cos θ
49. The self scalar product of A is given by:
(b) A^3
(c) A^2
(d) A
50. If A and B are anti-parallel, then their scalar product is:
(a) AB cos θ
(b) -AB
(c) -AB cos θ
(d) Zero
51. The scalar product of two similar unit vectors is:
(a) One
(b) Zero
(c) Twice
(d) Negative
52. If A · B = B · A, this is called the:
(a) Commutative law
(b) Associative law
(c) Distributive law
(d) None of these
53. If the multiplication of two vectors results in a vector quantity, then the product is called the:
(a) Dot product
(b) Vector product
(c) Scalar product
(d) None of these
54. If the multiplication of two vectors results in a scalar quantity, then the product is called the:
(a) Vector product
(b) Cross product
(c) Scalar product
(d) None of these
55. If A × B points along the positive z-axis, then the vectors A and B will lie in the:
(a) zx-plane
(b) xy-plane
(c) yz-plane
(d) None of these
56. If two vectors A and B are non-parallel vectors, then the direction of A × B is along:
(a) The y-axis
(b) The z-axis
(c) The x-axis
(d) None of these
57. Select the correct answer:
(a) . =
(b) . = 0
(c) . =
(d) . = 1
58. Select the correct one:
(a) A · B = – B · A
(b) A · B = ½ B · A
(c) A · B = B · A
(d) None of these
59. Which of the following unit vectors represents the direction of the normal drawn on a specific surface?
60. If A = 2î + 4ĵ + 5k̂ and B = 2î + 2ĵ + k̂, what will be the value of A · B?
(a) 9
(b) -9
(c) 5
(d) 10
61. If A = 2î + 4ĵ + 5k̂, then what will be the value of:
(d) None of these
62. The scalar product of two vectors is zero when:
(a) They are equal vectors
(b) They are in the same direction
(c) They are at right angles
(d) None of these
63. If the vectors A and B are parallel to each other, then:
(a) A · B = AB
(b) A · B ≠ AB
(c) A · B = 0
(d) A · B = AB cos θ
64. If A = A_xî + A_yĵ + A_zk̂ and B = B_xî + B_yĵ + B_zk̂, then the value of A · B is:
(a) A_xB_x + A_yB_y + A_zB_z
(b) A_xB_x + A_yB_z + A_zB_z
(d) None of these
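The component formula in question 64, and the self cross product asked about further below, can be verified numerically. The helper functions and sample vectors here are ours:

```python
def dot(a, b):
    """Scalar (dot) product: sum of componentwise products."""
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    """Vector (cross) product of two 3-component vectors."""
    ax, ay, az = a
    bx, by, bz = b
    return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

A, B = (2, 4, 5), (2, 2, 1)
print(dot(A, B))    # → 17
print(cross(A, A))  # → (0, 0, 0): the self cross product is the null vector
```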
65. The scalar product of two vectors will be negative if:
(a) They are at right angles to each other
(b) They are parallel
(c) They are anti-parallel
(d) None of these
66. The dot product î · î = ĵ · ĵ = k̂ · k̂ is equal to:
(a) 0
(b) 1
(c) -1
(d) 2
67. The dot product î · ĵ = ĵ · k̂ = k̂ · î is equal to:
(a) 0
(b) 1
(c) -1
(d) 2
68. The vector product of two vectors A and B is given by:
(a) AB sin θ
(b) AB sin θ
(c) AB sin θ
(d) None
69. The vector product does not obey the:
(a) Commutative law
(b) Associative law
(c) Distributive law
(d) None of these
70. The direction of the vector product is:
(a) Parallel to the plane
(b) Perpendicular to the plane
(c) Anti-parallel
(d) Along the plane
71. The self cross-product of a vector is equal to:
(a) Zero
(b) One
(c) Double
(d) Negative
72. The cross product of unit vectors î × î = ĵ × ĵ = k̂ × k̂ is:
(a) One
(d) Zero
73. If A × B = 0, then the angle between the vectors is:
(a) 60°
(b) 90°
(c) 270°
(d) 180°
74. The magnitude of A × B is equal to the area of a:
(a) Triangle
(b) Circle
(c) Parallelogram
(d) Rectangle
75. The cross product of two vectors will be negative when:
(a) They are anti-parallel
(b) They are parallel
(c) They are rotated through an angle of 270°
(d) None of these
76. The cross product of two parallel vectors A and B is equal to:
(a) AB sin θ
(b) AB sin θ
(c) AB
(d) Zero
77. Select the correct one:
(a) A · B = A · B
(b) A × B  A × B
(c) A × B = A × B
(d) None of these
78. The cross product î × ĵ is equal to:
(c) 0
79. The cross product of x is equal to:
(c) 0
80. Select the correct one:
(a) x =
(b) x =
(c) x =
(d) x =
81. The turning effect of a force is called its moment or:
(a) Momentum
(b) Inertia
(c) Torque
(d) Impulse
82. The perpendicular distance from the line of action of the force to the pivot is called the:
(a) Displacement
(b) Momentum
(c) Moment distance
(d) Moment arm
83. The SI unit of torque is:
(a) N·m²
(b) N·m
(c) N/m²
(d) N²·m
84. The expression for torque is given by:
(a) rF cos θ
(b) rF sin θ
(c) rF sin θ
(d) rF cos θ
85. Torque acting on a body determines its:
(a) Velocity
(b) Momentum
(c) Force
(d) Angular momentum
86. When the line of action of the applied force passes through the pivot point, the torque will be:
(a) Maximum
(b) Constant
(c) Negative
(d) Zero
87. The direction of the torque τ = r × F is determined by the:
(a) Head-to-tail rule
(b) Right-hand rule
(c) Left-hand rule
(d) None of these
88. Conventionally, anti-clockwise torque is taken as:
(a) Zero
(b) Negative
(c) Positive
(d) None of these
89. Conventionally, clockwise torque is taken as:
(a) Zero
(b) Negative
(c) Positive
(d) None of these
90. Torque is also called the:
(a) Moment of inertia
(b) Moment arm
(c) Moment of force
(d) Angular velocity
91. The dimensions of torque are:
(a) [ML²T⁻²]
(b) [MLT⁻¹]
(c) [MOT]
(d) [M²LT⁻²]
92. Torque = ________ × Force:
(a) Velocity
(b) Momentum
(c) Arm of the weight
(d) Moment arm
93. If torque τ = r × F, then the direction of the torque is:
(a) In the direction of F
(b) In the direction of r
(c) Normal to the plane
(d) None of these
94. Two equal and opposite forces acting on a body form a:
(a) Momentum
(b) Torque
(c) Couple
(d) None of these
95. The point at which the whole weight of the body acts is called the:
(a) Torque
(b) Centre of gravity
(c) Centre of mass
(d) Centre of the body
96. The centre of gravity of a uniform body is:
(a) At the axis of rotation of the body
(b) At its centre
(c) At its one end
(d) None of these
97. The centre of gravity of a triangular plate is:
(a) At the axis of rotation of the body
(b) At its centre
(c) At the intersection of the medians
(d) None of these
98. If a body is at rest or moving with uniform velocity, then it is said to be in:
(a) Torque
(b) Equilibrium
(c) Both (a) and (b)
(d) None of these
99. Torque has zero value if the angle between r and F is:
(a) 60°
(b) 45°
(c) 90°
(d) 0°
100. The torque has maximum value if the angle between r and F is:
(a) 60°
(b) 45°
(c) 90°
(d) 0°
|
{"url":"https://mcqsfoundry.com/physics-mcqs-for-class-11-with-answers-pdf-download-chapter-no-02/","timestamp":"2024-11-06T01:18:00Z","content_type":"text/html","content_length":"89003","record_id":"<urn:uuid:4c02e648-bc80-4890-9f48-ca7daac23d0e>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00004.warc.gz"}
|
Certain pl(m,n)-Kummer Matrix Function of Two Complex Variables under Differential Operator
1. Introduction
Many special matrix functions appear in connection with statistics [1], mathematical physics, theoretical physics, group representation theory and Lie group theory [2], and orthogonal matrix polynomials are closely related [3-5]. The hypergeometric matrix function has been introduced as a matrix power series together with an integral representation, and the hypergeometric matrix differential equation has been studied in [6-9]; its explicit closed-form general solution has been given in [10]. The author has earlier studied the Kummer's and Horn's
Throughout this paper for a matrix
where for a vector
The reciprocal gamma function denoted by
Jódar and Cortés have proved in [6], that
2. On pl(m, n)-Kummer Matrix Function
We define the pl(m, n)-Kummer matrix function
For simplicity, we can write the
We begin the study of this function by calculating its radius of regularity
Summarizing, the following result has been established.
Theorem 2.1. Let
i.e., the l(m, n)-Kummer matrix function is an entire function.
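For orientation (this aside is ours, not part of the paper): in the scalar case, the classical Kummer (confluent hypergeometric) function, which the matrix functions above generalize, is itself entire:

```latex
{}_1F_1(a;b;z) = \sum_{n=0}^{\infty} \frac{(a)_n}{(b)_n}\,\frac{z^n}{n!},
\qquad (a)_n = a(a+1)\cdots(a+n-1),
```

where $(a)_n$ is the Pochhammer symbol; the matrix versions replace the parameters $a$ and $b$ by matrices, with invertibility conditions on the matrix analogue of $(b)_n$.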
Some matrix recurrence relations are derived for the pl(m, n)-Kummer matrix function. In this connection, the following contiguous matrix function relations follow directly by increasing or decreasing one parameter in the original relation
By the same way, we have
Now, we consider the following differential operators
It is clear that
So that
Putting in this relation
and so the relation can be written
Therefore, the power series
i.e., the pl(m, n)-Kummer matrix function is a solution of the matrix differential equation
In this paper, we apply the differential operator D to the pl(m, n)-Kummer matrix function successively; then we have
i.e. the (m, n)-Kummer matrix function is a solution to this matrix differential equation
Therefore, the following result has been established.
Theorem 2.2. Let
From (2.1), (2.3) and (2.5), we obtain
Thus by mathematical induction, we have the following general form
Special cases: the matrix function can be written
we see that
i.e., the
i.e., the
The results of this paper are varied and significant, and it would be interesting to develop this line of study further in the future. The same class of differential operators can be applied to other functions of several complex variables. Hence, new results and further applications can be obtained.
3. Acknowledgements
The author expresses his sincere appreciation to Dr. M. S. Metwally (Department of Mathematics, Faculty of Science (Suez), Suez Canal University, Egypt) for his kind interest, encouragement, help, suggestions, comments and investigations throughout this series of papers.
The author would like to thank the referees for their comments and suggestions on the manuscript.
|
{"url":"https://www.scirp.org/journal/paperinformation?paperid=27204","timestamp":"2024-11-04T05:37:12Z","content_type":"application/xhtml+xml","content_length":"112005","record_id":"<urn:uuid:73973783-6403-4230-9815-274e00947921>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00525.warc.gz"}
|
Introduction to Multiplying and Dividing Mixed Numbers and Complex Fractions
What you’ll learn to do: Multiply and divide mixed numbers and complex fractions
Rufus volunteers at an animal shelter. Every morning, he feeds each dog [latex]\Large\frac{2}{3}[/latex] cup of food. If each bag of dog food contains [latex]25\Large\frac{1}{3}[/latex] cups of food,
how many dogs can he feed with each bag? This may sound like a simple division problem, but Rufus will need to figure out how to deal with a mixed number, [latex]25\Large\frac{1}{3}[/latex], before
he can find the answer. In this section, you’ll learn how to multiply and divide with mixed numbers and complex fractions.
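Rufus's question is exactly the division this section teaches: convert the mixed number to an improper fraction, then divide. A quick sketch with Python's fractions module (the variable names are ours):

```python
from fractions import Fraction

bag = Fraction(25) + Fraction(1, 3)  # 25 1/3 cups per bag = 76/3
per_dog = Fraction(2, 3)             # 2/3 cup per dog

dogs = bag / per_dog                 # (76/3) ÷ (2/3) = 76/3 × 3/2
print(dogs)                          # → 38
```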
|
{"url":"https://courses.lumenlearning.com/wm-developmentalemporium/chapter/multiply-and-divide-mixed-numbers-and-complex-fractions/","timestamp":"2024-11-13T18:26:51Z","content_type":"text/html","content_length":"52026","record_id":"<urn:uuid:cb2a9fd2-9874-4494-82a4-5c88f4c2f57e>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00828.warc.gz"}
|
In Maths, a function f(x) is said to be discontinuous at a point ‘a’ of its domain D if it is not continuous there. The point ‘a’ is then called a point of discontinuity of the function. In limits and continuity, you must have learned that a continuous function can be traced without lifting the pen from the graph. The discontinuity may arise due to any of the following situations:
1. The right-hand limit or the left-hand limit or both of a function may not exist.
2. The right-hand limit and the left-hand limit of function may exist but are unequal.
3. The right-hand limit, as well as the left-hand limit of a function, may exist, but either of the two or both may not be equal to f(a).
Discontinuity in Maths Definition
A function whose graph is not connected is known as a discontinuous function. A function f(x) is said to have a discontinuity of the first kind at x = a if the left-hand limit of f(x) and the right-hand limit of f(x) both exist but are not equal. f(x) is said to have a discontinuity of the first kind from the left at x = a if the left-hand limit of the function exists but is not equal to f(a).
In the above graph, the limits of the function to the left and to the right are not equal, and hence the limit at x = 3 does not exist. Such a function is said to have a discontinuity of the first kind.
Types of Discontinuity
There are three types of discontinuity.
• Jump Discontinuity
• Infinite Discontinuity
• Removable Discontinuity
Now let us discuss all its types one by one.
Jump Discontinuity
Jump discontinuity is of two types:
• Discontinuity of the First Kind
• Discontinuity of the Second Kind
Discontinuity of the First Kind: A function f(x) is said to have a discontinuity of the first kind from the right at x = a if the right-hand limit of the function exists but is not equal to f(a). In a jump discontinuity, the left-hand limit and the right-hand limit both exist and are finite, but they are not equal to each other.
Discontinuity of the Second Kind: A function f(x) is said to have a discontinuity of the second kind at x = a if neither the left-hand limit of f(x) at x = a nor the right-hand limit of f(x) at x = a exists.
Removable Discontinuity
A function f(x) is said to have a removable discontinuity at x = a if the left-hand limit as x tends to a is equal to the right-hand limit as x tends to a, but their common value is not equal to f(a). A removable discontinuity often occurs when a rational expression has common factors in the numerator and denominator. Since these factors can be cancelled, the discontinuity is removable.
Infinite Discontinuity
In an infinite discontinuity, one or both of the right-hand and left-hand limits do not exist or are infinite. It is also known as an essential discontinuity. Whenever the graph of a function f(x) has the line x = k as a vertical asymptote, f(x) becomes positively or negatively infinite as x → k⁺ or x → k⁻. The function f(x) is then said to have an infinite discontinuity.
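The one-sided limits that distinguish these types can be estimated numerically. The sketch below is ours (a crude finite-step probe, not a rigorous limit computation):

```python
def one_sided_limits(f, a, h=1e-7):
    """Crude numeric estimate of the left- and right-hand limits of f at a."""
    return f(a - h), f(a + h)

# Removable: (x^2 - 1)/(x - 1) is undefined at x = 1, but both sides tend to 2.
f = lambda x: (x**2 - 1) / (x - 1)
left, right = one_sided_limits(f, 1.0)   # both close to 2.0

# Jump: a step function; the sides disagree (-1 vs +1), so the limit at 0 fails to exist.
g = lambda x: 1.0 if x >= 0 else -1.0
gl, gr = one_sided_limits(g, 0.0)        # -1.0, 1.0
print(left, right, gl, gr)
```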
|
{"url":"https://mathlake.com/Discontinuity","timestamp":"2024-11-06T23:59:14Z","content_type":"text/html","content_length":"10686","record_id":"<urn:uuid:16c7faf8-9ebe-4372-bd4f-c6c68a23ade5>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00245.warc.gz"}
|
F-distribution - Data Science Wiki
F-distribution:
The f-distribution, also known as the Fisher–Snedecor distribution, is a probability distribution commonly used in statistical hypothesis testing. It is a continuous distribution that describes the ratio of two independent chi-squared random variables, each divided by its degrees of freedom. This distribution is often used in testing the equality of variances in two samples, as well as in analysis of variance (ANOVA) tests to determine if there are significant differences between the means of multiple groups.
One example of the use of the f-distribution is in an ANOVA test. In this test, a researcher is interested in comparing the average heights of three different groups of people: men, women, and
children. The researcher collects data on the height of each individual in each group and calculates the variance for each group. The f-statistic is then calculated by taking the ratio of the
variance of the two groups being compared, with the group with the larger variance in the numerator. This f-statistic is then compared to the f-distribution to determine if the difference in
variances between the two groups is statistically significant.
Another example of the f-distribution is in testing the equality of variances in two samples. In this scenario, a researcher is interested in comparing the amount of time spent on a task by two
different groups of individuals. The researcher collects data on the time spent on the task by each individual in each group and calculates the variance for each group. The f-statistic is then
calculated by taking the ratio of the variance of the two groups, with the group with the larger variance in the numerator. This f-statistic is then compared to the f-distribution to determine if the
difference in variances between the two groups is statistically significant.
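The f-statistic described in both examples, the ratio of two sample variances with the larger variance in the numerator, can be sketched in a few lines. The function names are ours, and a real test would still compare the result against the f-distribution with the returned degrees of freedom:

```python
def sample_variance(xs):
    """Unbiased sample variance (divides by n - 1)."""
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

def f_statistic(sample_a, sample_b):
    """F ratio for an equality-of-variances test.

    Puts the larger variance in the numerator and returns
    (F, df_numerator, df_denominator).
    """
    va, vb = sample_variance(sample_a), sample_variance(sample_b)
    if va >= vb:
        return va / vb, len(sample_a) - 1, len(sample_b) - 1
    return vb / va, len(sample_b) - 1, len(sample_a) - 1

# Example: the second group's values are twice as spread out, so F = 4.
F, df1, df2 = f_statistic([1, 2, 3, 4], [2, 4, 6, 8])
print(F, df1, df2)  # → 4.0 3 3
```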
In both of these examples, the f-distribution is used to determine if the observed differences between the groups are statistically significant or if they could have occurred by chance. By comparing
the calculated f-statistic to the f-distribution, researchers can determine the probability that the observed differences are due to random chance and make conclusions about the underlying
populations being studied.
|
{"url":"https://datasciencewiki.net/f-distribution/","timestamp":"2024-11-13T15:02:32Z","content_type":"text/html","content_length":"41328","record_id":"<urn:uuid:87a51af4-3d95-4005-b612-e915fa5dfd2e>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00342.warc.gz"}
|
The Symmetry Group of Differential Equations
Department of Mathematics,
University of California San Diego
Food for Thought Seminar
Raul Gomez
The Symmetry Group of Differential Equations
While he was studying partial differential equations, Sophus Lie came up with the idea of trying to solve them by using their symmetry group. His idea was to apply Galois Theory to differential
equations instead of polynomials. Lie's key observation was that these symmetry groups are locally determined by their Lie algebras. Normally Lie groups of differential equations are only locally
defined, i.e. they are only defined in a neighborhood of the identity element. However, if we enlarge the manifold on which the group is acting, we can find a globally defined group action whose
restriction to the original manifold is the original action. In this talk we will calculate the symmetry group of the line equation $y''=0$ and see that, despite the simplicity of this equation, the
symmetry group is not globally defined! However, the action can be enlarged to a well-defined action on $RP^2$. We will do the same with Maxwell's equations, obtaining in this way a conformal model
of the universe where the symmetry group of Maxwell's equations is well defined.
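As background for the claim about $y''=0$ (this sketch is ours, not part of the abstract): Lie showed that the point symmetries of $y''=0$ form an eight-dimensional group with Lie algebra $\mathfrak{sl}(3,\mathbb{R})$, acting by projective transformations

```latex
(x, y) \longmapsto \left( \frac{a_1 x + a_2 y + a_3}{a_7 x + a_8 y + a_9},\;
\frac{a_4 x + a_5 y + a_6}{a_7 x + a_8 y + a_9} \right).
```

These maps send straight lines (the solutions of $y''=0$) to straight lines, but blow up along the line $a_7 x + a_8 y + a_9 = 0$, which is why the action only becomes globally defined after compactifying the plane to $RP^2$.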
November 30, 2006
12:00 PM
AP&M 7321
|
{"url":"https://math.ucsd.edu/seminar/symmetry-group-differential-equations","timestamp":"2024-11-08T10:34:56Z","content_type":"text/html","content_length":"33555","record_id":"<urn:uuid:39c77cf0-7234-47df-81fc-e7e40252b62e>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00876.warc.gz"}
|
Modeling Line Features on DGGS Grids in the R-ArcGIS Environment
A Glance at DGGS
As a candidate for a new Earth reference standard, Discrete Global Grid System (DGGS) was confirmed by the Open Geospatial Consortium (OGC) as “a spatial reference system that uses a hierarchical
tessellation of cells to partition and address the globe” in 2017. DGGS has been recognized as one of the most promising foundations for the next generation of Digital Earth. There are many keywords describing a DGGS: equal-area, tessellation, global domain, nested cells, unique index, to name just a few. Just try to imagine it as a hierarchical spreadsheet where cell locations are fixed at each resolution and spatial information can then be assigned to individual cells. A DGGS commonly begins with a Platonic solid — an initial discretization of the Earth into planar cells; the initial cells are then refined to an arbitrary resolution and mapped from planar cells to spherical cells by an equal-area projection method. OGC allows for a range of options to construct a DGGS, including different cell shapes, refinement ratios, projection methods, etc. For those who have not yet learned much about it, I strongly suggest checking the References and Further Reading section; there is much to explore.
Model Line Features to Grids
Lately, I was working on a side project — uncertainties of geo-feature modeling on DGGS. The main goal for that project was to compare the geometric measurements and topological relationships among
different configurations and granularities. To be clear here, DGGS grids are generated from inversely projecting planar cells to spherical cells instead of a simple tessellation process on a 2D
plane. As one of the state-of-the-art DGGS implementations, the open-source R library dggridR provided all functions I need to generate DGGS grids within a certain region. However, to model a
specific line feature (intersect grids with lines) and visualize the results seems to be more convenient in a professional GIS environment (e.g., ArcGIS). Thus, I think it would be good to work in
both environments with an efficient bridge (e.g., the R library arcgisbinding). In this blog, I would like to show my way of modeling line features on DGGS cells in the context of ArcGIS Pro and R.
Vector data of major roads in Calgary, Canada were obtained from Open Calgary, the City of Calgary’s open data portal. In this blog, nine roads in the district 707G of Calgary were used as an example to show how the whole process works. The roads are of three types (Avenue, Street, and Trail), among which avenues run west–east and the others run north–south. The original length of each road was calculated via geodesic distances (m). The length after conversion (m) was also calculated for the sample line features. In this experiment, the length after conversion was measured as the sum of the straight-line (geodesic) distances between the cell centroids of each individual line feature.
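Under the stated measure, the converted length is the sum of geodesic distances between consecutive cell centroids. A small Python sketch of that measure (ours, not from the project code; it uses a spherical haversine approximation rather than a true ellipsoidal geodesic):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lon1, lat1, lon2, lat2, r=6371008.8):
    """Great-circle distance in metres between two (lon, lat) points."""
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = p2 - p1, radians(lon2 - lon1)
    return 2 * r * asin(sqrt(sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2))

def path_length_m(centroids):
    """Sum of geodesic straight-line distances along an ordered list of (lon, lat) centroids."""
    return sum(haversine_m(*a, *b) for a, b in zip(centroids, centroids[1:]))

# One degree of longitude on the equator is roughly 111.2 km
print(round(path_length_m([(0.0, 0.0), (1.0, 0.0)])))
```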
Figure 1. Nine target roads to be modeled in the district 707G, Calgary
Step 1. Generate DGGS grids with three different configurations covering the study area in the R environment
# Load libraries
library(dggridR)
library(rgdal)
library(ggplot2)
# Read the dataset
dsn.road = "H:\\UCalgary Courses\\ECCE_UC\\ECCE_Blog_2020Winter\\Data\\MajorRoads.shp"
MajorRoads = readOGR(dsn.road, layer="MajorRoads")
# Visualize the dataset
ggplot() +
geom_polygon(data=MajorRoads, aes(x=long, y=lat, group=group), size = 1, fill=NA, color="black") +
scale_y_continuous(name = "Longitude") + scale_x_continuous(name = "Latitude") +
theme_classic() + coord_equal()
# Construct DGGS objects
dggs.ISEA4H.20 = dgconstruct(projection = "ISEA", aperture = 4, topology = "HEXAGON", res = 20, precision = 7)
dggs.ISEA4T.20 = dgconstruct(projection = "ISEA", aperture = 4, topology = "TRIANGLE", res = 20, precision = 7)
dggs.ISEA4D.20 = dgconstruct(projection = "ISEA", aperture = 4, topology = "DIAMOND", res = 20, precision = 7)
# Create DGGS grids and export as .shp files
dgshptogrid(dggs.ISEA4H.20, dsn.road, cellsize = 0.00001,
savegrid = "H:\\UCalgary Courses\\ECCE_UC\\ECCE_Blog_2020Winter\\Data\\Roads_ISEA4H_20.shp")
dgshptogrid(dggs.ISEA4T.20, dsn.road, cellsize = 0.00001,
savegrid = "H:\\UCalgary Courses\\ECCE_UC\\ECCE_Blog_2020Winter\\Data\\Roads_ISEA4T_20.shp")
dgshptogrid(dggs.ISEA4D.20, dsn.road, cellsize = 0.00001,
savegrid = "H:\\UCalgary Courses\\ECCE_UC\\ECCE_Blog_2020Winter\\Data\\Roads_ISEA4D_20.shp")
Line 1-4: Load R packages needed for this step — dggridR; rgdal; ggplot2.
Line 6-8: Read the .shp file into the R environment as a Spatial Lines Data Frame.
Line 10-14: Visualize the target roads to be modeled in the R window (Figure 2).
Line 16-19: Define three grid configurations used in this experiment (ISEA4H, ISEA4T, and ISEA4D), all at the resolution level 20.
• ISEA4H — Icosahedral Snyder Equal Area Aperture 4 Hexagonal Grid (cell spacing = 6.7 m at resolution level 20)
• ISEA4T — Icosahedral Snyder Equal Area Aperture 4 Triangular Grid (cell spacing = 3.9 m at resolution level 20)
• ISEA4D — Icosahedral Snyder Equal Area Aperture 4 Diamond Grid (cell spacing = 6.7 m at resolution level 20)
Line 21-27: Create grid cells with different configurations in the study area and save them as .shp files.
Figure 3 visualizes a section of generated hexagonal grids in the ArcGIS Pro environment.
Figure 2. Visualization of the raw roads in R
Figure 3. Visualization of a section of the hexagonal grids in ArcGIS Pro
Step 2. Detect grid cells representing individual roads in the ArcGIS Pro environment
## Set the working environment
env = r"H:\UCalgary Courses\ECCE_UC\ECCE_Blog_2020Winter\ECCE_Blog_2020Winter.gdb"
arcpy.env.workspace = env
arcpy.env.overwriteOutput = True
InputFeature = "Roads_ISEA4H_20" # Roads_ISEA4T_20 / Roads_ISEA4D_20
SpatialJoinOutput = "Roads_ISEA4H_spjoin" # Roads_ISEA4T_spjoin / Roads_ISEA4D_spjoin
## Spatial join the grid shapefile and the road shapefile
arcpy.SpatialJoin_analysis(InputFeature, "MajorRoads", SpatialJoinOutput, "JOIN_ONE_TO_MANY", "KEEP_COMMON", '', "INTERSECT", None, '')
## Add fields to store latitude and longitude values
arcpy.AddField_management(SpatialJoinOutput, "longitude", "DOUBLE")
arcpy.AddField_management(SpatialJoinOutput, "latitude", "DOUBLE")
## Calculate latitude and longitude for the cell centroids
arcpy.management.CalculateGeometryAttributes(SpatialJoinOutput, "longitude CENTROID_X;latitude CENTROID_Y", '', '')
First, spatially join the generated grids with the MajorRoads shapefile using the JOIN_ONE_TO_MANY option, keeping only records with INTERSECT relationships. Then, calculate the lat/lon coordinates of the cell centroids. Repeat the same operation for the three datasets of different grid types. This step can also be done manually with ArcGIS tools; the Python script above accomplishes the same task. Figures 4-6 visualize the road representation by hexagonal, triangular, and diamond cells, respectively.
Figure 4. Represent roads by hexagonal cells
Figure 5. Represent roads by triangular cells
Figure 6. Represent roads by diamond cells
Step 3. Calculate the new road length after conversion in the R environment
# Load libraries
# Print information regarding the ArcGIS product and license
# Open the datasets
Rd.ISEA4H = arc.open("H:/UCalgary Courses/ECCE_UC/ECCE_Blog_2020Winter/ECCE_Blog_2020Winter.gdb/Roads_ISEA4H_spjoin")
Rd.ISEA4T = arc.open("H:/UCalgary Courses/ECCE_UC/ECCE_Blog_2020Winter/ECCE_Blog_2020Winter.gdb/Roads_ISEA4T_spjoin")
Rd.ISEA4D = arc.open("H:/UCalgary Courses/ECCE_UC/ECCE_Blog_2020Winter/ECCE_Blog_2020Winter.gdb/Roads_ISEA4D_spjoin")
# Create a new dataframe to store results
ISEA4H.df = ISEA4T.df = ISEA4D.df = data.frame(StreetID = integer(), ConvertLength = double())
# create a vector to store street ID
road.id = (0:8)
# Create a function to calculate the road length after conversion onto DGGS grids
# Two parameters: the road ID and the input dataset with a given grid type
polyline.length = function(roadid,inputdf){
# Import the subsection of the attribute tables
df = arc.select(object = inputdf, fields = c('JOIN_FID','Type','longitude','latitude'))
# Select those cells corresponding to a specific road
road.matrix = filter(df, (JOIN_FID == roadid))
# If the road is west-east directed (type is Avenue) then sort cells by longitude firstly
# If the road is north-south directed (type is not Avenue) then sort cells by latitude firstly
if(road.matrix$Type[1] == "Avenue"){
centroid.matrix = road.matrix[,c("longitude","latitude")][order(road.matrix$longitude, road.matrix$latitude),]
} else {
centroid.matrix = road.matrix[,c("longitude","latitude")][order(road.matrix$latitude, road.matrix$longitude),]
}
# Calculate the length after conversion
Convert.length = lengthLine(centroid.matrix)
# Report the converted length
results = c(roadid,Convert.length)
return(results)
}
# Call the function in three loops
for (rid in road.id) {ISEA4H.df = rbind(ISEA4H.df, polyline.length(rid,Rd.ISEA4H))}
for (rid in road.id) {ISEA4T.df = rbind(ISEA4T.df, polyline.length(rid,Rd.ISEA4T))}
for (rid in road.id) {ISEA4D.df = rbind(ISEA4D.df, polyline.length(rid,Rd.ISEA4D))}
# Summarize all info in a new data frame for comparison
ConvertSummary = data.frame(MajorRoads)
ConvertSummary$ISEA4HLength = ISEA4H.df[,2]
ConvertSummary$ISEA4TLength = ISEA4T.df[,2]
ConvertSummary$ISEA4DLength = ISEA4D.df[,2]
Line 1-4: Load the packages needed — arcgisbinding; dplyr; geosphere.
Line 6-7: Print information regarding the ArcGIS product and license on this machine.
Line 9-12: Import the datasets of the grid cells representing individual roads.
Line 14-15: Create new data frames to store the calculation results.
Line 16-17: Create a vector to store road ID from 0 to 8.
Line 19-38: Create a function to calculate the new road length, computed as the sum of geodesic straight-line distances between consecutive cell centroids of each road. Before calculation, the cell records need sorting by their spatial locations: if the road runs west-east, sort the cells by ascending longitude and then ascending latitude; if it runs north-south, sort by ascending latitude and then ascending longitude.
Line 40-43: Call the newly created function for each road with each grid configuration.
Line 45-49: Summarize all the converted lengths on the various grid configurations for comparison. The comparison results are shown in Figure 7.
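The sort-then-sum length calculation described in Lines 19-38 can also be sketched outside R. The Python sketch below uses a haversine great-circle distance in place of geosphere's lengthLine, and the centroid coordinates are made-up values for illustration, not the blog's data:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(p, q):
    """Great-circle distance in metres between two (lon, lat) points."""
    lon1, lat1, lon2, lat2 = map(radians, (p[0], p[1], q[0], q[1]))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def converted_length(centroids, west_east=True):
    """Sort the cell centroids along the road axis, then sum consecutive distances."""
    key = (lambda c: (c[0], c[1])) if west_east else (lambda c: (c[1], c[0]))
    pts = sorted(centroids, key=key)
    return sum(haversine_m(a, b) for a, b in zip(pts, pts[1:]))

# Made-up centroids (lon, lat) for a short west-east road; not the blog's data
cells = [(-114.10, 51.05), (-114.09, 51.05), (-114.11, 51.05)]
print(round(converted_length(cells)), "metres along the sorted chain")
```

Note that the sort makes the result independent of the order in which the spatial join returned the cells, which is the whole point of the sorting step in the R function.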
Figure 7. Comparison of the new length on various grid configurations with the original road length
The converted lengths were always greater than the original ones because of the jagged, rasterization-like representation on the DGGS grids. This was a simple example showing the modeling process at a coarse resolution for clear illustration. As the resolution of the grid increases, the length measurement accuracy improves substantially.
I hope this blog provides some insight into working with the R-ArcGIS environment and how to use a DGGS.
References and Further Reading
Adding Liquidity
When a Liquidity Provider adds liquidity to an existing market, two things will happen:
• the Liquidity Provider will be assigned a number of shares of the Liquidity Pool, pro rata to the share of the Liquidity Pool that the Liquidity Provider is funding.
• if the outcome prices are not equal, the Liquidity Provider will receive shares of all the outcomes, except for the least likely, i.e. the least expensive outcome, at the moment of the funding.
To better understand how this works, let's dive into some details.
As explained in the Automated Market Maker (AMM) and Trading and Price Calculation sections, the number of shares in a market is determined by a constant function that forces there to be a balance between the number of shares in the Liquidity Pool and the number of shares in the Outcome Share Pools.
When a Liquidity Provider adds liquidity to a market, they in fact increase the number of shares in all pools in that market.
If the number of outcome shares is equal (i.e. if the outcome prices are equal), adding liquidity will not change the balance of the equation, and therefore the Liquidity Provider will only receive
shares of the Liquidity Pool in return for adding liquidity to the market.
If, on the contrary, the number of shares in each pool is unbalanced (i.e. if the outcome prices are not equal), then adding liquidity would change the balance of the equation, which would cause a
change in outcome prices.
To avoid that, the AMM gives the Liquidity Provider shares from the most likely outcome(s), but not from the least likely outcome(s) (i.e. the outcome(s) with the lowest price), in addition to shares
from the Liquidity Pool.
Adding Liquidity to a binary outcome market
The following examples use a binary outcome market, i.e. a market with only two outcomes.
Example 1: Adding liquidity to a market with equal outcome prices
Alice has just created a market with $100 USDC in liquidity; no trades have happened in this market. The market status is as follows:
Liquidity Value $100 USDC
Outcome A Shares 100
Outcome B Shares 100
Outcome A Price $0.5 USDC
Outcome B Price $0.5 USDC
Alice currently holds 100 shares of the Liquidity Pool. Alice believes this market could attract lots of forecasters, and decides to make it more enticing for them to trade in it by increasing its liquidity.
So she adds $1000 USDC in liquidity, changing the market status to the following:
Liquidity Value $1100 USDC
Outcome A Shares 1100
Outcome B Shares 1100
Outcome A Price $0.5 USDC
Outcome B Price $0.5 USDC
Since the market is balanced, Alice is given 1 share of the Liquidity Pool per USDC provided in liquidity, and now holds 1100 shares of the Liquidity Pool.
Example 2: Adding liquidity to a market with unequal outcome prices
A few trades have now happened in Alice's market from Example 1, and its current status is as follows:
Liquidity Value $1100 USDC
Outcome A Shares 861.17
Outcome B Shares 1405.07
Outcome A Price $0.62 USDC
Outcome B Price $0.38 USDC
Bob sees that the market is attracting forecasters and believes that, were it more liquid, it could attract even more.
Having reviewed the Strategies and Risks for liquidity providers, Bob thinks that the market can yield a sufficient volume to offset and reward his risk through fees.
According to Bob's analysis, it appears that Outcome A, the most likely outcome, is currently underpriced. Were its value to increase, he'd also benefit from the fact that providing liquidity now
would get him a part of his funding in shares of Outcome A, not just in liquidity shares.
Bob goes ahead and adds an additional $1000 USDC in liquidity.
This move causes the market to have the following temporary status:
Liquidity Value $1100 + $1000 = $2100 USDC
Outcome A Shares 861.17 + 1000 = 1861.17
Outcome B Shares 1405.07 + 1000 = 2405.07
In the current state, the all-important constant formula is broken, as 2100^2 != 1861.17 * 2405.07. This would cause the outcome prices to change, so the smart contract calculates how many shares it
should remove from the pools and give to Bob in order to maintain the constant, and not impact the outcome prices.
Given that Outcome A is the most likely, it calculates how many shares of Outcome A the pool should keep, and how many it should attribute to Bob. The formula is:
And the market is now rebalanced:
Outcome A Shares 1474.07
Outcome B Shares 2405.07
Liquidity Value
Outcome A Price 0.62
Outcome B Price 0.38
In the end, Bob gets the following shares for adding $1000 USDC in liquidity to the market:
Liquidity Shares 1882.88 - 1100 = 782.88
Outcome A Shares 1861.17 - 1474.07 = 387.10
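The formula image above did not survive extraction, but it can be reconstructed from the worked numbers. The calculation below is a hedged Python sketch, not Polkamarkets' actual contract code: the pool scales every outcome balance by (1 + m/b_max), where m is the funding amount and b_max is the largest (least likely, cheapest) outcome balance, returns the excess outcome shares to the funder, and mints m·L/b_max new liquidity shares. It reproduces the 1474.07 pool balance, Bob's 387.10 Outcome A shares, and the 1882.88 total liquidity of this example:

```python
def add_liquidity(pool, lp_supply, m):
    """pool: outcome -> shares in the pool; lp_supply: liquidity shares outstanding;
    m: USDC being added. Returns (new_pool, minted_lp, outcome_shares_sent_back)."""
    b_max = max(pool.values())                    # least likely (cheapest) outcome
    new_pool = {k: v * (1 + m / b_max) for k, v in pool.items()}
    sent_back = {k: (v + m) - new_pool[k] for k, v in pool.items()}
    minted_lp = m * lp_supply / b_max
    return new_pool, minted_lp, sent_back

# Example 2's market: 861.17 / 1405.07 outcome shares, 1100 liquidity shares
new_pool, minted, back = add_liquidity({"A": 861.17, "B": 1405.07}, 1100, 1000)
print(round(new_pool["A"], 2))   # 1474.07 shares of A stay in the pool
print(round(back["A"], 2))       # 387.1 shares of A go to Bob
print(round(1100 + minted, 2))   # 1882.88 total liquidity shares afterwards
```

Scaling every balance by the same factor leaves all share ratios, and hence all outcome prices, unchanged, which is exactly the rebalancing requirement stated above.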
Adding Liquidity to a multiple outcome market
Example 3: Continuing the multiple outcome example from Trading and Price Calculation, we have a market originally created by Charlie with the following characteristics:
Pool Shares Price
Liquidity 1000 N/A
Outcome A 461.53 0.48
Outcome B 1294 0.17
Outcome C 1294 0.17
Outcome D 1294 0.17
Dylan, who already bought shares of Outcome A, believes that this market would benefit from more liquidity. He knows that if he adds liquidity now, he’ll only get shares from Outcome A along with
Liquidity Pool shares, since Outcomes B, C and D are all priced equally. This makes it easier for him to manage his risk. Otherwise, he’d get shares from all Outcomes except for the one with the
lowest price.
So Dylan decides to add 1000 $USDC in Liquidity, which gives us the following temporary state:
Pool Shares
Liquidity 1000 + 1000 = 2000
Outcome A 461.53 + 1000 = 1461.53
Outcome B 1294 + 1000 = 2294
Outcome C 1294 + 1000 = 2294
Outcome D 1294 + 1000 = 2294
In the current state, the market is unbalanced. So the AMM will calculate how many shares Dylan should get for his contribution from the Liquidity Pool and from Outcome A, and rebalance the market so
that the AMM constant remains true and outcome prices don’t change.
The formulas we’ll use are the same as the ones used for binary outcome markets.
There are three Outcomes that meet the condition for being the least likely, and the number of shares of each of these outcomes is 2294. Note that we don't add up their share counts; we just use the share count of a single outcome.
Once we have the number of shares that remain in the pool, we can calculate the number of liquidity pool shares.
Pool Shares
Outcome A 812.46 (rounded)
Outcome B 2294
Outcome C 2294
Outcome D 2294
Liquidity 1769.68 (rounded)
And now we can calculate how many LP and Outcome A shares will be given to Dylan with a simple subtraction:
Pool Shares
Liquidity 1769.68 - 1000 = 769.68
Outcome A 1461.53 - 812.46 = 649.07
Now, for the sake of this example, let's make sure that the outcome prices remain unchanged. For this, we'll use the formulas learned in Trading and Price Calculation.
First, we calculate the Price Weight of all Outcomes using the formula:
Outcome Outcome Price Weight
Outcome A
Outcome B
Outcome C
Outcome D
And now we can calculate the outcome prices using the following formula:
And so we get the following sum of all price weights:
Which we can use to calculate the price of each outcome:
Outcome Outcome Price
Outcome A 0.48 (rounded)
Outcome B 0.17 (rounded)
Outcome C 0.17 (rounded)
Outcome D 0.17 (rounded)
These correspond to the initial outcome prices, so we know we did our calculation for adding liquidity correctly 💪
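The price-weight and price formula images did not survive here. A common form for this family of AMMs, offered as a hedged reconstruction (it does reproduce the table's rounded prices), sets each outcome's price weight to the product of all the other outcomes' pool shares, and each price to its weight divided by the sum of all weights:

```python
from math import prod

def outcome_prices(shares):
    """weight_i = product of all other outcomes' pool shares;
    price_i = weight_i / sum of all weights."""
    weights = {k: prod(v for j, v in shares.items() if j != k) for k in shares}
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}

# Charlie's original market from the example
prices = outcome_prices({"A": 461.53, "B": 1294, "C": 1294, "D": 1294})
print({k: round(p, 2) for k, p in prices.items()})
# -> {'A': 0.48, 'B': 0.17, 'C': 0.17, 'D': 0.17}, matching the table
```

Because every weight is a product of the other outcomes' shares, the outcome with the fewest shares in the pool gets the largest weight, i.e. the highest price, which is why Outcome A is the most likely outcome here.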
A special thank you to Polkamarkets Discord member NFT_Trader for helping write the example for adding liquidity in multiple outcome markets.
A Language Speed Test Comparing Embedded Linux Boards
October 2, 2015, updated October 3, 2015
I recently ran across Yet Another Language Speed Test: Counting Primes (C, C++, Java, JavaScript, PHP, Python and Ruby).
This is a fairly interesting test because of its simplicity when implementing across many different programming languages, even if it is not tailored specifically to idioms and abstractions provided by those languages. The test referenced above directly compares programming language speeds; this one instead compares those programming language speeds on different embedded Linux boards. This is
not a test to optimize the languages, the code, or the boards to be as fast as possible. Instead it is intended to run the same exact code using the same runtime version of a language on different
embedded Linux boards in as identical an environment as possible.
The test code is very simple. When given a number n, the program simply counts the number of prime numbers found up to and including n. The test code is implemented in the following languages:
• C (gcc using -O2 optimization)
• Python 2.7.10 (using -O optimization)
• Python 2.7.10 (Compiled) (using -O optimization)
• Ruby 2.2.2p95
• Lua 5.1.5
• PHP 5.6.12
• Javascript (nodejs v0.10.40)
on the following boards:
The final result is enlightening. Here is the comparison of counting prime numbers up to 2^24. The runtimes here are significant enough to get a good comparison across boards in each of the languages
tested. Note that lower times mean faster.
If a lower value of n is charted, where n is 2^12, this puts far more weight on startup and initialization times of the test programs.
Below are charts of the raw and complete data for each of the boards. The full tests are run with n being from 2^12 to 2^24.
Do you see that drop in the BeagleBone Black C language data? All data was collected twice, and that abnormality existed in both runs. I went back and manually ran the test, and the same thing happened. I just want to point out that this is not a mistake, and I am wondering what happens there.
This basic test is enough to heat up every single processor enough to be fairly warm to the touch. The BeagleBone Black got the hottest, with a 142F reading on the surface of the SoC.
The test consisted of building the same version of buildroot default configurations for all boards. This included creating bootloaders, a kernel, and a rootfs containing the languages and test code on a uSD/SD card. These were all set up in a headless configuration with the only interface being a serial console. It turns out this was quite a time-consuming process to set up for each board:
BeagleBone Black
Raspberry Pi 2
Raspberry Pi Model B
The following are the actual test programs written in the various languages to compute the number of prime numbers given an argument of n.
function isPrime(n)
    if (n < 2) then
        return false
    elseif (n == 2) then
        return true
    elseif (n % 2 == 0) then
        return false
    end
    for i = 3, math.sqrt(n), 2 do
        if (n % i == 0) then
            return false
        end
    end
    return true
end

local noPrimes = 0
local limit = tonumber(arg[1])
for n = 0, limit, 1 do
    if isPrime(n) then
        noPrimes = noPrimes + 1
    end
end
print("pi(" .. noPrimes .. ")")
#include <stdio.h>
#include <math.h>
#include <stdlib.h>

int isPrime(int n) {
    if (n < 2)
        return 0;
    else if (n == 2)
        return 1;
    else if (n % 2 == 0)
        return 0;
    int upperLimit = sqrt(n);
    int i = 3;
    while (i <= upperLimit) {
        if (n % i == 0)
            return 0;
        i += 2;
    }
    return 1;
}

int main(int argc, char** argv) {
    int noPrimes = 0;
    int limit = atoi(argv[1]);
    for (int n = 0; n <= limit; n++) {
        if (isPrime(n))
            noPrimes++;
    }
    printf("pi(%d)\n", noPrimes);
    return 0;
}
function isPrime(n) {
    if (n < 2)
        return false;
    else if (n == 2)
        return true;
    else if (n % 2 == 0)
        return false;
    var uL = Math.sqrt(n) | 0, i = 3;
    while (i <= uL) {
        if (n % i == 0)
            return false;
        i += 2;
    }
    return true;
}

var noPrimes = 0;
var limit = process.argv[2];
for (var i = 0; i <= limit; i++) {
    if (isPrime(i))
        noPrimes++;
}
console.log("pi(" + noPrimes + ")");
function isPrime($n) {
    if ($n < 2)
        return false;
    elseif ($n == 2)
        return true;
    elseif ($n % 2 == 0)
        return false;
    $ul = sqrt($n);
    $i = 3;
    while ($i <= $ul) {
        if ($n % $i == 0)
            return false;
        $i += 2;
    }
    return true;
}

$noPrimes = $i = 0;
$limit = $argv[1];
while ($i <= $limit) {
    if (isPrime($i))
        $noPrimes++;
    $i++;
}
printf("pi(%d)\n", $noPrimes);
from math import sqrt
import sys

def isPrime(n):
    if (n < 2): return False
    elif (n == 2): return True
    elif (n % 2 == 0): return False
    upper_limit = int(sqrt(n))
    i = 3
    while (i <= upper_limit):
        if (n % i == 0): return False
        i += 2
    return True

lim = int(sys.argv[1])
no_primes = 0
n = 0
while (n <= lim):
    if (isPrime(n)): no_primes += 1
    n += 1
print ("pi(%d)" % (no_primes))
def isPrime n
    return false if n < 2
    return true if n == 2
    return false if n % 2 == 0
    upper_lim = (Math.sqrt n.to_f).to_i
    i = 3
    while i <= upper_lim
        return false if n % i == 0
        i += 2
    end
    true
end

lim = ARGV[0].to_i
no_primes = 0
i = 0
while i <= lim
    no_primes += 1 if isPrime i
    i += 1
end
puts "pi(#{no_primes})"
Update 10-2-15: Updated charts for individual boards to have a logarithmic scale so the data can be seen better.
8 Comments
Comment October 2, 2015 by Alessandro Stamatto
Great work. Very nice article! Javascript for math seems to have an insane JIT, always so fast on those math benchmarks @o@ I posted it on the programming reddit ( https://www.reddit.com/r/
programming/comments/3n6qk1/a_language_speed_test_on_embedded_linux_boards/ ) (And there are some suggestions/questions there)
Comment October 2, 2015 by Angel Aray
It would be interesting to see Go included in the list.
Comment October 2, 2015 by Isaac Gouy
fyi Tiny tiny prime sieve programs are available in the old benchmarks game code archive for a variety of languages. See nsieve and sieve: https://alioth.debian.org/scm/viewvc.php/shootout/bench/
nsieve/?root=shootout&hideattic=0 https://alioth.debian.org/scm/viewvc.php/shootout/bench/sieve/?root=shootout&hideattic=0
Comment October 2, 2015 by Isaac Gouy
>>The test referenced above directly compares programming language speeds and this one instead compares those programming language speeds on different embedded Linux boards.<< Except that as soon as
you show a chart with different programming language implementations, your readers **will** directly compare the programming language speeds. For example -- "Wow, really surprised by the javascript
results there, did not expect it to be so close to c."
Comment October 2, 2015 by anonymous
Angel: How would go work on MIPS boards? AFAIK it only has x86/x64/arm support.
Comment October 3, 2015 by digitalpeer
@Alessandro Javascript is obviously the surprise here (for me anyway), but it's worth noting nodejs has JIT unlike the other interpreted languages. It goes without saying some of the others have JIT
capable runtimes that are not used here. Thanks for posting to /r/programming. The feedback has been awesome as usual. @Angel While Go, Rust, and friends are the hot "new" languages, this also means
that support for them is not, well, mature. In some ways, embedded lags what happens in PC/Server world here. For example, these languages are not part of Buildroot. However, I'm working on it.
@Isaac Point well taken. While my intent was to compare embedded Linux boards, data is data. As long as the parameters of the test are understood, people can use it to come up with whatever
conclusions they want. I find this sort of data interesting. This gives me the opportunity to stand back and see the bigger picture.
Comment October 3, 2015 by anonymous
What is "compiled" Python? If you're just talking about generating the .pyo files, that takes a constant few milliseconds and doesn't really add any new information to the discussion.
Comment February 13, 2016 by Thom
Would love to see Erlang in this test. Its fault tolerance is a good companion to embedded devices.
Steric effects of ions in the charge-related wetting phenomena
Steric effects of ions on charge-related wetting phenomena are studied. Along with a general treatment, three specific problems in a two-dimensional system are considered: a droplet on an
electrode, a droplet on a charged surface, and an electrowetting phenomenon on a dielectric. For computation of wetting tension, the electromechanical approach is adopted with the principle of
mechanical force balance for each phase. The modified Poisson-Boltzmann equation, which was originally proposed by Bikerman [Philos. Mag. 33, 384 (1942)], is adopted for the analysis of the steric
effects. It is found that the steric hindrance reduces significantly both the osmotic pressure and the electrical stress near the triple contact line. This reduction results in a considerable
decrease in the wetting tension when the ratio of the capacitance per unit area of the electrical double layer to that of the dielectric layer is small.
Puzzles Michaelmas Term 2012
1. There are three men in the room: one of them always tells the truth, one always lies and one answers yes/no randomly. Each of them knows who is who. What three yes/no questions would you ask them
to determine who is who?
2. Given n different subsets of the set of numbers 1,2,3,…,n, prove that there exists a number, such that if it is left out of each subset, the remaining subsets will still be different.
3. A town is surrounded by a circular wall. There are 12 guards serving on the wall. At twelve noon, each guard leaves his watchpost and starts walking the wall in some direction, at a speed at
which it would take exactly one hour to walk around the whole town. If two guards meet, they both turn around immediately and walk at the same speed in the opposite direction. Prove that at
twelve midnight each guard will be back at his watchpost.
4. Prove that there exists a natural number n satisfying following properties: it has at least 100 digits in decimal representation, it is not divisible by 10 (so far so good) and it is possible to
interchange two different digits of n to obtain a number with the same set of prime factors as n.
5. A jigsaw puzzle has tiles that can be assembled to form a 23 x 37 rectangle. The pieces come in five shapes, as shown below. Can you say how many pink pieces must be included in the puzzle?
1. An n x m rectangle is composed out of nm unit squares (n and m are integers). Draw a diagonal of the rectangle. How many unit squares does it go though?
Seismic performance of building structures based on improved viscous damper seismic design
Earthquakes have serious destructive effects on building structures, and effective seismic design is key to building design. In order to reduce earthquake damage to building structures, the seismic design of buildings is based on an improved viscous damper. First, displacement-based seismic design was studied and a displacement-based structural seismic model was constructed. In addition, after analyzing traditional viscous dampers, an improved viscous damper is adopted. Through an equivalent damping expression, a displacement seismic model based on the improved viscous damper is constructed. Finally, two scenarios, frequent and rare earthquakes, were selected for experimental analysis. In the frequent earthquake experiments, the improved viscous damper structure increased the shock absorption rate by 35.65 % compared to the design without dampers. In the shear force comparison, the maximum shear force of the improved viscous damper structure in the HB wave $X$ direction was 2186 kN, the smallest shear force among the three structural designs. In the rare earthquake experiment, the maximum floor shear force in the $X$ direction of the Humbolt bay wave for the proposed improved viscous damper structure was 8696 kN; compared with the other structures, this floor shear force was the smallest. In the comparison of floor displacements, the maximum inter-story displacement in the Humbolt bay wave $Y$ direction of the proposed improved viscous damper structure was 162 mm, the smallest inter-story displacement among the compared structures. In addition, the structure apex displacement was also compared: the apex displacement of the improved viscous damper structure was lower than that of the other structures and fell within the slight damage range. The overall seismic effect was significantly better than that of the other structural designs. This research is conducive to optimizing the application of viscous dampers and provides a technical reference for the seismic design of building structures.
1. Introduction
The destructiveness of earthquakes to building structures often brings huge disasters to people, so the seismic design of buildings becomes particularly important. In recent years, with the
advancement of science and technology and in-depth research, the seismic design of buildings based on displacement has gradually attracted attention. Compared with traditional design methods,
displacement seismic design has many technical features and advantages. For example, traditional seismic design often only considers the stiffness and strength of the structure, while displacement
seismic design pays more attention to the displacement response of the structure to earthquakes, which can better protect the integrity of the building and the safety of personnel. Secondly,
displacement-based seismic design can more accurately evaluate the seismic performance of the structure by building a seismic model. It not only considers the overall stiffness of the structure, but
also considers the displacement distribution and shape of the structure under earthquake action. By analyzing and studying displacements, the behavior of buildings in earthquakes can be better
predicted, thereby optimizing and improving design solutions. However, displacement design based on viscous dampers still has shortcomings: its effect is easily limited by small-deformation
factors. Therefore, based on displacement seismic design, an improved amplified viscous damper is introduced for seismic structural design. The research's technological innovations are mainly reflected
in two aspects. Firstly, based on displacement seismic design, the stability, durability, and adaptability requirements of the building structure are fully considered in the structural design,
ensuring the effectiveness of the structural design. Secondly, traditional viscous dampers have been improved to fully consider more seismic scenarios and better adapt to different seismic
conditions, thereby improving overall seismic performance. The significance of the research is that through the research and improvement of displacement seismic design, the seismic performance of
building structures in earthquakes can be improved, the losses of earthquake disasters can be reduced, and technical guidance can be provided for the structural seismic design of related buildings.
The research content is divided into four parts. The first part introduces the relevant cutting-edge technologies and applications of building seismic design. The second part studies the seismic
design of buildings, analyzes related technologies based on displacement seismic design, and builds related models. The third part conducts performance testing of the proposed building anti-seismic
technology to verify its application effect in actual scenarios. The fourth part summarizes and analyzes the full text and elaborates on the improvement direction of the research.
2. Related works
Earthquakes have a fatal impact on building structures. They will not only destroy the structure of the building itself, but also affect the safety of users. Therefore, the design of seismic
structures of buildings has attracted much attention [1-3]. Muttoni and others found that punching failure is a controlling failure mode of flat-plate frames, and its structural limitations can easily
lead to building collapse. To solve this problem, the punching shear equation of concrete structures was studied and a closed-form expression was obtained, which can be used to predict the
deformation capacity of internal slab-column connections. Finally, corresponding experimental analysis was carried out, and this technology was applied to optimize and improve the building structure,
and the overall seismic resistance of the building was significantly improved [4]. Zhong and Christopoulos conducted research on early building structures, in which self-centering seismic connection
design can effectively reduce earthquake damage and residual drift. Therefore, to effectively improve the structural seismic resistance of buildings, further research was conducted on
self-centering seismic connections, which were applied to the seismic design of buildings. Through the optimization and adjustment of building structural parameters, the seismic effect of building structures can
be further improved. Specific experiments show that this technology has good seismic performance in building structures and is better than similar related technologies [5]. Alam et al. found that
asymmetric structures are more susceptible to earthquake damage under earthquake action. Further research found that mainly due to coupled torsional vibration, it will cause destructive effects on
the building structure, thereby affecting the stability of the building. In order to solve the above problems, a correlation experimental analysis was carried out on the eccentric quarter-scale
asymmetric structure, and a fiber Bragg grating sensor was used to detect structural damage. Finally, a finite element model was established and the results of the model and experiments were
compared. Through relevant experimental discussions, we have a better understanding of the mechanism of asymmetric structures in building seismic resistance, which is conducive to strengthening the
overall seismic resistance of building structures [6].
In recent years, viscous dampers have become one of the key technologies in seismic design of building structures. The viscous damper adopts advanced manufacturing technology and materials, which can
withstand various vibration and impact tests, and exhibits extremely high reliability over its service life. Relevant scholars have therefore studied it extensively. Hu et al. proposed a building seismic design using viscous dampers to improve seismic performance; however, the distribution of parameters across the damper installation locations was not fully considered in the design. To address this, a distribution pattern was adopted that reduces the overall damping-coefficient requirement without changing the response mitigation effect. In simulation experiments based on an established model, the proposed technology showed better seismic performance in practical scenarios than similar techniques [7]. De Domenico et al. proposed a practical seismic optimization method using nonlinear viscous dampers to retrofit steel frame structures. First, the Maxwell model was used to simulate damper behavior; during design, the nonlinearity of the parent steel frame was considered in determining the performance optimization parameters of the building. The results show that the technique offers excellent stability and lower building damage [8]. Akehashi and Takewaki proposed a method for determining the optimal locations of target viscous dampers: the parameters of the building are first studied to obtain a structural model, a model optimization technique is designed based on the building's optimization objective function, and deep learning algorithms are used to solve for the target. The proposed technology exhibits excellent seismic performance in real scenarios [9]. Karami et al. noted that in areas of strong seismicity, residential buildings face the threat of serious earthquake damage, making structural seismic design critical for residents' safety. They proposed a numerical design optimization method that uses nonlinear viscous dampers for seismic reinforcement of existing steel frames to achieve seismic optimization of building structures [10]. Humaidi et al. studied the seismic performance of building structures with the aim of reducing the damage caused by seismic excitation. The bee algorithm was introduced to tune the gains of a traditional PID controller, thereby achieving active control of a two-story building structure; the structural design parameters were further optimized by calculating the excitation parameters between floors. Experiments with seismic models confirmed the feasibility of this technology [11].
In addition, many scholars have studied the mechanical performance of seismic-resistant structural materials. Jain et al. studied traditional foundation isolators and found significant relative displacement between these components and the foundation; they therefore further examined the attenuation and periodic characteristics, calculating the periodic fundamental bandgap using the plane wave expansion method. This work enables effective analysis of the seismic characteristics of different materials and supports the seismic resistance of building structures [12]. Pany and Li studied periodically repetitive crystal materials, which are widely used in aerospace, industrial, and building structures to ensure excellent quality performance. A physical model was constructed using finite element methods to analyze the mechanical condition of the structure on multiple surfaces, yielding three-dimensional plots of the phase constant, flutter frequency, and pressure parameters at the optimal period angle. This research also provides a reference for the seismic optimization design of building structures [13]. Hu et al. studied the seismic design of building structures and, to improve seismic performance, adopted a structural design scheme using viscous dampers with an optimized damping distribution pattern, which reduces the overall damping-coefficient requirement without changing the response mitigation effect. The seismic demand ratio of each floor was also analyzed to optimize different types of design response. Test results show that the design has good seismic performance and a simple structure [7]. Pany and Parthan analyzed one-dimensional axial wave propagation in an infinitely long, periodically supported cylindrical curved panel under supersonic airflow, using piston-theory aerodynamics together with both exact and finite element methods to analyze flutter frequency and pressure parameters, thereby providing technical support for optimizing seismic-resistant building structures [14].
The above research shows that effective building seismic design is key to reducing earthquake damage. Among seismic design approaches, displacement-based design has excellent application effects, and viscous dampers can significantly reduce earthquake-induced displacement damage and minimize seismic risk. However, the studies above did not consider the adaptability of traditional seismic design to both rare and frequent earthquake scenarios. Therefore, the following research adopts an improved viscous damper in a seismic design that accounts for different earthquake scenarios, applying it to both frequent and rare earthquakes to improve the overall seismic design of buildings.
3. Construction of building seismic model based on displacement
This part mainly studies the seismic design of structures based on displacement and builds related models. At the same time, the shortcomings of the traditional viscous damper are analyzed, an
improved viscous damper is introduced, and a displacement-based seismic model is constructed.
3.1. Building seismic design based on displacement design
In recent years, more and more large-scale buildings have been constructed, and earthquake-resistant design is central to their construction. Displacement-based structural seismic design has attracted particular attention: compared with traditional force-based seismic design, displacement control focuses more directly on the performance and seismic capacity of the building, with clearer benefits for structural stability and safety [15]. In displacement-based design, the key step is to reduce the multi-degree-of-freedom structure to an equivalent single-degree-of-freedom system, which lowers the difficulty of subsequent calculations and improves the effectiveness of the structural design. The principle of displacement-based seismic design is shown in Fig. 1.
Fig. 1Principle of displacement seismic design
In displacement-based seismic design, the performance objectives of the structure must first be determined, comprehensively accounting for factors such as building stability, durability, and adaptability. In this study, both frequent and rare earthquake cases are considered. Based on the requirement that the maximum elastic inter-story drift ratio of the frame not exceed 1/550, the inter-story drift angle for frequent earthquakes is set at 1/550; for rare earthquakes, to prevent damage from characteristic events, it is set at 1/100 [16]. The equivalence between the multi-degree-of-freedom structure and a single-degree-of-freedom system is illustrated in Fig. 2 [17].
Fig. 2Equivalent principle of structural degrees of freedom
a) Multi degree of freedom system
b) Acceleration and inertial force
In the design of building structures, the building's multi-degree-of-freedom dynamic system is shown in Eq. (1):

$M\ddot{U}+C\dot{U}+KU=-M{\ddot{u}}_{g}\left(t\right),$

where $M$ represents the mass matrix, $K$ the stiffness matrix, $C$ the damping matrix, and $\ddot{U}$, $\dot{U}$, and $U$ the acceleration, velocity, and displacement vectors of the multi-degree-of-freedom system, respectively; ${\ddot{u}}_{g}\left(t\right)$ represents the ground acceleration. The lateral displacement of the multi-degree-of-freedom system is defined as Eq. (2):
$U\left(\xi ,t\right)=\mathrm{\Phi }\left(\xi \right)Z\left(t\right),$
where $Z\left(t\right)$ represents the normalized coordinates and $\mathrm{\Phi }\left(\xi \right)$ the mode shape matrix. Substituting Eq. (2) into Eq. (1), the normalized structural dynamic equation is obtained as Eq. (3):
$M\mathrm{\Phi }\stackrel{¨}{Z}+C\mathrm{\Phi }\stackrel{˙}{Z}+K\mathrm{\Phi }Z=-M{\stackrel{¨}{u}}_{g}\left(t\right).$
In the equivalent single-degree-of-freedom system, let the equivalent displacement be ${u}_{eff}$, the equivalent acceleration ${a}_{eff}$, the equivalent mass ${M}_{eff}$, the equivalent base shear ${V}_{b}$, and the equivalent stiffness ${k}_{eff}$. The relation between the lateral displacement ${u}_{i}$ of multi-degree-of-freedom particle $i$ and the equivalent displacement ${u}_{eff}$ is shown in Eq. (4):

${u}_{eff}=\frac{{u}_{i}}{{c}_{i}}=\frac{{u}_{i}\cdot {a}_{eff}}{{a}_{i}},$

where ${a}_{i}$ represents the absolute acceleration of the particle and ${c}_{i}={a}_{i}/{a}_{eff}$. The seismic force exerted on multi-degree-of-freedom particle $i$ is then given by Eq. (5):

${F}_{i}={m}_{i}{c}_{i}{a}_{eff},$

where ${m}_{i}$ represents the particle mass. The equivalent base shear force can then be calculated, as shown in Eq. (6):
${V}_{b}=\sum _{i=1}^{n}{F}_{i}=\left(\sum _{i=1}^{n}{m}_{i}{c}_{i}\right){a}_{eff}={M}_{eff}{a}_{eff}.$
The equivalent mass can be further obtained from Eq. (5) and Eq. (6), as shown in Eq. (7):
${M}_{eff}=\frac{\sum _{i=1}^{n}{m}_{i}{u}_{i}}{{u}_{eff}}.$

From Eq. (7), the seismic force on particle $i$ can finally be obtained, as shown in Eq. (8):
${F}_{i}=\frac{{m}_{i}{u}_{i}}{\sum _{j=1}^{n}{m}_{j}{u}_{j}}{V}_{b},$
where ${m}_{j}$ and ${u}_{j}$ are the mass and lateral displacement of particle $j$, respectively. Under earthquake excitation, the single-degree-of-freedom and multi-degree-of-freedom systems do the same work, giving Eq. (9):
$\sum _{i=1}^{n}{F}_{i}{u}_{i}={V}_{b}{u}_{eff}.$
According to Eq. (8) and Eq. (9), the equivalent displacement can be obtained, as shown in Eq. (10):
${u}_{eff}=\frac{\sum _{i=1}^{n}{m}_{i}{u}_{i}^{2}}{\sum _{i=1}^{n}{m}_{i}{u}_{i}}.$
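The equivalent-system conversion in Eqs. (4)-(10) can be sketched numerically. The following is a minimal check, with hypothetical story masses, mode displacements, and base shear chosen purely for illustration; it verifies that the story-force distribution of Eq. (8) does the same work as the equivalent system, per Eq. (9):

```python
import numpy as np

# Illustrative story masses (t) and first-mode lateral displacements (m)
# for a 6-story frame; these values are assumptions, not from the paper.
m = np.array([60.0, 58.0, 58.0, 58.0, 58.0, 55.0])   # story masses
u = np.array([0.02, 0.05, 0.08, 0.11, 0.13, 0.15])   # story displacements

# Eq. (10): equivalent displacement of the substitute SDOF system
u_eff = np.sum(m * u**2) / np.sum(m * u)

# Eq. (7): equivalent mass
M_eff = np.sum(m * u) / u_eff

# Eq. (8): distribution of an assumed base shear V_b over the stories
V_b = 1000.0  # kN, assumed equivalent base shear
F = m * u / np.sum(m * u) * V_b

# Eq. (9): the story forces do the same work as V_b on the SDOF system
assert np.isclose(np.sum(F * u), V_b * u_eff)
```

The final assertion holds identically, since substituting Eq. (8) into the left side of Eq. (9) reproduces the definition of $u_{eff}$ in Eq. (10).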
In addition, in displacement-based seismic design, the equivalent period ${T}_{eff}$ also needs to be determined, which is obtained by constructing a displacement response spectrum. The displacement response spectrum is constructed from the seismic integration curve shown in Fig. 3.
Fig. 3Seismic integration curve
To obtain the displacement response spectrum, the dynamic differential equation of the equivalent single-degree-of-freedom system is set as $\ddot{u}\left(t\right)+2\xi \omega \dot{u}\left(t\right)+{\omega }^{2}u\left(t\right)=-{\ddot{u}}_{g}\left(t\right)$, where $\xi$ is the damping ratio, $\omega$ is the circular frequency, and $u\left(t\right)$ is the displacement response. Because the Duhamel integral is an effective method for solving the response of linear systems under arbitrary external excitation, which improves computational efficiency, the displacement is obtained via the Duhamel integral, as shown in Eq. (11) [18]:

$u\left(t\right)=-\frac{T}{2\pi }{\int }_{0}^{t}{\ddot{u}}_{g}\left(\tau \right){e}^{-\xi \frac{2\pi }{T}\left(t-\tau \right)}\mathrm{sin}\frac{2\pi }{T}\left(t-\tau \right)d\tau ,$
where $T$ represents the seismic response period and $\tau$ the integration time variable. Taking the second-order derivative of $u\left(t\right)$ gives Eq. (12):

$\ddot{u}\left(t\right)=-{\left(\frac{2\pi }{T}\right)}^{2}u\left(t\right).$
According to Eq. (12), the displacement response spectrum can be obtained, as shown in Eq. (13):

${S}_{d}={\left(\frac{T}{2\pi }\right)}^{2}{S}_{a}={\left(\frac{T}{2\pi }\right)}^{2}{\alpha }_{l}g,$

where ${\alpha }_{l}$ represents the seismic response coefficient, ${S}_{a}$ is the spectral acceleration, $g$ is the gravitational acceleration, and ${S}_{d}$ is the single-degree-of-freedom spectral displacement. The earthquake response period can be obtained from the displacement spectrum, with the period expression shown in Eq. (14):
$T=2\pi \sqrt{\frac{{S}_{d}}{{\eta }_{2}{\alpha }_{\mathrm{m}\mathrm{a}\mathrm{x}}g}},\quad 0.1\ \mathrm{s}\le T\le {T}_{g}.$
In Eq. (14), ${\eta }_{2}$ represents the damping adjustment coefficient.
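Eqs. (13) and (14) are inverses of each other on the plateau of the response spectrum. The following minimal sketch, with assumed values for the response coefficient and damping adjustment coefficient, converts a trial period to a spectral displacement via Eq. (13) and recovers the period via Eq. (14):

```python
import math

g = 9.81          # m/s^2, gravitational acceleration
alpha_l = 0.12    # assumed seismic response coefficient at period T
T = 1.2           # s, assumed trial period

# Eq. (13): spectral displacement from spectral acceleration S_a = alpha_l * g
S_d = (T / (2 * math.pi))**2 * alpha_l * g

# Eq. (14): recover the period from the target displacement; eta_2 is the
# damping adjustment coefficient and alpha_max the plateau value of the
# response coefficient (both assumed here, with alpha_l on the plateau).
eta_2, alpha_max = 1.0, 0.12
T_eff = 2 * math.pi * math.sqrt(S_d / (eta_2 * alpha_max * g))

assert abs(T_eff - T) < 1e-9  # round trip recovers the trial period
```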
3.2. Construction of displacement seismic model based on amplified viscous damping structure
In displacement-based seismic structural design, appropriate dampers are needed to reduce the impact of earthquakes on the building. Since the research object is a reinforced concrete frame building, and viscous dampers (VD) have excellent effects in the seismic resistance of steel frames, concrete frames, and similar structures [19], viscous dampers are used as the main seismic device. However, traditional viscous dampers dissipate limited energy and absorb shocks poorly when the structural deformation is small. To solve this problem, a displacement-based seismic method using an amplified viscous damper (AVD) is introduced [20]. The schematic diagram of the amplified damper device is shown in Fig. 4.
Fig. 4A new type of amplified damper device
As shown in Fig. 4, the device consists of a horizontally placed viscous damper, a vertical amplification lever, and a herringbone bracket. To analyze the mechanics of the AVD, its mechanical model is obtained through nonlinear derivation from the VD model. Let the structural velocity be $\dot{u}$ and the structural displacement $u$; with the lever device amplifying them $n$ times, the amplified displacement is ${u}_{f}=nu$ and the amplified velocity is ${\dot{u}}_{f}=n\dot{u}$. Substituting ${u}_{f}$ and ${\dot{u}}_{f}$ into the nonlinear viscous damper model yields the AVD damping force expression in Eq. (15):
${F}_{n}=c{\left|n\dot{u}\right|}^{\alpha }\mathrm{sgn}\left(n\dot{u}\right)={n}^{\alpha }c{\left|\dot{u}\right|}^{\alpha }\mathrm{sgn}\left(\dot{u}\right),$
where $c$ represents the damping coefficient and $\alpha$ the damping exponent. Using the lever amplification effect, the AVD mechanical model is obtained, as shown in Eq. (16):
${F}_{d}\left(t\right)=n{F}_{n}\left(t\right)={n}^{\alpha +1}c|\stackrel{˙}{u}{|}^{\alpha }\mathrm{s}\mathrm{g}\mathrm{n}\left(\stackrel{˙}{u}\right).$
Letting $F$ denote the damping force of an ordinary VD, the energy dissipated by the ordinary VD is expressed as Eq. (17):
$E=Fu=cu\cdot |\stackrel{˙}{u}{|}^{\alpha }\cdot \mathrm{s}\mathrm{g}\mathrm{n}\left(\stackrel{˙}{u}\right).$
The energy dissipated by the VD after amplification by $n$ times is shown in Eq. (18):

${E}_{1}={F}_{d}u={n}^{\alpha +1}cu{\left|\dot{u}\right|}^{\alpha }\mathrm{sgn}\left(\dot{u}\right).$
If the damping coefficient $c$ of an ordinary VD is enlarged to ${c}_{\delta }$ such that the ordinary VD and the $n$-times amplified AVD dissipate the same energy and provide the same shock absorption for the building, the VD energy dissipation is then given by Eq. (19):

${E}_{2}=Fu={c}_{\delta }u\cdot {\left|\dot{u}\right|}^{\alpha }\cdot \mathrm{sgn}\left(\dot{u}\right)={E}_{1}.$

From Eq. (18) and Eq. (19), the expression for ${c}_{\delta }$ can be obtained, as shown in Eq. (20):
${c}_{\delta }={n}^{\alpha +1}c.$
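The force equivalence implied by Eqs. (15), (16), and (20) can be checked numerically. The sketch below uses assumed parameter values (the damper coefficient, exponent, amplification factor, and velocity are illustrative, not taken from the paper):

```python
import math

def vd_force(c, alpha, v):
    # Nonlinear viscous damper force: F = c * |v|^alpha * sgn(v)
    return c * abs(v)**alpha * math.copysign(1.0, v)

def avd_force(c, alpha, n, v):
    # Eq. (16): the lever multiplies both the input velocity and the
    # output force by n, giving F_d = n^(alpha+1) * c * |v|^alpha * sgn(v)
    return n * vd_force(c, alpha, n * v)

c, alpha, n, v = 450.0, 0.45, 2.0, 0.3   # assumed damper parameters
c_delta = n**(alpha + 1) * c             # Eq. (20): equivalent VD coefficient

# An ordinary VD with coefficient c_delta matches the AVD output force
assert math.isclose(vd_force(c_delta, alpha, v), avd_force(c, alpha, n, v))
```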
From Eq. (20) it follows that, all other conditions being equal, a VD whose damping coefficient $c$ is enlarged ${n}^{\alpha +1}$ times dissipates the same energy and provides the same shock absorption as an AVD with amplification factor $n$ [21]. For the displacement model based on the AVD, consider a single-degree-of-freedom system equipped with the lever-amplified AVD, with displacement $u$ and velocity $\dot{u}$; the work done by the AVD over one loading cycle is then given by Eq. (21) [22]:
${W}_{d}=c{\left(n\omega {u}_{0}\right)}^{1+\alpha }{\int }_{0}^{\frac{2\pi }{\omega }}{\left|\mathrm{sin}\left(\omega t\right)\right|}^{1+\alpha }dt,$
where ${u}_{0}$ represents the displacement amplitude and $\omega$ the frequency of the external force. Substituting $\omega t=2\theta$, $dt=\left(2/\omega \right)d\theta$ into Eq. (21) gives the updated ${W}_{d}$, as shown in Eq. (22):

${W}_{d}={2}^{2+\alpha }c{\omega }^{\alpha }{\left(n{u}_{0}\right)}^{1+\alpha }\frac{{\mathrm{\Gamma }}^{2}\left(1+\frac{\alpha }{2}\right)}{\mathrm{\Gamma }\left(2+\alpha \right)}.$
Introducing the intermediate parameter $\lambda$, expressed through the Gamma function $\mathrm{\Gamma }$ as shown in Eq. (23):

$\lambda ={2}^{2+\alpha }\frac{{\mathrm{\Gamma }}^{2}\left(1+\frac{\alpha }{2}\right)}{\mathrm{\Gamma }\left(2+\alpha \right)},$

where $\alpha$ is the damping exponent. The correspondence between $\lambda$ and the damping exponent $\alpha$, taken from the new-generation seismic performance method in the United States, is shown in Table 1 [23].
Table 1Correspondence between variable parameter λ and damping exponent α
$\lambda$ Damping index $\alpha$ $\lambda$ Damping index $\alpha$
3.88 0.10 3.42 0.60
3.83 0.15 3.38 0.65
3.77 0.20 3.34 0.70
3.72 0.25 3.30 0.75
3.67 0.30 3.27 0.80
3.63 0.35 3.24 0.85
3.58 0.40 3.20 0.90
3.54 0.45 3.17 0.95
3.50 0.50 3.14 1.00
3.46 0.55 – –
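The values in Table 1 follow directly from Eq. (23). A short numerical check, using the standard-library Gamma function:

```python
from math import gamma

def lam(alpha):
    # Eq. (23): lambda = 2^(2+alpha) * Gamma(1 + alpha/2)^2 / Gamma(2 + alpha)
    return 2**(2 + alpha) * gamma(1 + alpha / 2)**2 / gamma(2 + alpha)

# Spot-check against Table 1 (whose entries are rounded to two decimals)
assert abs(lam(0.1) - 3.88) < 0.01
assert abs(lam(0.5) - 3.50) < 0.01
assert abs(lam(1.0) - 3.14) < 0.01   # alpha = 1 gives exactly pi
```

Note that for a linear damper ($\alpha = 1$) Eq. (23) reduces to $\lambda = \pi$, matching the last row of the table.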
According to Eq. (22) and Eq. (23), the equivalent damping ratio of the AVD single degree of freedom system is obtained, as shown in Eq. (24):
${\xi }_{d}=\frac{\lambda c{\omega }^{\alpha -2}{u}_{0}^{\alpha -1}{n}^{1+\alpha }}{2\pi m},$
where $m$ is the mass of the damper system. Extending the single-degree-of-freedom system to a multi-degree-of-freedom system, the structural elastic strain energy is obtained, as shown in Eq. (25):

${W}_{k}=\frac{1}{2}\sum _{i}{F}_{i}^{\prime \prime }{\mathrm{\Delta }}_{i},$

where ${F}_{i}^{\prime \prime }$ represents the shear force of story $i$ and ${\mathrm{\Delta }}_{i}$ the inter-story displacement of story $i$. In a multi-degree-of-freedom system, generally only the first mode shape is considered, and the equivalent damping ratio of the nonlinear AVD multi-degree-of-freedom system is finally obtained, as shown in Eq. (26) [24]:
${\xi }_{eff}={\xi }_{0}+\frac{\sum _{j}\lambda {c}_{j}{\left(n{\varphi }_{rj}\right)}^{1+\alpha }}{2\pi {A}^{1-\alpha }{\omega }^{2-\alpha }\sum _{i}{m}_{i}{\varphi }_{i}^{2}},$
where $A$ represents the maximum displacement of the top floor of the building, ${\varphi }_{rj}$ the normalized relative (inter-story) displacement at the $j$th damper under the first mode shape, ${c}_{j}$ the coefficient of the $j$th damper, ${m}_{i}$ the mass of the $i$th story, ${\varphi }_{i}$ the mode displacement of story $i$ normalized by the top-floor displacement, and ${\xi }_{0}$ the inherent damping ratio of the system.
4. Experimental analysis of seismic model performance
This section tests the application effect of the seismic structural design in two scenarios: frequent earthquakes and rare earthquakes. Evaluation indicators include vertex displacement, floor shear force, and inter-story displacement.
4.1. Experimental analysis of frequent earthquakes
To verify the performance of the proposed displacement-based seismic design, a 6-story reinforced concrete frame building is selected for seismic experimental analysis under the two design objectives of frequent and rare earthquakes. The experiment uses a 6-story cast-in-place reinforced concrete structure designed on the PKPM platform of the China Academy of Building Research. Combined with the principle of equivalent structural degrees of freedom, loads in different directions are converted into equivalent vertical loads to simplify the experimental calculations. The specific parameters of the structure are shown in Table 2. For the PKPM platform there are two damper deployment schemes, one for each structural design target: frequent and rare. The arrangement of the frequent and rare dampers is shown in Table 3.
Table 2Building structural parameters
Parameter indicator type Numerical value
The height of the bottom layer of the structure (m) 3.9
Other layer heights (m) 3.6
Total building height (m) 21.9
Structural concrete strength C40
Reinforcement grade HRB400
Site characteristic period ${T}_{g}$ (s) 0.35
Site type and grouping Class II, Group 1
Seismic fortification intensity 8 degrees (0.3 g)
Live load (kN/m^2) 2
Horizontal load (kN/m^2) 6
$\gamma$ 0.8084
${\alpha }_{max}$ 1.2
${u}_{eff}$ 0.1156
${k}_{eff}$ (kN/m) 89241
Seismic damping ratio and damping exponent (equal) 0.0845
Table 3Layout of VD and AVD dampers under frequent and rare designs

Design target | Floor | Quantity in $Y$ direction | Quantity in $X$ direction | VD damping coefficient [kN⋅(s/m)^0.45] | AVD*2 damping coefficient [kN⋅(s/m)^0.45] | AVD*3 damping coefficient [kN⋅(s/m)^0.45]

Rare earthquakes:
1 4 4 400 1092.4 1967.5
2 4 4 450 1228.5 2214.5
3 4 4 450 1228.5 2214.5
4 4 4 450 1228.5 2214.5
5 4 4 400 1091.5 1968.5
6 4 4 350 955.1 1720.3

Frequent earthquakes:
1 4 4 400 1090.8 –
2 4 4 450 1228.5 –
3 4 4 450 1228.5 –
4 4 4 450 1228.5 –
5 4 4 400 1090.4 –
6 4 4 350 955.1 –
For the frequent-earthquake analysis, the experimental parameters in Table 3 are used. Two natural ground-motion records from 1952, Kern County (KC) and Humboldt Bay (HB), were selected for the experiments. In the frequent-earthquake analysis the peak acceleration is 110 cm/s^2. The comparison of structural vertex displacement is shown in Fig. 5.
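Scaling a record to the target peak acceleration amounts to one multiplicative factor. A minimal sketch, with a purely illustrative set of acceleration samples standing in for an actual record:

```python
import numpy as np

# Scale a ground-motion record so its peak acceleration matches the
# frequent-earthquake target of 110 cm/s^2; the samples are illustrative.
target_pga = 110.0                                   # cm/s^2
record = np.array([12.0, -35.0, 80.0, -64.0, 22.0])  # cm/s^2
scaled = record * target_pga / np.max(np.abs(record))

assert np.isclose(np.max(np.abs(scaled)), target_pga)
```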
Fig. 5Comparison of displacement of structural vertices
a) HB wave $X$-axis vertex displacement
b) HB wave $Y$-axis vertex displacement
c) KC wave $X$-axis vertex displacement
d) KC wave $Y$-axis vertex displacement
Fig. 5(a) and 5(b) show the vertex displacements of the HB wave in the $X$ and $Y$ directions, respectively. The results show that in frequent-earthquake scenarios, the VD and AVD*2 structures have significant seismic advantages over the uncontrolled structure: the VD structure improves the damping rate by 27.65 % relative to the uncontrolled structure, while the AVD*2 structure improves it by 35.65 %. Fig. 5(c) and 5(d) show the KC wave vertex displacements in the $X$ and $Y$ directions, respectively. The VD and AVD*2 designs again show markedly smaller displacements in both directions than the uncontrolled structure, with damping-rate improvements of 24.35 % and 32.35 %, respectively. Thus, VD and AVD*2 optimize the displacement response of the building structure in the $X$ and $Y$ directions; the improved viscous damper reduces the forces and shortens the $X$- and $Y$-axis motion, significantly improving the safety of the building structure. The comparison of structural floor shear forces is shown in Fig. 6.
Fig. 6(a) and 6(b) show the HB wave shear forces in the $X$ and $Y$ directions, respectively. The curves reveal clear differences in the inter-story seismic effects of the three structural designs. For the best-performing AVD*2 structure, the $X$-direction first-floor shear force is controlled at 2186 kN, while the VD structure gives 2556 kN and the uncontrolled structure 2498 kN. In the $Y$-direction floor shear analysis the three structures differ substantially; the maximum floor shear again occurs on the first floor, with values of 1956 kN, 1465 kN, and 1265 kN for the uncontrolled, VD, and AVD*2 structures, respectively. Fig. 6(c) and 6(d) show the KC wave shear forces in the $X$ and $Y$ directions. Comparing the inter-story shear forces, the AVD*2 structure achieves the smallest shear force at 1503 kN, followed by VD at 1635 kN, with the uncontrolled structure performing worst at 2623 kN. Structural design therefore significantly reduces the impact of seismic forces, mainly because the more effective seismic structure offsets forces arriving from multiple directions, whereas the uncontrolled structure cannot counteract them, is subjected to greater forces, and is significantly less safe. Fig. 7 shows the comparison of inter-story displacements.
Fig. 6Comparison of shear forces on structural floors
a) HB wave $X$-direction floor shear force
b) HB wave $Y$-direction floor shear force
c) KC wave $X$-direction floor shear force
d) KC wave $Y$-direction floor shear force
Fig. 7(a) and 7(b) show the HB wave inter-story displacements in the $X$ and $Y$ directions, respectively. The curves show that the inter-story displacement grows with floor number, reaching its maximum at the 6th floor. Comparing the 6th-floor values in the $X$ direction, the uncontrolled structure has the largest displacement at 31.95 mm, followed by the VD structure at 21.05 mm, while the AVD*2 structure has the smallest at 18.35 mm. In the $Y$ direction, the 6th-floor maxima for the uncontrolled, VD, and AVD*2 designs are 19.36 mm, 20.35 mm, and 10.68 mm, respectively. The data show that structural design clearly outperforms the uncontrolled structure in seismic displacement control. Fig. 7(c) and 7(d) show the KC wave inter-story displacements in the $X$ and $Y$ directions; the results are consistent with the HB tests, with the AVD*2 design again providing better seismic resistance in both directions. For example, in the $X$-direction comparison, the maximum inter-story displacement of the uncontrolled structure is 43.35 mm, while that of AVD*2 is 19.05 mm. The AVD*2 design thus significantly reduces earthquake-induced floor displacement compared with the uncontrolled design and minimizes earthquake hazards.
Fig. 7Comparison of displacement between structural floors
a) HB wave $X$-direction floor displacement
b) HB wave $Y$-direction floor displacement
c) KC wave $X$-direction floor displacement
d) KC wave $Y$-direction floor displacement
4.2. Analysis of rare earthquakes
The HB and KC waves were again selected for the experiment, but the peak acceleration was increased to 510 cm/s^2 for the 8-degree (0.3 g) rare-earthquake analysis. The remaining parameters follow the rare-earthquake settings. The structural vertex displacement comparison is shown in Fig. 8.
Fig. 8(a) and 8(b) show the HB wave structural vertex displacements in the $X$ and $Y$ directions, respectively. The curves show that the AVD*2 and AVD*3 designs keep the $X$-direction vertex displacement within 50 mm, VD keeps it within 100 mm, and the uncontrolled design reaches a maximum of 112 mm. In the $Y$-direction comparison, the AVD*3 design performs best, followed by AVD*2, VD, and the uncontrolled design; the maximum displacement of the AVD*3 structure is kept within 30 mm, a significant improvement over the 88 mm of the uncontrolled design. Fig. 8(c) and 8(d) show the KC wave vertex displacements in the $X$ and $Y$ directions; the results are essentially consistent with the HB tests, with the AVD*3 design offering the best overall seismic resistance. Relative to the uncontrolled design, the damping rates of the AVD*3, AVD*2, and VD designs improve by 25.65 %, 21.56 %, and 12.35 %, respectively. The comparison of structural floor shear forces is shown in Fig. 9.
Fig. 8Comparison of displacement of structural vertices
a) HB wave $X$-axis vertex displacement
b) HB wave $Y$-axis vertex displacement
c) KC wave $X$-axis vertex displacement
d) KC wave $Y$-axis vertex displacement
Fig. 9Comparison of shear forces on structural floors
a) HB wave $X$-direction floor shear force
b) HB wave $Y$-direction floor shear force
c) KC wave $X$-direction floor shear force
d) KC wave $Y$-direction floor shear force
Fig. 9(a) and 9(b) show the HB wave shear forces in the $X$ and $Y$ directions, respectively. In rare earthquakes, the lower the floor, the larger the inter-story shear force, so the first floor is selected for analysis. In the $X$-direction inter-story shear analysis, the AVD*3 structure is controlled at 8699 kN, while VD reaches 8865 kN. In the $Y$-direction comparison, the AVD*3 structure again has the smallest inter-story shear at 4486 kN, versus 8006 kN for the uncontrolled structure. Fig. 9(c) and 9(d) show the KC wave shear forces in the $X$ and $Y$ directions; AVD*2 and AVD*3 again control the shear forces well. In the $Y$-direction analysis, the maximum first-floor inter-story shear of the AVD*3 structure is 6105 kN, while that of AVD*2 is 6186 kN. Fig. 10 shows the comparison of inter-story displacements.
Fig. 10. Comparison of displacement between structural floors
a) HB wave $X$-direction floor displacement
b) HB wave $Y$-direction floor displacement
c) KC wave $X$-direction floor displacement
d) KC wave $Y$-direction floor displacement
Fig. 10(a) and Fig. 10(b) show the HB wave $X$- and $Y$-direction inter-story displacement diagrams respectively. According to the data, the maximum inter-story displacement occurs in the upper storeys, with the 6th floor showing the largest value. A comprehensive comparison shows that AVD*3 and AVD*2 have the smallest inter-story displacements in both the $X$ and $Y$ directions. In the $X$-direction comparison, the maximum inter-story displacements of AVD*3 and AVD*2 are 162 mm and 169 mm respectively, while that of the uncontrolled structure is 212 mm. Fig. 10(c) and Fig. 10(d) show the corresponding KC wave $X$- and $Y$-direction inter-story displacement diagrams. The results are consistent with the HB wave results: AVD*3 again provides the best inter-story displacement control.
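The inter-story values discussed above can be obtained from a floor displacement profile by differencing adjacent floors. A minimal sketch; the 8-storey profile below is hypothetical (only the differencing logic follows the text, not the paper's measured values):

```python
def interstory_drifts(floor_disp_mm):
    """Inter-story displacements for a peak lateral displacement profile
    given from the ground level upward (mm)."""
    return [abs(upper - lower)
            for lower, upper in zip(floor_disp_mm, floor_disp_mm[1:])]

# Hypothetical profile (mm), ground level first:
profile = [0, 40, 95, 160, 240, 330, 420, 480]
drifts = interstory_drifts(profile)
print("drifts:", drifts)
print("max inter-story displacement:", max(drifts), "mm")
```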
5. Conclusions
The energy released by earthquakes can cause serious damage to building structures, so effective seismic design is key. To address this problem, a displacement-based seismic design method was proposed. First, displacement-based seismic resistance was studied and a displacement-based seismic model was constructed. At the same time, since a traditional VD cannot respond effectively to small-displacement scenarios, an optimized AVD was proposed and a seismic model was built from the equivalent damping ratio expression. In the frequent-earthquake experiments, the shock absorption rate of AVD*2 was 35.65 % higher than that of the structure-free design and better than that of the VD structure, with the smallest vertex displacement. In the floor shear comparison, the HB wave $Y$-direction floor shear was analysed: the maximum shear forces of VD and AVD*2 were 1465 kN and 1265 kN respectively, so the overall seismic resistance of AVD*2 is better. In the rare-earthquake experiment, the AVD*3 design increased the shock absorption rate by 25.65 % over the structure-free design, and AVD*2 by 21.56 %, a significant improvement over both the VD and structure-free designs. In the HB wave $Y$-direction shear analysis, the maximum first-floor inter-story shear of the AVD*3 structure was 6105 kN and that of AVD*2 was 6186 kN, both markedly smaller than for the VD structure. In the HB wave $X$-direction inter-story displacement comparison, AVD*3 and AVD*2 had the smallest values, 162 mm and 169 mm respectively, against 212 mm for the uncontrolled structure. These experiments show that the proposed technology performs well under both frequent and rare earthquakes and outperforms comparable techniques, so the seismic design proposed in this study has excellent application value in practical scenarios. The study also has shortcomings: the proposed design did not consider the effect of the amplification devices on the dampers under horizontal displacement, which requires further research. In addition, the energy-dissipation design of the damped frames was based mainly on the displacement method; in future work, the capacity spectrum method and the ductility coefficient method could be added to further improve the seismic performance of building structures.
• P. Zakian and A. Kaveh, “Seismic design optimization of engineering structures: A comprehensive review,” Acta Mechanica, Vol. 234, No. 4, pp. 1305–1330, Dec. 2022, https://doi.org/10.1007/
• D. Shahnazaryan and G. J. O’Reilly, “Integrating expected loss and collapse risk in performance-based seismic design of structures,” Bulletin of Earthquake Engineering, Vol. 19, No. 2, pp.
987–1025, Jan. 2021, https://doi.org/10.1007/s10518-020-01003-x
• X. Guan, H. Burton, and M. Shokrabadi, “A database of seismic designs, nonlinear models, and seismic responses for steel moment-resisting frame buildings,” Earthquake Spectra,
Vol. 37, No. 2, pp. 1199–1222, Nov. 2020, https://doi.org/10.1177/8755293020971209
• A. Muttoni et al., “Deformation capacity evaluation for flat slab seismic design,” Bulletin of Earthquake Engineering, Vol. 20, No. 3, pp. 1619–1654, Jan. 2022, https://doi.org/10.1007/
• C. Zhong and C. Christopoulos, “Self-centering seismic-resistant structures: historical overview and state-of-the-art,” Earthquake Spectra, Vol. 38, No. 2, pp. 1321–1356, Dec. 2021, https://
• Z. Alam, L. Sun, C. Zhang, Z. Su, and B. Samali, “Experimental and numerical investigation on the complex behaviour of the localised seismic response in a multi-storey plan-asymmetric structure,”
Structure and Infrastructure Engineering, Vol. 17, No. 1, pp. 86–102, Jan. 2021, https://doi.org/10.1080/15732479.2020.1730914
• X. Hu, R. Zhang, X. Ren, C. Pan, X. Zhang, and H. Li, “Simplified design method for structure with viscous damper based on the specified damping distribution pattern,” Journal of Earthquake
Engineering, Vol. 26, No. 3, pp. 1367–1387, Feb. 2022, https://doi.org/10.1080/13632469.2020.1719239
• D. de Domenico and I. Hajirasouliha, “Multi-level performance-based design optimisation of steel frames with nonlinear viscous dampers,” Bulletin of Earthquake Engineering, Vol. 19, No. 12, pp.
5015–5049, Jun. 2021, https://doi.org/10.1007/s10518-021-01152-7
• H. Akehashi and I. Takewaki, “Modeling of resilience based on categorized recovery scenario and improving resilience with viscous damper,” Japan Architectural Review, Vol. 5, No. 3, pp. 279–294,
Jun. 2022, https://doi.org/10.1002/2475-8876.12273
• M. Karami, H. E. Estekanchi, I. Hajirasouliha, and S. A. Mirfarhadi, “Value-based seismic performance optimization of steel frames equipped with viscous dampers,” Journal of Earthquake
Engineering, Vol. 27, No. 14, pp. 4024–4050, Oct. 2023, https://doi.org/10.1080/13632469.2022.2155733
• A. J. Humaidi, M. E. Sadiq, A. I. Abdulkareem, and I. M. Ibraheem, “Adaptive backstepping sliding mode control design for vibration suppression of earth-quaked building supported by
magneto-rheological damper,” Journal of Low Frequency Noise, Vibration and Active Control, Vol. 41, No. 2, pp. 768–783, 2022, https://doi.org/10.1177/146134842110646
• S. Jain, S. Pujari, and A. Laskar, “Investigation of one dimensional multi-layer periodic unit cell for structural base isolation,” Structures, Vol. 34, pp. 2151–2163, Dec. 2021, https://doi.org/
• C. Pany and G. Li, “Editorial: Application of periodic structure theory with finite element approach,” Frontiers in Mechanical Engineering, Vol. 9, No. 1, p. 11926, Apr. 2023, https://doi.org/
• C. Pany and S. Parthan, “Flutter analysis of periodically supported curved panels,” Journal of Sound and Vibration, Vol. 267, No. 2, pp. 267–278, Oct. 2003, https://doi.org/10.1016/s0022-460x(02)
• G. Baltzopoulos, A. Grella, and I. Iervolino, “Seismic reliability implied by behavior‐factor‐based design,” Earthquake Engineering and Structural Dynamics, Vol. 50, No. 15, pp. 4076–4096, Sep.
2021, https://doi.org/10.1002/eqe.3546
• G. J. O’Reilly, H. Yasumoto, Y. Suzuki, G. M. Calvi, and M. Nakashima, “Risk‐based seismic design of base‐isolated structures with single surface friction sliders,” Earthquake Engineering and
Structural Dynamics, Vol. 51, No. 10, pp. 2378–2398, May 2022, https://doi.org/10.1002/eqe.3668
• H.-H. Tsang, “Analytical design models for geotechnical seismic isolation systems,” Bulletin of Earthquake Engineering, Vol. 21, No. 8, pp. 3881–3904, Jul. 2022, https://doi.org/10.1007/
• M. Guesmi, N. Belkheiri, and M. L. Guesmi, “Time history analysis of structures under multi-support excitation by state-space method,” Studies in Engineering and Exact Sciences, Vol. 5, No. 1,
pp. 209–222, Jan. 2024, https://doi.org/10.54021/seesv5n1-012
• R. Zhu, L. Song, T. Guo, and F. Mwangilwa, “Seismic analysis and design of SDOF elastoplastic structures with self-centering viscous-hysteretic devices,” Journal of Earthquake Engineering, Vol.
26, No. 9, pp. 4613–4634, Jul. 2022, https://doi.org/10.1080/13632469.2020.1835752
• A. Q. Al-Dujaili, A. J. Humaidi, Z. T. Allawi, and M. E. Sadiq, “Earthquake hazard mitigation for uncertain building systems based on adaptive synergetic control,” Applied System Innovation, Vol.
6, No. 2, p. 34, Feb. 2023, https://doi.org/10.3390/asi6020034
• T. Vu, H. Yeon, Y. Song, Y. Kim, and H. Lee, “Seismic response of an electrical switchboard with vibration absorbers during earthquake excitation,” Journal of Mechanical Science and Technology,
Vol. 37, No. 7, pp. 3347–3355, Jun. 2023, https://doi.org/10.1007/s12206-023-0602-7
• S. Leyva, N. Cruz-Pérez, J. Rodríguez-Martín, and J. C. Santamarta, “Classification of risks for landslides in slopes and hillsides of volcanic nature in Macaronesia and their application to the
Canary Islands,” Geosciences, Vol. 13, No. 6, p. 155, May 2023, https://doi.org/10.3390/geosciences13060155
• M. Böse, J. Andrews, R. Hartog, and C. Felizardo, “Performance and next-generation development of the finite-fault rupture detector (FinDer) within the United States West Coast ShakeAlert warning
system,” Bulletin of the Seismological Society of America, Vol. 113, No. 2, pp. 648–663, Apr. 2023, https://doi.org/10.1785/0120220183
• A. M. Usman and M. K. Abdullah, “An assessment of building energy consumption characteristics using analytical energy and carbon footprint assessment model,” Green and Low-Carbon Economy, Vol. 1,
No. 1, pp. 28–40, Mar. 2023, https://doi.org/10.47852/bonviewglce3202545
About this article
Seismic engineering and applications
seismic design
viscous damper
equivalent damping
rare earthquakes
This study is supported by the Henan Science and Technology Project "Study on dynamic response of bridge foundation under tunnel construction in the loess area of Henan Province", No. 232102240024, and by the Henan Province Higher Education Key Scientific Research Project Plan "Analysis of the bidirectional vibration impact of a subway shield underpassing an existing national railway line project", No.:
Data Availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Author Contributions
Yingfei Guo made contributions to conceptualization, formal analysis, methodology, project administration, writing-original draft preparation and writing-review and editing. Sen Wang made contributions to data curation, formal analysis, visualization and writing-review and editing. Shuyuan Zhang made contributions to investigation, validation and writing-review and editing. All authors reviewed the manuscript.
Conflict of interest
The authors declare that they have no conflict of interest.
Copyright © 2024 Yingfei Guo, et al.
This is an open access article distributed under the
Creative Commons Attribution License
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The data are from boiling water in a pot under various conditions. The response variable, y, is the time taken, in minutes, to reach 90 degrees Celsius. Accurately measuring the time to actual boiling is hard, hence the 90 degrees Celsius point is used instead.
Three factors are varied in a full factorial manner (the first 8 observations). The data are listed in standard order; however, the actual experiments were run in random order. The last 3 rows are runs close to, or interior to, the factorial.
Factors varied were:
• A = Amount of water: low level was 500 mL, and high level was 600 mL
• B = Lid off (low level) or lid on (high level)
• C = Size of pot used: low level was 2 L, and high level was 3 L.
Coded values for A, B, and C should be used in the linear regression model analysis, with -1 representing the low value and +1 the high value.
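A minimal Python sketch of the coded-factor analysis described above, fitting the main-effects model y = b0 + bA·A + bB·B + bC·C to the 2^3 factorial runs. Because the coded columns of a full factorial are orthogonal with ±1 entries, the least-squares coefficients reduce to simple averages. The response values y below are made up for illustration; the real data ship with the pid R package as `boilingpot`:

```python
def code(x, low, high):
    """Map an actual factor level onto the coded [-1, +1] scale."""
    centre, half_range = (high + low) / 2.0, (high - low) / 2.0
    return (x - centre) / half_range

# Standard-order signs for A (water amount), B (lid), C (pot size):
A = [-1, +1, -1, +1, -1, +1, -1, +1]
B = [-1, -1, +1, +1, -1, -1, +1, +1]
C = [-1, -1, -1, -1, +1, +1, +1, +1]

y = [9.8, 11.2, 8.9, 10.4, 10.1, 11.6, 9.2, 10.7]  # hypothetical times (min)

def main_effect(col, y):
    """Least-squares coefficient for an orthogonal +/-1 coded column."""
    return sum(c * v for c, v in zip(col, y)) / len(y)

b0 = sum(y) / len(y)
bA, bB, bC = (main_effect(col, y) for col in (A, B, C))
print(f"y = {b0:.4f} + ({bA:.4f})*A + ({bB:.4f})*B + ({bC:.4f})*C")

# Coding the actual water amounts: 500 mL -> -1, 600 mL -> +1
print(code(550, 500, 600))  # midpoint codes to 0.0
```

With real data the same averages give the factor effects directly, which is why coded ±1 units are preferred over raw mL and L values.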
Bird’s Engineering Mathematics 9th Edition by John Bird, ISBN-13: 978-0367643782
[PDF eBook eTextbook]
• Publisher: Routledge; 9th edition (March 16, 2021)
• Language: English
• 742 pages
• ISBN-10: 0367643782
• ISBN-13: 978-0367643782
Now in its ninth edition, Bird’s Engineering Mathematics has helped thousands of students to succeed in their exams. Mathematical theories are explained in a straightforward manner, supported by
practical engineering examples and applications to ensure that readers can relate theory to practice. Some 1,300 engineering situations/problems have been ‘flagged-up’ to help demonstrate that
engineering cannot be fully understood without a good knowledge of mathematics.
The extensive and thorough topic coverage makes this a great text for a range of level 2 and 3 engineering courses – such as for aeronautical, construction, electrical, electronic, mechanical,
manufacturing engineering and vehicle technology – including for BTEC First, National and Diploma syllabuses, City & Guilds Technician Certificate and Diploma syllabuses, and even for GCSE and
A-level revision.
Its companion website at www.routledge.com/cw/bird provides resources for both students and lecturers, including full solutions for all 2,000 further questions, lists of essential formulae,
multiple-choice tests, and illustrations, as well as full solutions to revision tests for course instructors.
Table of Contents:
Half Title
Title Page
Copyright Page
Section 1 Number and algebra
1 Revision of fractions, decimals and percentages
1.1 Fractions
1.2 Ratio and proportion
1.3 Decimals
1.4 Percentages
2 Indices, engineering notation and metric conversions
2.1 Indices
2.2 Worked problems on indices
2.3 Engineering notation and common prefixes
2.4 Metric conversions
2.5 Metric – US/imperial conversions
3 Binary, octal and hexadecimal numbers
3.1 Introduction
3.2 Binary numbers
3.3 Octal numbers
3.4 Hexadecimal numbers
4 Calculations and evaluation of formulae
4.1 Errors and approximations
4.2 Use of calculator
4.3 Conversion tables and charts
4.4 Evaluation of formulae
Revision Test 1
5 Algebra
5.1 Basic operations
5.2 Laws of indices
5.3 Brackets and factorisation
5.4 Fundamental laws and precedence
5.5 Direct and inverse proportionality
6 Further algebra
6.1 Polynomial division
6.2 The factor theorem
6.3 The remainder theorem
7 Partial fractions
7.1 Introduction to partial fractions
7.2 Partial fractions with linear factors
7.3 Partial fractions with repeated linear factors
7.4 Partial fractions with quadratic factors
8 Solving simple equations
8.1 Expressions, equations and identities
8.2 Worked problems on simple equations
8.3 Further worked problems on simple equations
8.4 Practical problems involving simple equations
8.5 Further practical problems involving simple equations
Revision Test 2
9 Transposition of formulae
9.1 Introduction to transposition of formulae
9.2 Worked problems on transposition of formulae
9.3 Further worked problems on transposition of formulae
9.4 Harder worked problems on transposition of formulae
10 Solving simultaneous equations
10.1 Introduction to simultaneous equations
10.2 Worked problems on simultaneous equations in two unknowns
10.3 Further worked problems on simultaneous equations
10.4 More difficult worked problems on simultaneous equations
10.5 Practical problems involving simultaneous equations
11 Solving quadratic equations
11.1 Introduction to quadratic equations
11.2 Solution of quadratic equations by factorisation
11.3 Solution of quadratic equations by ‘completing the square’
11.4 Solution of quadratic equations by formula
11.5 Practical problems involving quadratic equations
11.6 The solution of linear and quadratic equations simultaneously
12 Inequalities
12.1 Introduction to inequalities
12.2 Simple inequalities
12.3 Inequalities involving a modulus
12.4 Inequalities involving quotients
12.5 Inequalities involving square functions
12.6 Quadratic inequalities
13 Logarithms
13.1 Introduction to logarithms
13.2 Laws of logarithms
13.3 Indicial equations
13.4 Graphs of logarithmic functions
Revision Test 3
14 Exponential functions
14.1 Introduction to exponential functions
14.2 The power series for e^x
14.3 Graphs of exponential functions
14.4 Napierian logarithms
14.5 Laws of growth and decay
15 Number sequences
15.1 Arithmetic progressions
15.2 Worked problems on arithmetic progressions
15.3 Further worked problems on arithmetic progressions
15.4 Geometric progressions
15.5 Worked problems on geometric progressions
15.6 Further worked problems on geometric progressions
15.7 Combinations and permutations
16 The binomial series
16.1 Pascal’s triangle
16.2 The binomial series
16.3 Worked problems on the binomial series
16.4 Further worked problems on the binomial series
16.5 Practical problems involving the binomial theorem
Revision Test 4
Section 2 Trigonometry
17 Introduction to trigonometry
17.1 Trigonometry
17.2 The theorem of Pythagoras
17.3 Trigonometric ratios of acute angles
17.4 Fractional and surd forms of trigonometric ratios
17.5 Evaluating trigonometric ratios of any angles
17.6 Solution of right-angled triangles
17.7 Angle of elevation and depression
17.8 Trigonometric approximations for small angles
18 Trigonometric waveforms
18.1 Graphs of trigonometric functions
18.2 Angles of any magnitude
18.3 The production of a sine and cosine wave
18.4 Sine and cosine curves
18.5 Sinusoidal form A sin(ωt ± α)
18.6 Waveform harmonics
19 Cartesian and polar co-ordinates
19.1 Introduction
19.2 Changing from Cartesian into polar co-ordinates
19.3 Changing from polar into Cartesian co-ordinates
19.4 Use of Pol/Rec functions on calculators
Revision Test 5
20 Triangles and some practical applications
20.1 Sine and cosine rules
20.2 Area of any triangle
20.3 Worked problems on the solution of triangles and their areas
20.4 Further worked problems on the solution of triangles and their areas
20.5 Practical situations involving trigonometry
20.6 Further practical situations involving trigonometry
21 Trigonometric identities and equations
21.1 Trigonometric identities
21.2 Worked problems on trigonometric identities
21.3 Trigonometric equations
21.4 Worked problems (i) on trigonometric equations
21.5 Worked problems (ii) on trigonometric equations
21.6 Worked problems (iii) on trigonometric equations
21.7 Worked problems (iv) on trigonometric equations
22 Compound angles
22.1 Compound angle formulae
22.2 Conversion of a sin ωt + b cos ωt into R sin(ωt + α)
22.3 Double angles
22.4 Changing products of sines and cosines into sums or differences
22.5 Changing sums or differences of sines and cosines into products
Revision Test 6
Section 3 Areas and volumes
23 Areas of common shapes
23.1 Introduction
23.2 Properties of quadrilaterals
23.3 Areas of common shapes
23.4 Worked problems on areas of common shapes
23.5 Further worked problems on areas of plane figures
23.6 Worked problems on areas of composite figures
23.7 Areas of similar shapes
24 The circle and its properties
24.1 Introduction
24.2 Properties of circles
24.3 Radians and degrees
24.4 Arc length and area of circles and sectors
24.5 Worked problems on arc length and area of circles and sectors
24.6 The equation of a circle
25 Volumes and surface areas of common solids
25.1 Introduction
25.2 Volumes and surface areas of regular solids
25.3 Worked problems on volumes and surface areas of regular solids
25.4 Further worked problems on volumes and surface areas of regular solids
25.5 Volumes and surface areas of frusta of pyramids and cones
25.6 The frustum and zone of a sphere
25.7 Prismoidal rule
25.8 Volumes of similar shapes
26 Irregular areas and volumes and mean values of waveforms
26.1 Area of irregular figures
26.2 Volumes of irregular solids
26.3 The mean or average value of a waveform
Revision Test 7
Section 4 Graphs
27 Straight line graphs
27.1 Introduction to graphs
27.2 The straight line graph
27.3 Practical problems involving straight line graphs
28 Reduction of non-linear laws to linear form
28.1 Determination of law
28.2 Determination of law involving logarithms
29 Graphs with logarithmic scales
29.1 Logarithmic scales
29.2 Graphs of the form y = ax^n
29.3 Graphs of the form y = ab^x
29.4 Graphs of the form y = ae^kx
30 Graphical solution of equations
30.1 Graphical solution of simultaneous equations
30.2 Graphical solution of quadratic equations
30.3 Graphical solution of linear and quadratic equations simultaneously
30.4 Graphical solution of cubic equations
31 Functions and their curves
31.1 Standard curves
31.2 Simple transformations
31.3 Periodic functions
31.4 Continuous and discontinuous functions
31.5 Even and odd functions
31.6 Inverse functions
Revision Test 8
Section 5 Complex numbers
32 Complex numbers
32.1 Cartesian complex numbers
32.2 The Argand diagram
32.3 Addition and subtraction of complex numbers
32.4 Multiplication and division of complex numbers
32.5 Complex equations
32.6 The polar form of a complex number
32.7 Multiplication and division in polar form
32.8 Applications of complex numbers
33 De Moivre’s theorem
33.1 Introduction
33.2 Powers of complex numbers
33.3 Roots of complex numbers
Section 6 Vectors
34 Vectors
34.1 Introduction
34.2 Scalars and vectors
34.3 Drawing a vector
34.4 Addition of vectors by drawing
34.5 Resolving vectors into horizontal and vertical components
34.6 Addition of vectors by calculation
34.7 Vector subtraction
34.8 Relative velocity
34.9 i,j, and k notation
35 Methods of adding alternating waveforms
35.1 Combination of two periodic functions
35.2 Plotting periodic functions
35.3 Determining resultant phasors by drawing
35.4 Determining resultant phasors by the sine and cosine rules
35.5 Determining resultant phasors by horizontal and vertical components
35.6 Determining resultant phasors by complex numbers
Revision Test 9
Section 7 Differential calculus
36 Introduction to differentiation
36.1 Introduction to calculus
36.2 Functional notation
36.3 The gradient of a curve
36.4 Differentiation from first principles
36.5 Differentiation of y = ax^n by the general rule
36.6 Differentiation of sine and cosine functions
36.7 Differentiation of e^ax and ln ax
37 Methods of differentiation
37.1 Differentiation of common functions
37.2 Differentiation of a product
37.3 Differentiation of a quotient
37.4 Function of a function
37.5 Successive differentiation
38 Some applications of differentiation
38.1 Rates of change
38.2 Velocity and acceleration
38.3 Turning points
38.4 Practical problems involving maximum and minimum values
38.5 Points of inflexion
38.6 Tangents and normals
38.7 Small changes
39 Solving equations by Newton’s method
39.1 Introduction to iterative methods
39.2 The Newton–Raphson method
39.3 Worked problems on the Newton–Raphson method
40 Maclaurin’s series
40.1 Introduction
40.2 Derivation of Maclaurin’s theorem
40.3 Conditions of Maclaurin’s series
40.4 Worked problems on Maclaurin’s series
Revision Test 10
41 Differentiation of parametric equations
41.1 Introduction to parametric equations
41.2 Some common parametric equations
41.3 Differentiation in parameters
41.4 Further worked problems on differentiation of parametric equations
42 Differentiation of implicit functions
42.1 Implicit functions
42.2 Differentiating implicit functions
42.3 Differentiating implicit functions containing products and quotients
42.4 Further implicit differentiation
43 Logarithmic differentiation
43.1 Introduction to logarithmic differentiation
43.2 Laws of logarithms
43.3 Differentiation of logarithmic functions
43.4 Differentiation of further logarithmic functions
43.5 Differentiation of [f(x)]^x
Revision Test 11
Section 8 Integral calculus
44 Standard integration
44.1 The process of integration
44.2 The general solution of integrals of the form ax^n
44.3 Standard integrals
44.4 Definite integrals
45 Integration using algebraic substitutions
45.1 Introduction
45.2 Algebraic substitutions
45.3 Worked problems on integration using algebraic substitutions
45.4 Further worked problems on integration using algebraic substitutions
45.5 Change of limits
46 Integration using trigonometric substitutions
46.1 Introduction
46.2 Worked problems on integration of sin^2 x, cos^2 x, tan^2 x and cot^2 x
46.3 Worked problems on integration of powers of sines and cosines
46.4 Worked problems on integration of products of sines and cosines
46.5 Worked problems on integration using the sin θ substitution
46.6 Worked problems on integration using the tan θ substitution
Revision Test 12
47 Integration using partial fractions
47.1 Introduction
47.2 Integration using partial fractions with linear factors
47.3 Integration using partial fractions with repeated linear factors
47.4 Integration using partial fractions with quadratic factors
48 The t = tan(θ/2) substitution
48.1 Introduction
48.2 Worked problems on the t = tan(θ/2) substitution
48.3 Further worked problems on the t = tan(θ/2) substitution
49 Integration by parts
49.1 Introduction
49.2 Worked problems on integration by parts
49.3 Further worked problems on integration by parts
50 Numerical integration
50.1 Introduction
50.2 The trapezoidal rule
50.3 The mid-ordinate rule
50.4 Simpson’s rule
50.5 Accuracy of numerical integration
Revision Test 13
51 Areas under and between curves
51.1 Area under a curve
51.2 Worked problems on the area under a curve
51.3 Further worked problems on the area under a curve
51.4 The area between curves
52 Mean and root mean square values
52.1 Mean or average values
52.2 Root mean square values
53 Volumes of solids of revolution
53.1 Introduction
53.2 Worked problems on volumes of solids of revolution
53.3 Further worked problems on volumes of solids of revolution
54 Centroids of simple shapes
54.1 Centroids
54.2 The first moment of area
54.3 Centroid of area between a curve and the x-axis
54.4 Centroid of area between a curve and the y-axis
54.5 Worked problems on centroids of simple shapes
54.6 Further worked problems on centroids of simple shapes
54.7 Theorem of Pappus
55 Second moments of area
55.1 Second moments of area and radius of gyration
55.2 Second moment of area of regular sections
55.3 Parallel axis theorem
55.4 Perpendicular axis theorem
55.5 Summary of derived results
55.6 Worked problems on second moments of area of regular sections
55.7 Worked problems on second moments of area of composite areas
Revision Test 14
Section 9 Differential equations
56 Introduction to differential equations
56.1 Family of curves
56.2 Differential equations
56.3 The solution of equations of the form dy/dx = f(x)
56.4 The solution of equations of the form dy/dx = f(y)
56.5 The solution of equations of the form dy/dx = f(x)·f(y)
Revision Test 15
Section 10 Further number and algebra
57 Boolean algebra and logic circuits
57.1 Boolean algebra and switching circuits
57.2 Simplifying Boolean expressions
57.3 Laws and rules of Boolean algebra
57.4 De Morgan’s laws
57.5 Karnaugh maps
57.6 Logic circuits
57.7 Universal logic gates
58 The theory of matrices and determinants
58.1 Matrix notation
58.2 Addition, subtraction and multiplication of matrices
58.3 The unit matrix
58.4 The determinant of a 2 by 2 matrix
58.5 The inverse or reciprocal of a 2 by 2 matrix
58.6 The determinant of a 3 by 3 matrix
58.7 The inverse or reciprocal of a 3 by 3 matrix
59 The solution of simultaneous equations by matrices and determinants
59.1 Solution of simultaneous equations by matrices
59.2 Solution of simultaneous equations by determinants
59.3 Solution of simultaneous equations using Cramers rule
59.4 Solution of simultaneous equations using the Gaussian elimination method
Revision Test 16
Section 11 Statistics
60 Presentation of statistical data
60.1 Some statistical terminology
60.2 Presentation of ungrouped data
60.3 Presentation of grouped data
61 Mean, median, mode and standard deviation
61.1 Measures of central tendency
61.2 Mean, median and mode for discrete data
61.3 Mean, median and mode for grouped data
61.4 Standard deviation
61.5 Quartiles, deciles and percentiles
62 Probability
62.1 Introduction to probability
62.2 Laws of probability
62.3 Worked problems on probability
62.4 Further worked problems on probability
62.5 Permutations and combinations
62.6 Bayes’ theorem
Revision Test 17
63 The binomial and Poisson distribution
63.1 The binomial distribution
63.2 The Poisson distribution
64 The normal distribution
64.1 Introduction to the normal distribution
64.2 Testing for a normal distribution
Revision Test 18
65 Linear correlation
65.1 Introduction to linear correlation
65.2 The Pearson product-moment formula for determining the linear correlation coefficient
65.3 The significance of a coefficient of correlation
65.4 Worked problems on linear correlation
66 Linear regression
66.1 Introduction to linear regression
66.2 The least-squares regression lines
66.3 Worked problems on linear regression
67 Sampling and estimation theories
67.1 Introduction
67.2 Sampling distributions
67.3 The sampling distribution of the means
67.4 The estimation of population parameters based on a large sample size
67.5 Estimating the mean of a population based on a small sample size
Revision Test 19
List of essential formulae
Answers to Practice Exercises
John Bird, BSc (Hons), CEng, CMath, CSci, FIMA, FIET, FCollT, is the former Head of Applied Electronics in the Faculty of Technology at Highbury College, Portsmouth, UK. More recently, he has
combined freelance lecturing at the University of Portsmouth, with Examiner responsibilities for Advanced Mathematics with City and Guilds and examining for the International Baccalaureate
Organisation. He has over 45 years’ experience of successfully teaching, lecturing, instructing, training, educating and planning trainee engineers study programmes. He is the author of 146 textbooks
on engineering, science and mathematical subjects, with worldwide sales of over one million copies. He is a chartered engineer, a chartered mathematician, a chartered scientist and a Fellow of three
professional institutions. He has recently retired from lecturing at the Royal Navy’s Defence College of Marine Engineering in the Defence College of Technical Training at H.M.S. Sultan, Gosport,
Hampshire, UK, one of the largest engineering training establishments in Europe.
What makes us different?
• Instant Download
• Always Competitive Pricing
• 100% Privacy
• FREE Sample Available
• 24-7 LIVE Customer Support