Applications of Harmonic Motion

Now that we have established the theory and equations behind harmonic motion, we will examine various physical situations in which objects move in simple harmonic motion. Previously, we worked with a mass-spring system, and we will examine other harmonic oscillators in a similar manner. Finally, after establishing these applications, we can examine the similarity between simple harmonic motion and uniform circular motion.

The Torsional Oscillator

Consider a circular disk suspended from a wire fixed to a ceiling. If the disk is rotated, the wire will twist. When the disk is released, the twisted wire exerts a restoring force on the disk, causing it to rotate past its equilibrium point, twisting the wire in the other direction, as shown below. This system is called a torsional oscillator.

Figure %: A torsional oscillator. The point P oscillates between the lines Q and R with a maximum angular displacement of θ_m.

It has been found experimentally that the torque exerted on the disk is proportional to the angular displacement of the disk, or:

τ = -κθ

where κ is a proportionality constant, a property of the wire. Note the similarity to our spring equation F = -kx. Since τ = Iα for any rotational motion, we can state that

-κθ = Iα = I(d²θ/dt²)

If we substitute θ for x, I for m, and κ for k, we can see that this is the exact same differential equation we had for our spring system. Thus we may skip to the final solution, describing the angular displacement of the disk as a function of time, where θ_m is defined as the maximum angular displacement and σ is the angular frequency given by σ = √(κ/I):

θ = θ_m cos(σt + φ)

It is important not to confuse angular frequency and angular velocity. σ in this case refers to the angular frequency of the oscillation, and cannot be used for angular velocity. From our expression for angular frequency we can derive that

T = 2π/σ = 2π√(I/κ)

This equation for the period of a torsional oscillator has a significant experimental use. Suppose a body of unknown moment of inertia is placed on a wire of known constant κ. The period of oscillation can be measured, and the moment of inertia of the body can be determined experimentally, as the sketch below illustrates. This is quite useful, as the rotational inertia of most bodies cannot be easily determined using the traditional calculus-based method. From our examination of the torsional oscillator we have derived that its motion is simple harmonic. This oscillator can almost be seen as the rotational analogue of the mass-spring system: just as with the mass-spring system, we substituted θ for x, I for m and κ for k. Not all simple harmonic oscillators have such close correlation.
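The experimental use just described amounts to inverting the period formula: I = κT²/(4π²). Here is a minimal sketch in Python; the wire constant and the measured period are placeholder numbers for illustration, not values from the text:

```python
import math

def moment_of_inertia(kappa, period):
    """Invert T = 2*pi*sqrt(I/kappa) to recover I from a measured period."""
    return kappa * period ** 2 / (4 * math.pi ** 2)

kappa = 0.25        # torsional constant of the wire, N*m/rad (assumed)
T_measured = 1.8    # measured period of one oscillation, s (assumed)
print(f"I = {moment_of_inertia(kappa, T_measured):.4f} kg*m^2")
```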
The Pendulum

Another common oscillation is that of the simple pendulum. The classic pendulum consists of a particle suspended from a light cord. When the particle is pulled to one side and released, it swings back past the equilibrium point and oscillates between two maximum angular displacements. It is clear that the motion is periodic--we want to see if it is simple harmonic. We do so by drawing a free body diagram and examining the forces on the pendulum at any given time.

Figure %: A simple pendulum with cord of length L, shown with free body diagram at a displacement of θ from the equilibrium point.

The two forces acting on the pendulum at any given time are tension from the cord and gravity. At the equilibrium point the two are antiparallel and cancel exactly, satisfying our condition that there must be no net force at the equilibrium point. When the pendulum is displaced by an angle θ, the gravitational force must be resolved into radial and tangential components. The radial component, mg cosθ, cancels with the tension, leaving a net tangential force:

F = -mg sinθ

In this case the restoring force is not proportional to the angular displacement θ, but is rather proportional to the sine of the angular displacement, sinθ. Strictly speaking, then, the pendulum does not engage in simple harmonic motion. However, most pendulums function at very small angles. If the angle is small we may make the approximation sinθ ≈ θ. With this approximation we can rewrite our force expression:

F = -mgθ

This equation does predict simple harmonic motion, as the force is proportional to the angular displacement. We can simplify by noticing that the linear displacement of the particle corresponding to an angle θ is given by x = Lθ. Substituting this in, we see that:

F = -(mg/L)x

Thus we have an equation in the same form as our mass-spring equation; in this case k = mg/L. We can skip the calculus and simply state the period of the pendulum:

T = 2π√(m/k) = 2π√(L/g)

Note that the period, and thus the frequency, of the pendulum is independent of the mass of the particle on the cord. It depends only on the length of the pendulum and the acceleration of gravity. Keep in mind, also, that this is only an approximation. If the angle exceeds fifteen degrees or so, the approximation breaks down, as the sketch below shows. The torsional oscillator and the pendulum are two easy examples of simple harmonic motion. This type of motion, described by the same equations we have derived, comes up in molecular theory, electricity and magnetism, and even astronomy. The same method we applied in this section can be applied to any situation in which harmonic motion is involved.
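A short numerical sketch makes both points concrete: the period formula contains no mass, and the small-angle approximation degrades past roughly fifteen degrees. The length and g below are arbitrary illustration values:

```python
import math

L, g = 1.0, 9.8    # cord length (m) and gravitational acceleration (m/s^2), assumed
T = 2 * math.pi * math.sqrt(L / g)
print(f"period = {T:.3f} s (no mass appears in the formula)")

# How good is the approximation sin(theta) ~ theta?
for deg in (5, 10, 15, 30):
    theta = math.radians(deg)
    err = (theta - math.sin(theta)) / math.sin(theta)
    print(f"{deg:2d} degrees: sin(theta) ~ theta is off by {err:.2%}")
```

Running it shows the relative error staying near one percent up to fifteen degrees and growing to several percent by thirty, which is why the small-angle result is trusted only for small swings.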
Relation Between Simple Harmonic and Uniform Circular Motion

Through our study of simple harmonic oscillations we have used sine and cosine functions, and talked about angular frequency. It seems natural that there should be some connection between simple harmonic motion and uniform circular motion. In fact, there is an astonishingly simple connection that can be easily seen. Consider a particle traveling in a circle of radius R centered about the origin, shown below:

Figure %: A particle, starting at point P, traveling in uniform circular motion with a radius of R and angular velocity σ.

What is the x-coordinate of the particle as it goes around the circle? The particle is shown at point Q, at which it is inclined at an angle θ from the x-axis. Thus the position of the particle at that point is given by:

x = R cosθ

However, if the particle is traveling with a constant angular velocity σ, then we can express θ = σt. In addition, the maximum value that x can take is R, at the point (R,0), so we can state that x_m = R. Substituting these expressions into our equation,

x = x_m cos(σt)

This is the exact form of our equation for the displacement of a simple harmonic oscillator. The similarity leads us to a conclusion about the relation between simple harmonic motion and circular motion: simple harmonic motion can be seen as the projection of a particle in uniform circular motion onto the diameter of the circle. This is an astonishing statement. We can see this relation through the following example. Place a mass on a spring such that its equilibrium point is at the point x = 0. Displace the mass until it is at the point (R,0). At the same time that you release the mass, set a particle in uniform circular motion from the point (R,0). If the two systems have the same value for σ, then the x-coordinate of the position of the mass on the spring and that of the particle will be exactly the same, as the sketch below demonstrates. This relation is a powerful application of the concepts of simple harmonic motion, and serves to increase our understanding of oscillations.
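The claim can be checked numerically: integrate the mass-spring equation of motion a = -σ²x from the release point and compare each position with the projection R cos(σt) of the circling particle. The values of σ, R and the step size below are arbitrary choices:

```python
import math

sigma = 2.0     # shared angular frequency, rad/s (assumed), so k/m = sigma**2
R = 1.0         # circle radius = initial spring displacement (assumed)
dt = 1.0e-4     # integration step, s

x, v = R, 0.0   # mass released from rest at (R, 0)
for step in range(1, 40001):
    v += -sigma ** 2 * x * dt    # a = -(k/m) x
    x += v * dt                  # semi-implicit Euler, stable for oscillators
    if step % 10000 == 0:
        t = step * dt
        print(f"t={t:.1f}s  spring x={x:+.4f}  "
              f"circle projection={R * math.cos(sigma * t):+.4f}")
```

The two columns agree to within the integration error at every sampled time, which is exactly the projection statement above.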
{"url":"http://www.sparknotes.com/physics/oscillations/applicationsofharmonicmotion/section1.rhtml","timestamp":"2014-04-19T02:22:54Z","content_type":null,"content_length":"72992","record_id":"<urn:uuid:95719c34-0d2d-4aeb-9986-04f364a9758f>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
National Nanotechnology Infrastructure Network

PARSEC at NNIN
Pseudopotential Algorithms for Real Space Energy Calculations

The PARSEC package provides users with the ability to solve the electronic structure of confined systems such as atomic clusters, molecules, and quantum dots by using a real space approach. The current version can calculate forces and is capable of performing ab-initio molecular dynamics studies, including simulated annealing. The code takes advantage of three common approximations to perform electronic structure calculations: the Born-Oppenheimer approximation (separation of nuclear and electronic degrees of freedom), pseudopotentials, and local density or generalized gradient approximations for exchange and correlation effects.

PARSEC solves the Kohn-Sham equation on a cubic grid in real space and is ideally suited for situations where approaches that rely on periodic boundary conditions (i.e. plane waves, LMTO) can fail. The real space approach is particularly important in low dimensional systems such as atomic clusters, molecules, finite nanowires, and quantum dots; a toy sketch of this real-space discretization appears at the end of this page. A MATLAB version that can run on a laptop is now available by request from the developers.

• Electronic structure of Low Dimension Structures (Quantum Dots, Clusters, Molecules)
• Optical Absorption and Spectra
• Charged Systems
• Time Dependent Density Functional Theory - Molecular Dynamics
• Structural Relaxations
• Electronic Transport

Contributors: J. R. Chelikowsky, Y. Saad, M. Troullier, A. Stathopoulos, K. Wu, S. Ogut, H. Kim, M. Jain, I. Vasilev, L. Kronik, R. Burdick, A. Makmal, M. Alemany, M. Tiago, C. Pickard, J. Nocedal, J. L. Martins, and K. Burke (for specific contributions see this page). Much of the work on this program was done at the University of Minnesota, with subsequent contributions from the Weizmann Institute and the Institute for Computational Engineering and Sciences (ICES) at the University of Texas.

Getting Started:
PARSEC Website (contains source code, documentation, and much more!)
User Guide for Version 1.1
Lectures and Tutorials from the 2006 CNF Fall Workshop "Building Nanostructures Bit by Bit":
"Real space approaches for modeling clusters, nanowires, and more (pdf)" (Murilo Tiago, University of Texas) [Video of talk on CNF MediaSite]
PARSEC Tutorial by Murilo Tiago (pdf version) (html)
Running PARSEC using NNIN resources: contact Derek Stewart, stewart (at) cnf.cornell.edu, or Michael Stopa, stopa (at) deas.harvard.edu

Relevant Research Articles:

The First Article
• Chelikowsky, J. R., Troullier, N., and Saad, Y., "Finite-difference pseudopotential method: Electronic structure without a basis", Physical Review Letters, 72, 1240 (1994).

Overview Papers
• A. Natan, A. Benjamini, D. Naveh, L. Kronik, M. L. Tiago, S. P. Beckman, and J. R. Chelikowsky, "Real space pseudopotential method for first principles calculations of general periodic and partially periodic systems", Phys. Rev. B, 78, 075109 (2008).
• L. Kronik, A. Makmal, M. L. Tiago, M. M. G. Alemany, M. Jain, X.-Y. Huang, Y. Saad, and J. R. Chelikowsky, "PARSEC - the pseudopotential algorithm for real-space electronic structure calculations: recent advances and novel applications to nano-structures", Phys. Stat. Sol. (b), 243, 1063 (2006).

Recent Applications and Developments...

Questions, Comments... Please contact: Derek Stewart, stewart (at) cnf.cornell.edu, Cornell Nanoscale Science and Technology Facility, Cornell University
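The real-space finite-difference idea behind PARSEC can be illustrated, far from the real code, with a one-dimensional toy: discretize H = -1/2 d²/dx² + V(x) on a uniform grid and diagonalize the resulting matrix, with no basis set involved. The harmonic potential and grid parameters below are assumptions chosen for the toy, not PARSEC inputs:

```python
import numpy as np

# 1-D sketch of the real-space finite-difference approach (atomic units).
n, box = 400, 20.0                     # grid points and box size (assumed)
h = box / (n - 1)
x = np.linspace(-box / 2, box / 2, n)

V = 0.5 * x ** 2                       # harmonic potential stand-in (assumed)
diag = 1.0 / h ** 2 + V                # kinetic stencil: -1/2 * (-2/h^2) + V
off = np.full(n - 1, -0.5 / h ** 2)    # -1/2 * (1/h^2) nearest-neighbor coupling

H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
print(np.linalg.eigvalsh(H)[:4])       # expect ~0.5, 1.5, 2.5, 3.5
```

Real calculations replace the dense diagonalization with sparse iterative eigensolvers and the toy potential with pseudopotentials on a three-dimensional grid, but the grid-based Hamiltonian is the same idea.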
{"url":"http://www.nnin.org/parsec-nnin","timestamp":"2014-04-24T00:45:05Z","content_type":null,"content_length":"24102","record_id":"<urn:uuid:072d3c36-8ab7-472d-b45a-7f3a5b208914>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US3484746 - Adaptive pattern recognition system

[Nine sheets of patent drawings precede the specification. Only stray block and axis labels survive extraction: LIKELIHOOD COMPUTER, PROBABILITY COMPUTER, DECISION CIRCUIT, ADDER, and the FREQUENCY, AMPLITUDE and TIME axes of the waveform figures, together with the inventors' names Stanley C. Fralick, John K. Knox-Seith and Dennis L. Wilson.]

United States Patent 3,484,746, Patented Dec. 16, 1969
U.S. Cl. 340-146.3; 17 Claims

ABSTRACT OF THE DISCLOSURE

This system sequentially operates on portions or observation units of an input signal to indicate whether a current observation unit contains a prescribed signal. The system comprises the series combination of a first likelihood computer, a learning loop and a decision circuit. The first computer includes a periodogram calculator which divides the input signal into a sequence of observation units. The learning loop comprises a second likelihood computer, a third probability computer and a multiplier. The third computer is responsive to the outputs of the first and second computers for producing an output which is the analog of the probability that the current observation unit contains the prescribed signal, conditioned on information obtained from all prior observation units. The outputs of the first and third computers are combined in the multiplier. The product output of the multiplier is operated on by the second likelihood computer, which produces at the end of each observation unit an output that is the analog of the likelihood that the current observation unit contained the prescribed signal, conditioned on information obtained from prior observation units.
If the magnitude of the output of the second computer exceeds a predetermined threshold level, the decision circuit produces an output indicating that the current observation unit of the input signal contains the prescribed signal.

This invention relates to adaptive pattern recognition systems and more particularly to pattern recognition systems which acquire information as knowledge of characteristics which identify patterns without external information on the proper classification of the patterns and without specific reference to classifications made by the system. The terms knowledge, learning and the like are used in the following description of the invention because the pattern recognition system performs functions similar to those a human would perform in processing information to derive an answer or solution.

The general problem of pattern recognition is one of classification which may be stated as follows: given a sequence of objects, each of which is described by a set of measurements, classify the objects into classes which have similar characteristics. The classes may be separately identified as Class 1, Class 2, etc. Examples of such pattern recognition problems are deciphering a carelessly written address on a letter and identifying an artist by examination of his handiwork. In the case of the former, the name of the city may be obscured but the other elements of the address legible; the solution requires that information on cities within the particular state be compared with the discernible characteristics of the city name, such as the number or positions of letters. In the example of art identification, idiosyncrasies of the artist such as length and pressure of brush strokes or the selection of colors are compared with similar characteristics in the work being analyzed and the probability that it is or is not the work of the artist is established.

In the area of electromagnetic communications, the pattern recognition problem may be posed by the requirements for determining the presence or absence of a prescribed signal which is obscured by noise. A prior type of adaptive pattern recognition system for solving this problem employs a set of input signals of known classification to extract additional information about the distinguishing features of the patterns. This adaptive system is said to learn with a teacher, since the correct classification of the patterns must be known while the system learns the additional information. Another prior type of adaptive pattern recognition system does not require a teacher having prior knowledge of the classification of the input signals while the system is extracting additional information about the distinguishing features, since this system always takes its own classification of the patterns as being correct. A decision generally causes the structure of the machine to change so that the decision is made more emphatically. The disadvantage of this system is that it compounds an error if the original determination is incorrect. Also, this system converges slowly to the correct determination of the presence and shape of the prescribed signal and is complex and unstable when solving multi-parameter decision problems.

An object of this invention is the provision of an adaptive pattern recognition system which is capable of learning to classify input signals without the aid of a teacher and without direct dependence on classifications made by the system.
The system takes full advantage of information presently and previously available to the system to change the operation of the system so that classification becomes more reliable with time.

Another object is the provision of an adaptive pattern recognition system which learns to classify input signals without a teacher having prior knowledge of the proper classification of the input signals. Another object is the provision of an adaptive pattern recognition system which learns to classify input signals independently of the classifications which are made by the system. Another object is the provision of an adaptive pattern recognition system which classifies patterns with a minimum average risk of error associated with each classification. Another object is the provision of an adaptive pattern recognition system which classifies patterns with the greatest possible accuracy. Another object is the provision of an adaptive pattern recognition system which takes advantage of all prior input signals in classifying a current input signal. Another object is the provision of an adaptive pattern recognition system which is capable of accurate classification of a new input signal after receipt of a minimum number of signals. Another object is the provision of an adaptive pattern recognition system which acquires from each observation maximum additional knowledge of the features which distinguish the patterns.

In the most general form, the input of this system consists of one or more time varying signals (for example, the input may be in the form of electrical signals on appropriate input lines). The purpose of the system is to distinguish between various classes of inputs. It is assumed that initially the nature of some of the features distinguishing these input classes is known. A primary feature of a system embodying this invention is the extraction during each observation (without specific reference to the classifications of observations) of additional information which characterizes signals of different classes. Thus, this invention is particularly useful in instances where prior knowledge of signal characteristics is not initially sufficient to permit accurate classification of signals.

After receiving a signal, a system embodying this invention scans through the range of all possible values of distinguishing characteristics, and computes for each set of values within this range the probability that a signal from Class 1, for example, would be the same as the received signal. For each set of signal characteristics, the above probability is multiplied by the probability that this set of characteristics is the true set for signals in Class 1. The sum of all such products is calculated. This sum is the probability that a signal from Class 1 would result in the observed signal. Similarly, the system computes the probability that the observed input signal would result from each of the other classes. The system classifies the input signal by comparing these probabilities. In arriving at a decision, appropriate account is taken of the relative penalty associated with the different types of misclassifications and the relative frequency of occurrence of the different classes. This aspect of the invention is described in detail hereinafter. Initially, the range of possible values of signal characteristics is known, but the probability associated with each set of signal characteristic values is merely postulated.
By extracting the appropriate information from each observed signal, the system modifies the original (postulated) probability distribution for the characteristics such that the modified distribution more closely approaches the true distribution. The probability of misclassification becomes less as the assumed probability distribution of the characteristics approaches the true distribution (i.e., as the distinguishing characteristics become better known).

The foregoing and other objects and the operation of this invention will be more fully understood from the following description of embodiments thereof, reference being had to the accompanying drawings in which:

FIGURE 1 is a block diagram of a dual-hypothesis single-parameter adaptive pattern recognition system embodying this invention;
FIGURE 2 is a detailed block diagram of the embodiment of FIGURE 1;
FIGURES 3A-3H show waveforms of typical outputs of a periodogram calculator which forms part of the system shown in FIGURE 2;
FIGURES 4A-4H illustrate waveforms of typical outputs of a probability computer forming part of the system shown in FIGURE 1;
FIGURE 5 is a waveform representing an input signal;
FIGURE 6 (illustrated in three parts as FIGURES 6A, 6B and 6C) is a block diagram of a multi-parameter multi-hypothesis adaptive pattern recognition system embodying this invention;
FIGURE 7 is a detailed block diagram of probability computer 103 shown in FIGURE 6A;
FIGURES 8A-8C show timing diagrams illustrating the sweeping operation of probability computer 103 of FIGURE 6A;
FIGURE 9 shows typical outputs of function generators 192 and 193 of FIGURES 6A and 6B, respectively; and
FIGURE 10 is a simplified block diagram of a multi-parameter multi-hypothesis adaptive pattern recognition system illustrating the invention in a broader form.

A simplified form of this invention illustrated in FIGURES 1-5 is first described to provide a basic understanding of the underlying principles and to explain terminology related to the invention. This embodiment comprises a system which solves a dual-hypothesis single-parameter signal detection problem. The system is used to detect a narrowband electromagnetic signal of unknown frequency imbedded in a noisy environment. A more complex system for solving a multi-hypothesis multi-parameter signal detection problem is illustrated in FIGURES 6-9, inclusive, and will be described thereafter. Finally, the invention is described in its broadest form in conjunction with FIGURE 10.

DUAL-HYPOTHESIS SINGLE-PARAMETER SYSTEM

FIGURES 1 and 2 illustrate a system for detecting the presence and frequency of a narrowband electromagnetic signal (which will be referred to as the prescribed signal) of the form defined by

s(t) = a cos(ωt + θ)     (1)

where a is the amplitude, ω is the radian frequency and equals 2πf, f is frequency, t is time, and θ is the phase. The input signal is assumed to contain either the prescribed signal imbedded in white noise, or the noise background with no signal. The signal parameters a and θ are random variables. For example, assume the amplitude parameter a is Rayleigh distributed (see Random Signals and Noise, by W. Davenport and W. Root, McGraw-Hill Book Co., 1958), and the phase parameter θ is uniformly distributed over the interval 0 to 2π, i.e., it is equally likely that θ has each value in the interval 0 to 2π. Signal frequencies of interest are between a frequency f₀ c.p.s. and f₀+W c.p.s., where W is the bandwidth of the system.
By learning this frequency, the system improves its ability to distinguish between the presence and absence of the prescribed signal in an input signal. This system is particularly useful for detecting prescribed signals with magnitudes considerably less than the noise level.

Referring now to FIGURE 1, the adaptive pattern recognition system comprises a likelihood computer 1, a learning loop 2 and a decision circuit 3. Learning loop 2 comprises a probability computer 4, a multiplier 5 and a likelihood computer 6. The input signal is applied on line 7 to likelihood computer 1. The output of likelihood computer 1 is applied on line 8 to a first input to multiplier 5 and on line 9 to a first input to probability computer 4. The output of the probability computer is applied on line 10 to a second input to multiplier 5. The output of multiplier 5 is applied to likelihood computer 6, the output of which is applied on line 11 to a second input to probability computer 4 and on line 12 to decision circuit 3, which generates an output on either line 13 or line 14.

Likelihood computer 1 stores and operates on the section of the input signal received during a time period T. At a particular instant the output of likelihood computer 1 is the analog representation of the likelihood that the observed section of input signal contains a signal of the prescribed form having a particular frequency f. A sequence of such outputs, corresponding in ascending order to all frequencies between f₀ and f₀+W c.p.s., is provided during each time duration T. The likelihood ratio at the frequency f is defined as the ratio of the probability that an input signal containing the prescribed signal having a frequency f would be identical with the observed signal, to the probability that the input signal would be as observed if it contained only noise. On the average, the likelihood ratio will be largest at the frequency of the prescribed signal when that signal is present.

Probability computer 4 processes the output of likelihood computer 1 and the output of likelihood computer 6. At a particular time, the output of probability computer 4 is the analog of the probability that the frequency f is the frequency of the prescribed signal, conditioned on information obtained from all prior observations of the input signal. During a time interval T, there is produced a sequence of such outputs corresponding to all frequencies in the band f₀ to f₀+W c.p.s.

Likelihood computer 6 processes the product output of multiplier 5 and generates at the end of each time interval T an analog of the likelihood that the appropriate section of the input signal contains the prescribed signal, conditioned on information about the input signal that is obtained from prior observations thereof. Decision circuit 3 compares the output of learning loop 2 with a predetermined threshold value β and provides an output on either line 13 or line 14 indicating whether it is likely that the preceding section of the input signal does, or does not, contain the prescribed signal. The threshold value β is independent of the observations of the input signal. β is represented as

β = L₁(1 - p) / (L₂p)     (2)

where L₁ is the relative penalty associated with a false alarm, i.e., deciding that the prescribed signal is present when it is not present; L₂ is the relative penalty associated with a miss, i.e., deciding that the prescribed signal is not present when it is present; and p is the a priori probability that the prescribed signal is present in the input signal.
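A minimal sketch of this decision rule in Python, assuming the Bayes likelihood-ratio threshold reconstructed in Equation 2; the numeric penalties, prior and likelihood below are placeholder values, not figures from the patent:

```python
def bayes_threshold(L1, L2, p):
    """beta = L1*(1-p)/(L2*p): L1 is the false-alarm penalty, L2 the miss
    penalty, p the a priori probability that the signal is present."""
    return L1 * (1.0 - p) / (L2 * p)

def decide(likelihood, beta):
    """Decision circuit 3: compare the learning-loop output with beta."""
    return "signal present" if likelihood > beta else "signal absent"

beta = bayes_threshold(L1=1.0, L2=1.0, p=0.5)   # symmetric case: beta = 1 (assumed)
print(decide(likelihood=2.7, beta=beta))
```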
An indication of the frequency of the prescribed signal is obtained by monitoring the output of probability computer 4.

Referring now to the embodiment of FIGURE 2, likelihood computer 1 comprises a bandpass circuit 15, a timing circuit 16, a periodogram calculator 17, and an anti-log device 18. Bandpass circuit 15 has frequency limits of f₀ c.p.s. and f₀+W c.p.s. which define the band of frequencies W on which the system will operate. The bandpass circuit may, by way of example, be a passive filter. Timing circuit 16 is activated when the system is initially energized. This circuit generates on line 19 at time t₀ an initiating pulse having a duration T and a timing pulse on line 20 at time t₀ and every T seconds thereafter.

The filtered input signal is applied on line 21 to periodogram calculator 17 which computes an analog of the periodogram or spectral (frequency) density distribution of the input signal. Typical periodograms, plotted as a function of frequency, are shown in FIGURE 3 and show the relative energy density of the input signal over the frequency band W. Periodogram calculator 17 may, by way of example, be a time compression type swept receiver as described in The Measurement of Frequency With Scanning Analyzers, by W. R. Kincheloe, Jr., Technical Report 557-2, dated October 1962, System Technology Laboratories, Stanford University, Stanford, Calif. The periodogram calculator 17 sweeps over all frequencies f in the band W once every T seconds in response to a timing pulse from timing circuit 16 on line 20a.

The output of periodogram calculator 17 is applied to anti-log device 18. Anti-log device 18 generates an output that is proportional to the natural anti-log of the output of periodogram calculator 17. By way of example, the construction of the anti-log device may be based on the logarithmic relationship between the voltage across a semiconductor junction and the current through the junction, as described in Large-Signal Behavior of Junction Transistors, by Ebers & Moll, IRE Proceedings, volume 42, December 1954. The output of anti-log device 18 is applied on line 8 to the first input to multiplier 5 and on line 9 to delay device 25 of probability computer 4. The time delay of device 25 is equal to the sweep time T of periodogram calculator 17. Delay device 25 may, by way of example, be a delay line, shift register, or tape recorder.

The output of delay device 25 is applied on line 27 to a first input to an adder 28. A predetermined bias α₁ is generated on lines 30 and 30a by a potentiometer 29. The bias on line 30 is applied to a second input to adder 28. The constant α₁ is the analog of the ratio of the a priori probability that the input signal does not contain the prescribed signal to the a priori probability that the input signal does contain the prescribed signal. The sum signal from adder 28 is applied on line 31 to a first input to a divider 32. A second signal (to be described more fully hereinafter) is applied to a second input to divider 32 on line 33. The output of divider 32 is the analog of the signal on line 31 divided by the signal on line 33. Divider 32 may, by way of example, be of the type described by Kundu and Banerji in Transistorized Multiplier and Divider, IEEE Transactions on Electronic Computers, volume EC-13, Number 3, June 1964. The output of divider 32 is applied on line 34 to a first input to multiplier 35. A second signal (to be described more fully hereinafter) is applied to a second input to multiplier 35 on line 36.
The product output of multiplier 35 is applied on line 37 as a first input to gate 38. The initiating pulse from timing circuit 16 is applied to gate 38 on line 19. A predetermined bias α₂ is generated by a potentiometer 39 and applied on line 40 as a third input to gate 38. The constant α₂ is the analog of the initial or a priori probability density (determined before making any observations of time duration T) of the frequency of the prescribed signal. This a priori probability is the output of probability computer 4 (FIGURE 1) during the first observation and is represented by the waveform of FIGURE 4A. Since there is no prior knowledge of the probable frequency of the prescribed signal that may be present in the input signal, it is reasonable to make the initial distribution uniform and assign α₂ a value of 1/W, i.e., it is equally probable that the frequency of the prescribed signal is any frequency in the frequency band W.

The output of gate 38 is applied to a delay device 42 on line 41. Delay device 42 is similar to delay device 25 and also has a time delay equal to the sweep duration T of periodogram calculator 17. The signal stored by delay device 42 is the signal applied to multiplier 35 on line 36. The output of gate 38 is also applied to multiplier 5 on line 43 and on line 44 to a display device 45. The display device may, by way of example, be an oscilloscope or strip chart recorder. The display device plots the signal output of gate 38. Typical product signal outputs or probability densities as a function of frequency are illustrated in FIGURE 4.

The product output of multiplier 5 is applied to summation circuit 46 of likelihood computer 6. Summation circuit 46 may, by way of example, be a fixed time integrator circuit. The output of circuit 46 is the summation of the input product signal generated during the present observation of time duration of T seconds. The summation circuit is reset to zero every T seconds in response to a timing pulse from timing circuit 16 on line 20b. The output of summation circuit 46 is applied to sample-hold circuit 47. In response to a timing pulse on line 20c, circuit 47 samples the output of the summation circuit once every T seconds, immediately prior to the summation circuit reset. The sample-hold circuit 47 stores the sampled signal for a time duration T, until it again samples the output of summation circuit 46.

The output of sample-hold circuit 47 is applied on line 51 to a first input to an adder 52. The predetermined bias α₁ from potentiometer 29 is applied on line 30a to a second input to adder 52. The output of adder 52 is the aforementioned signal applied on line 33 to divider 32. The output of sample-hold circuit 47 is applied on line 12 to decision circuit 3. The decision circuit may, by way of example, be a comparator 53 comprising a threshold device such as a Schmitt trigger circuit. A predetermined bias or threshold β is generated by a potentiometer 57 and is applied to the comparator on line 58. If the signal on line 12 is larger than β, an output is present on line 13 indicating that the prescribed signal probably is present in the input signal. Conversely, if the signal on line 12 is less than β, an output on line 14 is present indicating that the prescribed signal probably is not present in the input signal.

The operation of the system is illustrated in the following example, which shows the response of the system to a particular input signal. A typical input signal is represented by the waveform 65 of FIGURE 5.
The instantaneous amplitude of the input signal is plotted as a function of time. The time scale is divided into a number of time periods of time duration T, the time interval t₀ to t₁ corresponding to the first observation period T₁, etc. Each period of time duration T defines the time of one observation of the input signal. Assume that the prescribed signal is present in the input signal during the interval indicated by the dashed line 66, beginning with the sixth observation, and is not present before that time. The signal parameters a and θ in Equation 1 are assumed to be represented by known distribution functions. In this example, the frequency of the prescribed signal is a particular frequency f_s (see FIGURES 3 and 4); the operator of the system, however, only knows that the frequency is between f₀ and f₀+W c.p.s. The input signal is assumed to contain white noise. The problem is to determine the presence and frequency of the prescribed signal.

Normally the line 37 is connected through gate 38 to lines 41, 43 and 44. When the system is initially actuated, timing circuit 16 produces pulses on lines 19 and 20. The initiating pulse on line 19 disconnects line 37 from the output of the gate and connects the bias α₂ on line 40 to the output of gate 38 and thereby to delay device 42. The initiating pulse on line 19 is removed after the first observation of time duration T and gate 38 is returned to its normal condition.

The filtered input signal is continuously applied to periodogram calculator 17. The periodogram calculator operates on the filtered input signal as the calculator repeatedly sweeps linearly over the frequency band W. The filtered input signal is divided into a sequence of observations of time duration T by the operation of timing circuit 16. The timing pulses on line 20a synchronize and periodically (at times t₁, t₂, t₃, etc.; see FIGURE 5) reset the calculator so that it sweeps over the frequency band W once during each observation of time duration T. Thus, the calculator output, which is generated as a function of time, is also a function of frequency, i.e., each time t in the time interval T corresponds to a different frequency f in the frequency band W, the frequency f = f₀+W corresponding to the time t = T. The time duration T of the observation is varied by adjustment of timing circuit 16 in accordance with the desired resolution and the expected time duration of the prescribed signal.

The periodogram calculator output during a time period T is an estimate of the frequency distribution of input signal energy during the current observation. The calculator computes a new set of periodograms during each observation of time duration T. Periodograms of the input signal, computed during the observations T₁, T₂, T₆, T₇, T₈, T₉, T₁₀ and T₁₁, are illustrated in FIGURES 3A, B, C, D, E, F, G and H, respectively. The periodograms of FIGURES 3A and 3B are generated during the first and second observations T₁ and T₂, respectively, when the signal input contains only noise. The periodogram of FIGURE 3C, however, is generated during the sixth observation T₆, when the input signal contains the prescribed signal and noise. The periodogram of FIGURE 3D is generated during the seventh observation T₇, after the prescribed signal having the particular frequency f_s has been present for one full observation T₆. The periodograms of FIGURES 3E, F, G and H are generated during the eighth, ninth, tenth and eleventh observations T₈, T₉, T₁₀ and T₁₁, after the prescribed signal has been present for two, three, four and five full observations, respectively.
The calculation of a current periodogram, such as the periodogram of observation T₇, which corresponds to the waveform of FIGURE 3D, is not a function of information obtained during the six prior observations. Reference to the waveforms of FIGURE 3 reveals the difficulty of determining when the prescribed signal is present in the input signal. Furthermore, each periodogram reveals little information about the frequency of the prescribed signal.

During the following discussion, consider that the present observation is the kth observation of a series and that the system has processed completely the preceding k-1 observations. In the example illustrated in FIGURES 3, 4 and 5, the prescribed signal is present during the sixth and subsequent observations.

The output of anti-log device 18 is delayed one time duration T by delay device 25. Thus, the output of the delay device 25 during the kth observation is the output of the anti-log device during the (k-1)th observation. The constant α₁ is added to the delayed signal in adder 28. The signal on line 31, which is an indication of the frequency distribution of signal energy, is normalized in divider 32 by dividing it by the output of sample-hold circuit 47 obtained at the end of the (k-1)th observation. The normalization insures that the summation of the probability on line 43 over each observation is unity. The output from sample-hold circuit 47 is a function of the information obtained during the prior k-1 observations. Thus, the output of divider 32 is a function of the information obtained from the prior k-1 observations. If this signal contains a peak at the same frequency during subsequent observations, this peak is reinforced in multiplier 35 (as described more fully hereinafter), where it is multiplied by the output of the delay device 42, which is the output of multiplier 35 generated during the prior (k-1)th observation.

At a particular instant, the product signal output of multiplier 35 is the analog of the probability that the frequency of the prescribed signal is the frequency f, conditioned on the information obtained during the prior k-1 observations. During the kth observation, a sequence of such outputs is provided corresponding to all frequencies in the frequency band W. The frequencies may, by way of example, be presented in ascending order. The product signal outputs or probabilities generated by multiplier 35 during the observations T₁, T₂, T₆, T₇, T₈, T₉, T₁₀ and T₁₁ are represented by the waveforms of FIGURES 4A, B, C, D, E, F, G and H, respectively. The probabilities represented by the waveforms of FIGURES 4E, F, G and H illustrate the reinforcement of the indication of the probable frequency of the prescribed signal when the prescribed signal is present in the input signal.

The waveform of FIGURE 4A is generated during the first observation T₁. As this waveform indicates the probability density of the frequency of the input signal during the (k-1)th or 0th observation, before any input signal has been observed, it represents the a priori probability generated by potentiometer 39 and applied on line 40 to gate 38. The waveform of FIGURE 4B is generated during the second observation T₂, but it depends on information obtained during the first observation T₁. The waveform of FIGURE 4C is plotted during the sixth observation T₆ and depends on information obtained during the previous five observations.
Since the prescribed signal has not been present prior to the sixth observation, the output of multiplier 35 can contain no information regarding the frequency of the prescribed signal. The waveform of FIGURE 4D is plotted during the seventh observation T₇ (the prescribed signal being present during the sixth observation T₆). Although this waveform indicates that a probable frequency of the prescribed signal is f_s, it indicates that the signal frequency is more probably one of two other frequencies. The waveforms of FIGURES 4E, F, G and H are plotted during subsequent observations T₈, T₉, T₁₀ and T₁₁. These waveforms indicate the frequency probability density conditioned on information obtained from all previous observations. These figures clearly indicate that the system extracts the information regarding the frequency of the prescribed signal when the prescribed signal is present in the input signal, and reinforces the indication that the frequency of the prescribed signal is f_s. For example, the output of multiplier 35 during observation T₁₁ (FIGURE 4H) indicates with virtual certainty that the prescribed signal has the frequency f_s. The probability computed by probability computer 4 (the output of multiplier 35) during the kth observation is conditioned on the information learned during the prior k-1 observations. This probability converges to unity at the exact value of the signal frequency as k is made large.

At a particular instant, the output of anti-log device 18 on line 8 is an indication of the likelihood that a prescribed signal is present at the frequency f (FIGURE 3). This output is multiplied in multiplier 5 by the output of gate 38 (FIGURE 4), which at that instant is an indication of the probability that the signal frequency is f. The product output of multiplier 5 is summed over all frequencies in the frequency band W during the present observation by summation circuit 46. The summation signal at the end of the kth observation is the analog of the likelihood that the input signal during the kth observation contains the prescribed signal, conditioned on the information obtained from the prior k-1 observations. The summation signal is sampled by sample-hold circuit 47 at the end of each observation, and the output from sample-hold circuit 47 is held constant at the sample value for one observation of time duration T. The constant α₁ on line 30a is added to the summation signal on line 51 in adder 52. The sum signal from adder 52 is the previously mentioned second input applied to divider 32.

The output of sample-hold circuit 47 on line 12 is compared with the constant β by comparator 53 of decision circuit 3. If the magnitude of the summation signal is greater than the threshold β, an output on line 13 indicates that the prescribed signal is probably present in the input signal. An indication of the frequency of the prescribed signal is obtained from plots of the outputs of gate 38 (FIGURE 4). If the magnitude of the summation signal is less than the threshold β, an output on line 14 indicates that the prescribed signal is probably not present in the input signal.
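In software terms, the whole loop of FIGURE 2 reduces to a recursive Bayes update over a discretized frequency band followed by a threshold comparison. The sketch below imitates it with synthetic likelihoods (a random stand-in for the anti-logged periodogram, given a consistent peak once the "signal" appears); it ignores the one-observation delays of devices 25 and 42 and all analog scaling, and the grid size, signal bin, onset observation and peak height are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, sig_bin, onset = 50, 17, 4      # frequency grid, signal bin, onset (assumed)

p = np.full(n_bins, 1.0 / n_bins)       # uniform a priori density, as in FIGURE 4A

for k in range(1, 12):
    lik = rng.exponential(1.0, n_bins)  # stand-in for the anti-logged periodogram
    if k >= onset:
        lik[sig_bin] += 3.0             # consistent peak once the signal is present

    statistic = float(np.sum(lik * p))  # multiplier 5 plus summation circuit 46
    p = lik * p / np.sum(lik * p)       # reinforce and renormalize (learning loop)
    print(f"obs {k:2d}: statistic={statistic:5.2f}  "
          f"peak bin={p.argmax():2d}  peak prob={p.max():.2f}")
```

Run, the peak probability climbs toward one at the signal bin after the onset, and the detection statistic rises with it, mirroring the reinforcement the figures describe.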
The learning process of learning loop 2 and the determination of the presence of the prescribed signal will be more clearly understood from a qualitative discussion of the operation thereof. It will be noted, as described more fully hereinafter, that when the prescribed signal is present in the input signal, peaks on line 34 (FIGURE 2) consistently occur at the frequency of the prescribed signal (see the frequency f_s, FIGURES 3D, E, F, G and H). The signal on line 34 is modified in multiplier 35, delayed for one observation period by delay device 42 and applied on line 36 to multiplier 35. Thus, a peak at a specific frequency on line 34 during one observation shows up at the same frequency on line 36 during the next observation. The peaks that reoccur at the same frequency during successive observations are multiplied and thereby strongly reinforced in multiplier 35. The outputs of multiplier 35 are reduced at all other frequencies by this multiplication and the normalizing operation in divider 32. Eventually only a single peak is present, at the frequency f_s.

Consider now a specific example wherein k-1=7 observations have been processed by the system and the outputs of the various components during the seventh observation T₇ are as described below. The peaks of the output of periodogram calculator 17 (see FIGURE 3D) are accentuated by anti-log device 18. Thus, the output of likelihood computer 1 during the observation T₇ has several large peaks, one of these occurring at the frequency f_s, indicating the possible presence of the prescribed signal at this frequency. At the same time, the outputs of multiplier 35 and of gate 38 are relatively small for the frequency f_s (see FIGURE 4D), since the system has not yet learned the frequency of the prescribed signal. The output of multiplier 5 is relatively small at all frequencies, since the probability on line 43 is not particularly large at any frequency. Consequently, the output of summation circuit 46 is small at the end of the seventh observation T₇, indicating that the prescribed signal probably was not present during the seventh observation T₇.

Consider now that the present observation is the eighth observation T₈. The delayed signal on line 31 is proportional to the output of likelihood computer 1 generated during the prior observation T₇ (FIGURE 3D). The signal on line 34 is the delayed signal on line 31 that is normalized by the operation of divider 32. The signal applied on line 36 to multiplier 35 is the output of multiplier 35 and gate 38 during the observation T₇ (see FIGURE 4D). As the signals on lines 34 and 36 both have peaks at the frequency f_s (see FIGURES 3D and 4D) during the present observation T₈, the output of multiplier 35 also has a peak at the frequency f_s (see FIGURE 4E). It will be noted that the peak at the frequency f_s during observation T₈ (see FIGURE 4E) is larger than the corresponding peak during the observation T₇ (see FIGURE 4D), indicating the reinforcement of the indication of the signal frequency when the prescribed signal is present in the signal input.

The operation of learning loop 2 during the observation T₉ is similar to the above. As both the input signals to multiplier 35 have peaks at the frequency f_s (see FIGURES 3E and 4E), the output of the multiplier 35 has an even more pronounced peak at the frequency f_s (see FIGURE 4F). It will be noted that this peak at the frequency f_s (see FIGURE 4F) is much larger than the corresponding peak in the prior observation T₈ (see FIGURE 4E), whereas the peaks at other frequencies are less than the corresponding peaks during the prior observation.
The further reinforcement, during subsequent observations, of the indication that the probable frequency of the prescribed signal is the frequency f_s is illustrated in the waveforms of FIGURES 4G and 4H.

The probability computed by multiplier 35 on line 43 is multiplied in multiplier 5 by the output of likelihood computer 1 on line 8. As both signals are relatively large at the frequency f_s during observations T₇, T₈, T₉, T₁₀ and T₁₁ (see FIGURES 3D-H and 4D-H), the product output of multiplier 5 is greatly increased at that frequency. The product output is summed over each observation by summation circuit 46 and is sampled and held constant at the end of the observation by sample-hold circuit 47. This summation signal indicates the likelihood that the prescribed signal is present in the signal input during the past observation.

This system may also be operated as a sampled data system by incorporating a sampling circuit (not shown) in line 21 between bandpass circuit 15 and periodogram calculator 17, and employing a clock circuit in place of timing circuit 16. The sampling circuit samples the filtered signal at the Nyquist rate of 2W samples per second (or faster) to convert the continuous time varying signal of FIGURE 5 to a finite number of signal samples. The clock generates a clock pulse on line 20 each time it receives 2TW signal samples. Signal samples which are generated at the Nyquist rate contain essentially the same information as the continuous time varying signal (see Communications in the Presence of Noise, by Claude E. Shannon, Proceedings of the IRE, volume 37, 1949).

A sampled data system which has an equivalent frequency band W of one megacycle per second was operated and tested. The equivalent time duration T of each observation was 0.1 millisecond. 2TW=200 signal samples were generated during each observation. The waveforms of FIGURES 3 and 4 illustrate the operation of the system when the signal-to-noise ratio associated with the signal input was -17 db. It was determined empirically, for a system having 2TW=200 signal samples generated during each observation, that 4, 15 and 99 consecutive observations containing the prescribed signal were required to accurately determine the signal frequency when the signal-to-noise ratio was -11 db, -17 db and -23 db, respectively.
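The 200-sample figure quoted for the tested system follows directly from the Nyquist rate; a two-line check:

```python
W = 1.0e6     # equivalent bandwidth, c.p.s. (from the tested system)
T = 0.1e-3    # observation duration, seconds
print(int(2 * T * W), "samples per observation")   # 2TW = 200, matching the text
```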
MULTI-HYPOTHESIS MULTI-PARAMETER SYSTEM

A system which distinguishes between two classes of input signals is described above. It was assumed that the signals defining the two classes of inputs were completely described except for one unknown parameter (frequency). The principle which underlies that system can be extended, as described hereinafter, to systems which must distinguish between a plurality of different classes of input signals, which are fully described except for a number of unknown parameters. Consider a system which must distinguish between the following classes of input signals:

(1) A signal which contains only noise;
(2) A signal which contains noise plus a first prescribed signal represented as

s₁(t) = a₁ cos(ω₁t + θ₁)     (3)

where a₁, ω₁ and θ₁ are unknown parameters which fall within given ranges; and
(3) A signal which contains noise plus a second prescribed signal represented as

s₂(t) = a₂ cos(ω₂t + θ₂)     (4)

where a₂, ω₂ and θ₂ are unknown parameters different from a₁, ω₁ and θ₁.

Although the parameters a, ω, and θ may take on any value within the given ranges, it is sufficient to consider a fixed number of discrete values of these parameters spaced evenly over each range. Thus, if the frequency is known to lie between one and two kc., it may be sufficient to consider that the frequency is one of the following values: 1.00 kc., 1.01 kc., 1.02 kc., 1.03 kc., ..., 1.99 kc. and 2.00 kc. Thus, it will, in general, be possible to think of each parameter as having a finite number of possible values. During the learning process it is necessary to consider all possible combinations of parameter values.

A multi-hypothesis multi-parameter signal detection system is illustrated in schematic form in FIGURE 6. The system comprises a bandpass circuit 15' and timing circuit 16'; a probability computer 101 and multiplier 102; a probability computer 103 and associated learning loops 104 (FIGURE 6A) and 105 (FIGURE 6B); and a decision circuit 106 (FIGURE 6C) comprising a risk calculator 107 and a comparator circuit 108. Bandpass circuit 15' and timing circuit 16' are similar to those devices disclosed in the embodiment of FIGURE 2. Probability computer 103 is illustrated in block form in FIGURE 7.

At a particular instant, probability computer 103 computes the probability that a prescribed signal having a specific set of parameter values would produce an input signal as observed during the previous time duration T. Probability computer 103 scans all possible combinations of values of the parameters a, ω, and θ during each observation period T. This is accomplished, as illustrated in FIGURE 8, by dividing the time period T into a number of subintervals corresponding to the number of different values of one of the parameters. For example, consider that the phase parameter θ can take on only one of four values (see FIGURE 8A), and that the time duration T is divided into four corresponding subintervals. Consider also that the frequency parameter f can take on only one of five discrete values between f₀ and f₀+W; therefore each of the four subintervals (FIGURE 8A) is further divided into five smaller or incremental time intervals, as illustrated in FIGURE 8B. Consider further that the amplitude parameter a may take on any value between zero and A. Then, during the first incremental time interval (FIGURE 8B), probability computer 103 scans all possible values of the amplitude parameter a (FIGURE 8C), while the frequency parameter f (FIGURE 8B) and the phase parameter θ (FIGURE 8A) are kept fixed, respectively, at their first values. As indicated in FIGURE 8, probability computer 103 scans during the first subinterval all possible combinations of the amplitude and frequency parameters while the phase parameter θ is held fixed at its first value. Thus, during the full time T, probability computer 103 scans all possible combinations of the amplitude, frequency and phase parameters, as the sketch below illustrates.
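In software, the nested time-multiplexing of FIGURE 8 collapses to a triple loop over the discretized parameters. The phase, frequency and amplitude values below are stand-ins for the four phases, five frequencies and swept amplitude of the example, not values given in the patent:

```python
import itertools

phases = [0.0, 1.57, 3.14, 4.71]           # four phase values (assumed stand-ins)
freqs = [1.00, 1.25, 1.50, 1.75, 2.00]     # five frequencies in kc. (assumed)
amps = [0.1 * i for i in range(1, 11)]     # discretized amplitude sweep (assumed)

count = 0
for theta, f, a in itertools.product(phases, freqs, amps):
    count += 1   # one incremental slot: evaluate the likelihood of (a, f, theta) here

print(count, "parameter combinations scanned per observation period T")
```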
Referring now to the embodiment of FIGURE 7, timing circuit 16' generates a timing pulse at the start of each incremental time interval (FIGURES 8B and C). The input signal is filtered by bandpass circuit 15' and is applied to time compressor 111. A time compressor is a device which stores a signal received during a time period, and provides a readout during a shorter time interval. In this embodiment, a nondestructive readout is provided so that the same signal may be read out during successive intervals. Time compressor 111 compresses the input signal received during the (k-1)th observation into the smallest incremental time interval of FIGURE 8B. During the kth observation, the input signal stored in the compressor is read out on lines 113, 115 and 116 during each of the successive incremental time intervals in response to a timing pulse on line 112. Time compressor 111 may, by way of example, be the type marketed by Computer Control Company, Inc. and described in that company's Engineering Application Note 3C 005-3.

The compressed signal is applied on line 113 to a first input to multiplier 114 and on lines 115 and 116 to inputs to multiplier 117. The output of multiplier 117, which is the square of the compressed signal, is summed by summation circuit 118 and sampled and held constant for one incremental time interval by sample-hold circuit 120. The operation of circuits 118 and 120 is controlled by timing pulses on lines 119 and 121, respectively. The output of sample-hold circuit 120 is applied to an amplifier 122 for inverting the signal and for controlling the magnitude of the signal on line 123 relative to the other input signals to adder 124.

During a particular incremental time interval, a function generator 130 generates an output of the same form as the prescribed signal (see Equations 3 and 4), with the parameters θ and f having the values corresponding to those shown for the subintervals in FIGURES 8A and 8B. The generation of these waveforms is initiated by timing pulses from timing circuit 16' on line 131. Function generator 130 may, by way of example, consist of a tape recorder (or other suitable circulating storage device), four phase-shift circuits and a synchronized commutator. Sinusoidal signals having the five discrete frequencies are prerecorded on separate tracks of the tape recorder and played back, properly synchronized, through the phase-shift circuits. The commutator selects the proper frequency and phase.

The output of function generator 130 is applied on lines 132 and 133 to inputs of a multiplier 134 and on line 135 to a second input to the multiplier 114. The product output of multiplier 134, which is the square of the output of function generator 130, is applied to a summation circuit 137 which sums the product signal during each incremental time interval in response to a timing pulse on line 138. The summation signal is sampled and held constant for one incremental time interval by sample-hold circuit 139 in response to a timing pulse on line 140. The sampled signal is applied to an amplifier 141 similar to amplifier 122. The amplified signal is applied to a first input to multiplier 142.

During each incremental time interval, a sweep generator 145 produces an output which sweeps over the range of the amplitude parameter a. Sweep generator 145 is initiated and synchronized by a timing pulse from timing circuit 16' on line 151. The output of sweep generator 145 is applied on line 146 to a multiplier 147, and on lines 148 and 149 to a multiplier 150. The output of multiplier 150, which is the square of the output of sweep generator 145, is applied to the second input to multiplier 142. The output of multiplier 142 is applied on line 153 to the second input to adder 124.

The product output of multiplier 114 is summed during each incremental time interval by summation circuit 155 in response to a timing pulse on line 156. The summation signal is sampled and held constant for one incremental time interval by sample-hold circuit 157 in response to a timing pulse on line 158. The sampled signal is applied to the second input to multiplier 147. The output of multiplier 147 is applied on line 159 as the third input to adder 124. The sum signal from adder 124 is applied to anti-log device 160. The output of device 160 is applied as a first input to multiplier 162.
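Read as arithmetic, the three branches feeding adder 124 form the sums of squares and cross-products needed for a Gaussian log-likelihood of the observation given a scaled template, which the anti-log device then exponentiates. A sketch under that reading; the sample values and noise level are made up:

```python
import math

def log_likelihood(x, s, a, sigma):
    """The three sums formed ahead of adder 124: sum x^2 (circuits 117/118),
    sum s^2 (circuits 134/137) and sum x*s (circuits 114/155), combined as
    the exponent of a Gaussian likelihood (normalization constant omitted)."""
    sum_xx = sum(v * v for v in x)
    sum_ss = sum(v * v for v in s)
    sum_xs = sum(u * v for u, v in zip(x, s))
    return (2 * a * sum_xs - a * a * sum_ss - sum_xx) / (2 * sigma ** 2)

x = [0.9, -0.2, 1.1, 0.1]   # made-up observation samples
s = [1.0, 0.0, 1.0, 0.0]    # made-up template from the function generator
print(math.exp(log_likelihood(x, s, a=1.0, sigma=1.0)))
```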
A potentiometer 163 generates a predetermined bias which is the analog of the likelihood normalization factor, 1/((2π)^TW σ^2TW), where T is the time duration of one observation, W is the frequency bandwidth, and σ is the RMS noise amplitude. This bias is applied on line 164 to the second input of multiplier 162. The output of multiplier 162 is the output of probability computer 103 (see FIGURE 6A). Referring now to FIGURE 6A, the output of probability computer 103 is applied on lines 109 and 110 to learning loops 104 and 105 (FIGURE 6B), respectively. The learning loops 104 and 105 are similar in structure and operation to the learning loop 2 which is illustrated in FIGURE 2. Since learning loops 104 and 105 are similar in structure and operation to each other and to learning loop 2 shown in FIGURE 2, it will be sufficient for an understanding of loops 104 and 105 to describe only one of them, i.e., loop 104, in respect to the differences between it and loop 2, like reference characters indicating like parts on the drawings. The signal on line 9 is applied to a first input to multiplier 171. A predetermined bias which is the analog of the a priori probability P(H₁) that the observed signal input contains the first prescribed signal is generated by potentiometer 172 and applied on line 173 to the second input to multiplier 171. The output of multiplier 171 is applied to delay device 25. The delayed signal is applied to a first input to adder 175. The output of sample-hold circuit 47 on line 177 is applied to the first input of multiplier 178. The predetermined bias potential P(H₁) is applied on line 173a to the second input to multiplier 178. The output of multiplier 178 is applied on line 179 to risk calculator 107 (FIGURE 6C), on line 180 to a first input to an adder 181, and on line 182 to a difference circuit 183. The difference circuit may, by way of example, be a difference amplifier. Second and third inputs are applied on lines 195 and 200 to adder 181. The output of adder 181 is applied on line 185 to the second input to difference circuit 183, which subtracts the signal on line 182 from the signal on line 185. The difference signal is applied on line 186 to the second input to adder 175. The output of adder 181 is also applied on line 33 to divider 32. The output of adder 175 is applied on line 31 to divider 32, which divides that signal by the signal on line 33. The circuits connected between the divider 32 and multiplier 35 are similar to the corresponding circuits of FIGURE 2, except for function generators 192 and 193 (FIGURE 6B) which are used to generate the initial probability distribution for the parameters a, f and θ, as will be described more fully hereinafter. Learning loop 105 is similar in structure to the learning loop 104 except that the former does not contain a circuit corresponding to adder 181. Adder 181 of learning loop 104 provides the necessary outputs for all other learning loops. The output of adder 181 is applied on line 185 to difference circuit 183 of FIGURE 6B. The predetermined bias generated by the potentiometer 172′ of learning loop 105, and applied on line 173 to multiplier 171 and on line 173a to multiplier 178, is the analog of the a priori probability P(H₂) that the observed signal input contains the second prescribed signal. The initial bias applied to gate 38 (see FIGURE 2) of learning loop 2 is a constant, since it is equally likely that the frequency of the prescribed signal is any frequency f in the frequency band W.
It is possible to insure that the learning loops 104 and 105 do learn the parameters corresponding to the two different prescribed signals by making the initial bias signals (which are applied to gates 38 in learning loops 104 (FIGURE 6A) and 105 (FIGURE 6B)) different bias functions, such as represented by the waveforms 190 and 191 of FIGURE 9 (rather than constants). More particularly, the waveform 190 (FIGURE 9), which is the analog of the a priori probability P(H₁) that the first signal is present, is generated by the waveform generator 192 (FIGURE 6A) and is applied to gate 38. Similarly, the waveform 191, which is the analog of the a priori probability P(H₂) that the second signal is present, is generated by a function generator 193 (FIGURE 6B) and applied to gate 38. The output of learning loop 105 (FIGURE 6B) on line 194 is applied on line 195 to adder 181 (FIGURE 6A) and on line 196 to risk calculator 107 (FIGURE 6C). Probability computer 101 computes the probability that an input signal consisting of noise alone would be as observed. This probability clearly does not depend on any of the parameters a, f or θ. It is calculated by taking the analog of the integral of the square of the filtered input signal, where the integral is taken over the time duration of one observation. The output of probability computer 101 is applied to multiplier 102. A predetermined bias that is the analog of the a priori probability P(H₀) that the observed input signal contains only noise is generated by a potentiometer 197 and applied on line 198 to multiplier 102. The output of multiplier 102 is applied on line 199 to risk calculator 107 (FIGURE 6C) and on line 200 to the third input to adder 181 of learning loop 104. Risk calculator 107 (FIGURE 6C) comprises multipliers 201 through 206, inclusive, and summation circuits 207, 208 and 209. The signal on line 199 is applied on line 199a to a first input to multiplier 203 and on line 199b to a first input to multiplier 205. The output of learning loop 104 on line 179 is applied on line 179a to a first input to multiplier 201 and on line 179b to a first input to multiplier 206. The output of learning loop 105 on line 196 is applied on line 196a to a first input to multiplier 202 and on line 196b to a first input to multiplier 204. A predetermined bias which is generated by potentiometer 211 is applied on line 212 to a second input to multiplier 201. Similarly, a predetermined bias generated by potentiometer 213 is applied on line 214 to the second input of multiplier 202. The predetermined bias generated by potentiometer 211 is the assigned loss associated with making a particular decision that the observed input signal contains only noise, when the observed input signal actually contains the first prescribed signal. The predetermined bias generated by potentiometer 213 is the loss associated with making the decision that the observed input signal contains only noise, when the observed input signal actually contains the second prescribed signal. A predetermined bias generated by potentiometer 215 is applied on line 216 to a second input to multiplier 203. Similarly, a predetermined bias generated by a potentiometer 217 is applied on line 218 to the second input of multiplier 204. The predetermined bias generated by potentiometer 215 is the assigned loss associated with making the particular decision that the observed input signal contains the first prescribed signal, when the observed input signal actually contains only noise.
The predetermined bias generated by potentiometer 217 is the loss associated with making the decision that the observed input signal contains the first prescribed signal when the observed input signal actually contains the second prescribed signal. A predetermined bias generated by potentiometer 219 is applied on line 220 to a second input to multiplier 205. Similarly, a predetermined bias generated by potentiometer 221 is applied on line 222 to the second input of multiplier 206. The predetermined bias generated by potentiometer 219 is the assigned loss associated with making a particular decision that the observed input signal contains the second prescribed signal when the observed input signal actually contains only noise. The predetermined bias generated by potentiometer 221 is the loss associated with making the decision that the observed input signal contains the second prescribed signal when the observed input signal actually contains the first prescribed signal. The product signals from multipliers 201 and 202 are applied on lines 231 and 232, respectively, to summation circuit 207; the outputs of multipliers 203 and 204 are applied on lines 233 and 234, respectively, to summation circuit 208; and the outputs of multipliers 205 and 206 on lines 235 and 236, respectively, are applied to summation circuit 209. The outputs of summation circuits 207, 208 and 209 are applied on lines 237, 238 and 239, respectively, to comparator circuit 108. Comparator circuit 108 comprises difference circuits 251, 252 and 253; amplifiers 254, 255 and 256; limiters 257, 258 and 259; and logic circuit 263. These parts of the comparator circuit are connected in three channels which provide inputs to the logic circuit, each channel comprising a difference circuit, an amplifier and a limiter connected in series. The signal on line 237 is applied on line 237a to a first input to difference circuit 251 and on line 237b to a first input to difference circuit 252. Similarly, the signal on line 238 is applied on line 238a to a second input to difference circuit 251 and on line 238b to a first input to difference circuit 253. The signal on line 239 is applied on line 239a to the second input to difference circuit 252 and on line 239b to the second input to difference circuit 253. The difference circuit may, by way of example, be a difference amplifier. The difference signal from each difference circuit is amplified in an associated high-gain amplifier such as amplifier 254. The amplified difference signal is clipped or limited to a constant value by an associated limiter circuit such as limiter 257. The limited difference signals on lines 260, 261 and 262, respectively, are applied to logic circuit 263, which indicates which of the hypotheses H₀, H₁ or H₂ is most probably true, as described hereinafter. Consider that an input signal such as represented by the waveform 65 of FIGURE 5 is applied on line 7 of FIGURE 6A. The input signal is filtered and applied to probability computers 101 and 103. The outputs of the probability computer 103 during different observations are similar to the waveforms of FIGURE 3, except that the output will be a function of the three signal parameters a, θ and ω rather than of the one signal parameter ω. Learning loops 104 and 105 operate on the output of probability computer 103 in a similar manner as described for the embodiment illustrated in FIGURE 2.
By giving two different a priori distributions to the parameter sets (as shown in FIGURE 9), it is possible to insure that learning loop 104 will learn the parameter set associated with one of the prescribed signals and learning loop 105 will learn the parameter set associated with the other prescribed signal. Risk calculator 107 (FIGURE 6C) operates on the outputs of learning loops 104 and 105 and probability computer 101 to compute three outputs on lines 237, 238 and 239 which represent the analogs of the risk associated with making a particular decision. The product signal on line 231 is the product of the loss associated with making a decision that the input signal contains only noise when the input signal actually contains the first prescribed signal, multiplied by the probability that the input contains the first prescribed signal. The product signal on line 232 is the product of the loss associated with making the decision that the input signal contains only noise when the input signal actually contains the second prescribed signal, multiplied by the probability that the input contains the second prescribed signal. Thus, the summation signal on line 237 is the total risk associated with making the decision that the input signal contains only noise. Similarly, the summation signals on lines 238 and 239 represent the total risk associated with making the decision that the first prescribed signal is present in the input signal and that the second prescribed signal is present in the input signal, respectively. Comparator 108 determines which of the three risks is the smallest. The difference circuits of comparator 108 generate differences between pairs of the risks on lines 237, 238 and 239. For example, the output of difference circuit 251 is the difference between the risk associated with making the decision that the input signal contains only noise (i.e., that hypothesis H₀ is true) and the risk associated with making the decision that the input signal contains the first prescribed signal (i.e., that hypothesis H₁ is true). If the signal on line 237a is more positive than the signal on line 238a, indicating that the risk associated with deciding that the hypothesis H₀ is true is greater than the risk associated with deciding that hypothesis H₁ is true, the output of difference amplifier 251 is positive. Conversely, if the signal on line 238a is more positive than the signal on line 237a, indicating that the risk associated with deciding that the hypothesis H₁ is true is greater than the risk associated with deciding that the hypothesis H₀ is true, the output of difference amplifier 251 is negative. Similarly, the signs of the outputs of difference circuits 252 and 253 indicate the relative magnitudes of the risks associated with hypotheses H₀ and H₂, and H₁ and H₂, respectively. The output signals from the limiters on lines 260, 261 and 262 are applied to logic circuit 263. Logic circuit 263 compares these input signals and decides which of the hypotheses H₀, H₁ or H₂ is most probably true by determining which of the hypotheses has the smallest associated risk. The comparison function performed by logic circuit 263 is tabulated in Table I.

TABLE I (comparison function of logic circuit 263: signs of the inputs from the difference amplifiers on lines 260, 261 and 262 versus the true hypothesis H₀, H₁ or H₂)
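The risk calculator and comparator implement a standard minimum-Bayes-risk rule. A compact sketch of the same computation is shown below; the loss-matrix entries and posterior weights are placeholder values for illustration, not values from the patent.

```python
import numpy as np

# posterior-weighted likelihoods, one per hypothesis:
# H0 = noise only, H1 = first prescribed signal, H2 = second prescribed signal
# (these stand in for the signals on lines 199, 179 and 196)
weights = np.array([0.2, 0.5, 0.3])          # placeholder values

# loss[i, j] = assigned loss for deciding H_i when H_j is actually true
# (the roles of potentiometers 211, 213, 215, 217, 219 and 221; zero loss
# on the diagonal for a correct decision)
loss = np.array([[0.0, 1.0, 1.0],
                 [1.0, 0.0, 0.5],
                 [1.0, 0.5, 0.0]])

risk = loss @ weights                        # summation circuits 207, 208, 209
decision = int(np.argmin(risk))              # comparator 108 + logic circuit 263
print(f"risks = {risk}, decide H{decision}")
```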
GENERAL THEORETICAL DESCRIPTION

A more general form of this invention is described in conjunction with the schematic representation of the invention illustrated in FIGURE 10. The embodiment of FIGURE 10 is similar to that of FIGURE 1 and comprises probability computing means 301, 304 and 306, multiplier means 305, an adder 315 and a decision circuit 303. Each of the means 301, 302, 303, 304, 305 and 306 may comprise a plurality of elements; e.g., multiplier means 305 may comprise a plurality of multipliers. The components of FIGURE 10 are interconnected in the same manner as are the components of FIGURE 1, except that the output of probability computing means 306 is also applied on line 311 to adder 315. The output of adder 315 is applied on line 316 to a second input to probability computing means 304. A glossary of symbols employed hereinafter is included at the end of the specification. Consider a general pattern recognition problem in which it is desired to determine which one of a set of m prescribed signals gave rise to the observed input signal; i.e., it is desired to classify the observed input signal as belonging to one of the m classes. Each class, or prescribed signal, is defined except for a set of unknown parameters having values within given ranges. In order to formulate the problem in decision theoretic terms, define a set of m hypotheses as: H₁ = the hypothesis that the input pattern belongs to class 1; Hᵢ = the hypothesis that the input pattern belongs to class i; Hₘ = the hypothesis that the input pattern belongs to class m. Let the unknown parameter set associated with the ith class be designated Aᵢ. For example, in the embodiment of FIGURE 6, the unknown parameter sets A₁ and A₂ consist of the amplitude parameter a, the frequency parameter ω and the phase parameter θ. There were no unknown parameters for H₀. Thus, the parameter set Aᵢ may be represented by the vector (a, ω, θ). Without diminishing the general nature of these considerations, an observation of finite time duration and bandwidth may always be represented as a column vector X with a finite number of rows. A typical column vector associated with a sampled data form of the embodiment of FIGURE 6 is X = [x(Δ), x(2Δ), ..., x(nΔ)]ᵀ, where Δ is the sampling interval and x(nΔ) is the sample value of the time varying input signal at time n times the sampling interval Δ. The object of this invention in these terms is the provision of means for operating on the input vectors Xₖ to decide for each vector which hypothesis Hᵢ is true. More particularly, the system computes, for each i, the probability p(Xₖ | Hᵢ, Oₖ₋₁) that a signal from the class i would produce the observed vector Xₖ, conditioned on the information obtained from prior observations. This information is used to decide which hypothesis Hᵢ is most probably true. In order to understand the computations required of this system, consider first the case in which the unknown parameter set Aᵢ is known to have the value αᵢ (i.e., no learning is required). In this case the optimum system would compute

p(Xₖ | Hᵢ) = p(Xₖ | Hᵢ, αᵢ)    (7)

for each class i (N. Abramson and J. Farison, Applied Decision Theory, Report SEL-62-095 (TR 2005-2), Stanford Electronics Laboratories, Stanford, Calif., September 1962).
Equation 7 is the probability that a signal from class i would produce the observed vector Xₖ when the parameter set Aᵢ associated with class i has the value αᵢ. If the value αᵢ of parameter set Aᵢ is not known, but a probability distribution of the value αᵢ is known, then p(Xₖ | Hᵢ) is the expected value of the right hand side of Equation 7, as follows:

p(Xₖ | Hᵢ) = E[p(Xₖ | Hᵢ, αᵢ)] = ∫ p(Xₖ | Hᵢ, αᵢ) p(αᵢ) dαᵢ    (8)

In general, the desired classification can be achieved with greater accuracy if the uncertainty regarding the unknown parameters is reduced, such as by modifying the distribution p(αᵢ) of Equation 8 in accordance with the previous observations X₁, X₂, ..., Xₖ₋₁, yielding the modified distribution

p(αᵢ | Oₖ₋₁)    (9)

Employing in Equation 8 the most recent modified distribution as given by Equation 9, the probability that a signal from class i would produce the observed vector Xₖ, conditioned on information obtained from the prior k-1 observations, is obtained. This probability may be represented as

p(Xₖ | Hᵢ, Oₖ₋₁) = ∫ p(Xₖ | Hᵢ, αᵢ) p(αᵢ | Oₖ₋₁) dαᵢ    (10)

This is the probability computed by the system. To understand the computation of p(αᵢ | Oₖ₋₁), consider the expression (by use of Bayes' law)

p(αᵢ | Oₖ₋₁) = p(Xₖ₋₁ | Hᵢ, αᵢ, Oₖ₋₂) p(αᵢ | Oₖ₋₂) / p(Xₖ₋₁ | Hᵢ, Oₖ₋₂)    (12)

Assuming that the a priori probability p(Hᵢ) that an observed signal came from class i is known for all i, then p(Xₖ₋₁ | Oₖ₋₂) = Σⱼ p(Xₖ₋₁ | Hⱼ, Oₖ₋₂) p(Hⱼ), so that Equation 12 becomes

p(αᵢ | Oₖ₋₁) = p(Xₖ₋₁ | Hᵢ, αᵢ, Oₖ₋₂) p(αᵢ | Oₖ₋₂) / Σⱼ p(Xₖ₋₁ | Hⱼ, Oₖ₋₂) p(Hⱼ)    (16)

Thus, by using Equation 16 as the distribution of the unknown parameter set in Equation 10, the required probability p(Xₖ | Hᵢ, Oₖ₋₁) is obtained (Equation 17). A loss L(Hⱼ, dᵢ) is assigned to the decision dᵢ when the hypothesis Hⱼ is actually true, the exact value of the loss being dictated by the particular application. Thus, there is a risk ρ(dᵢ) associated with making any particular decision dᵢ (i.e., deciding that the hypothesis Hᵢ is true). The average risk associated with making the decision dᵢ when Xₖ is observed is

ρ(dᵢ) = Σⱼ L(Hⱼ, dᵢ) p(Hⱼ | Xₖ, Oₖ₋₁)    (19)

An optimum decision system is herein defined as one which minimizes the average risk associated with the decision. In order to minimize ρ(dᵢ), Equation 19 may be rewritten by use of Bayes' law, and since p(Xₖ) does not depend on either i or dᵢ, it is sufficient to minimize the function

Σⱼ L(Hⱼ, dᵢ) p(Xₖ | Hⱼ, Oₖ₋₁) p(Hⱼ)    (21)

The system should make the computation of risk for each possible decision dᵢ and make that decision which has the least average risk. Referring now to the embodiment of FIGURE 10, probability computing means 301 operates on the input signal and generates an output which is the analog of the probability defined by Equation 7 and is the first term on the right of Equation 17. Probability computing means 304 operates on the outputs of probability computing means 301 and 306 and generates an output which is the analog of the probability defined by the product of the other terms on the right of Equation 17 and the a priori probability p(Hᵢ). The outputs of probability computing means 301 and 304 are multiplied in multiplier 305. The product signal is operated on by probability computing means 306, which generates an output which is the analog of the product of the probability defined by Equation 17 and the a priori probability p(Hᵢ). Decision circuit 303 operates on the output of probability computing means 306 and computes the analog of the risks defined by Equation 21. The decision circuit compares these risks which are associated with each possible decision and generates an output associated with the smallest risk, indicating which of the hypotheses is most probably true.
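Equations 8 through 21 amount to a recursive Bayesian update of the parameter distribution followed by a marginal-likelihood computation, which is what the learning loops realize in analog form. A discrete-grid sketch of one such update step follows; the parameter grid and the stand-in likelihood values are illustrative assumptions.

```python
import numpy as np

def update_posterior(prior, likelihoods):
    """One learning-loop step: Bayes update of the parameter distribution.

    prior       : p(alpha | O_{k-2}) on a discrete grid of parameter values
    likelihoods : p(X_{k-1} | H_i, alpha) for the same grid (Equation 7)
    Returns the updated distribution (the Equation 16 analog) and the
    marginal likelihood p(X_{k-1} | H_i, O_{k-2}) (the Equation 17 analog).
    """
    joint = likelihoods * prior
    marginal = joint.sum()          # normalization, the role of divider 32
    return joint / marginal, marginal

# usage: a flat prior over 20 candidate parameter values
prior = np.full(20, 1 / 20)
lik = np.random.rand(20)            # stand-in for probability computer 103
posterior, evidence = update_posterior(prior, lik)
```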
Referring to the embodiment of FIGURES 6A, B and C, the outputs of the various elements are analog representations of the following expressions:

Probability computer 103: p(Xₖ | Hᵢ, αᵢ)    (22)
Potentiometers 172, 172′: p(Hᵢ)    (23)
Multiplier 171: p(Xₖ | Hᵢ, αᵢ) p(Hᵢ)    (24)
Delay device 25: p(Xₖ₋₁ | Hᵢ, αᵢ) p(Hᵢ)    (25)
Multiplier 35: p(αᵢ | Oₖ₋₁) = Equation 16    (26)
Summation circuit 46 and sample-hold circuit 47: Equation 17    (27)
Multiplier 178 and learning loop 105: [p(Hᵢ)][Equation 17]    (28)
Adder 181: Σⱼ p(Xₖ | Hⱼ, Oₖ₋₁) p(Hⱼ)    (29)
Difference circuit 183: Σⱼ≠ᵢ p(Xₖ₋₁ | Hⱼ, Oₖ₋₂) p(Hⱼ)    (30)
Adder 175: ...    (31)
Divider 32: ...    (32)
Delay device 42: p(αᵢ | Oₖ₋₂)    (33)
Multiplier 35: p(αᵢ | Oₖ₋₁) = Equation 16    (34)
Summation circuits 207, 208, 209: Σⱼ L(Hⱼ, dᵢ) p(Xₖ | Hⱼ, Oₖ₋₁) p(Hⱼ)    (35)

GLOSSARY

αᵢ - values of the parameter set Aᵢ of class i
a - amplitude parameter
Aᵢ - parameter set of class i
dᵢ - the decision that hypothesis Hᵢ is true
E(..) - expected value
Hᵢ - a particular hypothesis; the hypothesis that the input pattern belongs to the class i
i - (subscript) designates individual classes of the m classes
j - (subscript) designates individual classes of the m classes (j may or may not equal i)
L - loss function
m - the number of prescribed signals or classes
Oₖ₋₁ - the sequence of prior observations
ω - frequency parameter
p(..) - probability or probability density
ρ(dᵢ) - the average risk associated with making the decision dᵢ when Xₖ is observed
θ - phase parameter
Xₖ - the kth observation of the input signal (the current observation)
Xₖ₋₁ - the k-1th observation of the input signal

What is claimed is:
1. In apparatus for identifying in an observation unit of an input signal a prescribed signal having a predetermined parameter of unknown value, an adaptive system comprising means for dividing the input signal into a succession of separate timed observation units of equal duration, and a learning loop comprising a probability computer receiving the output of said dividing means and having means for automatically relating each observation unit with an immediately preceding observation unit and deriving an output which is the analog of the probability that said predetermined parameter has a determined value, conditioned on information obtained from all prior observation units, a multiplier connected to the outputs of said dividing means and said probability computer and producing a product output, and a likelihood computer connected to the output of said multiplier and deriving an output at the end of each observation unit which is the analog of the likelihood that a prescribed signal having said predetermined parameter of said determined value would produce the current observation unit, conditioned on information obtained from all prior observation units.
2. The system according to claim 1 in which said likelihood computer comprises a summation circuit for integrating the output of said multiplier for each observation unit, and a sample-hold circuit responsive to the operation of said summation circuit at the end of each observation unit for sensing and storing the integrated multiplier output.
3. The system according to claim 1 wherein said dividing means comprises a periodogram calculator responsive to the input signal for generating an analog of the density distribution of the input signal as a function of values of said parameter, and means for accentuating the larger values of the output of said periodogram calculator.
4.
An adaptive system for identifying in an input signal a prescribed signal having a predetermined parameter of unknown value comprising means for dividing the input signal into a succession of timed observation units of equal duration, a learning loop comprising a first computer receiving the output of said dividing means and having a divider circuit and a multiplier circuit, said divider circuit dividing signals occurring during an observation unit immediately preceding a current observation unit by a reference signal and deriving a normalized output, said multiplier circuit being connected to the output of said divider circuit and being operative to multiply signals occurring during immediately successive observation units for deriving an output proportional to the probability that said predetermined parameter has a determined value, conditioned on information obtained from all prior observation units, a multiplier connected to the output of said dividing means and said multiplier circuit of the first computer and producing a product output, and a second computer receiving said product output and deriving an output at the end of each observation unit which is the analog of the probability that a signal having said predetermined parameter of said determined value would produce the current observation unit, conditioned on information obtained from all prior observation units, the output of the second computer comprising said reference signal in said first computer, and a decision circuit receiving the output of said second computer and responsive to the magnitude thereof for indicating the presence of the prescribed signal in the current observation unit of the input signal.
5. The system according to claim 4 with means for delaying a portion of the output of said multiplier circuit for a period of one observation unit to derive one input to said multiplier circuit, the other input to said multiplier circuit consisting of said divider circuit output.
6. An adaptive system for classifying a current observation of an input signal as one of a plurality of classes of signals having known forms by determining, for each class, the probability of the hypothesis that a prescribed signal from that class would produce the current observation of the input signal and determining the hypothesis having the greater probability of being true, said system comprising means responsive to the current observation of the input signal for computing for each class outputs representative of the probability that a prescribed signal from the class would produce the current observation of the observed input signal, conditioned on information obtained from prior observations of input signals, and satisfying the relationship

p(Xₖ | Hᵢ, Oₖ₋₁) = ∫ p(Xₖ | Hᵢ, αᵢ) p(Xₖ₋₁ | Hᵢ, αᵢ, Oₖ₋₂) p(αᵢ | Oₖ₋₂) dαᵢ / Σⱼ p(Xₖ₋₁ | Hⱼ, Oₖ₋₂) p(Hⱼ)

wherein p(·) represents probability, Xₖ represents the kth or current observation of the input signal, Xₖ₋₁ represents the k-1th observation of the input signal, Hᵢ represents the hypothesis associated with class i, Aᵢ represents the parameter set associated with class i, αᵢ represents values of the parameter set Aᵢ, Oₖ₋₁ represents the k-1 prior observations of the input signal, and Hⱼ represents the hypothesis designated by the subscript index j, and decision circuit means responsive to outputs of said computing means for determining which probability is the largest and deciding which hypothesis is more probably true.
7.
The system according to claim 6 wherein said decision circuit means comprises a risk calculator responsive to the outputs of said computing means for determining the risk associated with deciding that each hypothesis is true, and comparator means responsive to the outputs of said risk calculator for determining which hypothesis has the smallest associated risk and deciding which hypothesis is true.
8. The system according to claim 7 wherein the outputs of said risk calculator satisfy the relationship

ρ(dᵢ) = Σⱼ L(Hⱼ, dᵢ) p(Xₖ | Hⱼ, Oₖ₋₁) p(Hⱼ)

where L represents loss and dᵢ represents a particular decision that a particular hypothesis Hᵢ is true.
9. A system for classifying an input signal as being generated by a prescribed signal from the ith class of m classes of known form by determining for all i which hypothesis Hᵢ is true, the hypothesis Hᵢ being the hypothesis that the prescribed signal defined by the associated parameter set Aᵢ would produce the input signal, said system comprising first computing means responsive to the input signal for converting the input signal to a sequence of observations X₁, X₂, ..., Xₖ, where Xₖ is the kth or current observation, each observation being of time duration T, said first computing means providing during each observation an output representative of the values, for all i, of p(Xₖ | Hᵢ, αᵢ), the probability that a prescribed signal from each class i would produce the observed input signal Xₖ when the associated parameter set Aᵢ has the value αᵢ, said output being independent of probability information obtained during prior observations of the input signal, a first adder having a plurality of inputs and an output, learning loop means having first inputs connected to said output of said first computing means, and having second inputs connected to said output of said first adder, and having outputs connected to respective inputs of said first adder, said outputs of said learning loop means being respectively representative of values of p(Xₖ | Hᵢ, Oₖ₋₁), the probability that a signal from each class i would produce the observed input signal Xₖ, conditioned on probability information obtained during the k-1 prior observations (Oₖ₋₁) of the input signal, and decision circuit means having inputs connected to respective outputs of said learning loop means, said decision circuit means deciding which hypothesis Hᵢ is true, said outputs of said learning loops being independent of decisions made by said decision circuit means.
10. The system according to claim 9 wherein said learning loop means comprises first multiplier means having m first inputs each connected to the output of said first computing means, m second inputs and m outputs, second computing means having m first inputs each connected to the output of said first computing means, m second inputs each connected to the output
Math History

History Topics Index from the MacTutor History of Mathematics archive
An extensive list of essays about math history.

Math History Timeline
Timeline from 600 BC till the present. Click on the icons to learn more about the mathematicians. You have to push 'hard and long' on the arrow to move on the timeline.

Living Math History Course
Lesson plans to study math through history and related math topics in context. The lessons feature a biography of one or more mathematicians, with reading assignments, resources, suggested links and activities tying in with the concepts and history surrounding the individual studied.

Fibonacci Numbers and the Golden Section
A collection of information about Fibonacci numbers, including how they appear in nature (pine cones, flower petals, cauliflower florets, etc.), puzzles, and intriguing math about them.

Book: Historical Connections in Mathematics, Vol. I
Includes biographical information, famous quotations, and anecdotes from the lives of 30 mathematicians (10 in each volume). Each chapter also includes four to six ready-to-use classroom activities that relate to the work of the mathematician. Reproducible activity sheets range from problem-solving exercises to hands-on learning experiences and skits. Volume I: Pythagoras, Archimedes, Napier, Galileo, Fermat, Pascal, Newton, Euler, Gauss, and Germain. See also the "partial preview" for free sample lessons. Price: $21.95

Mathematicians Are People, Too: Stories from the Lives of Great Mathematicians (Volume 1)
A wonderful collection of short stories about mathematicians from many different time periods, including Thales, Pythagoras, Hypatia, Galileo, Pascal, Germain, and still others. Makes great family reading, but is definitely interesting on an adult level, too.
Benoît's Fractals

Benoît Mandelbrot passed away a few days ago, on October 14, 2010. Since 1987, Mandelbrot was a member of Yale's mathematics department. This chapterette from my book "Gina Says: Adventures in the Blogosphere String War" about fractals is brought here on this sad occasion. A little demonstration of Mandelbrot's impact: when you search in Google for an image for "Mandelbrot", you do not get pictures of Mandelbrot himself but rather pictures of Mandelbrot's creation. You get full pages of beautiful pictures of Mandelbrot sets.

Benoit Mandelbrot (1924-2010)

Modeling physics by continuous smooth mathematical objects has led to the most remarkable achievements of science in the last centuries. The interplay between smooth geometry and stochastic processes is also a very powerful and fruitful idea. Mandelbrot's realization of the prominence of fractals and his works on their study can be added to this short list of major paradigms in mathematical modeling of real-world phenomena.

Fractals are beautiful mathematical objects whose study goes back to the late 19th century. The Sierpiński triangle and the Koch snowflake are early examples of fractals which are constructed by simple recursive rules. (Note: The Koch snowflake is just the "boundary" of the blue shape in the picture.) Other examples are based on the study of iterations of simple functions, especially functions defined over the complex numbers: the Mandelbrot set (top of the post) and Julia sets. Still other examples come from various stochastic (random) processes, for instance the outer boundary of a Brownian motion in the plane, and the boundary of the percolation process (random Hex game). In the picture of the Brownian motion in the plane, the boundary is the "border" between the white areas and the colored areas. (If you get the impression that "boundary" is an important notion in many areas, you are right.)

We already mentioned the importance of the notion of "dimension" in mathematics. A point has dimension 0, a line has dimension 1, the plane has dimension 2 and space is 3-dimensional. Fractals often have "fractional dimension". The Koch snowflake has dimension 1.2619, and the Sierpiński triangle has dimension 1.5850. The "boundary" of the Brownian motion in the plane is a fractal; Mandelbrot conjectured that its dimension is 4/3 and this was recently proved by Lawler, Schramm, and Werner.

The term fractal was coined by Benoît Mandelbrot, who in his book also proposed the following informal definition of a fractal: "a rough or fragmented geometric shape that can be subdivided into parts, each of which is (at least approximately) a reduced-size copy of the whole." An important property of fractals is referred to as "self-similarity", whereby a small part of the big picture is very similar to the whole picture.

Mandelbrot also understood and promoted the importance of fractals in various areas of physics. Indeed, today fractals play an important role in many areas of modern physics (and there is also some controversy regarding their role). Mandelbrot also wrote an important paper concerning applications of fractals in finance. The notion of self-similarity is also important in other areas. In computer science, the self-similarity of a problem is referred to as "self-reducibility", and this property facilitates the design of efficient algorithms for solving the problem.
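As a small illustration of how little machinery these pictures need, the standard escape-time iteration below sketches a rough Mandelbrot set; the grid bounds, resolution and iteration cap are arbitrary illustrative choices added here, not anything from the book chapterette.

```python
import numpy as np

def mandelbrot(width=80, height=40, max_iter=50):
    """Return escape counts for the iteration z -> z**2 + c over a grid of c."""
    xs = np.linspace(-2.0, 0.6, width)
    ys = np.linspace(-1.2, 1.2, height)
    c = xs[None, :] + 1j * ys[:, None]
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=int)
    for n in range(max_iter):
        mask = np.abs(z) <= 2            # points that have not escaped yet
        z[mask] = z[mask] ** 2 + c[mask]
        counts[mask] = n
    return counts

# crude ASCII rendering: '#' marks points still bounded after max_iter steps
for row in mandelbrot():
    print("".join("#" if n == 49 else " " for n in row))
```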
I’ve always wanted to get an overview of what people study about fractals and Mandelbrot/Julia sets. It can’t be the study of pretty pictures. I have this impression (based on nothing concrete, and I would be happy to be proved wrong) that fractals and self-similarity are like perfect numbers: the problems are pretty, but there’s not much general theory and not much of a real connection to other areas of mathematics. Perhaps it provides some new questions to ask. Maybe someone could give an application of these concepts to some problem that does not contain the word “fractal” in it ? □ Hi Simon, It is hard to think of an impression which is so further away from the truth. (But it is nevertheless sort of interesting how you have reached this impression.) The area that studies Fractals close to Mandelbrot and Julia sets is called complex dynamics. It is a deep and important area of mathematics with a developed general theory and many exciting problems, with a lot of connections to other areas. □ Bodil Branner’s article “Dynamics” in the Princeton Companion to Mathematics explains very well why the Mandelbrot set is much more than just a pretty picture. 2. Pingback: Fractals 3. What I really love about fractals is the wide range of applications that fractals have. Perhaps the most interesting use of fractals I’ve heard about is that of antennas to catch multiple frequency ranges – an engineer attended a talk about fractals, and had an idea – what if I make an antenna like a fractal. Turns out that it was the most efficient design – and was proved theoretically later. This entry was posted in Geometry, Obituary, Physics, Probability. Bookmark the permalink.
Mc Cook, IL Math Tutor

Find a Mc Cook, IL Math Tutor

...During my two and a half years of teaching high school math, I have had the opportunity to teach various levels of Algebra 1 and Algebra 2. I have a teaching certificate in mathematics issued by the South Carolina Department of Education.
12 Subjects: including algebra 1, algebra 2, calculus, geometry

...Any person can learn and understand math with the right help and encouragement. Let's get your student on the road to enjoying math. School is in!!! Let's get started today. I have played travel softball throughout my life.
16 Subjects: including calculus, ACT Math, algebra 1, algebra 2

...Several years of group tutoring and many years of group training have given me additional experience as a speaker. As the instructor of a personal computing class I gained additional experience as a public speaker. I have helped many students to prepare for PRAXIS and PRAXIS II exams.
49 Subjects: including SAT math, English, trigonometry, prealgebra

...Making flashcards and coming up with mnemonic devices are just a few of the methods I use myself to study for a class. I took both Organic I and II in undergrad. I received an A and a B+ in these courses.
29 Subjects: including calculus, chemistry, ACT Math, SAT math

...As a result, I have become proficient in differentiating instruction to meet the needs of every learner by using techniques such as small-group reteaching, effective academic feedback, hands-on activities, active learning and engaging lessons. I use student data from activities and tests to plan curriculum. This method allows me to determine student mastery.
70 Subjects: including algebra 1, algebra 2, reading, biology
Havertown Science Tutor

Find a Havertown Science Tutor

...I am a patient, flexible, and encouraging tutor, and I'd love to help you or your child gain confidence and succeed academically. I adapt my teaching style to students' needs, explaining difficult concepts step by step and using questions to "draw out" students' understanding so that they learn...
38 Subjects: including ACT Science, philosophy, physics, reading

WyzAnt tutoring got me through high school. I have methods to determine the best way a student learns, and am committed to finding a way to teach them effectively using visual, auditory, or kinesthetic strategies, or some combination thereof. I obtained my International Baccalaureate Diploma in July 2012 at Central High School of Philadelphia.
18 Subjects: including biology, reading, Spanish, algebra 1

...I also tutor students for the nursing boards, including a 3-hour lecture in pharmacology. The research that I focused on for my PhD utilized a number of sociology theories and research methods. I have taken 3 graduate-level classes in sociology.
39 Subjects: including biology, study skills, SPSS, SAT reading

...I have also held tutoring sessions with fellow teachers seeking certification in math in order to help them pass the Praxis. I studied civil engineering at Lafayette College and earned a Bachelor of Science in the field. I worked as an intern at the Department of Transportation and conducted res...
21 Subjects: including physics, precalculus, trigonometry, algebra 1

I am a certified high school chemistry teacher entering my seventh year. Chemistry is a difficult subject; however, I am confident that anyone can be successful. I am a member of the National Science Teachers Association.
1 Subject: chemistry
Gardena Algebra Tutor

...Repetition is key! Once a concept becomes second nature, other layers can be added on to deepen the knowledge in a subject. For the tougher subjects, especially math, I like to put things into simple terms.
14 Subjects: including algebra 1, algebra 2, calculus, physics

I am a student at North High School. I am searching for students whom I can tutor using my own experiences. I have not worked anywhere before.
8 Subjects: including algebra 1, algebra 2, geometry, Microsoft Excel

...Calculus, the study of change and growth, was the class that convinced me to take a lot of advanced math in college. This is the basis for a lot of the work I do every day as a research scientist. My niece is taking calculus this year, and at least once a week I help her through problem sets or test reviews.
13 Subjects: including algebra 1, algebra 2, physics, geometry

...I am a graduate of California State University Dominguez Hills. I received my Bachelor's degree in Liberal Studies in the Fall of 2010. Furthermore, I am currently in the teaching program at CSUDH working toward a teaching credential in multiple subjects.
3 Subjects: including algebra 1, reading, prealgebra

...I encourage children to try their hardest and praise them when they do. I am a very energetic, fun, and funny person who has a great passion for teaching. I try using different tutoring techniques to teach students because I know there are different ways to teach subjects like math and English.
5 Subjects: including algebra 2, precalculus, algebra 1, prealgebra
Phase relation between current and electromagnetic field generated

From Maxwell's equations, for a time-harmonic field it appears that the electric field may lead the current by 90 degrees, since the time derivative of the electric field is equal to the curl of the magnetic field plus the source current. But this is only at the source point; due to the retardation of the fields, there is another phase shift that arises as the fields propagate. Let us take the z component of the electric field from a z-directed point source current. The field for a unity current source is:

[tex] E_z = \frac{i\omega\mu}{4\pi k^2} \left[ ik - \frac{1+k^2z^2}{r^2} - \frac{3ikz^2}{r^2} + \frac{3z^2}{r^3} \right] \frac{e^{ikr}}{r^2} [/tex]

So we find that different parts of the field are 90 degrees and 180 degrees out of phase with the current even before we take into account the spatial phase shift. As you go away from the source, though, only the first term remains and you have a field that is 180 degrees out of phase plus a spatial phase shift. So in the situation where you have a waveguide, you have to contend with the superposition of the reflections, which would make it even more difficult. But my guess is you will have a hard time determining a rule for this.
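To see the near-field-to-far-field phase transition described above, one can simply evaluate the bracketed expression numerically and track its argument as kr grows. The sketch below does this on the axis (z = r), using the formula exactly as quoted; the unit constants are arbitrary, and the formula itself is the poster's, not independently verified here.

```python
import numpy as np

omega, mu, k = 1.0, 1.0, 1.0             # arbitrary units
for r in [0.1, 1.0, 10.0, 100.0]:
    z = r                                 # evaluate on the z axis
    bracket = (1j*k - (1 + k**2 * z**2) / r**2
               - 3j*k * z**2 / r**2 + 3 * z**2 / r**3)
    Ez = (1j*omega*mu / (4*np.pi*k**2)) * bracket * np.exp(1j*k*r) / r**2
    # phase of E_z relative to a unit (real, positive) source current
    print(f"kr = {k*r:7.1f}  phase(E_z) = {np.degrees(np.angle(Ez)):8.2f} deg")
```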
triangle problem

August 15th 2008, 06:59 PM  #1

See the picture attached, and many thanks to the people who will help me solve this problem and the other problems I have posted before. I really appreciate the contributors very much. I give thanks to everyone who answers my question.

Last edited by helloying; August 15th 2008 at 07:00 PM. Reason: I forgot to attach the picture.

August 15th 2008, 08:16 PM  #2

You can find $\angle OAB$ by extending O to AB, thereby bisecting it. Now, recall that a circle's tangent forms a right angle with its radius at their point of intersection - i.e. $\angle OAT = 90^{\circ}$. You can then find $\angle BAT = \angle OAT - \angle OAB$. Now extend a horizontal line from the middle of AB (call this point S) to point T (again bisecting AB to form AS and SB). Now we have a right triangle, $\triangle SAT$. You can then use trigonometry to find TA since:

$\cos \left(\angle BAT\right) = \frac{AS}{TA} \: \: \iff \: \: TA = \frac{AS}{\cos \left(\angle BAT \right)}$
circle bundle

A circle bundle is a principal bundle for the circle group $S^1$. Equivalently this is a $U(1)$-principal bundle, for the unitary group $U(1)$.

Under the canonical representation $\mathbf{B}U(1) \to Vect_{\mathbb{C}}$ the corresponding associated bundle is a complex line bundle.
Topic: [math-learn] The Effective But Forgotten Benezet Method of K-8 Education

Re: [math-learn] The Effective But Forgotten Benezet Method of K-8 Education
Posted: Sep 3, 2012 1:51 PM, by Ed Wall

Thank you. This seems interesting. There is, I mention, the work of Jean Lave and others about 'informal' mathematics, and there are the more theoretical musings of Jacob Klein. These point somewhat in the Benezet direction. However, all this raises another interesting difference between now and Benezet. That is, the 'informal' mathematics experience of children seems much different today. For instance, check out at most grocery stores.

Ed Wall

On Sep 3, 2012, at 4:13 AM, Clyde Greeno wrote:
> So arises the question of just what parts of the mathematical basis of the K-middle curriculum can be fully taught/learned "informally" through the learners' experiences with real-life activities. That approach is being pursued by the Tulsa-OK Mathematical Literacy Project ... through creation of show-&-tell videos about such "natural math" activities. The Project is open to the advent of an international network of interested professionals ... and to their assistance and contributions. Address inquiries or overtures to clinic@malei.org
> From: Ed Wall
> Sent: Sunday, September 02, 2012 10:29 PM
> To: math-learn@yahoogroups.com
> Subject: Re: [math-learn] The Effective But Forgotten Benezet Method of K-8 Education
> Some of the original details - slight as they may be - indicate that we are talking here about putting off 'formal' instruction and that, in fact, students engaged in 'informal' mathematics activities in the earlier grades.
> It seems that this 'putting off', so I have been told, still occurs in those certain private schools where students craft their own curriculum. Similar results, I am told, often follow.
> In any case, the perceived failure of teacher education would seem to be dwarfed by societal and parental opposition. There is no indication, by the way, that Benezet's teachers received any sort of training other than being told, in essence, 'hands-off.' Perhaps in today's parlance that would translate into no 'direct instruction.' However, when formal instruction was begun, it may well have been quite 'direct.'
> Ed Wall
LTPP Guide to Asphalt Temperature Prediction and Correction

The AREA basin shape factor is essentially the result of numerically integrating a normalized deflection basin. When the AREA factor was first developed, test equipment generally had only four deflection sensors, which were typically spaced at one-foot intervals. Therefore, while the methodology behind the AREA factor could be applied to any test setup, by default it is calculated using sensors at 0", 12", 24" and 36" offsets, as shown in Figure 15.

Figure 15. Graphical Representation of the AREA Factor

The AREA factor is equal to the sum of the three colored areas shown in Figure 15. Applying the trapezoidal rule to the three 12-inch panels of the normalized basin, the numeric expression of the AREA factor is:

AREA = [6*(defl[0] + defl[12]) + 6*(defl[12] + defl[24]) + 6*(defl[24] + defl[36])]/defl[0]

This equation simplifies to:

AREA = 6*(defl[0] + 2*(defl[12] + defl[24]) + defl[36])/defl[0]

The AREA basin shape factor relates to the ratio of pavement stiffness to subgrade stiffness. The pavement stiffness characteristic here depends on both the thickness of the pavement structure and the stiffness of the materials that make up the pavement structure. The role that stiffness plays is clearly shown in Figure 18 by the relationship between AREA and pavement temperature. Increasing the thickness of the pavement would increase the AREA, whereas decreasing the thickness would decrease the AREA. Likewise, if this same pavement structure were placed on a less stiff subgrade, the AREA would increase; conversely, if the subgrade were stiffer, the AREA would decrease. AREA values can range from the low teens for thin, soft asphalt pavements on stiff subgrade soils to the low 30s for thick, stiff asphalt pavements on soft subgrade soils. If there is a hard bottom, the AREA factor will be slightly lower, depending on how close the hard layer is to the surface.

The AREA basin factor relates inversely to the mid-depth temperature of the asphalt; that is, as the temperature goes up, the magnitude of the AREA goes down. The relationship between the mid-depth temperature of the asphalt and AREA, interestingly enough, is linear. Figure 16, Figure 17, and Figure 18 show the transform of deflection basins that were measured from one specific point on a pavement as part of the LTPP Seasonal Monitoring Program.

Figure 16. Sample Deflection Basins Measured at the Same Point
Figure 17. Deflection Basins From Figure 16 Normalized
Figure 18. AREA Above the Normalized Basins in Figure 17

Figure 16 contains three basins that were measured at the same point at different times of a day as the temperature in the pavement increased. Figure 17 has plots of the same three basins, but deflection-normalized by dividing all of the basin deflections by the deflection at the center of the load plate. Figure 18 shows the relationship of the calculated AREA of the deflection basins to the measured mid-depth asphalt temperatures.

The LTPP SMP data was used to develop a regression equation that relates the AREA basin factor to pavement characteristics including the thickness of the asphalt, the temperature at the mid-depth of the asphalt, the 9 kip deflection at the 36-inch offset (which serves as a surrogate for the subgrade stiffness), and the latitude. (The latitude relates to the stiffness of the asphalt binder used in the mix. Hot southern climates use a stiffer binder than cooler or colder northern climates, which generally results in the asphalt mixes in colder climates being softer than asphalt mixes in warmer climates.)
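For concreteness, the AREA computation above is a one-line calculation once the four load-normalized deflections are in hand. The following sketch (in Python rather than the VBA referenced later in this guide) implements the simplified expression; the sample deflection values are invented for illustration.

```python
def area_factor(d0, d12, d24, d36):
    """AREA basin shape factor from deflections at 0", 12", 24" and 36" offsets.

    Trapezoidal integration of the normalized basin with 12-inch spacing:
    AREA = 6*(d0 + 2*(d12 + d24) + d36)/d0
    """
    return 6.0 * (d0 + 2.0 * (d12 + d24) + d36) / d0

# illustrative deflections (units cancel out, e.g. micrometers)
print(area_factor(400.0, 300.0, 200.0, 120.0))  # -> 22.8
```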
The resulting regression equation for the AREA basin factor is as follows:

AREA = 13.0 + 7.77*log(ac)*log(defl[36]) - 6.78*log(lat)*log(defl[36]) + 0.105*T - 0.116*T*log(ac)

ac = Total thickness of the HMA, mm
lat = Latitude of the pavement section
defl[36] = Deflection (load-normalized to 40.5 kN (9 kip)) at 915 mm (36") from the center of the load plate, µm
T = Temperature at mid-depth of the HMA, °C

Source code for implementing this equation as a function in MS Excel VBA is available here. Sample data for checking code is available here.

The AREA increases as the log of the thickness of the AC increases, and decreases as the temperature of the asphalt increases because the second term containing T dominates the first term containing T. The latitude is a substitute for the stiffness of the binder used in the asphalt mix, and the AREA decreases as the latitude increases because softer binders are used in the north and harder binders are used in the south. The coefficient on the term where the latitude appears is negative, indicating that as the latitude increases, the AREA decreases, which is consistent with the relationship between AREA and stiffness. The defl[36] variable appears in two terms, and the net result of an increase in defl[36] is an increase in AREA. The defl[36] varies inversely with the subgrade stiffness, so as defl[36] increases, the subgrade stiffness decreases and the ratio of the stiffness of the pavement to the subgrade increases.

Use of the AREA basin factor generally will require that the calculated AREA values be adjusted for temperature. Temperature adjustment factors can be calculated by using the AREA equation to calculate an AREA for the pavement tested using the mid-depth asphalt temperature at the time of the test (Tm). The AREA is then calculated again using the mid-depth asphalt temperature, or reference temperature (Tr), that it is to be adjusted to. The Basin Adjustment Factor for AREA, or BAFAREA, is the AREA at the reference temperature divided by the AREA at the measured temperature. The functions can be used to build tables of adjustment factors, or to calculate adjustment factors on a case-by-case basis.

Source code for implementing BAFAREA as a function in MS Excel VBA is available here. Sample data for checking code is available here.
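Since the referenced VBA source is not reproduced in this guide, here is an equivalent sketch in Python of both the regression and the temperature adjustment factor. It assumes, as is conventional in such regressions, that log denotes the base-10 logarithm; that assumption and the example inputs are illustrative, so results should be checked against the guide's sample data.

```python
import math

def area_regression(ac_mm, lat_deg, defl36_um, temp_c):
    """LTPP regression for the AREA basin factor.

    ac_mm     : total HMA thickness, mm
    lat_deg   : latitude of the pavement section
    defl36_um : 40.5 kN (9 kip) load-normalized deflection at 915 mm, micrometers
    temp_c    : mid-depth HMA temperature, deg C
    """
    log = math.log10  # assumed base-10 logarithm
    return (13.0
            + 7.77 * log(ac_mm) * log(defl36_um)
            - 6.78 * log(lat_deg) * log(defl36_um)
            + 0.105 * temp_c
            - 0.116 * temp_c * log(ac_mm))

def baf_area(ac_mm, lat_deg, defl36_um, t_measured, t_reference):
    """Basin Adjustment Factor: AREA at Tr divided by AREA at Tm."""
    return (area_regression(ac_mm, lat_deg, defl36_um, t_reference)
            / area_regression(ac_mm, lat_deg, defl36_um, t_measured))

# example: adjust an AREA measured at 30 deg C to a 20 deg C reference
print(baf_area(150.0, 40.0, 80.0, t_measured=30.0, t_reference=20.0))
```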
{"url":"http://www.fhwa.dot.gov/publications/research/infrastructure/pavements/ltpp/fwdcd/area.cfm","timestamp":"2014-04-18T16:13:22Z","content_type":null,"content_length":"16995","record_id":"<urn:uuid:a7bc92b0-bbee-42ff-a903-2a6f31df5761>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
Video Library

Since 2002 Perimeter Institute has been recording seminars, conference talks, and public outreach events using video cameras installed in our lecture theatres. Perimeter now has 7 formal presentation spaces for its many scientific conferences, seminars, workshops and educational outreach activities, all with advanced audio-visual technical capabilities. Recordings of events in these areas are all available On-Demand from this Video Library and on the Perimeter Institute Recorded Seminar Archive (PIRSA). PIRSA is a permanent, free, searchable, and citable archive of recorded seminars from relevant bodies in physics. This resource has been partially modelled after Cornell University's arXiv.org.

In the first part of the talk we introduce a technique to compute large scale correlations in LQG and spinfoam models. Using this formalism we calculate some components of the graviton propagator and of the n-points function.

One of the most significant questions in quantum information is about the origin of the computational power of the quantum computer; namely, from which feature of quantum mechanics and how does the quantum computer obtain its superior computational potential compared with the classical computer?

Quantum fields in the Minkowski vacuum are entangled with respect to local field modes. This entanglement can be swapped to spatially separated quantum systems using standard local couplings. A single, inertial field detector in the exponentially expanding (de Sitter) vacuum responds as if it were bathed in thermal radiation in a Minkowski universe.

Linear cosmological perturbation theory is pivotal to a theoretical understanding of current cosmological experimental data provided e.g. by cosmic microwave anisotropy probes. A key issue in this theory is to extract the gauge invariant degrees of freedom which allow unambiguous comparison between theory and experiment. In this talk we will present a manifestly gauge invariant formulation of general relativistic perturbation theory.

We demonstrate a number of effective field theory constructions developed to capture the effects of new physics on the Higgs sector of the standard model. We demonstrate that as the self couplings of the Higgs could be significantly affected by new physics, novel phenomenology such as a two Higgs bound state (Higgsium) may be possible. We also demonstrate that the effects of new physics on the Higgs fermion couplings, and thus the Higgs width, could be significant. We show that it is possible this could happen while the new physics

Cosmic strings are non-trivial configurations of scalar (and vector) fields that are stable on account of a topological conservation law. They can be formed in the early universe as it cools after the Big Bang.

Strong gauge dynamics can be given a holographic description in terms of a warped extra dimension. In particular, Randall-Sundrum models with bulk fields are dual to Standard Model partial compositeness. We identify a holographic basis of 4D fields that allows for a quantitative description of the elementary/composite mixing in these theories.

We highlight the unexpected impact of nucleosynthesis on the detectability of tracking quintessence dynamics at late times, showing that dynamics may be invisible until Stage-IV dark energy experiments (DUNE, JDEM, LSST, SKA). Nucleosynthesis forces |w (0)|
{"url":"http://perimeterinstitute.ca/video-library?title=&page=580","timestamp":"2014-04-19T15:00:32Z","content_type":null,"content_length":"69253","record_id":"<urn:uuid:3d7789da-136f-4624-a0c0-51d06b0d1eaf>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
Hello...newbie wants to learn how to make a really cool math puzzle.

Re: Hello...newbie wants to learn how to make a really cool math puzzle.

Hi AdamKralic

Welcome to the forum! Excuse me for asking, but where are you from? It's okay if you don't want to say.

Last edited by anonimnystefy (2012-05-05 02:25:25)

The limit operator is just an excuse for doing something you know you can't.

“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman

“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=217016","timestamp":"2014-04-19T14:37:04Z","content_type":null,"content_length":"18673","record_id":"<urn:uuid:4e6633ca-0a4e-4d67-aa9c-8b368cfd2c8a>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
my slice of pizza

Stochastic Data Streams

I am revisiting the area of streaming algorithms, and thought I will use the MFCS talk to push in a new direction. These days, I do research like cows chew their cud: I swallow an idea or a thought as it pops up during the busy day, and much later, when I have time, say during travel, I revisit the idea and chew on it. Much, much later, a paper or talk emerges.

The new-ish direction is that of stochastic streams where a distribution is given a priori. Then, items arrive, drawn from this distribution, instantiating a stream. The performance of a streaming algorithm on this instance is calibrated against the OPT on this instance, or its expected behavior over all streams. We need nice problems in this model, or to separate it from the plain streaming model where the distribution is not known, or from the plain stochastic model in which the distribution is known, but the algorithms do not necessarily have to be streaming, etc. I have two potential examples in the writeup for MFCS, and one explained a little in the talk. Here are the slides (go to the end, this is a preview, it may change before the talk on Wednesday).

Labels: aggregator

7 Comments:

Sasho said... sounds like a good place to apply prophet inequalities?

Indeed. The resulting algorithm should be implementable on stream, though. -- Metoo

Hi Muthu, I'm surprised that in the last slide you labeled classical data streaming as "well-understood". I still feel like many known results are ad-hoc, without much unifying principle explaining "why" we should expect certain results to be true. Or am I being too hopeful in thinking such principles should exist?

Hi Jelani, Apologies: I was overzealous, or maybe, mentally, I hoped all things will be resolved in your thesis. :) In any case, your point is valid, I will tone down that claim.

Looks pretty nice. It seems like this might have some connections to the field of online learning and online to batch conversion bounds -- have you thought about that?

Similar thought as above, that is, your setting reminds me of online algorithms and competitive analysis.

Hi K and T, It is online algorithms, but with the resource constraints of streaming algorithms and additionally knowing the distribution a priori. Hi K, There may be online learning, but I am unable to pull out a precise connection. - Metoo
{"url":"http://mysliceofpizza.blogspot.com/2009/08/stochastic-data-streams.html","timestamp":"2014-04-17T21:23:43Z","content_type":null,"content_length":"23663","record_id":"<urn:uuid:8ec1e1a6-3c88-45b7-a6dc-342c8b801eb4>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00074-ip-10-147-4-33.ec2.internal.warc.gz"}
Well Ordering

September 6th 2010, 11:23 AM #1 Senior Member Apr 2009

Well Ordering

Define a well-ordering relation R on the set of rationals: $S = \{x = \frac{3^n \times 7^m}{11^t} \text{ for integers } n, m, t \ge 1\}$

So using the definition of well-ordering from Wikipedia, "a well-order relation on a set S is a total order on S with the property that every non-empty subset of S has a least element in this ordering."

In other words we need to ensure that the relation is transitive, antisymmetric and total, and that every non-empty subset of S has a least element. I am stuck on how to define such a relation... any help would be appreciated!

Question: why does the regular order $\le$ on $S$ not work?

One option that does work is the following. Let us define $T$ as a set of triples $\{(n,m,t)\in\mathbb{Z}^3\mid n,m,t\ge 1\}$ and $f:T\to S$ as a function $f(n,m,t)=(3^n\times 7^m)/11^t$. Since 3, 7, and 11 are pairwise relatively prime, $f$ is a bijection. Thus, any well-order $R'$ on $T$ gives rise to a well-order $R=\{(f(w_1),f(w_2))\mid w_1,w_2\in T, (w_1,w_2)\in R'\}$ on $S$. Now, one can take the lexicographical order on $T$ (see the remark about well-orders in the link).

Thanks emakarov. Say we had $S = \{\frac{3 \times 7}{11}, \frac{3^2 \times 7}{11}, \frac{3^3 \times 7}{11}, \frac{3 \times 7^2}{11}\}$

So a well-ordering relation on a set S is a total order on S with the property that every non-empty subset of S has a least element in this ordering. So $(x, y) \in R$ if $x \le y$:

$R = \{\left(\frac{3 \times 7}{11}, \frac{3^2 \times 7}{11}\right), \left(\frac{3 \times 7}{11}, \frac{3^3 \times 7}{11}\right), \left(\frac{3 \times 7}{11}, \frac{3 \times 7^2}{11}\right), \left(\frac{3^2 \times 7}{11}, \frac{3^3 \times 7}{11}\right), \left(\frac{3^2 \times 7}{11}, \frac{3 \times 7^2}{11}\right), \left(\frac{3^3 \times 7}{11}, \frac{3 \times 7^2}{11}\right), \left(\frac{3 \times 7}{11}, \frac{3 \times 7}{11}\right), \left(\frac{3^2 \times 7}{11}, \frac{3^2 \times 7}{11}\right), \left(\frac{3^3 \times 7}{11}, \frac{3^3 \times 7}{11}\right), \left(\frac{3 \times 7^2}{11}, \frac{3 \times 7^2}{11}\right)\}$

R is a total order on S since it is transitive, total and antisymmetric. But what I don't get is "with the property that every non-empty subset of S has a least element in this ordering" — this has nothing to do with the relation R whatsoever?? If we take any non-empty subset of S then the subset must have a least element; what's the point of creating a relation that is a total order on S? Thanks again.

Hint: the definition of "least" involves R. To get a better idea, I recommend finding out why the regular order <= is not a well-order on S.

Hmm sorry, I've thought about it and I still can't really understand why. I still don't get why we have to define a relation that is a total order on S, because if we take any non-empty subset of S it must have a least element anyway.

Hmm sorry I've thought about it and I still can't really understand why.

If you think that the standard order is a well-order on S, then what is the least element in S?

If we take any non-empty subset of S then the subset must have a least element, what's the point of creating a relation that is a total order on S?

I see two things that need clarification. First, not every set with a total order (or partial order) has a least element. Second, the definition of "least" is relative to the order. One can have a regular $\le$-least element or an R-least element for some completely different order relation R.
Therefore, if S does not have a least element with respect to the standard order $\le$, then in order to make S a well-ordered set, one has to come up with some other order R so that S, as well as every non-empty subset of S, has an R-least element.

By the way, a total order on a finite set is always a well-order. This is because for each element x, either x is the least, or you can find a smaller element; however, you can't find smaller and smaller elements forever because the set is finite. So, your example with finite S and R above does not show everything that is going on.

So you are saying that the well-ordering R is:

$R = \{\left(\frac{3 \times 7}{11}, \frac{3^2 \times 7}{11}\right), \left(\frac{3 \times 7}{11}, \frac{3^3 \times 7}{11}\right), \left(\frac{3 \times 7}{11}, \frac{3 \times 7^2}{11}\right), \left(\frac{3^2 \times 7}{11}, \frac{3^3 \times 7}{11}\right), \left(\frac{3^2 \times 7}{11}, \frac{3 \times 7^2}{11}\right), \left(\frac{3^3 \times 7}{11}, \frac{3 \times 7^2}{11}\right), \left(\frac{3 \times 7}{11}, \frac{3 \times 7}{11}\right), \left(\frac{3^2 \times 7}{11}, \frac{3^2 \times 7}{11}\right), \left(\frac{3^3 \times 7}{11}, \frac{3^3 \times 7}{11}\right), \left(\frac{3 \times 7^2}{11}, \frac{3 \times 7^2}{11}\right)\}$

So you are saying that the well-ordering R is:

$\textstyle R = \{\left(\frac{3 \times 7}{11}, \frac{3^2 \times 7}{11}\right), \left(\frac{3 \times 7}{11}, \frac{3^3 \times 7}{11}\right), \left(\frac{3 \times 7}{11}, \frac{3 \times 7^2}{11}\right), \left(\frac{3^2 \times 7}{11}, \frac{3^3 \times 7}{11}\right), \left(\frac{3^2 \times 7}{11}, \frac{3 \times 7^2}{11}\right), \left(\frac{3^3 \times 7}{11}, \frac{3 \times 7^2}{11}\right), \left(\frac{3 \times 7}{11}, \frac{3 \times 7}{11}\right), \left(\frac{3^2 \times 7}{11}, \frac{3^2 \times 7}{11}\right), \left(\frac{3^3 \times 7}{11}, \frac{3^3 \times 7}{11}\right), \left(\frac{3 \times 7^2}{11}, \frac{3 \times 7^2}{11}\right)\}$

Well-ordering on what set? On the finite S consisting of four elements, yes. As I said, any total order on that finite set is a well-order. On the original infinite S, no. It is not even a total order there.

I mean, what are you saying that the well-ordering relation R on the set of rationals $S = \{x = \frac{3^n \times 7^m}{11^t} \text{ for integers } n, m, t \ge 1\}$ is??? I don't seem to understand what answer you are giving, usagi_killer.

As for the answer to the original question, namely, a well-order on the infinite set S, I described it in my first reply. What we've been discussing after that was why the standard order is not a well-order on S, which implies the need for a different order R.
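As an aside (not part of the thread): emakarov's construction is easy to play with in code, since Python tuples already compare lexicographically.

def r_less(u, v):
    """R-order on S, with elements of S encoded by their unique exponent
    triples (n, m, t), i.e. 3**n * 7**m / 11**t.  Because 3, 7, 11 are
    pairwise coprime the encoding is a bijection, so the lexicographic
    well-order on triples induces a well-order R on S, even though S has
    no least element under the usual <= on the rationals."""
    return u < v  # tuple comparison in Python is lexicographic

# (1, 1, 1), i.e. 21/11, is the R-least element of all of S:
print(r_less((1, 1, 1), (1, 1, 2)))  # True, even though 21/121 < 21/11 in Q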
{"url":"http://mathhelpforum.com/discrete-math/155373-well-ordering.html","timestamp":"2014-04-20T20:01:03Z","content_type":null,"content_length":"66438","record_id":"<urn:uuid:97ff0ff7-fddf-4a2e-8a52-06d49cea4cab>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00333-ip-10-147-4-33.ec2.internal.warc.gz"}
Vector spaces.

A vector is a mathematical object with both magnitude and direction. This much should be familiar to you all. Consider the Cartesian plane, with coordinates x and y. Then any vector here can be specified by v = ax + by, where a and b are just numbers that tell us the projection of v on the x and y axes respectively. Now it is natural in this example to think of v as being an arrow, or "directed line segment" to give it its proper name. But as the dimension of the containing space increases, this becomes an unimaginable notion. And in any case, for most purposes this is a far too restricted idea, as I will show you later. So, I'm going to give an abstract formulation that will include the arrows in a plane, but will also accommodate more exotic vectors. So, here's a

Definition. A vector space V(F) is a set V of abstract objects called vectors, together with an associated scalar field F such that:

V(F) forms an abelian group;

for all v in V(F), all a in F, there is another vector w = av in V(F);

for any a, b in F, and v, w in V(F) the following axioms hold:

a(bv) = (ab)v;

1v = v;

a(v + w) = av + aw;

(a + b)v = av + bv.

Now I've used a coupla terms you may not be familiar with. Don't worry about the abelian group bit - probably you'd be appalled by the notion of a non-abelian group, just assume that here the usual rules of arithmetic apply. About the field, again don't worry too much - just think of it as being a set of numbers, again with the usual rules of manipulation (there is a technical definition, of course). But I do want to say a bit about fields.

The field F associated to the set V (one says the vector space V(F) is defined over F) may be real or complex. All the theorems in vector space theory are formulated under the assumption that F is complex, for the following very good reason: Suppose I write some complex number as z = a + bi, where i is defined by i² = -1. a is called the real part, and bi the imaginary part. When b = 0, z = a + 0i = a, and z is real. It follows that the reals are a subset of the complexes, therefore anything that is true for a complex field is of necessity true of a real field. However, there are notions attached to the manipulation of complex numbers (like conjugation, for example) which don't apply to real numbers, or rather are merely redundant, not wrong. Formulating theorems under the above assumption saves having two sets of theorems for real and complex spaces.

You will note, in the definition of a vector space above, that no mention is made of multiplication of vectors - how do you multiply arrows? It turns out there is a sort of way, well two really, but that will have to wait for another day. Meantime, any questions, I'll be happy to answer.
I’m not sure what this is going to lead to, though. Or maybe n×n matrices whose entries are elements in F? You can also multiply matrices (and this might be more interesting). Last edited by JaneFairfax (2007-04-12 04:57:07) Who wrote the novels Mrs Dalloway To the Lighthouse A: Click here for answer. Re: Vector spaces. Ricky: Yes, I was going to touch on polynomial space in a bit, I wanted to get established first, however, I think I may have to address this first; Jane: I don't at all like the way you expressed this, so I guess I'll have to take a bit of a detour. Consider R, the real numbers. We grew up with them and their familiar properties, and some of us became quite adapt at manipulating them in the usual ways. But we never stopped to think of what we really meant by R, and what the properties really represent. Abstract algebra, which is what we are doing here, tries to tease apart those particular properties that R shares with other mathematical It turns out that R is a ring, is a field, is a group, is a vector space, is a topological space......., but there are groups that are not fields, vector spaces that are not groups and so on. So it pays to be careful when talking about R, by which I mean one should specify whether one is thinking of R as a field or as a vector space. So, what you claim is not true, in the sense that the field R is a field, not a vector space. What is true, however, is that R, Q, C etc have the properties of both fields and vector spaces. So for example, if I say that -5 is a vector in R, I mean there is an object in R that has magnitude +5 in the -ve direction. Multiplication should be thought of as applying a directionless scalar, say 2, which is an object in R viewed as a scalar field. I hope this is clear. Sorry to ramble on, but it's an important lesson - R, C and Q are dangerous beasts to use as examples of, well anything, really, because they are at best atypical, and at worst very confusing. Legendary Member Re: Vector spaces. Sorry, I misunderstood you. I thought you wanted to find more vector spaces where “multiplication” of vectors is allowed. But then I didn’t see anything disastrously wrong with something being a group, a field, a totally ordered set, a vector space and a metric space all rolled into one. Last edited by JaneFairfax (2007-04-12 21:53:10) Who wrote the novels Mrs Dalloway To the Lighthouse A: Click here for answer. Re: Vector spaces. Sorry for the delay in proceeding, folks, I had a slight formatting problem for which I I have found a partial fix - hope you can live with it. But first this; When chatting to jane, I mentioned that the real line R is a vector space, and that we can regard some element -5 as a vector with magnitude +5 in the -ve direction. But of course there is an infinity of vector in R with magnitude +5 in the -ve direction. I should have pointed out that, in any vector space, vectors with the same magnitude and direction are regarded as equivalent, regardless of their limit points (they in fact form an equivalence class, which we can talk about, but it's not really relevant here) We agreed that a vector in the {x,y} plane can be written , where are scalar. We can extend this to as many Cartesian dimensions as we choose. Let's write , i = 1,2,...,n. (note that the superscripts here are powers, they are simply labels. Now, you may be thinking I've taken leave of my senses - how can we have more that 3 Cartesian coordinates? Ah, wait and see. But first this. 
You may find the equation a bit daunting, but it's not really. Suppose n = 3. All it says is that there is an object called v whose direction and magnitude can be expressed by adding up all the units (the a_i) that v projects on the coordinates x^i. Or, if you prefer, v has projection a_i along the ith coordinate. Now it is a definition of Cartesian coordinates that they are to be perpendicular, right? Then, to return to more familiar territory, the x-axis has zero projection on the y-axis, likewise all the other pairs. This suggests that I can write these axes in vector form, so take the x-axis as an example: x = ax + 0y + 0z; this is the definition of perpendicularity (is this a word?) we will use. So, hands up - what's "a"? Yeah, well this alerts us to a problem, which I'll state briefly before quitting for today. You all said "I know, a = 1" right? But an axis, by definition, extends to infinity, or at least it does if we so choose. So, think on this; an element of a Cartesian coordinate system can be expressed in vector form, but not with any real sense of meaning. The reason is obvious, of course: Cartesian (or any other) coordinates are not "real", in the sense that they are just artificial constructions, they don't really exist, but we have done enough to see a way around this. More later, if you want.

Re: Vector spaces.

So, it seems there are no problems your end. Good, where was I? Ah yes, but first this: I said that a vector space comprises the set V of objects v, w together with a scalar field F and is correctly written as V(F). Everybody, but everybody abuses the notation and writes V for the vector space; we shall do this here, OK with you? Good.

We have a vector v in some vector space V which we wrote as v = a_1x^1 + ... + a_nx^n, where the a_i are scalar and the x^i are coordinates. We also agreed that, when using Cartesian coordinates (or some abstract n-dimensional extension of them) we require them to be mutually perpendicular. Specifically we want the "projection" of each x^i on each x^j (i ≠ j) to be zero. We'll see what we mean by this in a minute. So now, I'm afraid we need a coupla definitions.

The inner product (or scalar product, or dot product) of v, w ∈ V is often written v · w = a (a scalar). So what is meant by v · w? Let's see this, in longhand; let v = a_1x^1 + ... + a_nx^n and w = b_1x^1 + ... + b_nx^n. Then v · w = Σ_{i,j} a_i b_j (x^i · x^j). Now this looks highly scary, right? So let me introduce you to a guy who will be a good friend in what follows, the Kronecker delta δ_ij. This is defined as δ_ij = 1 if i = j, otherwise δ_ij = 0. So we find that v · w = Σ_{i,j} a_i b_j δ_ij = Σ_i a_i b_i. So, to summarise, v · w = Σ_i a_i b_i. Now can you see why it's sometimes called the scalar product? Any volunteers?

Phew, I'm whacked, typesetting here is such hard work! Any questions yet?

Real Member

Re: Vector spaces.

Just a concern; when you say that you are assuming an orthonormal basis in cartesian coordinates (where the metric tensor is just the identity, g_ij = δ_ij). Perhaps you should make this assumption clearer. This is certainly fine in an introduction, as the inner product is given as ⟨v,w⟩ = a_i b^i usually when first introduced (at that stage usually you are not worried about vector spaces which use other inner products). Overall I just feel that the inner product section is a bit shaky. For example, you never tell the reader that x^i · x^j = δ^i_j even though you substitute it into your formula out of nowhere.
You are jumping a little ahead when you equate good ol' v · w with (v,w), but yes, I was getting to that (it's a purely notation convention). But I really don't think, in the present context, you should have both raised and lowered indices on the Kronecker delta; I don't think that makes any sense (but see below).

Zhylliolom wrote: This is certainly fine in an introduction, as the inner product is given as ⟨v,w⟩ = a_i b^i

Ah, well, again you are jumping ahead of me! Your notation usually refers to the product of the components of a dual vector with its corresponding vector, as I'm sure you know. I was coming to the dual space, in due course.

Zhylliolom wrote: Overall I just feel that the inner product section is a bit shaky. For example, you never tell the reader that x^i · x^j = δ^i_j even though you substitute it into your formula out of nowhere.

Well, I never did claim that equality, both my indices were lowered, which I think is the correct way to express it (I can explain in tensor language if you insist). But, yeah, OK, let's tidy it up, one or the other of us, but if you do want to have a pop (feel free!) make sure we are both using the same notation.

Re: Vector spaces.

So, lemme see if I can make the inner product easier. First recall the vector equation I wrote down: v = Σ_i a_i x^i. I hope nobody needs reminding this is just shorthand for v = a_1x^1 + a_2x^2 + ... + a_nx^n. The scalars a_i are called the components of v. Note that the x^i do not enter explicitly into the sum; they merely tell us in what coordinate system we want to evaluate our components. (v is a vector, after all!). So we can think of the a_i as the units our vector projects on the ith axis. But we can ask the same question of two vectors: what is the "shadow" that, say, v casts on w? Obviously this is at a maximum when v and w are parallel, and at a minimum when they are perpendicular (remember this, it will become important soon).

OK, so in the equation I wrote for the inner product what's going on? Specifically, why did I switch indices on b, and what happened to the coordinates, the x's? Well, it makes no sense to ask, in effect, what is v's projection in the x direction on w's projection on the y axis. So we only use like pairs of coordinates and components, in which case we don't need to write down coordinates. In fact it's better if we don't, since we want to emphasize the fact that the inner product is by definition scalar. This is what the Kronecker delta did for us. It is the mathematical equivalent of setting all the diagonal entries of a square matrix to 1, and all off-diagonal entries to 0!

One last thing. Zhylliolom wrote the inner product as something a bit like this <v,w>. I prefer (v, w), a symbol reserved in general for an ordered pair. There is a very good reason for this preference of mine, which we will come to in due course. But as always, I've rambled on too long!

Last edited by ben (2007-04-19 08:13:29)
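(An aside, not from the thread: the collapse of the double sum is easy to check numerically. A tiny sketch with made-up components, writing the Kronecker delta as an identity matrix:)

import numpy as np

a = np.array([1.0, 2.0, 3.0])   # components a_i of v
b = np.array([4.0, 0.0, -1.0])  # components b_j of w
delta = np.eye(3)               # Kronecker delta for an orthonormal basis

double_sum = a @ delta @ b      # sum over i, j of a_i * delta_ij * b_j
collapsed = np.dot(a, b)        # sum over i of a_i * b_i
assert np.isclose(double_sum, collapsed)
print(collapsed)                # 1.0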
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=68782","timestamp":"2014-04-20T21:27:41Z","content_type":null,"content_length":"40104","record_id":"<urn:uuid:6c8d0672-43ee-42a0-98f6-b26908bfe75d>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
Tight and taut immersions

From Encyclopedia of Mathematics

The total absolute curvature τ(f) of an immersion f of a manifold (cf. Immersion of a manifold) is expressed as an integral (a2) in terms of local invariants. It obeys the inequality (a1). Here β is the sum of the Betti numbers for Čech homology (cf. Čech cohomology). For a closed curve this involves the total curvature; for a surface, the Gaussian curvature and the Euler characteristic. The general definition of τ(f) is given by the integral (a2). The main problem is existence. One is interested also in the special properties of tight immersions for a given manifold M. Important is the following probabilistic definition: For smooth immersions one has (a3). The property connects tightness with Morse theory. The inequality (a1) is, in particular, a consequence of the Morse inequality.

Figure: t092810a
Figure: t092810b
Figure: t092810c
Figure: t092810d

An imbedding of spaces:

Figure: t092810e
Figure: t092810f

For curves and closed surfaces, the half-space property reduces to the two-piece property. The half-space definition places tightness in classical geometry and convexity theory. Thus it follows that tightness is a projective property (cf. Projective geometry), as it is clearly invariant under any projective transformation. Tautness, by contrast, belongs to conformal geometry (cf. Conformal geometry): it is invariant under any conformal (Möbius) transformation of the ambient space. In proofs an important role is played by Kuiper's fundamental theorem. For imbeddings it says: Top sets of tightly imbedded spaces are tight. A top set is the intersection with a supporting half-space or hyperplane in E^N.

Miscellaneous representative theorems, mainly mentioned for surfaces, are as follows. A tight closed curve is a plane convex curve (cf. Knot theory). The trefoil knot in Fig. a has τ > 2, so it is not tight. The first higher-dimensional theorem is due to S.S. Chern and J. Lashof (1957): A substantial (not in a hyperplane) immersion of a closed manifold with τ(f) = 2 is an imbedding onto a convex hypersurface.

There is no tight immersion of the real projective plane (cf. Projective plane) or of the Klein bottle (cf. Klein surface) into E^3, not even a continuous immersion. The case N = 5 is special: a smooth tight substantial closed surface in E^5 is the Veronese surface (cf. Veronese mapping). T. Banchoff (1965) suggested, however, and W. Kühnel (1980, see [a3]) proved, that except for the Klein bottle, a tight substantial polyhedral surface in E^N can exist for larger N. In this context there is another remarkable theorem. A substantial tight continuous immersion of the real projective plane into E^5 is either the analytic Veronese surface or the polyhedral surface obtained from the 6-vertex triangulation [a11]. Every smooth immersion of a surface with [a15]). For the other surfaces the results are not yet complete. Every orientable surface with be analytic except for the torus (G. Thorbergsson, [a19]). Every smooth imbedded knotted orientable surface in [a10] proved that there do exist "isotopy-tight" knotted surfaces with Smooth immersions of surfaces in Tight analytic surfaces in [a3], p. 81, and (cf. Rigidity). Hardly anything more is known about higher-dimensional cases; see [a9]. The image is a triangulation of the complex projective plane with nine vertices.
Finally, a remarkable result due to H.-F. Münzer [a12] is that for an isoparametric 3) A submanifold [a4], the latter finally obtained the result that closed totally focal manifolds are the same as closed isoparametric submanifolds. Note that any Möbius transform, or stereographic projection, or tubular [a2]. The product of two taut imbeddings is taut, and cylinders and surfaces of revolution built from taut imbeddings are taut (see [a3] and [a15]). All closed taut submanifolds that are now known (1990) have been obtained by these and some other new constructions (see [a18] and [a13]). Perhaps these exhaust all possibilities. For a wealth of other results and generalizations see the references. [a1] R. Bott, H. Samelson, "Applications of the theory of Morse to symmetric spaces" Amer. J. Math. , 80 (1958) pp. 964–1029 MR0105694 Zbl 0101.39702 [a2] T.E. Cecil, S.S. Chern, "Tautness and Lie sphere geometry" Math. Ann. , 278 (1987) pp. 381–399 MR0909233 Zbl 0635.53029 [a3] T.E. Cecil, P.J. Ryan, "Tight and taut immersions of manifolds" , Pitman (1985) MR0781126 Zbl 0596.53002 [a4] S. Carter, A. West, "Isoparametric and totally focal submanifolds" Proc. London Math. Soc. , 60 (1990) pp. 609–624 MR1044313 Zbl 0663.53045 [a5] D. Ferus, H. Karcher, H.F. Münzer, "Clifford Algebren und neue isoparametrische Hyperflächen" Math. Z. , 177 (1981) pp. 479–502 MR624227 [a6] W.Y. Hsiang, R.S. Palais, C.L. Terng, "The topology of isoparametric submanifolds" J. Diff. Geom. , 27 (1988) pp. 423–460 MR0940113 Zbl 0618.57018 [a7] N.H. Kuiper, "Tight embeddings and maps" W.Y. Hsiang (ed.) et al. (ed.) , The Chern Symposium (1979) , Springer (1980) pp. 97–145 MR0609559 Zbl 0461.53033 [a8] N.H. Kuiper, "Taut sets in three-space are very special" Topology , 23 (1984) pp. 323–336 MR0770568 Zbl 0578.53044 [a9] W. Kühnel, T. Banchoff, "The 9-vertex complex projective plane" The Math. Intelligencer , 5 : 3 (1983) pp. 11–22 MR0737686 Zbl 0534.51009 [a10] N.H. Kuiper, W.F., III Meeks, "Total curvature for knotted surfaces" Invent. Math. , 77 (1984) pp. 25–69 MR0751130 Zbl 0553.53034 [a11] N.H. Kuiper, W.F. Pohl, "Tight topological embedding of the real projective plane in Invent. Math. , 42 (1977) pp. 177–199 MR0494122 [a12] H.-F. Münzner, "Isoparametrische Hyperflächen in Sphären II" Math. Ann. , 256 (1981) pp. 215–232 MR0620709 Zbl 0438.53050 [a13] R. Miyaoka, T. Ozawa, "Construction of taut embeddings and the Cecil–Ryan conjecture" , Proc. 1988 Symp. Differential geometry , Acad. Press (1990) [a14] T. Ozawa, "On critical sets of distance functions to a taut submanifold" Math. Ann. , 276 (1986) pp. 91–96 MR0863709 [a15] U. Pinkall, "Tight surfaces and regular homotopy" Topology , 25 (1986) pp. 475–481 MR0862434 Zbl 0605.53027 [a16] R.S. Palais, C.L. Terng, "Critical point theory and submanifold geometry" , Lect. notes in math. , 1353 , Springer (1988) MR0972503 Zbl 0658.49001 [a17] U. Pinkall, G. Thorbergsson, "Taut 3-manifolds" Topology , 28 (1989) pp. 389–402 MR1030983 Zbl 0686.53050 [a18] U. Pinkall, G. Thorbergsson, "Deformations of Dupin hypersurfaces" Proc. Amer. Math. Soc. , 107 (1989) pp. 1037–1043 MR0975655 Zbl 0682.53061 [a19] G. Thorbergsson, "Tight analytic surfaces" Topology (Forthcoming) MR1113686 Zbl 0727.57031 [a20] G. Thorbergsson, "Isoparametric foliations and their buildings" Ann. of Math. , 31 (1991) pp. 429–446 MR1097244 Zbl 0727.57028 How to Cite This Entry: Tight and taut immersions. Encyclopedia of Mathematics. 
URL: http://www.encyclopediaofmath.org/index.php?title=Tight_and_taut_immersions&oldid=24579 This article was adapted from an original article by N.H. Kuiper (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
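For orientation, the Chern–Lashof/Kuiper inequality that the article's (a1) refers to reads, in one common normalization (stated here from the general literature, not quoted from this article, so the constants should be checked against a source):

\[
\tau(f) \;=\; \frac{1}{\operatorname{vol}(S^{N-1})}\int_{\nu^{1}M}\bigl|\det A_{\xi}\bigr|\,d\xi
\;\geq\; \sum_{i} b_{i}(M;F) \;=\; \beta(M;F),
\]

where A_ξ is the shape operator in the unit normal direction ξ and the b_i are Betti numbers over a field F; an immersion is tight when τ(f) attains this minimum. For closed curves this specializes to Fenchel's theorem (τ ≥ 2, with equality exactly for plane convex curves), and Fáry–Milnor gives τ ≥ 4 for knotted curves such as the trefoil.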
{"url":"http://www.encyclopediaofmath.org/index.php/Tight_and_taut_immersions","timestamp":"2014-04-17T09:39:34Z","content_type":null,"content_length":"61585","record_id":"<urn:uuid:2245753d-99f8-4fbd-8883-8c8f2f8c49a7>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
El Granada Precalculus Tutor Find an El Granada Precalculus Tutor ...I am very effective in helping students to not just get a better grade, but to really understand the subject matter and the reasons why things work the way they do. I do this in a way that is positive, supportive, and also fun. I explain difficult math and science concepts in simple English, and continue working with the students until they understand the concepts really well. 14 Subjects: including precalculus, calculus, statistics, geometry ...I enjoyed helping my classmates with their challenges as math has always been one of my favorite subjects, and I continued to help my classmates during my free time in college. Now I am happy to become a professional tutor so I can help more students. My teaching method is emphasized on conceptual learning rather than rote memorization. 22 Subjects: including precalculus, calculus, algebra 1, algebra 2 ...I have a Master's degree in Applied Economics from the University of Santa Clara. My specialty is in Microeconomics, but I am very familiar with all the major aspects of free-market economic theory, including Macroeconomics, Econometrics, Money & Banking and International Economics. I have stro... 22 Subjects: including precalculus, calculus, geometry, statistics ...It takes time. It takes effort. It helps to have someone you're completely comfortable guiding you along the way. 8 Subjects: including precalculus, reading, algebra 1, algebra 2 ...I am a native Spanish speaker, so I can tutor Spanish and English as a Second Language. Additionally, I can tutor any of the technical subjects that I mentioned above in Spanish as well as English. My teaching methodology is to aid my students in developing an intuition that will allow them to ... 15 Subjects: including precalculus, Spanish, calculus, ESL/ESOL
{"url":"http://www.purplemath.com/El_Granada_Precalculus_tutors.php","timestamp":"2014-04-21T07:13:43Z","content_type":null,"content_length":"24142","record_id":"<urn:uuid:8028511c-5d12-4a72-b797-68df64a3fc4a>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Do Your Algebra Homework Using Online Tools

Edited by Annie, Teresa, Maluniu, Tangy Tom and 5 others

Having trouble in your Algebra homework? Don't know how to solve the math problem in front of you? No one to contact for help and assistance? No more worries starting today! Using this genius Algebra Solver, you'll finish your homework in minutes!

1. Go to MyAlgebra. The webpage should look like the photo on the right.

2. Type in the math problem. Type it next to "Enter Problem." There are various signs and symbols to use, such as pi (π), division sign (÷), and greater than or equal to (≥). Click the desired sign.

□ If your problem has an exponent, use the caret sign (^) and the exponent number. For example: to input 6², type in 6^2.

3. Check the format. To check if you typed it correctly, select "Show" right next to "Math Format." If it's not correct, fix it.

4. Select the topic. Do you want to simplify the expression? Factor it? Evaluate it? Use Cramer's rule? Whichever it is, choose one next to "Select Topic."

5. Click "Answer." Your answer will be displayed right before your eyes.

• Another very similar website is Mathway. Type in the problem, and you'll get your answer! Use whichever website you're more comfortable with.

• Take a while to get familiar with this website. It might take some time to know how to use it well.

• If you would like to have the steps shown on how to solve it, you can purchase the software.

• WolframAlpha is another, more versatile website with methods for performing algebraic functions, albeit with more learning needed.

• Make sure you also learn the lesson and how to solve the problems. Otherwise, you won't do so well on exams.

• Although it is a great tool, it is highly suggested to just use it to check your answer. If you are not even trying to solve the math problem, you will never learn how and will probably fail the tests too.

• Homework is to help you learn, so try not to use this Algebra Solver often.

• Although it is rare, it might not solve some math problems, or might solve them incorrectly. Be careful, and double check the answer.
{"url":"http://www.wikihow.com/Do-Your-Algebra-Homework-Using-Online-Tools","timestamp":"2014-04-16T19:07:36Z","content_type":null,"content_length":"64886","record_id":"<urn:uuid:1742911c-fd3d-4030-9f22-615583951117>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
November 26th 2007, 03:41 PM

If $\bold{F}(x,y) = \frac{k(x \bold{i} + y \bold{j})}{x^{2}+y^{2}}$ find the work done by $\bold{F}$ in moving a unit charge along a straight line segment from $(1,0)$ to $(1,1)$.

So $\bold{F}(1,y) = \frac{k(\bold{i} + y \bold{j})}{1 + y^{2}}$. Then $x = 1, \ y = y$.

$k \int_{0}^{1} \frac{y}{1+y^{2}} \ dy$

$u = 1+y^{2}$

$du = 2y \ dy$

$\frac{k}{2} \int \frac{du}{u}$

$= \frac{k}{2} \ln|1+y^{2}| \Big|_{0}^{1}$

$= \frac{k \ln 2}{2}$.

Is this correct?

November 26th 2007, 06:33 PM

Essentially the work is just in the y-direction, right?

November 27th 2007, 07:38 AM

Yes it is, because the displacement is also in the +y direction. (Typically we'd have to take a component of the force along the y direction and wind up with a cosine term, but the force was conveniently in coordinate form to begin with.)
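A quick numerical cross-check of the integral (my own sketch, with k = 1 for concreteness):

import math
from scipy.integrate import quad

k = 1.0
# Along x = 1, F . dr reduces to F_y dy = k*y/(1 + y**2) dy, y from 0 to 1.
work, _ = quad(lambda y: k * y / (1 + y**2), 0, 1)
print(work)                   # 0.34657...
print(0.5 * k * math.log(2))  # matches (k/2) ln 2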
{"url":"http://mathhelpforum.com/advanced-applied-math/23557-work-print.html","timestamp":"2014-04-16T09:09:11Z","content_type":null,"content_length":"7012","record_id":"<urn:uuid:da80afb3-2440-491b-86bd-d3ea7054c3d2>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
Basic BSP Dungeon generation

From RogueBasin

A simple method to generate a basic dungeon using a bsp tree

Building the BSP

We start with a rectangular dungeon filled with wall cells. We are going to split this dungeon recursively until each sub-dungeon has approximately the size of a room. The dungeon splitting uses this operation:

• choose a random direction : horizontal or vertical splitting

• choose a random position (x for vertical, y for horizontal)

• split the dungeon into two sub-dungeons

Now we have two sub-dungeons A and B. We can apply the same operation to both of them:

When choosing the splitting position, we have to take care not to be too close to the dungeon border. We must be able to place a room inside each generated sub-dungeon. We repeat until the lowest sub-dungeons have approximately the size of the rooms we want to generate.

Different rules on the splitting position can result in homogeneous sub-dungeons (position between 0.45 and 0.55) or heterogeneous ones (position between 0.1 and 0.9). We can also choose to use a deeper recursion level on some parts of the dungeon so that we get smaller rooms there.

Building the dungeon

Now we create a room with random size in each leaf of the tree. Of course, the room must be contained inside the corresponding sub-dungeon. Thanks to the BSP tree, we can't have two overlapping rooms.

To build corridors, we loop through all the leafs of the tree, connecting each leaf to its sister. If the two rooms have face-to-face walls, we can use a straight corridor. Else we have to use a Z shaped corridor. Now we get up one level in the tree and repeat the process for the parent sub-regions. Now, we can connect two sub-regions with a link either between two rooms, or a corridor and a room, or two corridors. We repeat the process until we have connected the first two sub-dungeons A and B:
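The recursive split described above is easy to express in code. Here is a rough Python sketch (my own, not from the wiki; the minimum-size constant and class names are illustrative):

import random

MIN_SIZE = 6  # smallest allowed sub-dungeon side, so a room always fits

class Node:
    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h
        self.children = None  # leaves get rooms later

    def split(self):
        # Stop when the region is already roughly room-sized.
        if self.w <= 2 * MIN_SIZE and self.h <= 2 * MIN_SIZE:
            return
        # Random direction, forced if the region is too narrow one way.
        horizontal = random.random() < 0.5
        if self.w <= 2 * MIN_SIZE:
            horizontal = True
        elif self.h <= 2 * MIN_SIZE:
            horizontal = False
        if horizontal:
            # Keep the cut away from the borders so a room fits either side.
            cut = random.randint(MIN_SIZE, self.h - MIN_SIZE)
            self.children = (Node(self.x, self.y, self.w, cut),
                             Node(self.x, self.y + cut, self.w, self.h - cut))
        else:
            cut = random.randint(MIN_SIZE, self.w - MIN_SIZE)
            self.children = (Node(self.x, self.y, cut, self.h),
                             Node(self.x + cut, self.y, self.w - cut, self.h))
        for child in self.children:
            child.split()

root = Node(0, 0, 80, 50)
root.split()

Restricting the cut to, say, 45–55% of the side instead of the full range gives the homogeneous sub-dungeons mentioned in the article.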
{"url":"http://www.roguebasin.com/index.php?title=Basic_BSP_Dungeon_generation","timestamp":"2014-04-18T03:05:10Z","content_type":null,"content_length":"14337","record_id":"<urn:uuid:375cb1b3-93cd-4b21-878e-bc0a2a416abe>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
Lower bounds for Petersson inner products of cuspforms with integral Fourier coefficients

Let $N\geq 1$ be an integer and let $S_2(\Gamma_0(N))$ be the cusp forms of weight 2 for the usual congruence subgroup $\Gamma_0(N)\subset SL_2(\mathbb Z)$. Let $a_n(f)$ denote the n-th Fourier coefficient of $f\in S_2(\Gamma_0(N))$, $n\geq 1$. Let $X_0(N)$ be a smooth projective model over $\mathbb Q$ of the modular curve associated to $\Gamma_0(N)$ and let $$(f,g)=\int_{X_0(N)}f(z)\overline{g(z)}dxdy,$$ $z=x+iy$, be the Petersson inner product of $f,g\in S_2(\Gamma_0(N))$.

Question (a) Does there exist a constant $c>0$, depending at most on $\Gamma_0(N)$ (or $X_0(N)$), with the following property: Suppose $f\in S_2(\Gamma_0(N))$ is an eigenform with $a_1(f)=1$ which lies in the new part, $g\in S_2(\Gamma_0(N))$ and $a_n(f),a_n(g)\in\mathbb Z, n\geq 1$. If $(f,g)>0$, then $$(f,g)\geq c.$$

(b) Can one find such a constant $c>0$ with an explicit dependence on $N$.

(c) Can one find such a constant $c>0$ which is absolute.

My main interest is when $f$ in the above question is in addition a newform for $\Gamma_0(N)$ with $a_1(f)=1$, but I don't know if this assumption simplifies things. Further, David Loeffler mentions below that $(f,g)$ is related to a residue of a certain $L$-series at $s=2$.

I am a bit confused about your last remark. If you take two different newforms $f$ and $g$ satisfying your normalization, then $(f,g)=0$ by multiplicity one. – GH from MO Apr 9 '13 at 16:06

Dear GH, thanks for the remark. The assumptions of the question still hold: There I assume that (f,g)>0. I will edit the question to make this more clear. – ranicl Apr 9 '13 at 16:51

Given $f$, the possible $(f,g)$ form a subgroup of ${\bf R}$, which is either discrete or dense. Once the space of cuspforms has dimension at least $2$ one would expect it to be dense unless $f = 0$ (why should two or more "random" Petersson products be proportional?), and thus to contain arbitrarily small positive elements. Is there a further missing assumption? – Noam D. Elkies Apr 9 '13 at 17:24

If you assume that $f$ is a newform with integer coefficients, then the orthogonal of $f$ with respect to the Petersson scalar product admits a basis consisting of forms with integral coefficients, thus the image of the linear map on $S_2(\Gamma_0(N),\mathbf{Z})$ given by $g \mapsto (f,g)$ is a lattice in $\mathbf{R}$. Thus $c$ exists in this case, but it is not clear how to compute a lower bound in terms of $N$ because of the possible congruences between $f$ and other newforms as David explained. – François Brunault Apr 9 '13 at 21:39

Ah, I saw the "integer coefficients" part but didn't appreciate the significance of "newform" (implying not just in the "new" space but an actual Hecke eigenform). – Noam D. Elkies Apr 10 '13 at

2 Answers

It's not hard to see that the answer to (a) is yes. There is a basis of $S_2^{\textrm{new}}(\Gamma_0(N))$ consisting of newforms. These newforms come into Galois orbits $\{f^\sigma\}_{\sigma}$. Here $\sigma$ runs through the embeddings of $K_f$ into $\mathbf{C}$, where $K_f$ is the field generated by the Fourier coefficients of $f$. A basic fact is that the $\mathbf{C}$-span of a given Galois orbit is generated by cusp forms with integral coefficients. This follows from considering the forms $\sum_{\sigma} \sigma(a) f^\sigma$ where $a$ runs through a $\mathbf{Z}$-basis of the ring of integers of $K_f$.
It follows that if the newform $f$ has integral coefficients, then its orthogonal complement $f^\perp$ is generated by cusp forms with integral coefficients. Now consider the linear map $\lambda_f : S_2(\Gamma_0(N),\mathbf{Z}) \to \mathbf{R}$ given by $\lambda_f(g)=(f,g)$. By the previous remark, the kernel of $\lambda_f$ has rank one less than the rank of $S_2(\Gamma_0(N),\mathbf{Z})$, which implies that the image of $\lambda_f$ is of the form $c_f \mathbf{Z}$ for some $c_f >0$. Thus we can take for $c$ the minimum of all the $c_f$. In fact $c_f = (f,f)/m_f$ for some integer $m_f$ measuring the congruences of $f$ with other cusp forms.

Dear Francois Brunault, that's a fantastic answer. Thank you very much!!! – ranicl Apr 11 '13 at 11:06

I typed a comment but the formatting wouldn't come out right, so here it is as an answer! I cannot work out why you expect the "Plancherel or Parseval type" formula to work. Does it not bother you a little that $a_n(f)$ and $a_n(g)$ are perfectly capable of being integers for all $n$, so your series is obviously divergent? Much better is to consider the series $$ L(f, g, s) = \sum_{n \ge 1} a_n(f) \overline{a_n(g)} n^{-s},$$ which converges for $Re(s) > 2$ (this is not so easy to see, but it is easy to show that it converges for $Re(s) \gg 0$). This has meromorphic continuation to all $s \in \mathbb{C}$ with a pole at $s = 2$ at which the residue is (maybe up to a normalizing constant depending on $N$ that I have forgotten) the Petersson product $\langle f, g \rangle$. But this does not help you to get lower bounds on $\langle f, g \rangle$ as far as I can see.

Some quite grotty things can happen, e.g. if $f$ is an eigenform and there is another newform $f'$ with $f = f'$ modulo some integer $N$, then one can take $g = (f - f')/N$, and this will be integral but its Petersson product with $f$ will be $\langle f, f \rangle / N$. So the issue of bounding Petersson products below is quite closely related to the issue of congruences between eigenforms.

Dear David Loeffler, thanks a lot for the helpful comment. I will edit my question according to your comment. – ranicl Apr 9 '13 at 13:09
{"url":"http://mathoverflow.net/questions/126969/lower-bounds-for-petersson-inner-products-of-cuspforms-with-integral-fourier-coe/126976","timestamp":"2014-04-17T04:29:33Z","content_type":null,"content_length":"64208","record_id":"<urn:uuid:7e1d459c-77d7-4b26-a069-521ae2d3f454>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
st: RE: Interpretting coefficinets in a fractional logit model?

From "Verkuilen, Jay" <JVerkuilen@gc.cuny.edu>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: Interpretting coefficinets in a fractional logit model?
Date Tue, 8 Apr 2008 17:12:39 -0400

>>>Is there a convenient strategy for interpreting the coefficients in a fractional logit model? The coefficients giving the expected change in the response for a 1 unit increase in the predictor fail to provide an intuitive sense of magnitude. At least they are not intuitive to me. Suggestions would be much appreciated.<<

Mike Smithson and I grappled with this question for a fair bit working on our article on beta regression, in the hope that a reasonable scalar effect size measure could be found. The answer seems to be---much like for logistic regression of binary dependent variables---no. In general the best strategy is to use the same basic methods used for logistic regression, i.e., generating predicted values for the mean. These predicted values should work exactly the same as for logistic regression of binary dependent variables.

http://psychology.anu.edu.au/people/smithson/details/betareg/betareg.html

If you are using the quasi-Bernoulli approach of Wedderburn, things would be the same. The book by J. S. Long (1998), Regression Models for Categorical and Limited Dependent Variables, Sage, provides a very good discussion of the problem of summarizing values from nonlinear regression models. Highly recommended.
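For readers outside Stata, the quasi-Bernoulli fit and the predicted-value strategy Verkuilen describes might look like the following in Python (a sketch of mine, not from the thread; the data and variable names are placeholders, and statsmodels' binomial GLM accepts proportional responses):

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic fractional response in [0, 1], just so the sketch runs.
rng = np.random.default_rng(0)
df = pd.DataFrame({"x1": rng.normal(size=200), "x2": rng.normal(size=200)})
eta = -0.3 + 0.8 * df["x1"] - 0.5 * df["x2"]
df["y"] = 1 / (1 + np.exp(-eta))

# Fractional logit: binomial family on a proportion, robust (sandwich) SEs.
res = smf.glm("y ~ x1 + x2", data=df,
              family=sm.families.Binomial()).fit(cov_type="HC1")

# Interpret via predicted means rather than raw coefficients:
lo = df.assign(x1=df["x1"].quantile(0.25))
hi = df.assign(x1=df["x1"].quantile(0.75))
print((res.predict(hi) - res.predict(lo)).mean())  # avg effect on E[y]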
{"url":"http://www.stata.com/statalist/archive/2008-04/msg00336.html","timestamp":"2014-04-20T11:33:49Z","content_type":null,"content_length":"6405","record_id":"<urn:uuid:81933d55-194a-497f-8f21-c7ed28a52d97>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
CGTalk - Ring conforms to the circumference of sphere

12-08-2008, 09:37 AM

Not a script question per se but a math related one.

1. a sphere of radius: 100
2. a circle shape of radius: 100
3. I align the circle to the sphere

I want the circle to start with radius 100 and when I take it (or animate it) down to the bottom of the sphere (pole) it should zero out in radius. The ring conforms to the circumference on each step of its way down. What would the math be in this situation?

I could hack this by:

1. At each step along its downward path (circle pivot), shoot a ray to the sphere and use the magnitude to drive the radius.

But that won't further my mathematical understanding of the problem. Thanks for any help.
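(Not from the thread, but the math being asked for is just Pythagoras: a plane at axial distance d from the sphere's center cuts the sphere in a circle of radius sqrt(R² − d²), so the ring's radius can be driven directly from its height, with no raycast. A minimal sketch:)

import math

def ring_radius(R, d):
    """Radius of the circle where a plane at axial offset d from the
    sphere's center intersects a sphere of radius R: full R at the
    equator (d = 0), zero at the pole (d = R)."""
    return math.sqrt(max(R * R - d * d, 0.0))

print(ring_radius(100.0, 0.0))    # 100.0
print(ring_radius(100.0, 50.0))   # 86.60...
print(ring_radius(100.0, 100.0))  # 0.0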
{"url":"http://forums.cgsociety.org/archive/index.php/t-705408.html","timestamp":"2014-04-17T13:08:58Z","content_type":null,"content_length":"6755","record_id":"<urn:uuid:141dd0bd-4e32-4e0d-a219-32a50cfc6bc5>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the advantage of a one-tailed test over a two-tailed - JustAnswer

Hi there! I'm working on your questions now. Will be back shortly with your answers. Do you have a hard deadline?

I am about on top of the deadline do you know how much longer you may be?
{"url":"http://www.justanswer.com/homework/7u3jz-advantage-one-tailed-test-two-tailed.html","timestamp":"2014-04-19T12:57:09Z","content_type":null,"content_length":"89993","record_id":"<urn:uuid:d53e49f4-5872-49c9-88e9-4f707e397bc3>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
Do We Need a 37-Cent Coin?

Dubner thinks we should do away with the penny. A young economist I know, Patrick DeJarnette, believes a much more radical change in currency is warranted. Here is what Patrick writes:

Late one night I was curious how efficient the "penny, nickel, dime, quarter" system was, so I wrote a little script to compare all possible 4-coin systems, with the following stipulations:

1. Some combination of coins must reach every integer value in [0,99].

2. Probability of a transaction resulting in value v is uniform from [0,99]. In other words, you start with $10 and no coins. You buy something at the store. Afterward, the chance you have 43 cents in your pocket is equal to the probability that you have 29 or 99 cents in your pocket (in addition to any bills).

Requirement (1) implies the penny is necessary, as you must have a combination of coins that reaches value = 1 cent. With this in mind, the current combination of coins (penny, nickel, dime, quarter) results in an average of 4.70 coins per transaction. What's a little surprising is how inefficient our current setup is! It's only the 2,952nd most efficient combination. There are effectively 152,096 different combinations of penny + three coins. In other words, it's only in the 98th percentile for efficiency.

How can you tell that Patrick is a young economist from the preceding discussion? Because he finds that the current government solution for the coins we use is 98 percent efficient and thinks this is inefficient. The other day I was walking through the halls of the University of Chicago economics department and heard a faculty member say that the right rule of thumb for government spending is that it is worth only 10 cents on the dollar because of inefficiency.

Anyway, Patrick then tackles the question of which combinations of coins would be most efficient:

The most efficient systems? The penny, 3-cent piece, 11-cent piece, 37-cent piece, and (1,3,11,38) are tied at 4.10 coins per transaction. But no one wants an 11-cent piece! There are other ways to look at efficiency; and given human limitations, this would result in a lot of errors and transactions would take more time.

□ (1,4,15,40) is the first "reasonable looking" combination, with 4.14 coins per transaction.

□ (1,3,10,35) also does well, with 4.16 coins per transaction.

But what if we restrict ourselves to "all coins (except pennies) are multiples of 5"? There are 18 different combinations that are more efficient than our current setup, (1,5,15,40) being the most efficient at 4.40 coins per transaction. Some other examples:

□ (1,5,15,35) at 4.50 coins.

□ (1,5,10,30) at 4.60 coins.

If we were to change just one of our current coins, what would be the most efficient?

□ Changing the nickel to a 3-cent piece increases efficiency to 4.22 coins per transaction.

□ Changing the dime to an 11-cent piece increases efficiency to 4.46 coins per transaction. (Although the 11-cent piece is unreasonable.)

□ Changing the quarter to a 30-cent piece increases efficiency to 4.60 coins per transaction. (Changing it to a 28-cent piece increases efficiency to 4.50, but that seems unreasonable.)

Therefore, changing the nickel is the most efficient thing. Not surprisingly, losing the dime entirely only costs us ~0.8 coins per transaction in efficiency; it does the least good of the existing coins.

COMMENTS: 139

1. I love it.

2.
Getting rid of the penny now will be no worse than getting rid of the half-cent in 1857 – there were howls of protest, but eventually people learned to live without it, and then wondered why it was still being manufactured. Yes, prices were rounded up to the nearest cent, instead of the nearest half-cent. The U.S. economy did not fall apart.

3. SO academic. How about no pennies, round to the nearest .05? It works in Monopoly…

4. How can you tell that this young economist is American? From the fact that he does not question the assumption that dollar bills should be retained in place of dollar coins (or perhaps 123-cent coins).

5. The 37 cent coin would give us a golden opportunity to finally put Nixon on a coin (our 37th president).

6. I think you're missing one thing that, at a practical level, would be a requirement (or at least strongly desirable): you should be able to come up with an even dollar with any single coin denomination. (4 quarters, 10 dimes, etc.) Of all the alternate denominations you listed as possible, only the 4 cent piece would fit that description. I think any coin that didn't fit that criteria would and should be shot down.

7. There must be a reason why the Euro has 1, 2, 5, 10, 20, and 50 cent coins. The fallacy of this approach is that it starts with the inherent limitation of 4 coins in circulation. One should instead solve for the optimal coins per transaction (the absolute minimum, as shown, might not be practical to implement, e.g., 37c coins etc).

8. It's interesting that these combinations — particularly the most efficient ones — approximate an exponential curve. It's just a hunch, but I'd bet that in general, the optimal solution for any problem like this is an exponential (allow, say, a different interval than [0,99], and/or a different number of coins).

- The Greedy combinations (the ones where you start with the biggest coin) are indeed exponential, or close. 100^(1/n) gives you the basic spread between numbers for n coins. 4 coins means each coin value should be roughly 2.7 times the amount of the one before it. (1, 3, 11, 37) When you throw in a bunch of coins like the Euro, with six, each is roughly 1.6 times the one before it, which makes finding round numbers easier. (1, 2, 5, 10, 20, 50)
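For readers who want to check these numbers, the brute-force search Patrick describes is easy to reproduce. The following is a minimal Python sketch, my own reconstruction rather than Patrick's actual script; like his, it assumes transaction values uniform on [0,99] and computes the fewest coins needed for each value by dynamic programming. It reproduces the 152,096 candidate systems, the 4.70 average for (1,5,10,25), and the 4.10 optimum for (1,3,11,37).

    from itertools import combinations

    def avg_coins(coins, max_value=99):
        # best[v] = fewest coins summing to exactly v cents (0 cents needs 0 coins)
        INF = float("inf")
        best = [0] + [INF] * max_value
        for v in range(1, max_value + 1):
            for c in coins:
                if c <= v and best[v - c] + 1 < best[v]:
                    best[v] = best[v - c] + 1
        # average over the uniform distribution on [0, 99]
        return sum(best) / (max_value + 1)

    # The penny must be in every system so that value 1 is reachable.
    systems = [(1,) + rest for rest in combinations(range(2, 100), 3)]
    print(len(systems))                              # 152096 candidate systems
    print(avg_coins((1, 5, 10, 25)))                 # current US system: 4.70
    print(min((avg_coins(s), s) for s in systems))   # (4.1, (1, 3, 11, 37))

The full search over all 152,096 systems takes a minute or two in pure Python; the dynamic program is what guarantees the coin count is minimal even for systems where greedy change-making fails.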
{"url":"http://freakonomics.com/2009/10/06/do-we-need-a-37-cent-coin/","timestamp":"2014-04-17T16:54:49Z","content_type":null,"content_length":"74640","record_id":"<urn:uuid:2e45f891-28b0-4dba-ac89-3514a7c5eccc>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help
February 17th 2011, 12:07 PM #1 Senior Member Apr 2008

Which of the following sets are subrings of the field R of real numbers?
a) $A=\{m+n\sqrt{2} \mid m,n \in \mathbb{Z},\ n \text{ even}\}$
b) $B=\{m+n\sqrt{2} \mid m,n \in \mathbb{Z},\ m \text{ odd}\}$
c) $C=\{a+b\sqrt[3]{2} \mid a,b \in \mathbb{Q}\}$
d) $D=\{a+b\sqrt[3]{3}+c\sqrt[3]{9} \mid a,b,c \in \mathbb{Q}\}$
My problem is getting started. I know a set R is a subring if it is closed under addition and multiplication, if a is in R then -a is in R, and R contains the identity.

Just check the things you mentioned. For example, B is clearly not closed under addition. Also, it is not necessarily required that a subring contain 1, although it's possible that in your setting, rings are assumed to have multiplicative identities.

ok so $(m+n\sqrt{2})+(x+y\sqrt{2})$; m is even so call it 2a, and x even so call it 2b: $2(a+b)+(n+y)\sqrt{2}$. Ok that makes sense for checking closure of addition. I think I can figure out multiplication. Now what about the a and -a stuff? a is in $m+n\sqrt{2}$. Now I get stuck on going further.

Well it looks like you're trying to show that $A$ is closed under addition, but if so then you have misread the set. We need $n$ even for $m+n\sqrt{2}$. However, your method is more or less correct once you flip those back around. We have $(m_1+n_1\sqrt{2})+(m_2+n_2\sqrt{2})=(m_1+m_2)+(n_1+n_2)\sqrt{2}$, and this preserves the conditions of $A$. So $A$ is closed under addition. Also notice that if $m+n\sqrt{2}\in A$, then $-m-n\sqrt{2}$ is the additive inverse, and this is also an element of $A$. So $A$ is closed under additive inverses. Note also that $1\in A$. So you just need to show that $A$ is closed under multiplication. Let $m_1+n_1\sqrt{2}$ and $m_2+n_2\sqrt{2}$ be members of $A$. If you multiply them, do you get an element of $A$? If so, then $A$ is a subring. Otherwise it is not.

Ok I follow all that except for why 1 is in A. Is that because if we let m=1 and n=0?

D isn't a subring, right?

Why isn't $D$ a subring?

because it's not closed under multiplication. I got that it was closed under addition, but not closed under multiplication
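To spell out the closure checks the thread leaves open (an added worked note; the algebra is routine): for $A$, $(m_1+n_1\sqrt{2})(m_2+n_2\sqrt{2})=(m_1m_2+2n_1n_2)+(m_1n_2+m_2n_1)\sqrt{2}$, and $m_1n_2+m_2n_1$ is even whenever $n_1$ and $n_2$ are, so $A$ is closed under multiplication. For $C$, $(\sqrt[3]{2})^2=\sqrt[3]{4}$, which cannot be written as $a+b\sqrt[3]{2}$ with $a,b\in\mathbb{Q}$ because $1,\sqrt[3]{2},\sqrt[3]{4}$ are linearly independent over $\mathbb{Q}$, so $C$ is not closed under multiplication. For $D$, by contrast, $(\sqrt[3]{3})^2=\sqrt[3]{9}$, $(\sqrt[3]{9})^2=3\sqrt[3]{3}$, and $\sqrt[3]{3}\cdot\sqrt[3]{9}=3$, so every product of elements of $D$ stays in $D$; the closing claim in the thread that $D$ fails closure is worth rechecking against these identities.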
{"url":"http://mathhelpforum.com/advanced-algebra/171628-subrings.html","timestamp":"2014-04-19T12:19:22Z","content_type":null,"content_length":"51604","record_id":"<urn:uuid:0d6134f9-f926-4623-93cd-8955d030083e>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
affine geometry

The study of properties of geometric objects that remain unchanged after parallel projection from one plane to another. During such a projection, first studied by Leonhard Euler, each point (x, y) is mapped to a new point (ax + cy + e, bx + dy + f). Circles, angles, and distances are altered by affine transformations and so are of no interest in affine geometry. Affine transformations do, however, preserve collinearity of points: if three points belong to the same straight line, their images under affine transformations also belong to the same line and, in addition, the middle point remains between the other two points. Similarly, under affine transformations, parallel lines remain parallel, concurrent lines remain concurrent (images of intersecting lines intersect), the ratio of lengths of line segments of a given line remains constant, the ratio of areas of two triangles remains constant, and ellipses, parabolas, and hyperbolas continue to be ellipses, parabolas, and hyperbolas, respectively.
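The midpoint claim can be verified with one short computation (a sketch added for illustration; the notation T, A, and t is introduced here for brevity). Write the transformation as T(p) = Ap + t, where A is the linear part (the matrix built from a, b, c, d) and t = (e, f) is the translation. Then

T((p + q)/2) = A(p + q)/2 + t = ((Ap + t) + (Aq + t))/2 = (T(p) + T(q))/2,

so the image of the midpoint of two points is the midpoint of their images. Replacing the equal weights 1/2, 1/2 by w and 1 - w gives the same cancellation of the translation t, which is why every ratio of lengths along a given line is preserved.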
{"url":"http://www.daviddarling.info/encyclopedia/A/affine_geometry.html","timestamp":"2014-04-18T08:47:16Z","content_type":null,"content_length":"6243","record_id":"<urn:uuid:b49ca685-bb6a-4d7f-b9de-5612280500a0>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Multiple group factor invariance

Jennifer Rose posted on Friday, December 23, 2005 - 8:07 am
I've conducted a multiple group analysis testing factor loading invariance across 2-3 groups using a 2 factor CFA with MLM estimation. The corrected chi-square difference test comparing this constrained model to an unconstrained model was nonsignificant, suggesting that the factor loadings are invariant across groups. My question is about reporting results. I would like to include a figure showing the standardized factor loadings and factor correlations. But, the standardized loadings differ across groups despite the fact that I imposed cross group equality constraints on the loadings (the parameter specification indicates that the constraints were correctly imposed and the estimates for the unstandardized loadings are the same across groups). I understand that the standardized loadings vary because of the variances used to calculate them, but I'm afraid that stating that I imposed equality constraints and then showing in the figure that the loadings are still different across groups will confuse readers, especially in the case where the standardized loadings are different by what would appear to be a nontrivial amount (e.g., difference of 0.07). I'm wondering if anyone has a suggestion for how to handle this apparent inconsistency.

Linda K. Muthen posted on Friday, December 23, 2005 - 9:31 am
I prefer to work with raw coefficients and would definitely not report standardized coefficients for a multiple group analysis for the reasons you state. If you must report them, then I would include the explanation you have given. Maybe someone else has an opinion on this.

kberon posted on Saturday, December 31, 2005 - 12:29 pm
I've also been interested in standardized coefficients across multiple groups. Lisrel has a feature that allows you to weight each group covariance matrix so that you end up having a common scale for all groups. This allows reporting a single "beta." I was wondering if Mplus had this facility? It sounds, from Linda's comment, that it doesn't, but I'd like to confirm that. Thanks....Kurt

Linda K. Muthen posted on Saturday, December 31, 2005 - 2:07 pm
No, Mplus does not have this facility.

finnigan posted on Friday, March 23, 2007 - 9:27 am
I have a five factor model which I'm testing across two groups. I have a suspicion that the five factors will not replicate across groups, i.e., in one group 4 factors emerge and in the second group 5 factors may emerge. If this is the case am I correct in saying that within group comparisons can be made, but between group comparisons on factor means cannot? Or is it more appropriate to estimate one model for both groups and then test the factor structure? Any refs you may have would be appreciated.

Linda K. Muthen posted on Friday, March 23, 2007 - 2:47 pm
You should look at the factor structure in each group separately as a first step. If they don't have the same number of factors, then going on to look at them together is not appropriate unless four of the five factors are the same, which would be fairly unusual I think.

finnigan posted on Friday, March 23, 2007 - 3:34 pm
If the same number of factors is not present across groups, is it reasonable to carry out a within multi group CFA for each separate group and make within group comparisons on latent means once measurement invariance is present?

Linda K.
Muthen posted on Friday, March 23, 2007 - 6:09 pm
If you don't have the same number of factors in each group, you can look at the groups separately. I'm not sure what you mean by doing a multiple group CFA for each separate group since you would then be looking at a single group.

Brian Hall posted on Monday, July 27, 2009 - 3:01 pm
Dear list, I am testing a two-group CFA model testing for metric and configural invariance. I am extending this model to establish longitudinal invariance over three waves of data collection. I am testing several correlated models, and several hierarchical models. Does anyone happen to have example programming syntax that can help with this model? I am not sure how to fix the paths to be equal in the case of metric invariance, and to compare the models in the case of configural invariance. Any assistance would be much appreciated.

Linda K. Muthen posted on Monday, July 27, 2009 - 4:27 pm
See the Topic 1 course handout where all of the steps for testing for measurement invariance using multiple group analysis are given including inputs. See the Topic 4 course handout where the first steps in the multiple indicator growth model show the steps for testing for longitudinal measurement invariance.

Joe posted on Thursday, March 18, 2010 - 1:33 pm
I have 3 latent factors, each BY 16 dichotomous observed items. I would like to run an invariance analysis with the same factors across groups, but factor loadings, unique error variances, and item thresholds are freely estimated. I am getting an error that reads:

THE MODEL ESTIMATION TERMINATED NORMALLY
THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED.

Here is part of my syntax:

GROUPING IS disabil (0=RegEd, 1=SpEd);
numop BY fp11@1 fp12 fp13 fp14 fp15 fp16 fp17 fp18 fp19 fp110 fp111 fp112 fp113 fp114 fp115 fp116;
geo BY fp21@1 fp22 fp23 fp24 fp25 fp26 fp27 fp28 fp29 fp210 fp211 fp212 fp213 fp214 fp215 fp216;
numopalg BY fp31@1 fp32 fp33 fp34 fp35 fp36 fp37 fp38 fp39 fp310 fp311 fp312 fp313 fp314 fp315 fp316;
MODEL SpEd:
numop BY fp11@1 fp12 fp13 fp14 fp15 fp16 fp17 fp18 fp19 fp110 fp111 fp112 fp113 fp114 fp115 fp116;
geo BY fp21@1 fp22 fp23 fp24 fp25 fp26 fp27 fp28 fp29 fp210 fp211 fp212 fp213 fp214 fp215 fp216;
numopalg BY fp31@1 fp32 fp33 fp34 fp35 fp36 fp37 fp38 fp39 fp310 fp311 fp312 fp313 fp314 fp315 fp316;

Can you tell how my model is misspecified?

Linda K. Muthen posted on Thursday, March 18, 2010 - 1:59 pm
You should not be freeing the factor loading for the first indicator in the group-specific MODEL command. This causes the model not to be identified. See pages 398-401 for the models to use for testing measurement invariance for categorical outcomes. See also the Topic 2 course handout.

Joe posted on Thursday, March 18, 2010 - 3:00 pm
Thank you, Dr. Muthen. I thought the factor loading for the first indicator in the group-specific MODEL command was fixed with @1 (e.g., numop BY fp11@1). How do I then appropriately fix this factor loading in the syntax? Could you please give an example? Thank you in advance for your time and assistance.

Linda K. Muthen posted on Thursday, March 18, 2010 - 3:03 pm
I misread your input. Then I don't know what the problem is. Please send the full output and your license number to support@statmodel.com.

andrea j wong posted on Thursday, June 17, 2010 - 5:42 am
i'm testing configural invariance for a 4 group, 2 latent factor, and 15 item (4 level likert scales) model. i'm getting the error: "THE MODEL MAY NOT BE IDENTIFIED."
could you help me fix my input file? (the model stmts for the last 2 groups are the same as for month4.)

dsclCncn BY S5Q1@1 S5Q2*0.5 S5Q3*0.5 S5Q5*0.5 S5Q7*0.5 S5Q9*0.5 S5Q14*0.5;
persStig BY S5Q4@1 S5Q6*0.5 S5Q8*0.5 S5Q10*0.5-S5Q13*0.5 S5Q15*0.5;
dsclCncn WITH persStig*0.5;
[S5Q2$2* S5Q3$2* S5Q5$2* S5Q7$2* S5Q9$2* S5Q14$2* S5Q1$3* S5Q2$3* S5Q3$3* S5Q5$3* S5Q7$3* S5Q9$3* S5Q14$3*];
[S5Q6$2* S5Q8$2* S5Q10$2*-S5Q13$2* S5Q15$2* S5Q4$3* S5Q6$3* S5Q8$3* S5Q10$3*-S5Q13$3* S5Q15$3*];
MODEL month4:
dsclCncn BY S5Q2* S5Q3* S5Q5* S5Q7* S5Q9* S5Q14*;
persStig BY S5Q6* S5Q8* S5Q10*-S5Q13* S5Q15*;
dsclCncn WITH persStig*0.5;
[S5Q2$2* S5Q3$2* S5Q5$2* S5Q7$2* S5Q9$2* S5Q14$2* S5Q1$3* S5Q2$3* S5Q3$3* S5Q5$3* S5Q7$3* S5Q9$3* S5Q14$3*];
[S5Q6$2* S5Q8$2* S5Q10$2*-S5Q13$2* S5Q15$2* S5Q4$3* S5Q6$3* S5Q8$3* S5Q10$3*-S5Q13$3* S5Q15$3*];

Linda K. Muthen posted on Thursday, June 17, 2010 - 7:49 am
When you free the thresholds, factor means must be fixed to zero in all groups and, if you are using the default estimator WLSMV, scale factors must be fixed to one in all groups. See the models for testing for measurement invariance in Chapter 14 of the Version 6 user's guide after the multiple group discussion.

andrea j wong posted on Thursday, June 17, 2010 - 10:09 am
thanks for your quick response! i added the following model stmt for all the groups:
[dsclCncn@1 persStig@1];
and am getting a message about no convergence. here's the analysis block:
TYPE = MGROUP;
ITERATIONS = 9999;
ESTIMATOR = WLS;
when i test for invariant loadings and/or thresholds, i am not having this problem. it's only occurring w/ the configural invariance test. do you have any suggestions? any help would be appreciated!

Linda K. Muthen posted on Thursday, June 17, 2010 - 10:26 am
The factor means should be fixed to zero not one. With the Theta parameterization, residual variances of the factor indicators should be fixed to one in all groups.

andrea j wong posted on Friday, June 18, 2010 - 5:26 am
yea! that did the trick. i wanted to make sure i was doing the battery of tests correctly:
(1) baseline model, free thresholds (outlined above) - factor means set to zero in all groups, residual variances of factor indicators fixed to one in all groups
(2) invariant loadings, free thresholds - factor means set to zero in all groups, residual variances of factor indicators fixed to one in first group and free in other groups
(3) invariant loadings and thresholds - factor means set to zero in first group and free in other groups, residual variances of factor indicators fixed to one in the first group and free in other groups
(4) invariant loadings, thresholds, and uniqueness - factor means set to zero in first group and free in other groups, residual variances of factor indicators fixed to one in all groups.
thanks again for all your help!

Linda K. Muthen posted on Friday, June 18, 2010 - 9:50 am
We recommend models 1, 3, and 4 for categorical outcomes.

andrea j wong posted on Friday, June 18, 2010 - 12:48 pm
i'm working with a different set of variables now; however, everything is set up as above. there are 4 groups: month0, month4, month8, month12. i am getting the error message:
THE WEIGHT MATRIX FOR GROUP MONTH12 IS NOT POSITIVE DEFINITE
can you explain what this error message means and how i might go about fixing it? also, why does it happen for one group and not all groups? i don't know if this matters, but the data is very skewed for all groups.
i rescaled an 11 pt likert scale into a 3 pt likert scale where:
0-3 -> 1
4-6 -> 2
7-10 -> 3
should i set the scale differently so that the data is not so skewed? thanks again for your guidance!

Linda K. Muthen posted on Friday, June 18, 2010 - 2:27 pm
Please send the full output and your license number to support@statmodel.com.

Hans Leto posted on Tuesday, April 10, 2012 - 10:33 am
I am performing a single group analysis with the same syntax as slide 209 of Mplus' handout no. 1:
USEOBSERVATIONS ARE (gender EQ 1); !change 1 to 0 for females
f1 BY y1-y5;
f2 BY y6-y10;
But it gives me an error: "Variable is uncorrelated with all other variables: gender. At least one variable is uncorrelated with all other variables in the model. Check that this is what is intended." (The variable is gender, variance 0). could you help me? Thank you in advance

Linda K. Muthen posted on Tuesday, April 10, 2012 - 1:45 pm
You must have gender on the USEVARIABLES list. This is only for variables used in the analysis. Remove it.

Kimberly Henderson posted on Friday, June 08, 2012 - 11:50 am
I am testing a measurement model at two waves (i.e., wave 3 and wave 4). The model is the same at each wave. Since both measurement models will eventually be added to a two-time point longitudinal SEM, I was advised to test for measurement invariance between the measurement models of the two waves. My estimator is WLSMV and I have categorical and continuous variables. Can I simply use the DiffTest option to do a multi-group comparison or is there another test that is more appropriate?

Linda K. Muthen posted on Friday, June 08, 2012 - 12:07 pm
No, the groups would not be independent. You would test the measurement invariance in a single-group analysis. The multiple indicator growth model example in the Topic 4 course handout shows how to do this for continuous variables. You would take the same approach for categorical variables but use the steps shown in the Topic 2 course handout under multiple group analysis.

Kimberly Henderson posted on Friday, June 08, 2012 - 12:12 pm
Okay. Thanks for the reply. I will try this out.

Kimberly Henderson posted on Monday, June 11, 2012 - 10:02 am
Do you know what heading or page that I would need to refer to in Topic 2 and Topic 4 courses?

Linda K. Muthen posted on Monday, June 11, 2012 - 11:07 am
See the table of contents. For Topic 4, look for multiple indicator growth. For Topic 2, look for multiple group analysis.

Kimberly Henderson posted on Monday, June 11, 2012 - 1:02 pm
I see. Thanks!

Ebrahim Hamedi posted on Thursday, September 13, 2012 - 8:10 pm
I am doing a MGCFA, 3 groups. When testing each group separately, everything is fine and models are identified. when multi-group modeling however, I get this message: parameter 61 is for alpha (I guess something related to intercept) of item 14 in group 2. As I told you, when doing the analysis separately in group 2, the model converges. It is noteworthy that the MGCFA converges when using Amos on the same data. Sounds like this is something specific to the Mplus way of calculating estimates. In the MGCFA, the unstandardized coefficient for item 14 in group 2 is 1.292, while other items' coefficients are smaller. This is the only aspect of this item different from others. Interestingly, when I omit this item in the model, still it does not converge and says something is wrong with item 13. Can you please help me resolve this? Many thanks,

Linda K.
Muthen posted on Friday, September 14, 2012 - 6:01 am
Please send the output and your license number to support@statmodel.com.

Ebrahim Hamedi posted on Friday, September 14, 2012 - 4:20 pm
Thanks for your message. I think I found the problem. while in Amos, specifying factor means to be 0 leads to unidentification in a multigroup analysis, in mplus things are different. I had not specified the factor mean to be zero in my multigroup analysis in mplus, which resulted in an intercept issue in one of the groups. setting it to be zero resolved the issue.

Ebrahim Hamedi posted on Thursday, November 01, 2012 - 8:17 pm
I have a question and I would appreciate it if you could answer. in testing for scalar invariance, when we find that an intercept is not invariant, in order to follow up this finding it is attractive to compare the intercepts in various groups. In doing so, should we compare the unstandardized intercepts or the standardized (Est./S.E.) ones? the results are sometimes contradictory. for example, based on the unstandardized estimates group 1 has the highest score but based on the standardized estimates, group 4 has the highest score. what do you suggest? the same question can be asked about comparing the intercepts in a number of groups, estimating separate models for each (not multigroup CFA). which one should be used for comparing scale origins across groups: the unstandardized or standardized estimates? a marginal question is whether or not we could only examine observed item means in separate samples to follow up a noninvariant intercept. Thank you very much in advance,

Bengt O. Muthen posted on Thursday, November 01, 2012 - 9:22 pm
I am not sure I understand the situation, but it seems that you should use the unstandardized intercepts since you are doing an unstandardized analysis when you compare groups.

Ebrahim Hamedi posted on Thursday, November 01, 2012 - 10:03 pm
Thank you very much. Because this question is so critical for me at this point, I decided to give an example. I have three groups. One item intercept is non-invariant. Below, the unstandardized intercept, standardized intercept, and simple observed item mean for each group are reported respectively:
g1: 4.354, 72.197, 4.35
g2: 4.738, 73.280, 4.74
g3: 4.492, 61.631, 4.01
the unstandardized and standardized intercepts are from the multigroup analysis for testing scalar invariance. Now, I would like to report actual intercept differences among these three groups for that item to understand the results better. As you can see it is so tricky. If I use the unstandardized intercepts, I should conclude that group 1 had the lowest scale origin (4.354), and the main difference is that group 2 scored remarkably differently from the other two groups. Alternatively, if I use the standardized intercepts, I should conclude that group 3 had the lowest scale origin (61.631), and the noninvariance is because this group scored remarkably differently from the other two groups. So the question is: in this multi group analysis, should I use standardized or unstandardized intercepts for comparison? Thank you very much,

Bengt O. Muthen posted on Sunday, November 04, 2012 - 11:09 am
You should not use standardized values such as intercepts when you compare groups. This is because standardization confounds the parameters of interest with group-varying variances. Note also that the item mean is a function of the item intercept, the factor loading and the factor mean.
Ebrahim Hamedi posted on Sunday, November 04, 2012 - 1:10 pm
Thank you very much professor Muthen. really helpful.

H Steen posted on Tuesday, June 04, 2013 - 5:12 am
I have a question about interpreting differences in model fit between groups. Before doing a multigroup second order CFA, I compared the models in the groups separately. And the results are so different that I have no reason to investigate any type of invariance any further. However the results are a bit surprising and I would like your comments. Comparing low, medium and high educated groups results in a mediocre model fit for the low educated (RMSEA 0.075) and reasonable fit in the high educated group (0.041). The groups are about the same size; N varies between 302 and 330. The thing is, the MIs are very similar; in all groups 10 is the highest, and almost all are (much) lower. Furthermore, the factor loadings are much higher in the low educated group, which seems contradictory, as the model fit is worse. The low educated group has much smaller variance in the items in the CFA though, and I would like to know whether this can explain the combination of worse fit and higher loadings. With small variance, there is less to model anyway. Other types of analysis (e.g. Mokken), which are not to be preferred for my research, show the best structure fit for the low educated. Shortly stated, can it be the case that the fit is indeed best in the low educated group as the factor loadings are highest? Thank you very much for any comments.

Bengt O. Muthen posted on Tuesday, June 04, 2013 - 8:47 am
Fit assessments are affected by the size of correlations among the observed variables - higher correlations give higher power to reject, ceteris paribus. Those correlation sizes may vary across your groups.

H Steen posted on Thursday, June 06, 2013 - 2:10 am
Thank you for your response. You are right, in the lower educated group (which shows worst fit) the correlations are much higher. But what does that imply? That the fit indices are not meaningful? Is there a way to correct fit indices for the level of interrelatedness of items?

Bengt O. Muthen posted on Thursday, June 06, 2013 - 7:47 am
You should take a look at for instance Saris, Satorra, & Veld (2009) in the SEM journal about the weaknesses of fit indices.

H Steen posted on Thursday, June 06, 2013 - 8:42 am
Thank you very much!

Sabrina Thornton posted on Wednesday, October 09, 2013 - 9:17 am
Hi, I managed to fit the model in each group, but it doesn't seem to fit in a multigroup CFA. I am performing a multi group CFA with a four factor solution across two groups. I followed the handout to the letter, but I get this message:

MAXIMUM LOG-LIKELIHOOD VALUE FOR THE UNRESTRICTED (H1) MODEL IS -7520.177
PROBLEM INVOLVING PARAMETER 110. THE CONDITION NUMBER IS -0.318D-06.

The syntax is as follows:

USEVARIABLES ARE A1 A2 A3 A5 B1 B2 B3 B4 C5 C6 C7 C9 D3 D4 D5 D7;
GROUPING IS Group (1=GROUP1 2=GROUP2);
MISSING are all (666);
IA BY A1-A5;
OE by B1-B4;
SRM by C5-C9;
WRM by D3-D7;
[IA@0 OE@0 SRM@0 WRM@0];
MODEL GROUP2:
IA BY A1-A5;
OE by B1-B4;
SRM by C5-C9;
WRM by D3-D7;
[A1-A5 B1-B4 C5-C9 D3-D7];
OUTPUT: STANDARDIZED MODINDICES SampStat Residual;

Linda K. Muthen posted on Wednesday, October 09, 2013 - 10:06 am
You should not mention the first factor indicators in the group-specific MODEL command. When you do this, they are no longer fixed at one and the model is therefore not identified.

Sabrina Thornton posted on Wednesday, October 09, 2013 - 12:53 pm
That has just made my evening! Thanks, Linda.
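[Editor's note: for anyone hitting the same message, the repair implied by Linda's reply is to drop each factor's first indicator from the group-specific statements so that its loading stays fixed at one. A sketch reconstructed from that reply, not Sabrina's posted follow-up:

MODEL GROUP2:
IA BY A2 A3 A5;    ! A1 omitted; its loading remains fixed at 1
OE BY B2-B4;       ! B1 omitted
SRM BY C6 C7 C9;   ! C5 omitted
WRM BY D4 D5 D7;   ! D3 omitted
[A1-A5 B1-B4 C5-C9 D3-D7];
]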
Liting Cai posted on Tuesday, January 14, 2014 - 1:56 am
In an earlier post, as well as in the handout for Topic 2, it was mentioned that to conduct multiple group analysis on categorical variables, the steps are slightly different from those used when the variables are continuous. The difference is that for categorical variables, we do not conduct the following steps: (1) fix the factor loadings across groups and free the thresholds, and (2) fix the thresholds across groups and free the factor loadings. May I find out, what should I do if I have both continuous and categorical variables in my multiple group factor analysis? Do I go with the steps for categorical variables?

Bengt O. Muthen posted on Wednesday, January 15, 2014 - 10:50 am
You can go with the steps for the categorical variables for all the variables, but you can also do all steps for the continuous variables.

Liting Cai posted on Thursday, January 16, 2014 - 6:57 pm
Dear Dr Muthen, Thank you for taking time to address my query! May I understand the rationale for the different steps (for multiple group analysis) for continuous and categorical variables? Is there a paper that provides this rationale, that you could direct me to?

Linda K. Muthen posted on Friday, January 17, 2014 - 8:53 am
See the Millsap 2011 book.

Liting Cai posted on Friday, January 31, 2014 - 1:48 am
Thanks! Will check out the book.

Kerry Zelazny posted on Tuesday, February 18, 2014 - 6:52 am
I am trying to conduct multiple group invariance testing using categorical indicators for continuous latent factors. When I try to release the factor loadings and the thresholds for my second group, I get an error; the TECH output tells me Param 42 is the second diagonal entry in the THETA matrix for my second group. I'm not sure what I'm doing wrong:

VARIABLE: NAMES ARE DUMMY PTSD6-PTSD25;
USEVARIABLES PTSD6-PTSD25;
CATEGORICAL ARE PTSD6-PTSD25;
GROUPING is DUMMY (0 = NOTRAUMA 1 = TRAUMA);
ESTIMATOR = WLSMV;
F1 BY PTSD6-PTSD13@1;
F2 BY PTSD14-PTSD25@1;
F1 with F2;
F1 BY PTSD7-PTSD13;
F2 BY PTSD15-PTSD25;

Any ideas what I am doing wrong? Help is very much appreciated!

Linda K. Muthen posted on Tuesday, February 18, 2014 - 8:20 am
See the Version 7.1 Mplus Language Addendum on the website. You will find described in this document a way to do invariance testing automatically and a full description of the models for testing for measurement invariance in various situations. When factor loadings and thresholds are free across groups, residual variances should be one in all groups and factor means should be zero in all groups.
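[Editor's note: to make that last description concrete, a configural model for two groups with binary indicators might look like the fragment below under the Theta parameterization. This is an illustrative sketch with made-up variable names, not a quotation from the Language Addendum:

ANALYSIS:
ESTIMATOR = WLSMV;
PARAMETERIZATION = THETA;
MODEL:
f1 BY u1-u5;
[f1@0];                        ! factor mean fixed at zero in all groups
u1-u5@1;                       ! residual variances fixed at one in all groups
MODEL g2:
f1 BY u2-u5;                   ! loadings free in group 2; u1 stays fixed at 1
[u1$1 u2$1 u3$1 u4$1 u5$1];    ! thresholds free in group 2
[f1@0];
u1-u5@1;
]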
{"url":"http://www.statmodel.com/discussion/messages/9/967.html?1381348389","timestamp":"2014-04-17T12:53:00Z","content_type":null,"content_length":"96337","record_id":"<urn:uuid:3a7fffcb-769d-4707-9fc7-c50a2f084773>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
Pololu - Know your units How many volts of current are there in a bolt of lightning? That’s the kind of stupid question your local news anchor might ask while bantering with the weather guy. Perhaps your favorite cringe-inducing unit abuse is someone thinking light-years measure time or a model rocket enthusiast telling you that a newton-second is a little longer than a regular second. Of course, I made the same class of mistake when looking for a 1-amp battery, which I described in my previous post about battery capacity. That article addressed a specific instance of a general problem: not knowing or understanding units, which allow us to talk about and measure physical properties that we must understand whether we’re designing robots or baking cakes. There are units for almost every property we care about. For some common properties, such as length or distance, there are many units to choose from; for other properties, there is such a strong connection between the unit and property that the property is called by the unit. For instance, in English, it is common and acceptable to refer to electrical potential as “voltage”; terms like “wattage” and “amperage” are also common but less generally accepted. For other properties, such as torque, there is no common, custom unit; instead, torque is expressed as force multiplied by distance. Part of using units correctly is understanding which of various physically equivalent expressions makes most sense for expressing the idea we want to communicate. Inches per week is a mathematically valid unit for speed, but it is unlikely to be an appropriate unit. A more subtle case of using the correct units for good communication is not over-simplifying a unit. For example, force multiplied by distance also gives us energy, but expressing a torque in joules, the unit for energy, would be counterproductive. By the way, converting from an incomprehensible quantity in a standard unit to some incomprehensible unit is just dumb. I see this most often in non-engineering contexts, where some understandable unit such as volume is converted to something asinine such as soda cans stacked end-to-end around the equator, or when dollars are converted to miles of stacked one-dollar bills. It can be a fun reality check to estimate the volume or weight of the haul in Ocean’s Eleven, but too often, the numbers are manipulated to exaggerate or obfuscate rather than to enlighten. Sure, big numbers are difficult for us to comprehend, but dump trucks lined up from the earth to the moon are not any better. Another class of generally useless discussion that comes up in the context of units is arguments about one unit’s superiority over another. This most commonly comes up in the context of the metric system vs. the units used in the United States. While it might be a bit frustrating to have to learn extra units, it’s very easy now (e.g. with Google) to convert them to more familiar ones, and your personal boycott of the inch or meter is not going to make it go away. It’s often convenient to stick with one unit within one calculation or one project, but in general, trying to develop an intuitive sense for more units will only make you more competent as an engineer. Part of the power of units is the basic math you can do with them. You can multiply and divide any unit by any other unit, and even if you end up with something you can’t quite understand like volts squared or mm^5, as long as you don’t make a mistake with your math, you can keep going. 
However you picture a square volt, you can trust that dividing it by a volt will get you back to volts and that dividing by an ohm will get you a watt. This is an important reason to pay attention to the units in all of your calculations: if you end up with the wrong unit, you can be sure something else is wrong with your result. For example, which of these is an expression for the equivalent resistance of two parallel resistors: 1/R1 + 1/R2, or (R1 x R2)/(R1 + R2)? If you look at the units, it's easy to rule out the first candidate since the resulting unit is 1/R, not R as we need it to be. (A short script at the end of this post works through this example.) I'll probably get to more in-depth discussions of individual units later, plus you can look them up in your physics textbook or Wikipedia, but here are some common groups of units you should at least be aware of:

• Size (distance, area, volume, etc.) - Units for measuring these properties are probably among the most familiar, but because people have been aware of those properties for a long time, there are many alternatives. A consequence of the familiarity (combined with laziness) is that people tend to abbreviate the units, leaving out things like "square" when talking about square feet of area. A more confusing example is copper thickness on circuit boards, which is commonly specified in "ounces", even though that's normally a measure of weight. The unit is ounces of copper per square foot, and one ounce corresponds to about 0.0014 inches, or 35 microns. It might be difficult to know what level of abbreviation is appropriate vs. what might be too verbose, but if you're not familiar or comfortable with a unit, being specific shouldn't hurt; on the other hand, if everyone around you is saying "square millimeter", you should not feel free to just call the same thing "millimeter". Also, mils (thousandths of an inch), which also come up a lot in the context of printed circuit boards, are not short for millimeters.

• Time, rates, frequency - Basic units like seconds should also be very familiar to you, so I'll just point out that with electronics and programming, we are usually concerned with small fractions of seconds, so you should be comfortable with milliseconds, microseconds, and nanoseconds. We also divide by time to get various rates and frequency, like meters per second, hertz (Hz, or s^-1, or counts per second), and amperes (A, or amps, which are coulombs per second). Sometimes, rates (which are usually some unit divided by time) are instead reported using only the time needed to cover a standard reference amount. For instance, hobby servos could just have their speeds specified in RPM (rotations per minute) or degrees per second, but instead, they are usually specified in terms of how long it takes them to turn 60 degrees. A servo with a "speed" specification of 0.11 seconds can turn 60 degrees in 0.11 seconds, or a full rotation in 0.66 seconds. That's one and a half rotations per second, or 90 RPM. It's also worth noting with this example that it's legitimate to talk about RPM on a device that is not capable of rotating more than a fraction of a rotation, just like you might talk about a rocket car running at 700 miles per hour even though it might not be able to actually travel 700 miles.

• Weight, force, mass - Weight is another familiar concept even for those who don't quite know what it means.
You should definitely learn how weight, mass, and force are related, but in a practical, robot-building sense (by which I mean staying near the earth's surface), you probably don't need to worry (or get up in arms) about whether a pound is a unit of force or a unit of mass. As with other common units, there are a host of unit conversion tools available, so asking someone to convert ounces to grams for you will just make you look lazy. (See how easy it is to convert 5 ounces to grams.)

• Energy, power - Energy can take all kinds of forms, so there are many expressions for it. The basic unit, though, is the joule (abbreviated J), and the basic unit for power, or the rate of energy transfer, is the watt (abbreviated W), which is a joule per second. Watts multiplied by time get you back to energy, and watt-hours might be a more familiar energy unit than joules. So, you can talk about your battery or a cup of gas or the amount of electricity you use in a day in terms of joules or in terms of watt-hours. One horsepower is about 746 watts, so you could talk about having a 1.5 hp microwave oven. You can use energy and power calculations to quickly find all kinds of theoretical limits to your projects, so it's good to get to know the many expressions for power and energy.

• Electrical - Here are a few units that come up a lot in electronics:

- volts (V) for electrical potential. Voltage is measured between pairs of points, so saying a battery has 9 volts means that one terminal is 9 V higher than the other. There's no absolute or universal zero reference for voltage, and you can call the most convenient reference point zero V. (A common problem is that some node you think is at zero volts is actually not.)

- amps (A) for current. Current is a rate (coulombs per second) of electrical charge flowing.

- ohms (Ω) for resistance. For cases where you want a lot of current flowing (e.g. wires), the resistance is probably under an ohm; if you are putting resistors into your circuit on purpose, they will probably be anywhere from a few ohms to a few megohms.

- farads (F) for capacitance. A farad is quite big, so most capacitors you will encounter will range from picofarads (pF) to microfarads (uF). However, "supercaps" with many farads are available.

- henrys (H) for inductance. A henry is also quite big, so most inductors you use in electronics will tend to be in the microhenry (uH) range.

• Torque - Torque comes up a lot with motors, gearboxes, and servos. Torque is expressed as a force multiplied by (i.e. "times") a distance, so "oz. in." is pronounced "ounce-inch", not "ounce per inch" (writing "oz/in" or otherwise introducing any notion of division is just as wrong), and an inch-ounce is the same as an ounce-inch. If you have a torque to start with, the longer the arm on the shaft, the less force you will get at the end of the arm. Conversely, if you are starting with a force, the longer a lever you use, the more torque you can generate. For small robots and toy motors, we usually use ounce-inches or kilogram-centimeters (there are about 13.9 oz. in. per kg-cm); for larger torques, you would see something like pound-feet.

• Temperature - If you're still reading this, you're probably already aware of the three main units for temperature: Kelvin (K), Celsius (C), and Fahrenheit (F). A lot of operating ranges for electronics components are specified in C, and the main thing to note for Americans is that something like 150 degrees (F) is not that hot for electronics.
If you get into the actual device physics, K starts showing up more.

• Dimensionless quantities - These aren't exactly units, but they are related to units, and you should be as comfortable with them as with any unit. dB (decibel), ppm (parts per million), and % (percent) are used for expressing accuracies or ratios between quantities of the same unit. For instance, 1 ppm for 1 MHz is 1 Hz, so a 20 MHz crystal with a 50 ppm specification should be within 1000 Hz of 20 MHz. You should also be familiar with the standard prefixes, like kilo and micro, which just modify other standard units to match the scale of the system being characterized. Finally, you might sometimes see weird units like "per square", for instance in the context of sheet resistance. In the case of a printed circuit board, once you have a material and thickness specified, you can think of the traces as being built up of squares the width of your trace, and you can think of the resistance based on how many squares make up the trace since a trace that's twice as wide but twice as long will have the same resistance.

[Figure: if the strips were traces on a PCB, the first and third ones would have the same resistance since each has 11 squares.]

1 comment

Hi Jan, I found your site by googling how to find the amp rating of a double A battery… which led me to your other post about battery capacity. Excellent post by the way… I noticed that there weren't any comments on this one and I just wanted to say thanks for writing it. I think people get lost in advanced math because they make stupid addition, subtraction, multiplication or division errors. And I think the same applies to a wide swath of general engineering endeavors with respect to units. So thanks for the challenge to NOT be lazy!
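For the parallel-resistor example mentioned above, a units library will do the bookkeeping for you. Here is a minimal Python sketch using the third-party pint package; the resistor values are arbitrary and the example is an editorial addition, not from the original post:

    import pint  # third-party units library: pip install pint

    ureg = pint.UnitRegistry()
    R1, R2 = 5 * ureg.ohm, 10 * ureg.ohm

    wrong = 1 / R1 + 1 / R2          # comes out in 1/ohm: not a resistance
    right = (R1 * R2) / (R1 + R2)    # comes out in ohm
    print(wrong.units)               # 1 / ohm
    print(right.to(ureg.ohm))        # 3.33... ohm

    # The servo spec works the same way: 60 degrees in 0.11 s
    speed = 60 * ureg.degree / (0.11 * ureg.second)
    print(speed)                     # ~545 degrees / second, i.e. about 90 RPM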
{"url":"http://www.pololu.com/blog/3/know-your-units","timestamp":"2014-04-17T10:19:46Z","content_type":null,"content_length":"40397","record_id":"<urn:uuid:36e5a241-1d83-4805-964e-7bc194522311>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
Ketan Mulmuley Responds

Someone pointed out to me your blog post where it is stated that, according to me, any approach to separate P from NP must go through GCT. This is not what I think or said. One cannot really say that GCT is the only way to separate P from NP or that any approach must go through it. Indeed as the article (On GCT, P vs. NP and the Flip I: A High Level View)—henceforth referred to as GCTflip—which describes the basic plan of GCT, clearly states: GCT is a plausible approach to the P vs. NP problem. But as it also explains there are good mathematical reasons to believe why it may well be among the "easiest" approaches to the P vs. NP problem. One such argument—the zero information loss argument—was presented by K.V. in his talk. According to it, any approach to separate the permanent from the determinant in characteristic zero must understand, in one way or the other, the fundamental century-old problem in representation theory, called the Kronecker problem, or rather its decision form (though this understanding may be expressed in that approach in a completely different language). This is what I repeated during the lunch after that talk, and this is perhaps what the post is referring to. The only known special case of this problem which is completely solved is the Littlewood-Richardson problem. The most transparent proof of this (which also provides far deeper information regarding this problem needed in GCT, unlike other proofs) goes through the theory of quantum groups, and the only known good criterion for the decision version requires the saturation theorem for Littlewood-Richardson coefficients. GCT strives to lift this most transparent proof to the Kronecker problem, and more generally to the generalized subgroup restriction problem (and its decision form), which is needed in the context of the P vs. NP problem in characteristic zero. All this is explained in detail in the article GCTflip mentioned above. It does not assume any background in algebraic geometry or representation theory. It has been read by the computer science graduate students here. They had no problem reading it. But it does need a month. It is my hope that you would spare a month sometime for the sake of the P vs. NP problem.
NP problem, then please say so. 7. To Anon 6: I think Ketan works with Valiant's algebraic version of P versus NP rather than with Blum, Shub and Smale's. 8. I fail to see why a complexity theory expert would not want to spend a bit of time to try to understand a new and very different approach to complexity theory. As far as I can tell, there has not been a publication using this approach in over 7 years. There has also not been a paper on this topic that was not co-authored by Mulmuley. (Please correct me if I am wrong.) It is therefore hard to tell whether anyone except Mulmuley believes this approach to be viable. I'm not saying it is not, just that if I were evaluating whether to spend 1 month reading a paper, I would at least like to know that there are other people I trust who have found the paper worthwhile. I also took a brief look at the "flip" paper and did not find it very well written. Again, I am not placing fault on Mulmuley but just explaining why people aren't rushing to read the paper or try this approach. 9. Kevin, the P vs. NP in char zero is different from its BSS or valiant form. it is explained in detail in GCT1. basically one takes an appropriate (co)-NP-complete integral function E(X) and the problem is to show that it does not have an arithmetic circuit of poly size over Z. This is a formal implication of the usual P vs. NP conjecture (or rather the nonuniform version which says E(X) considered over F_2 does not have poly-size boolean circuits) and hence has to be proved first anyway. that is why GCT focuses on that first. actually the first problem to consider is valiant's determinant vs. permanent problem in char zero, which is used as a running example in GCTflip to illustrate the basic ideas. it is conjectured that the flip paradigm would also work in the context of the usual P vs. NP problem over F_2 or F_p. This would be discussed in detail in GCT11. But implemention of the flip over a finite field would be much harder than in char zero. hence the focus on the latter. 10. I also took a brief look at the "flip" paper and did not find it very well written. Again, I am not placing fault on Mulmuley but just explaining why people aren't rushing to read the paper or try this approach. Note that people said (and still say) the much the same thing about Perelman's papers on the Poincare conjecture. It's a sad state of affairs when community needs to be told to value content over 11. Note that people said (and still say) the much the same thing about Perelman's papers on the Poincare conjecture. But Perlman's paper solves a long-standing open problem. The GCT stuff only claims to be an approach to solving a long-standing open problem. Also, the (apparent) intended purpose of the GCT paper mentioned in this post is to introduce people to the topic, whereas Perlman's paper was directed at experts. It's a sad state of affairs when community needs to be told to value content over form. If I was convinced the content was worthwhile, I would put up with the bad form. The point of Anon 8 was that it is not clear the approach is worthwhile. 12. Anonymous 8: When one tries to cover basics in algebraic geometry, rep theory and use plethora of technicalities not used by our community on a regular basis in 103 pages, people are bound to say that the paper is not well written. On the other hand, if it were "well-written", it would have turned in to a 200 page paper, which again would have troubled people (because of the length). 
I would recommend reading the GCT-Abstract, available on Ketan's homepage. Now it is easy going and might give you a reason to proceed with this paper. Moreover, Ketan suggested devoting 1 month to the paper, which I guess is enough to get a basic understanding of the paper. Besides if Ketan markets his work, then people might take interest in it. 13. Yes, as anon 12 pointed out GCTabstract available on the personal Chicago web page should be read before GCTflip. Also available on the web page is GCTintro, which is a monograph consisting of lecture notes for an introductory course on GCT for computer science graduate students in Chicago. It covers a small part of GCTflip, which however gives a glimpse of the basic idea of GCT, in a leisurely self-contained way. The graduate students here feel that it is easier to read than GCTflip, which has to cover much more ground in short space. GCTflip had to be terse because of the page-limit constraints, as anon 12 correctly guessed. The monograph GCTintro has to be polished before its publication (by Cambridge University Press). please let me know whatever comments you may have so as to make this monograph as easy to read as possible for the TCS community. One has to accept the responsibility of communicating the basic idea of GCT to the community as simply as possible. But it is hoped that the community would also understand the huge challenge in this task given a mismatch between the language of GCT (algebraic geometry, representation theory, quantum groups, so forth) and the traditional language of the TCS community. It is a trying time, but we can overcome the difficulties together. please let me know (by email) whatever difficulties you encounter in reading GCTabstract, GCTintro, and GCTflip (in that order) and i will do my best. 14. WRT anon8, as is evident from the post, someone other than KM (KVS from CMI) gave a series of talks on the subject. Also Regan wrote an expository article, so others are reading it. Its just that people not well-versed with algebraic geometry are not inclined to admit it or improve upon it. 15. Ketan Mulmuley=Vinay Deolalikar 1. This is horrendously disrespectful for a number of reasons.
{"url":"http://blog.computationalcomplexity.org/2008/04/ketan-mulmuley-responds.html","timestamp":"2014-04-16T13:03:40Z","content_type":null,"content_length":"192506","record_id":"<urn:uuid:57524517-363a-4767-9b04-eb05a8eff4fa>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
Introduction to statistics

• 46% of people polled enjoy vanilla, while 54% prefer chocolate (+/-4% margin of error).
• A school's graduation rate has increased by 2%.
• A couple has 4 boys, and they are pregnant again: what is their chance of having another boy?
• 88% of people questioned feel that it is humane to put stray animals to sleep.

These are basic examples of statistics we see every day, but do we really understand what they mean? With the study of statistics, these 'facts' that we hear every day can hopefully become a little more clear. Statistics is permeated by probability. An understanding of basic probability is critical for the understanding of the basic mathematical underpinning of statistics. Strictly speaking, the word 'statistics' means one or more measures describing the characteristics of a population. We use the term here in a more idiomatic sense to mean everything to do with sampling and the establishment of population measures. Most statistical procedures use probability to make a statement about the relationship between the independent variables and the dependent variables. Typically, the question one attempts to answer using statistics is whether there is a relationship between two variables. To demonstrate that there is a relationship, the experimenter must show that when one variable changes the second variable changes, and that the amount of change is more than would be likely from mere chance alone.

There are two ways to figure the probability of an event. The first is to do a mathematical calculation to determine how often the event can happen. The second is to observe how often the event happens by counting the number of times the event could happen and also counting the number of times the event actually does happen. The mathematical calculation is what lets a person say that the chance of the event of rolling a one on a six sided dice is one in six. The probability is figured by counting the number of ways the event can happen and dividing that number by the total number of possible outcomes. Another example: in a well shuffled deck of cards, what is the probability of the event of drawing a three? The answer is four in fifty two, since there are four cards numbered three and there are a total of fifty two cards in a deck. The chance of the event of drawing a card in the suit of diamonds is thirteen in fifty two (there are thirteen cards of each of the four suits). The chance of the event of drawing the three of diamonds is one in fifty two.

Sometimes, the size of the total event space, the number of different possible events, is not known. In that case, you will need to observe the event system and count the number of times the event actually happens versus the number of times it could happen but doesn't. For instance, a warranty for a coffee maker is a probability statement. The manufacturer calculates that the probability the coffee maker will stop working before the warranty period ends is low. The way such a warranty is calculated involves testing the coffee maker to calculate how long the typical coffee maker continues to function. Then the manufacturer uses this calculation to specify a warranty period for the device. The actual calculation of the coffee maker's life span is made by testing coffee makers and the parts that make up a coffee maker and then using probability to calculate the warranty period.
Experiments, Outcomes and Events

The easiest way to think of probability is in terms of experiments and their potential outcomes. Many examples can be drawn from everyday experience: on the drive home from work, you can encounter a flat tire, or have an uneventful drive; the outcome of an election can include either a win by candidate A, B, or C, or a runoff.

Definition: The entire collection of possible outcomes from an experiment is termed the sample space, indicated as $\Omega$ (Omega).

The simplest (albeit uninteresting) example would be an experiment with only one possible outcome, say $A$. From elementary set theory, we can express the sample space as follows: $\Omega = \{ A \}$

A more interesting example is the result of rolling a six sided dice. The sample space for this experiment is: $\Omega = \{ 1,2,3,4,5,6 \}$

We may be interested in events in an experiment.

Definition: An event is some subset of outcomes from the sample space.

In the dice example, events of interest might include
a) the outcome is an even number
b) the outcome is less than three
These events can be expressed in terms of the possible outcomes from the experiment:
a) $\{2,4,6\}$
b) $\{ 1,2 \}$

We can borrow definitions from set theory to express events in terms of outcomes. Here is a refresher of some terminology, and some new terms that will be important later:
$\cup$ represents the Union of two events
$\cap$ represents the Intersection of two events
$\{\cdots\}^{c}$ represents the complement of an event. For instance, "the outcome is an even number" is the complement of "the outcome is an odd number" in the dice example.
$A \backslash B$ represents difference, that is, $A$ but not $B$. For example, we may be interested in the event of drawing the queen of spades from a deck of cards. This can be expressed as the event of drawing a queen, but not drawing a queen of hearts, diamonds or clubs.
$\varnothing$ or $\{\}$ represent an impossible event
$\Omega$ represents a certain event
$A$ and $B$ are called disjoint events if $A\cap B = \varnothing$

Now that we know what events are, we should think a bit about a way to express the likelihood of an event occurring. The classical definition of probability comes from the following. If we can perform our experiment over and over in a way that is repeatable, we can count the number of times that the experiment gives rise to event $A$. We also keep track of the number of times that we perform the same experiment. If we repeat the experiment a large enough number of times, we can express the probability of event $A$ as follows:

$P(A) = \frac{N_{A}}{N}$

where $N_{A}$ is the number of times event $A$ occurred, and $N$ is the number of times the experiment was repeated. Therefore the equation can be read as "the probability of event $A$ equals the number of times event $A$ occurs divided by the number of times the experiment was repeated (or the number of times event $A$ could have occurred)." As $N$ approaches infinity, the fraction above approaches the true probability of the event $A$. The value of $P(A)$ is clearly between 0 and 1. If our event is the certain event $\Omega$, then each time we perform the experiment, the event $\Omega$ is observed; $N_{\Omega} = N$ and $P(\Omega)=1$. If our event is the impossible event $\varnothing$, we know $N_{\varnothing}=0$ and $P(\varnothing) = 0$. If $A$ and $B$ are disjoint events, then whenever event $A$ is observed, it is impossible for event $B$ to be observed simultaneously.
Therefore the number of times the event $A \cup B$ occurs equals the number of times event $A$ occurs plus the number of times $B$ occurs. This can be expressed as: $N(A\cup B) = N(A) + N(B)$ Given our definition of probability, we can arrive at the following: $P(A\cup B) = P(A) + P(B)$ At this point it's worth remembering that not all events are disjoint events. For events that are not disjoint, we end up with the following probability definition. $P(A\cup B) = P(A) + P(B) - P(A\cap B)$

How can we see this by example? Well, let's consider drawing from a deck of cards. I'll define two events: "drawing a queen", and "drawing a spade". It is immediately clear that these are not disjoint events, because you can draw a queen that is also a spade. There are four queens in the deck, so if we perform the experiment of drawing a card, putting it back in the deck and shuffling (what statisticians refer to as sampling with replacement), we will end up with a probability of $\tfrac{1}{13}$ for a queen draw. By the same argument, we obtain a probability for drawing a spade of $\tfrac{1}{4}$. The expression $P(A\cup B)$ here can be translated as "the chance of drawing a queen or a spade". If we incorrectly assume that for this case $P(A\cup B) = P(A) + P(B)$, we can simply add our probabilities together for "the chance of drawing a queen or a spade" as $\tfrac{1}{13}+\tfrac{1}{4}$. If we were to gather some data experimentally, we would find that our results would differ from the prediction -- the probability observed would be slightly less than $\tfrac{1}{13}+\tfrac{1}{4}$. Why? Because we're counting the queen of spades twice in our expression, once as a spade and again as a queen. We need to count it only once, as it can only be drawn with probability $\tfrac{1}{52}$.

Still confused? Proof: If $A$ and $B$ are not disjoint, we have to avoid the double counting problem by exactly specifying their union. $A \cup B = A \cup (B \backslash A)$ so $P(A \cup B) = P(A \cup (B \backslash A))$. Now $A$ and $B \backslash A$ are disjoint sets, so we can use the definition of disjoint events from above to express our desired result: $P(A \cup B) = P(A) + P(B \backslash A)$ We also know that $P(B \backslash A) = P(B) - P(B\cap A)$, hence $P(A \cup B) = P(A) + P(B) - P(B\cap A)$ Whew! Our first proof. I hope that wasn't too dry.

Conditional Probability

Many events are conditional on the occurrence of other events. Sometimes this coupling is weak. One event may become more or less probable depending on our knowledge that another event has occurred. For instance, the probability that your friends and relatives will call asking for money is likely to be higher if you win the lottery. In my case, I don't think this probability would change.

Let's get formal for a second and remember our original definition of probability. $P(A) = \frac{N_{A}}{N}$ Consider an additional event $B$, and a situation where we are only interested in the probability of the occurrence of $A$ when $B$ occurs. One way to get at this probability is to perform a set of experiments (trials) and only record our results when the event $B$ occurs. In other words $\frac{N_{A\cap B}}{N_{B}}$ We can divide through on top and bottom by $N$, the total number of trials, to get $P(A\cap B)/P(B)$. We define this as 'conditional probability': $P(A|B) = \frac{P(A\cap B)}{P(B)}$ which when spoken takes the sound "probability of $A$ given $B$."
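The queen-or-spade arithmetic, and the conditional probability just defined, can both be checked by brute-force counting. A short Python sketch (ours, for illustration; rank 12 stands in for the queen):

from fractions import Fraction

suits = ["clubs", "diamonds", "hearts", "spades"]
deck = [(rank, suit) for rank in range(1, 14) for suit in suits]

def P(cards):
    return Fraction(len(cards), len(deck))

queens = {c for c in deck if c[0] == 12}
spades = {c for c in deck if c[1] == "spades"}

print(P(queens) + P(spades))                        # naive sum: 17/52 (too big)
print(P(queens) + P(spades) - P(queens & spades))   # inclusion-exclusion: 4/13
print(P(queens | spades))                           # direct count agrees: 4/13
print(P(queens & spades) / P(spades))               # P(queen | spade) = 1/13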
Bayes' Law

An important theorem in statistics is Bayes' Law, which states that if we partition the probability space $\Omega$ into disjoint events $B_1,...,B_k$, then $P(B_i|A) = \frac{P(A|B_i)P(B_i)}{\sum_{n=1}^kP(A|B_n)P(B_n)}.$ As a proof, first note that since $B_1,...,B_k$ partition $\Omega$ into disjoint pieces, $P(A) = \sum_{n=1}^k P(A|B_n)P(B_n).$ The theorem then follows by substituting into the conditional probability identity $P(B_i|A) = \frac{P(A\cap B_i)}{P(A)}.$ For a more detailed explanation, see http://en.wikipedia.org/wiki/Bayes'_theorem

Two events $A$ and $B$ are called independent if the occurrence of one has absolutely no effect on the probability of the occurrence of the other. Mathematically, this is expressed as: $P(A\cap B) = P(A)P(B)$.

Random Variables

It's usually possible to represent the outcomes of experiments in terms of integers or real numbers. For instance, in the case of conducting a poll, it becomes a little cumbersome to present the outcomes for each individual respondent. Let's say we poll ten people for their voting preferences (Republican - R, or Democrat - D) in two different electoral districts. Our results might look like $\{RRRDRRDRRR\}$ and $\{DDDDDRDDDD\}$ But we're probably only interested in the overall breakdown in voting preference for each district. If we assign an integer value to each outcome, say 0 for Democrat and 1 for Republican, we can obtain a concise summary of voting preference by district simply by adding the results together.

Discrete and Continuous Random Variables

There are two important subclasses of random variables: discrete random variables (DRV) and continuous random variables (CRV). Discrete random variables take only countably many values: we can list the set of all possible values that a discrete random variable can take, even though that list may be infinite. In other words, the number of possible values in the set is finite or countably infinite. If the possible values that a DRV X can take are a0, a1, a2, ..., an, the probability that X takes each one is p0=P(X=a0), p1=P(X=a1), p2=P(X=a2), ..., pn=P(X=an). All these probabilities are greater than or equal to zero, and together they sum to one.

For continuous random variables, we cannot list all the possible values the variable can take, because there are uncountably many of them (think of every real number in an interval). It follows that there is no use in calculating the probability of each value separately: the probability that the variable takes any one particular value is zero, P(X=x)=0, and we work instead with the probabilities of intervals of values.

Distribution Functions

Exercise: show that if $B \subseteq A$, then $P(A \backslash B) = P(A) - P(B)$.
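Returning to Bayes' Law above: it is a one-liner in code. A sketch with hypothetical numbers (ours, chosen only to illustrate the formula): a screening test for a condition carried by 1% of a population, with made-up hit and false-alarm rates.

priors = {"has": 0.01, "lacks": 0.99}        # P(B_i): a partition of Omega
likelihood = {"has": 0.95, "lacks": 0.08}    # P(A | B_i), A = "test positive"

P_A = sum(likelihood[b] * priors[b] for b in priors)             # total probability
posterior = {b: likelihood[b] * priors[b] / P_A for b in priors}
print(posterior["has"])    # P(has | positive) is only about 0.11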
{"url":"http://en.wikiversity.org/wiki/Introduction_to_Statistics","timestamp":"2014-04-16T10:49:33Z","content_type":null,"content_length":"53897","record_id":"<urn:uuid:6331cded-5b45-4164-97d8-ae391f483462>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
problem solving

February 17th 2009, 09:06 PM #1
Jan 2009

hey guys, so I'm stuck again. The question is: a square has an area of 400 cm²; what is the length of each side? I know it's pretty easy but I'm sort of clueless. Thanks for any help, and I apologize if I'm in the wrong forum.

February 17th 2009, 09:20 PM #2
Junior Member Feb 2009

Area is generally $A=l \cdot w$. But since you know it's a square, it's $A=(w \cdot w)$ or $A=(l \cdot l)$, whichever you prefer. Then you put the numbers you know into the equation: $400=x^2$. Then remove the ^2 (which is called "squared" for a reason) by taking the square root of both sides: $\sqrt{400}=x$. I'm sure you can do the rest.

Last edited by ixo; February 17th 2009 at 09:31 PM.
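A quick check of the arithmetic (our addition, not part of the thread):

import math
print(math.sqrt(400))   # 20.0, so each side of the square is 20 cm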
{"url":"http://mathhelpforum.com/math-topics/74209-problem-solving.html","timestamp":"2014-04-20T10:34:14Z","content_type":null,"content_length":"32285","record_id":"<urn:uuid:26448b65-c530-4703-ab06-c8d4ad8b4c44>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
Admissible Sets and Structures: An Approach to Definability Theory

Jon Barwise. Perspectives in Mathematical Logic, Volume 7. Berlin: Springer-Verlag, 1975.

CiteSeerX – Scientific documents that cite the following paper: Admissible sets and structures: An approach to definability theory.

Chapter I: Admissible Set Theory – Project Euclid. The Admissible Cover and its Properties: there are many admissible sets which cover a given structure.

Alibris has Admissible Sets and Structures: An Approach to Definability Theory and other books by Jon Barwise, including new & used copies, rare, out-of-print signed.
{"url":"http://pyyrbo.altervista.org/admissible-sets-and-structures-an-approach-to-definability-theory-e-book/","timestamp":"2014-04-17T09:33:56Z","content_type":null,"content_length":"16959","record_id":"<urn:uuid:985e54b5-fcd6-435f-881c-719b036e7859>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
Shortest Path – Python

August 24, 2010 – 2:26 pm

Problem: Write a python function to calculate the shortest path given a start node (or vertex), an end node and a graph. Use Dijkstra's algorithm. Return the distance from the start node to the end node, as well as the path taken to get there.

The implementation below sticks pretty closely to the algorithm description in the wikipedia entry, which I turned into something a little more pseudo-code-ish to help me implement it:

Initial state:
1. give nodes two properties - node.visited and node.distance
2. set node.distance = infinity for all nodes except the start node which is set to zero.
3. set node.visited = false for all nodes
4. set current node = start node.

Current node loop:
1. if current node = end node, finish and return current.distance & path
2. for all unvisited neighbors, calc their tentative distance (current.distance + edge to neighbor).
3. if tentative distance < neighbor's set distance, overwrite it.
4. set current.isvisited = true.
5. set current = remaining unvisited node with smallest node.distance

Here's my implementation - it's recursive, as suggested by the algorithm description:

import sys

def shortestpath(graph, start, end, visited=[], distances={}, predecessors={}):
    """Find the shortest path between start and end nodes in a graph"""
    # detect if it's the first time through, set current distance to zero
    if not visited:
        distances[start] = 0
    if start == end:
        # we've found our end node, now walk the predecessors back to build
        # the path, and return
        path = []
        while end != None:
            path.append(end)
            end = predecessors.get(end, None)
        return distances[start], path[::-1]
    # process neighbors as per algorithm, keep track of predecessors
    for neighbor in graph[start]:
        if neighbor not in visited:
            neighbordist = distances.get(neighbor, sys.maxint)
            tentativedist = distances[start] + graph[start][neighbor]
            if tentativedist < neighbordist:
                distances[neighbor] = tentativedist
                predecessors[neighbor] = start
    # neighbors processed, now mark the current node as visited
    visited.append(start)
    # finds the closest unvisited node to the start
    unvisiteds = dict((k, distances.get(k, sys.maxint)) for k in graph if k not in visited)
    closestnode = min(unvisiteds, key=unvisiteds.get)
    # now we can take the closest node and recurse, making it current
    return shortestpath(graph, closestnode, end, visited, distances, predecessors)

if __name__ == "__main__":
    graph = {'a': {'w': 14, 'x': 7, 'y': 9},
             'b': {'w': 9, 'z': 6},
             'w': {'a': 14, 'b': 9, 'y': 2},
             'x': {'a': 7, 'y': 10, 'z': 15},
             'y': {'a': 9, 'w': 2, 'x': 10, 'z': 11},
             'z': {'b': 6, 'x': 15, 'y': 11}}
    print shortestpath(graph, 'a', 'a')
    print shortestpath(graph, 'a', 'b')

(0, ['a'])
(20, ['a', 'y', 'w', 'b'])

You'll see I turned the example in wikipedia into a graph for the test case in the code above. I found this useful about how to represent and process graphs in python: http://www.python.org/doc/essays/graphs.html. I think this was written by Guido himself.

There's a bunch of interesting implementations on the web, including two here. More elegant, but it took me a while to understand them.

7 Responses to "Shortest Path – Python"

Hey there, I was wondering... This looks pretty cool and I'd like to know how to make it work with cities and distances like this:

NewYork Chicago 200
Chicago LA 100
Chicago SanFrancisco 1000
SanFrancisco Florida 1400

This means that the distance from NewYork to Chicago is 200, Chicago to LA is 100, etc. Note: Chicago to NewYork is unknown, we only know NewYork to Chicago. Then the user inputs 'END' and in each line the starting city and finishing city, respectively.
f = []
while True:
    a = raw_input()
    if a == 'END':
        break
    f.append(a)
city1 = raw_input()
city2 = raw_input()

This would work and create an array with all of them. My question is how to "convert" that array to a dictionary usable by this function? Here's an example of how the array would be:

NewYork Chicago 200
Chicago LA 100
Chicago SanFrancisco 1000
SanFrancisco Florida 1400

['NewYork Chicago 200', 'Chicago LA 100', 'Chicago SanFrancisco 1000', 'SanFrancisco Florida 1400']

Thanks much for your time.

By Omicron Alpha on Mar 3, 2011

Can I get your email address? I need help to find out the maximum capacity path from a source node to all other nodes.

By Dip on Apr 28, 2011

I think you have a bug. If start == end at the first iteration:

print shortestpath(graph,'a','a')
Traceback (most recent call last):
  File "./shortpath.py", line 39, in
    print shortestpath(graph,'a','a')
  File "./shortpath.py", line 13, in shortestpath
    return distances[start], path[::-1]
KeyError: 'a'

You need to init distances[] before entering the while() loop.

By Int-0 on May 31, 2011

Thanks Int-0, now fixed. My email address is in the about page of the blog.

By nolfonzo on Jun 18, 2011

You should use float("inf") (aka Python's infinity) instead of sys.maxint (a fixed value). Remember that in Python ints are automatically promoted to longs when required, so you can easily exceed maxint on a graph without overflows.

By Spayder26 on Jun 28, 2011

If I want all shortest paths, how do I edit this?

By nat on Nov 30, 2011

There is a bug in your code. Using mutable objects as default values in your function definition may cause it to execute correctly only the first time it is called in a program; any subsequent calls that receive different input data may fail or return incorrect values. This is because function arguments that are mutable objects will carry over their contents to subsequent calls of this function if those arguments are omitted; they are not re-initialized to the default values in your function definition, as you are intending. This behaviour is by design; see the Python documentation on default argument values.

This behaviour can be avoided by not relying on the default argument values when calling the function, i.e. always include all argument values that are mutable.

By Boogs on Dec 24, 2012
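For what it's worth, here is one way to answer Omicron Alpha's conversion question. This is a sketch of ours, not from the original thread; it assumes one-way edges exactly as described, and it makes sure every city appears as a key because shortestpath() iterates over graph's keys:

def build_graph(edges):
    """Turn edge strings like 'NewYork Chicago 200' into the nested
    dict-of-dicts format that shortestpath() expects."""
    graph = {}
    for line in edges:
        src, dst, dist = line.split()
        graph.setdefault(src, {})[dst] = int(dist)
        graph.setdefault(dst, {})   # every node must appear as a key
    return graph

edges = ['NewYork Chicago 200', 'Chicago LA 100',
         'Chicago SanFrancisco 1000', 'SanFrancisco Florida 1400']
print build_graph(edges)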
{"url":"http://rebrained.com/?p=392","timestamp":"2014-04-19T07:59:16Z","content_type":null,"content_length":"25434","record_id":"<urn:uuid:bd01f1aa-887e-481f-9036-ecde2efe7d64>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
Theorem 1 Help for Probability Concepts, Engineering - Transtutors

The probability of occurrence of an event is a number lying between 0 and 1.

Let S be the sample space and E be an event. Then

Φ ⊆ E ⊆ S
⇒ n(Φ) ≤ n(E) ≤ n(S)
or 0 ≤ n(E) ≤ n(S)
or 0/n(S) ≤ n(E)/n(S) ≤ n(S)/n(S)   [dividing by n(S)]
or 0 ≤ P(E) ≤ 1   [since P(E) = n(E)/n(S)]

(i) If Φ is the impossible event, then P(Φ) = n(Φ)/n(S) = 0/n(S) = 0.
(ii) If S is the sure event, then P(S) = 1.
(iii) P(E) = 0 ⇔ E = Φ and P(E) = 1 ⇔ E = S.

Live Online Email Based Homework Assignment Help in Probability Theorem

Transtutors comprises highly qualified and certified teachers, college professors, and subject professionals in various subjects like mathematics. All our probability theorem tutors are highly experienced, can clear your doubts regarding the probability theorem, and can explain the different concepts to you effectively.
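As a quick illustration of the theorem (our addition, not part of the original page), one can enumerate every event E with Φ ⊆ E ⊆ S for a die and confirm the bounds:

from itertools import combinations
from fractions import Fraction

S = (1, 2, 3, 4, 5, 6)                      # sample space of a fair die
events = [c for r in range(len(S) + 1)
            for c in combinations(S, r)]    # all 2**6 = 64 events, from () to S

probs = [Fraction(len(E), len(S)) for E in events]
print(min(probs), max(probs))               # 0 and 1, attained only by E = Φ and E = S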
{"url":"http://www.transtutors.com/homework-help/mathematics/probability-concepts/theorem1.aspx","timestamp":"2014-04-18T08:13:29Z","content_type":null,"content_length":"83507","record_id":"<urn:uuid:72a7998d-fb27-41cf-9423-81289cbf1b72>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00128-ip-10-147-4-33.ec2.internal.warc.gz"}
Properties of the Dirac point and Topological Insulators

I apologise for being too vague in my initial question; I think my confusion with the subject came through. I'm aware of the dissipationless conduction of electrons in the surface state, but I was hoping for an explanation of some of the other properties predicted for the electrons that lie in this surface state, and also an explanation of the properties of electrons that lie exactly at the Dirac point. For example, would the electrons at the Dirac point lie within the conduction band, the valence band, or neither? Or is it more like a node, where there can't be occupancy?

In this paper by Robert Cava, he states of the electrons in the surface state: 'their energy quantization is more Dirac-like (i.e. photon-like) than bulk-electron-like. These states have inspired predictions of new kinds of electronic devices and exotic physics, including proposals for detecting a long sought neutral particle obeying Fermi statistics called the "Majorana Fermion".'

Why are they 'photon-like'? Is this to do with the spin-locked states? I.e. like Cooper pairs.
{"url":"http://www.physicsforums.com/showthread.php?s=443f0b2b5b60627d5498a5bfd0e6d26b&p=4631426","timestamp":"2014-04-16T04:21:38Z","content_type":null,"content_length":"32212","record_id":"<urn:uuid:e5f405a9-7c50-43ca-a727-c8c0832c8653>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
Cambria Heights
Jamaica, NY 11434

Miss Gil: Cares & Gets Results - Regents, SAT & NYS ELA & Math Exams

...have been tutoring for a while, and I really enjoy doing so. My students vary from Kindergarten through grade 12, as well as college students for Reading and Writing. I tutor a variety of subjects but mainly Mathematics and English, Reading, and Writing for elementary...

Offering 10+ subjects including algebra 1
{"url":"http://www.wyzant.com/Cambria_Heights_Algebra_tutors.aspx","timestamp":"2014-04-24T10:25:46Z","content_type":null,"content_length":"59369","record_id":"<urn:uuid:e2f837e9-1ec3-4838-8ee2-b86e2af4661f>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Fail

1. Wolfram Alpha was down last night.
2. I got stuck in an infinite loop while using L'Hôpital's rule.
3. I was computing the kernel of a matrix and got hungry for popcorn and ended up watching a movie instead.
4. I have a note from my doctor which proves I have dyscalculia.
5. I got an answer of 42 for every question.
6. I left it in the 11th dimension.
7. I already did it in a parallel universe.
8. My computer crashed while I was trying to calculate pi to the five-trillionth decimal place.
9. I started by doing 1/2 of my homework, then 1/4 of it, then 1/8 of it, then 1/16 of it... and am still working on finishing it completely.
10. I lost my homework in a nullspace and can't seem to find it.
11. My homework is isomorphic to Joe's homework, so just give me the same grade as him.
12. I did the first question and truncated the rest.
13. I was proving a ring was commutative and ended up watching LOTR all night.
14. I computed the inverse of a singular matrix and my homework blew up.

Walter Anthony has some pretty neat optical illusions on his website. The one below is very cool and has been nicknamed the "purple nurple".

Another optical illusion I like (not sure who created this one) follows.

A geometry teacher used the example "assassinating President Barack Obama" as a way to teach angles to his geometry students. He was teaching the students about parallel lines and angles, and used the example of where to stand and aim if shooting Obama.

He said: "If you're in this building, you would need to take this angle to shoot the president." Authorities were called and the high school math teacher was questioned by the Secret Service, but was later let go as he didn't pose a threat.
{"url":"http://math-fail.com/page/139","timestamp":"2014-04-18T23:15:44Z","content_type":null,"content_length":"61270","record_id":"<urn:uuid:d1459eb1-f6f8-4e07-b1fb-bd6281004194>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00201-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Missing on x-variable - solution?

Cristian Dogaru posted on Wednesday, August 06, 2008 - 2:59 pm

Dear Linda and Bengt, I need an opinion on an x-variable missing problem. In a model that I am running, an observed (x) variable greatly reduces the sample analyzed (combined with other x-variables, the sample goes down by about 50%). I am thinking that a solution would be to make it endogenous, by making it latent (maybe with a formative-indicator factor). The problem is like this: I have a variable, match, obtained by summing three binary variables (tmatch, scmatch, stmatch). All three are important, but they have a great deal of missing values. I am thinking of a model like this:

matchl by;
t by tmatch;
sc by scmatch;
st by stmatch;
matchl on t sc st;
y1 on matchl;
y2 on matchl;

(these last two are needed for identification, according to Jarvis, MacKenzie, & Podsakoff, 2003; they can be two extra indicators or, actually, parts of the larger model). Or I can use define: matchl=t+sc+st; and drop matchl by;

Does it sound like a reasonable solution? Is this what you mean by "Covariate missingness can be modeled if the covariates are brought into the model and distributional assumptions such as normality are made about them." in the Mplus manual, Chapter 1? Thank you in advance. Cristian Dogaru

Linda K. Muthen posted on Wednesday, August 06, 2008 - 3:14 pm

I think it is more straightforward to bring it into the model by mentioning the variance of the variable in the MODEL command. For example,

y ON x;
x;

Cristian Dogaru posted on Wednesday, August 06, 2008 - 4:35 pm

Thank you! One of my advisers keeps telling me that I tend to see things too complicated...

Dana Garbarski posted on Tuesday, February 01, 2011 - 5:46 pm

I'm interested in modeling the over-time relationship between two binary variables in a variety of models: autoregressive cross-lagged model, parallel process growth model, and an autoregressive latent trajectory model. I will have missing data on each of the repeated measures as well as at least some of the covariates that will be used as control variables. I'm planning to use weighted least squares estimation for these analyses rather than ML. I have come across 3 possible solutions for dealing with missing data for the binary dependent variables and the covariates on the discussion board, and I'm hoping for some guidance on which would be the best given my situation (or an alternative suggestion if I've overlooked something):

1) Multiple imputation for both of the binary dependent variables as well as the covariates.
2) Allow WLS to estimate missingness as a function of the covariates for the two binary dependent variables, and use multiple imputation for just the covariates.
3) Allow WLS to estimate missingness as a function of the covariates for the two binary dependent variables, and mention the variances of the covariates in the MODEL command.

Bengt O. Muthen posted on Tuesday, February 01, 2011 - 6:04 pm

2) and 3) would not allow for missingness predicted by the binary outcomes before dropout. Alternative 1) seems most reasonable. You may also want to try a Bayesian approach which, like ML, gives a full-information analysis.

Dana Garbarski posted on Thursday, February 03, 2011 - 5:36 am

Thank you for your help! I'll also look into the Bayesian approach.

Rod Bond posted on Tuesday, April 10, 2012 - 5:58 am

I have missing data on covariates and am trying to deal with the problem by bringing the x variable(s) into the model.
When I bring some variables into the model, however, I get the following warning: "THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES MAY NOT BE TRUSTWORTHY FOR SOME PARAMETERS DUE TO A NON-POSITIVE DEFINITE FIRST-ORDER DERIVATIVE PRODUCT MATRIX." I get this warning even if there is no missing data for the covariate in question, and even when I have only one covariate. For example, I run an analysis with Gender as a covariate on which there is no missing data. If I run it without bringing it into the model, it runs fine. If I bring it into the model by referring to its variance in the MODEL command, I get the warning but also identical results. I have tried rescaling Gender so that its variance is similar to that of other variables, but that makes no difference. Any ideas? Thanks

Linda K. Muthen posted on Tuesday, April 10, 2012 - 1:54 pm

This message comes about because the mean and variance of a binary variable are not orthogonal. If you intend to bring the variable into the model, you can ignore the message.

Rod Bond posted on Wednesday, April 11, 2012 - 5:54 am

Thanks for your prompt reply. That's very helpful.

Katja Haberecht posted on Thursday, November 29, 2012 - 1:24 am

I have some missing values on covariates and brought them into the model by adding them to the MODEL command as mentioned above. But, as some of them are binary variables, how can I tell Mplus which of them don't follow a normal distribution?

Bengt O. Muthen posted on Thursday, November 29, 2012 - 6:48 am

For that you should use multiple imputation. But ignoring the binary aspect may not be a big sin unless you also have categorical DVs in your model.

Tracy Witte posted on Wednesday, April 03, 2013 - 10:40 am

I have a couple of questions about bringing predictor variables into the model so that missing data on predictor variables can be handled with FIML:

1) I realize that doing so means that the same assumptions for the rest of the model will now be applied to the predictor variables, which may not be tenable. However, if one is using MLR as the estimator, is it less problematic to include predictor variables in the model, even if they deviate somewhat from a normal distribution?

2) Does including predictor variables in the model change the substantive interpretation of the results? Or will any differences in parameter estimates primarily be a function of the degree to which the model assumptions are untenable for the predictor variables?

3) This issue is addressed on the following website (http://www.ats.ucla.edu/stat/mplus/faq/fiml_counts.htm). I noticed that they included the predictors in the model by modeling the intercepts, rather than the variances of the predictor variables. Is this because this model included count variables, rather than continuous variables?

4) In general, is it considered better to use multiple imputation if you have missing data for predictor variables? That is, how "experimental" is it considered to be to use this approach with FIML?

Thank you very much for your help!

Linda K. Muthen posted on Wednesday, April 03, 2013 - 4:58 pm

1. It may make it less problematic.
2. No.
3. You can mention any parameter of the covariate. It does not matter which one.
4. The two methods are asymptotically equivalent. Imputation may be better for categorical variables. Imputation has fewer testing options.

Stine Hoj posted on Wednesday, January 22, 2014 - 5:04 pm

I am wondering if you could help me to understand why bringing a covariate into the model has a pronounced effect on model fit statistics.
I have a linear growth model for a continuous outcome (Y1) measured at 3 time points. The model includes one time-varying covariate (X1) and several time-invariant covariates (Z1-Z7).

i s | Y11@0 Y12@1 Y13@2;
i s ON Z1-Z7;
Y11 on X11;
Y12 on X12;
Y13 on X13;

(RMSEA = 0.05, CFI = 0.968, SRMR = 0.036)

Approximately 20% of the sample are missing values on X1 at some time point. However, if I bring X1 into the model to retain these observations, the fit statistics are markedly worse.

i s | Y11@0 Y12@1 Y13@2;
i s ON Z1-Z7;
Y11 on X11;
Y12 on X12;
Y13 on X13;
X11 X12 X13;

(RMSEA = 0.11, CFI = 0.658, SRMR = 0.113)

Any guidance would be appreciated.

Linda K. Muthen posted on Thursday, January 23, 2014 - 10:29 am

Please send the two outputs and your license number to support@statmodel.com. Include TECH1 in both outputs.

Malte Jansen posted on Thursday, February 27, 2014 - 7:28 am

Dear Mplus Team, it would be great if you could give me some advice with regard to the following situation: I am trying to regress student achievement on a number of predictors and several interactions between continuous predictors. The predictors include binary variables (e.g. sex), continuous variables with one indicator, and continuous variables with several indicators. The students are nested in classes, but all predictors are on the individual student level. The dataset includes some missing values on both the outcome and all three kinds of predictor variables. I am unsure which modeling approach to use because:

1. I would like to use FIML to handle the missing data as it's more convenient than multiple imputation. When I use FIML (by estimating the variances of all predictors in the MODEL statement), all manifest predictors are treated as latent variables with one indicator, right? I guess this might be problematic for binary predictors such as sex. Would you recommend using FIML on the binary variables as well?

2. If I use FIML only for continuous predictors and do not include their covariances with the binary predictors in the model, a bad model fit results. If I include the covariances, again FIML is used on the binaries. Do I necessarily have to include the covariances?

[part 2 coming up]

Malte Jansen posted on Thursday, February 27, 2014 - 7:28 am
In actual fact, these binary covariates are missing almost no data; it is just the one continuous covariate that is missing ~20% of responses. I am aware that I need to bring all of the covariates into the model, not just a subset, but I am wondering how to assess what the implications of this might be. Thank you. Linda K. Muthen posted on Thursday, March 27, 2014 - 1:58 pm I don't know of any references on this. In our experience we don't think it has too much of an effect. You could do a Monte Carlo simulation study to investigate this. Back to top
{"url":"http://www.statmodel.com/discussion/messages/22/3463.html?1390501775","timestamp":"2014-04-17T00:52:34Z","content_type":null,"content_length":"47449","record_id":"<urn:uuid:d624492c-29fa-4bde-8812-ee52296ce7a1>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
Nuclear Structure and Dynamics at Short Distances

R. Ent, Jefferson Laboratory
G. A. Miller, University of Washington
M. Sargsian, Florida International University
J. P. Vary, Iowa State University

Program Coordinator: Inge Dolan (206) 685-4286

INT Workshop INT-13-52W
Nuclear Structure and Dynamics at Short Distances
February 11-22, 2013

Recently there has been significant experimental progress in observing the effects of two-nucleon correlations, in studying quasi-elastic scattering by nuclei at large Bjorken x, and in detailing the nuclear dependence of deep-inelastic lepton scattering. In particular, quantitative relationships have been found between these apparently disconnected processes that point to a local-density short-range nuclear effect. The current experimental situation is intriguing because it touches basic issues regarding the physics of nuclei at short distances as well as the physics of hadrons in a strongly-interacting field. In parallel, recent progress in ab-initio and modern nuclear structure calculations, in lattice QCD calculations of nuclear interactions, and in high-energy approaches to describing the quark-gluon structure of nuclei provides significant theoretical and computational capabilities to resolve the basic questions underlying the surprising phenomenological relations. These include:

• Can the many-body effects appearing in the interaction current be separated from those appearing in the wave function?
• Can conventional nuclear theory provide calculations of the observables measured in coincidence experiments?
• What is the relation between two-nucleon correlations and the EMC effect?
• What is the role of relativistic effects in the present context?
• What experiments can determine the role of three-nucleon correlations?
• What is the role of quark, as opposed to nucleon or meson, effects in understanding the plateau and the EMC effect?
• Which other reactions can be used to elucidate the effects of short-ranged correlations?
• How can the EMC effect be studied in semi-inclusive DIS?
• How do hadronization effects reveal themselves in semi-inclusive DIS?

The first two days are planned mainly as an organized workshop with presentations. The remaining three days of the first week, and four days of the second week, are intended for topical collaboration, discussions and informal presentations, all aimed at bringing together the hadronic physics and nuclear structure communities to propel a better theoretical underpinning of nuclear structure and dynamics at short distances, and the excitement of recent phenomenological observations. During the second week, one day will be dedicated to a collaboration meeting of an ongoing Jefferson Lab data-mining effort to further study nuclear structure and dynamics.

There is a registration fee of $55 to attend this workshop. You may pay in cash or by check drawn on a U.S. bank. Sorry, we cannot accept credit cards. Please make your payment when you arrive at the INT.
{"url":"http://www.int.washington.edu/PROGRAMS/13-52w/","timestamp":"2014-04-16T15:07:47Z","content_type":null,"content_length":"5167","record_id":"<urn:uuid:4a692cc0-2e09-4da5-8788-3c4ad89de27b>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00264-ip-10-147-4-33.ec2.internal.warc.gz"}
The function S_n(x) = (x+1)^n - x^n is very useful for calculating sums. That's because its sum telescopes:

\sum_{x=0}^{N-1} S_n(x) = N^n

If you want to calculate \sum_{x=1}^{N} x^i, you may express f(x) = x^i as

x^i = c \cdot S_{i+1}(x) + r(x)

where r(x) is the "remainder", which has lower degree than f(x).

Example for f(x) = x^1:

S_2(x) = (x+1)^2 - x^2 = x^2 + 2x + 1 - x^2 = 2x + 1
S_2(x)/2 = x + 1/2
f(x) = x^1 = x = 0.5 S_2(x) - 0.5 = 0.5 (S_2(x) - 1)

There's also a binomial proof, which is more usable and universal, but it's harder too.
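A quick numeric check of the i = 1 case (our addition, not part of the original post): since x = 0.5 S_2(x) - 0.5, the S_2 part telescopes and the sum collapses to the familiar N(N+1)/2 formula.

S2 = lambda x: (x + 1)**2 - x**2      # S_2(x) = 2x + 1

N = 100
total = sum(0.5 * S2(x) - 0.5 for x in range(1, N + 1))   # sum of x for x = 1..N
print(total, N * (N + 1) / 2)         # both print 5050.0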
{"url":"http://www.mathisfunforum.com/post.php?tid=3146&qid=31756","timestamp":"2014-04-16T18:57:56Z","content_type":null,"content_length":"21314","record_id":"<urn:uuid:c12f30bb-7230-4691-883c-ad3f28c40b80>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
Generalized Scattering Coefficients

Generalizing the scattering coefficients at a multi-tube intersection (§C.12) by replacing the usual real tube wave impedance with the complex wave impedance from Eq. (C.149), or, as a special case, the conical-section wave impedance of Eq. (C.148), we obtain the junction-pressure phasor [440], expressed in terms of the wave admittance and the incoming pressure-wave phasor of each branch.
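The displayed equations did not survive extraction. As a hedge, here is a small NumPy sketch of the standard parallel-junction pressure relation that such generalized scattering coefficients appear to be describing; it follows from continuity of pressure and conservation of volume flow at the junction, and the function and variable names are ours rather than the source's:

import numpy as np

def junction_pressure(gammas, p_plus):
    """Junction pressure phasor for N waveguide branches meeting in parallel:
        p_J = 2 * sum(Gamma_i * p_i+) / sum(Gamma_i)
    where Gamma_i is the (possibly complex) wave admittance of branch i and
    p_i+ is the incoming pressure-wave phasor of branch i."""
    gammas = np.asarray(gammas, dtype=complex)
    p_plus = np.asarray(p_plus, dtype=complex)
    return 2.0 * np.sum(gammas * p_plus) / np.sum(gammas)

# Sanity check: two identical branches behave like an unbroken tube, so a
# wave of 0.5 arriving from each side gives a junction pressure of 1.
print(junction_pressure([1.0, 1.0], [0.5, 0.5]))   # (1+0j)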
{"url":"https://ccrma.stanford.edu/~jos/pasp/Generalized_Scattering_Coefficients.html","timestamp":"2014-04-18T06:09:46Z","content_type":null,"content_length":"10913","record_id":"<urn:uuid:c3b0bd31-695b-4102-8531-780635eb88c4>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
4aSA6. Scattering by elastic elliptical cylinders of different aspect ratio.

Session: Thursday Morning, May 16
Time: 9:30
Author: Jacques Lanfranchi
Author: Gerard Maze
Location: Lab. d'Acoust. Ultrason. et d'Electron., LAUE URA CNRS 1373, Univ. du Havre, Pl. Robert Schuman, 76610 Le Havre, France

The scattering by elliptical cylinders of infinite length insonified by an incident beam perpendicular to their axis is investigated. The method of isolation and identification of resonances (MIIR) allows one to plot resonance spectra. These spectra are obtained from three targets characterized by their major-radius/minor-radius ratio, equal to 4/3, 4/2, and 4/1. The resonance spectra obtained from a circular cylinder excited under the same conditions show peaks related to circumferential waves which, for particular frequencies, constitute standing waves. In this case, the phase velocity depends only on the frequency. Contrary to the circular cylinder, resonance spectra obtained by an experimental monostatic method depend on the azimuthal position of the transducer. Some resonances vanish at certain positions. In this case, the phase velocity and the coupling coefficient are functions of the curvature radius and the frequency. The wavelength is not identical around the circumference of the elliptical cylinder. To explain the experimental results, a phase-matching model is developed to determine the resonance frequencies [H. Uberall et al., J. Acoust. Soc. Am. 81, 312--315 (1987)]. This method allows one to represent the vibration state on the circumference. An integral calculation applied to the latter result gives the far-field pressure.

from ASA 131st Meeting, Indianapolis, May 1996
{"url":"http://www.auditory.org/asamtgs/asa96ind/4aSA/4aSA6.html","timestamp":"2014-04-18T19:16:21Z","content_type":null,"content_length":"2210","record_id":"<urn:uuid:1320110f-261e-4301-a1f0-4c155ec7cdaa>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
Michael can wash and detail his car in 3 hours, and Jeanne can wash and detail her car in 5 hours. If they work together, how long will it take them to wash and detail both cars?
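The page preserves the question but no answer. A hedged sketch of the usual combined-rate reading (Michael works at 1/3 car per hour, Jeanne at 1/5, and together they must finish 2 cars' worth of work):

from fractions import Fraction

rate = Fraction(1, 3) + Fraction(1, 5)   # cars per hour when working together
hours = Fraction(2) / rate               # two cars of work to finish
print(hours)                             # 15/4, i.e. 3 hours 45 minutes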
{"url":"http://openstudy.com/updates/502f10aee4b0ac288316531f","timestamp":"2014-04-18T08:13:32Z","content_type":null,"content_length":"103719","record_id":"<urn:uuid:bb4cd153-4b5b-460e-bfe0-58c5a64d4bf2>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Constants Return a scalar, matrix, or N-dimensional array whose elements are all equal to the pure imaginary unit, defined as sqrt (-1). I, and its equivalents i, j, and J, are functions so any of the names may be reused for other purposes (such as i for a counter variable). When called with no arguments, return a scalar with the value i. When called with a single argument, return a square matrix with the dimension specified. When called with more than one scalar argument the first two arguments are taken as the number of rows and columns and any further arguments specify additional matrix dimensions. The optional argument class specifies the return type and may be either "double" or "single". See also: e, pi, log, exp.
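A short usage sketch consistent with the description above (our illustration; printed output is approximate and depends on the display format):

I                   # scalar: ans = 0 + 1i
I(2)                # 2-by-2 matrix with every element equal to i
I(2, 3)             # 2 rows, 3 columns of i
I(2, 2, "single")   # same shape, but single precision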
{"url":"http://www.gnu.org/software/octave/doc/interpreter/Mathematical-Constants.html","timestamp":"2014-04-18T06:56:54Z","content_type":null,"content_length":"17184","record_id":"<urn:uuid:c8e5a7d7-dbce-4c3c-bb04-df2e756881bb>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebraic and Geometric Topology 5 (2005), paper no. 27, pages 653-690.

Differentials in the homological homotopy fixed point spectral sequence

Robert R. Bruner, John Rognes

Abstract. We analyze in homological terms the homotopy fixed point spectrum of a T-equivariant commutative S-algebra R. There is a homological homotopy fixed point spectral sequence with E^2_{s,t} = H^{-s}_{gp}(T; H_t(R; F_p)), converging conditionally to the continuous homology H^c_{s+t}(R^{hT}; F_p) of the homotopy fixed point spectrum. We show that there are Dyer-Lashof operations beta^epsilon Q^i acting on this algebra spectral sequence, and that its differentials are completely determined by those originating on the vertical axis. More surprisingly, we show that for each class x in the E^{2r}-term of the spectral sequence there are 2r other classes in the E^{2r}-term (obtained mostly by Dyer-Lashof operations on x) that are infinite cycles, i.e., survive to the E^infty-term. We apply this to completely determine the differentials in the homological homotopy fixed point spectral sequences for the topological Hochschild homology spectra R = THH(B) of many S-algebras, including B = MU, BP, ku, ko and tmf. Similar results apply for all finite subgroups C of T, and for the Tate and homotopy orbit spectral sequences. This work is part of a homological approach to calculating topological cyclic homology and algebraic K-theory of commutative S-algebras.

Keywords. Homotopy fixed points, Tate spectrum, homotopy orbits, commutative S-algebra, Dyer-Lashof operations, differentials, topological Hochschild homology, topological cyclic homology, algebraic K-theory

AMS subject classification. Primary: 19D55, 55S12, 55T05. Secondary: 55P43, 55P91.

E-print: arXiv:math.AT/0406081
DOI: 10.2140/agt.2005.5.653

Submitted: 2 June 2004. (Revised: 3 June 2005.) Accepted: 21 June 2005. Published: 5 July 2005.

Robert R. Bruner, John Rognes
Department of Mathematics, Wayne State University, Detroit, MI 48202, USA
Department of Mathematics, University of Oslo, NO-0316 Oslo, Norway
Email: rrb@math.wayne.edu, rognes@math.uio.no
{"url":"http://www.emis.de/journals/UW/agt/AGTVol5/agt-5-27.abs.html","timestamp":"2014-04-17T15:35:14Z","content_type":null,"content_length":"4006","record_id":"<urn:uuid:7c99fb83-fd0d-4611-8a36-95894f934d07>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts tagged with equivalence class One of my projects in life is to (i) become “fluent in mathematics" in the sense that my intuition should incorporate the objects and relationships of 20th-century mathematical discoveries, and (ii) share that feeling with people who are interested in doing the same in a shorter timeframe. Inspired by the theory of Plato’s Republic that “philosopher kings” should learn Geometry—pure logic or the way any universe must necessarily work—and my belief that the shapes and feelings thereof operate on a pre-linguistic, pre-rational “gut feeling” level, this may be a worthwhile pursuit. The commercial application would come in the sense that, once you’re in a situation where you have to make big decisions, the only tools you have, in some sense, are who you have become. (Who knows if that would work—but hey, it might! At least one historical wise guy believed the decision-makers should prepare their minds with the shapes of ultimate logic in the universe—and the topologists have told us by now of many more shapes and relations.) To that end I owe the interested a few more blogposts on: • automorphisms / homomorphisms • the logic of shape, the shape of logic • breadth of functions • "to equivalence-class" which I think relate mathematical discoveries to unfamiliar ways of thinking. Today I’ll talk about the breadth of functions. If you remember Descartes’ concept of a function, it is merely a one-to-at-least-one association. “Associate” is about as blah and general and nothing a verb as I could come up with. How could it say anything worthwhile? The breadth of functions-as-verbs, I think, comes from which codomains you choose to associate to which domains. The biggest contrast I can come up with is between 1. a function that associates a non-scalar domain to a ≥0 scalar domain, and 2. a domain to itself. If I impose further conditions on the second kind of function, it becomes an automorphism. The conditions being surjectivity ≥↓ and injectivity ≤↑: coveringness ≥ ↓ If I impose those two conditions then I’m talking about an isomorphism (bijection) from a space to itself, which I could also call “turning the abstract space over and around and inside out in my hands” — playing with the space. If I biject the space to another version of itself, I’m looking at the same thing in a different way. Back to the first case, where I associate a ≥0 scalar (i.e., a “regular number” like 12.8) to an object of a complicated space, like • the space of possible neuron weightings; • the space of 2-person dynamical systems (like the “love equations”); • a space containing weird objects that twist in a way that’s easier to describe than to draw; • a space of possible things that could happen; • the space of paths through London that spend 90% of their time along the Thames; then I could call that “assigning a size to the object”. Again I should add some more constraints to the mapping in order to really call it a “size assignment”. For example continuity, if reasonable—I would like similar things to have a similar size. Or the standard definition of a metric: dist(a,b)=dist(b,a); dist(x,x)=0; no other zeroes besides dist(self,self), and triangle law. 
Since the word "size" itself could have many meanings as well, such as:

• volume
• angle measure
• likelihood
• length/height
• correlation
• mass
• how long an algorithm takes to run
• how different from the typical an observation is
• how skewed a statistical distribution is
• (the inverse of) how far I go until my sampling method encounters the farthest-away next observation
• surface area
• density
• number of tines (or "points" if you're measuring a buck's antlers)
• how big of a suitcase you need to fit the thing in (the L-∞ norm)

these different sizes would order objects differently (e.g., lungs have more surface area in less volume; fractals have more points but needn't be large to have many points; a delicate sculpture could have small mass, small surface area, large height, and be hard to fit into a box; and osmium would look small but be very heavy—heavier than gold).

Let's stay with the weighted-neurons example, because it's evocative and because posets and graphs model a variety of things. An isomorphism from graphs to graphs might be just to interchange certain wires for dots. So roads become cities and cities become roads. Weird, right? But mathematically these can be dual. I might also take an observation from depth-first versus breadth-first search from computer science (algorithm execution as trees) and apply it to a network-as-brain, if the tree-ness is sufficiently similar between the two and if trees are really a good metaphor after all for either algorithms or brains. More broadly, one hopes that theorems about automorphism groups on trees (like automorphism groups on T-shirts) could evoke interesting or useful thoughts about all the tree-like things and web-like things: be they social networks, roads, or brains.

So that's one example of a pre-linguistic "shape" that's evoked by 20th-century mathematics. Today I feel like I could do two: so how about To Equivalence-Class.

Probably due to the invention of set theory, mathematics offers a way of bunching all alike things together. This is something people have done since at least Aristotle; it's basically like Aristotle's categories.

• The set of all librarians;
• The set of all hats;
• The set of all sciences;
• Quine's (extensional) definition of the number three as "the class of all sets with cardinality three". (Don't try the "intensional" definition or "What is it intrinsically that makes three, three? What does three really mean?" unless you're trying to drive yourself insane to get out of capital punishment.)
• The set of all cars;
• The set of all cats;
• The set of all even numbers;
• The set of all planes oriented any way in 𝔸³;
• The set of all equal-area blobs in any plane 𝔸² that's parallel to the one you're talking about (but could be shifted anywhere within 𝔸³);
• The set of all successful people;
• The set of all companies that pay enough tax;
• The set of all borrowers who will make at least three late payments during the life of their mortgage;
• The set of all borrowers with between 1% and 5% chance of defaulting on their mortgage;
• The set of all Extraverted Sensing Feeling Perceivers;
• The set of all janitors within 5 years of retirement age, who have worked in the custodial services at some point during at least 15 of the last 25 years;
• The set of all orchids;
• The set of all ungulates.

The boundaries of some of these (Aristotelian, not Lawverean) categories may be fuzzy or vague—

• if you cut off a cat's leg is it still a cat? What if you shave it? What if you replace the heart with a fish heart?
• Is economics a science? Is cognitive science a science? Is mathematics a science? Is the particular idea you're trying to get a grant for scientific?

and in fact membership in any of these equivalence classes could be part of a rhetorical contest. If you already have positive associations with "science", then if I frame what I do as scientific you will perceive it as e.g. precise, valuable, truthful, honourable, accurate, important, serious, valid, worthwhile, and so on. Scientists put Man on the Moon. Scientists cured polio. Scientists discovered Germ Theory. (But did "computer scientists" or "statisticians" or "Bayesian quantum communication" or "full professors" or "mathematical élite" or "string theorists" do those things? Yet they are classed together under the STEM label. Related: engineers, artisans, scientists, and intelligentsia in Leonardo da Vinci's time.)

But even though it is an old thought-form, mathematicians have done such interesting things with the equivalence-class concept that it's maybe worth connecting the mathematical type with the everyday type and seeing where it leads you. What mathematics adds to the equivalence-class concept is the idea of "quotienting" to make a new equivalence-class. For example if you take the set of integers you can quotient it in two to get either the odd numbers or the even numbers (there is a small code sketch of this after the examples below).
Linking together the process of abstraction-from-experience—going from many particular observations of being cheated to a model of “untrustworthy person”—with the mathematical operations of • slicing off outliers, • quotienting along properties, • foliating, • considering subsets that are tamer than the vast freeness of generally-the-way-anything-can-be —formed a new vocabulary that’s helpfully guided my thinking on that subject. Ordine geometrico demonstrata! We want to take theories and turn them over and over in our hands, turn the pants inside out and look at the sewing; hold them upside down; see things from every angle; and sometimes, to quotient or equivalence-class over some property to consider a subset of cases for which a conclusion can be drawn (e.g., “all fair economic transactions” (non-exploitive?) or “all supply-demand curves such that how much you get paid is in proportion to how much you contributed” (how to define it? vary the S or the D and get a local proportionality of PS:TS? how to vary them?)). Consider abstractly a set like {a, b, c, d}. 4! ways to rearrange the letters. Since sets are unordered we could call it as well the quotient of all rearrangements of quadruples of once-and-only-once-used letters (b,d,c,a). Descartes’ concept of a mapping is “to assign” (although it’s not specified who is doing the assigning; just some categorical/universal ellipsis of agency) members of one set to members of another. • For example the Hash Map of programming: { '_why' => 'famous programmer', 'North Dakota' => 'cold place', ... } • Or to round up ⌈num⌉: not injective because many decimals are written onto the same integer. • Or to “multiply by zero” i.e. “erase” or “throw everything away”: Linear maps with det=0 :: - | /dev/null :: matter into a black hole :: (or, on Earth, a trash grinder) — protea (@isomorphisms) January 2, 2013 In this sense a bijection from the same domain to itself is simply a different—but equivalent—way of looking at the same thing. I could rename A=1,B=2,C=3,D=4 or rename A='Elsa',B='Baobab',C=√5,D='Hypatia' and end with the same conclusion or “same structure”. For example. But beyond renamings we are also interested in different ways of fitting the puzzle pieces together. The green triangle of the wooden block puzzle could fit in three rotations (or is it six rotations? or infinity right-or-left-rotations?) into the same hole. By considering all such mappings, dividing them up, focussing on the easier classes; classifying the types at all; finding (or imposing) order|pattern on what seems too chaotic or hard to predict (viz, economics) more clarity or at least less stupidity might be found. The hope isn’t completely without support either: Quine explained what is a number with an equivalence class of sets; Tymoczko described the space of musical chords with a quotient of a manifold; PDE’s (read: practical engineering application) solved or better geometrically understood with bijections; Gauss added 1+2+3+...+99+100 in two easy steps rather than ninety-nine with a bijection; …. It’s hard for me to speak to why we want groups and what they are both at once. Today I felt more capable of writing what they are. So this is the concept of sameness; let’s discuss just linear planes (or, hyperplanes) and countable sets of individual things.
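(An aside in code before moving on: the {a, b, c, d} example above in runnable Python; the renaming dictionary is mine and purely illustrative.)

from itertools import permutations

letters = ('a', 'b', 'c', 'd')
print(len(list(permutations(letters))))   # 24, i.e. 4! rearrangements

# A renaming bijection: different labels, same structure.
rename = {'a': 'Elsa', 'b': 'Baobab', 'c': 5 ** 0.5, 'd': 'Hypatia'}
print([rename[x] for x in letters])       # ['Elsa', 'Baobab', 2.236..., 'Hypatia']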
Leave it up to you or for me later, to enumerate the things from life or the physical world that “look like” these pure mathematical things, and are therefore amenable by metaphor and application of proved results, to the group theory. But just as one motivating example: it doesn’t matter whether I call my coordinates in the mechanical world of physics (x,y,z) or (y,x,z). This is just a renaming or bijection from {1,2,3} onto itself. Even more, I could orient the axes any way that I want. As long as the three are mutually perpendicular each to the other, the origin can be anywhere (invariance under an affine mapping — we can equivalence-class those together) and the rotation of the 3-D system can be anything. Stand in front of the class as the teacher, upside down, oriented so that one of the dimensions helpfully disappears as you fly straight forward (or two dimensions disappear as you run straight forward on a flat road). Which is an observation taken for granted by my 8th grade physics teacher. But in the language of group theory it means we can equivalence-class over the special linear group of 3-by-3 matrices that leave volume the same. Any rotation in 3-D is one of these. Sameness-preserving groups partition into: • permutation groups, or rearrangements of countable things, and • linear groups, or “trivial” “unimportant” “invariant” changes to continua (such as rescaling—if we added a “0” to the end of all your currency nothing would change) • conjunctions of smaller groups The linear groups—get ready for it—can all be represented as matrices! This is why matrices are considered mathematically “important”. Because we have already conceived this huge logical primitive that (in part) explains the Universe (groups) — or at least allows us to quotient away large classes of phenomena — and it’s reducible to something that’s completely understood! Namely, matrices with entries coming from corpora (fields). So if you can classify (bonus if human beings can understand the classification in intuitive ways) all the qualitatively different types of Matrices, then you not only know where your engineering numerical computation is going, but you have understood something fundamental about the logical primitives of the Universe! Aaaaaand, matrices can be computed on this fantastic invention called a computer! • "X does something whilst preserving a certain structure" • "There exist deformations of Y that preserve certain properties" • "∃ function ƒ such that P, whilst respecting Q" This common mathematical turn of phrase sounds vague, even when the speaker has something quite clear in mind. Smeet Bhatt brought up this unclarity in a recent question on Quora. Following is my answer: It depends on the category. The idea of isomorphism varies across categories. It’s like if I ask you if two things are “similar” or not. Think about a children’s puzzle where they are shown wooden blocks in a variety of shapes & colours. All the blocks that have the same shape are shape-isomorphic. All the blocks of the same colour are colour-isomorphic. All the blocks are wooden so they’re material-isomorphic. In common mathematical abstractions, you might want to preserve a property like □ still a group □ still a vector space □ still a simplicial complex □ still a plane □ still similar to the original triangle □ still sum to the same constant (symplectic) □ deforming the path without pushing it over a singularity doesn’t change the contour integral after some transformation φ. It’s the same idea: "The same in what way?"
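A quick numerical check of the volume claim above, as a Python sketch (mine; any rotation matrix would do):

import numpy as np

theta = np.pi / 6
# A rotation about the z-axis: one of the 3-by-3 matrices that leave volume the same.
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
print(np.linalg.det(R))   # 1.0 -- the determinant is exactly the volume-scaling factor

A uniform rescaling by k has determinant k³ instead, so it falls outside the volume-preserving class.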
As John Baez & James Dolan pointed out, when we say two things are "equal", we usually don’t mean they are literally the same. x=x is the most useless expression in mathematics, whereas more interesting formulæ express an isomorphism: □ “Something is the same about the LHS and RHS”. □ "They are similar in the following sense". Just what the something is that’s the same, is the structure to be preserved. A related idea is that of equivalence-class. If I make an equivalence class of all sets with cardinality 4, I’m talking about “their size is equivalent”. Of course the sets still differ in other respects: "What is the same?" and "What is different?", just like on Sesame Street. Two further comments: “structure” in mathematics usually refers to a tuple or a category, both of which mean “a space" in the sense that not only is there a set with objects in it, but also the space or tuple or category has mappings relating the things together or conveying information about the things. For example a metric space is a tuple: a set together with a distance function defined on it. (And: having a definition of distance implies that you also have a definition of the topology (neighbourhood relationships) and geometry (angular relationships) of the space.) In the case of a metric space, a structure-preserving map between metric spaces would not make it impossible to speak of distance in the target space. The output should still fulfill the metric-space criteria: distance should still be a meaningful thing to talk about after the mapping is done. I’ve got a couple drafts in my 1500-long queue of drafts expositing some more on this topic. If I’m not too lazy then at some point in the future I’ll share some drawings of structure-preserving maps (different “samenesses”) such as the ones Daniel McLaury mentioned, also on Quora: □ Category: Structure-preserving maps; Invertible, structure-preserving maps □ Groups: (group) homomorphism; (group) isomorphism □ Rings: (ring) homomorphism; (ring) isomorphism □ Vector Spaces: linear transformation, invertible linear transformation □ Topological Spaces: continuous map; homeomorphism □ Differentiable Manifolds: differentiable map; diffeomorphism □ Riemannian Manifolds: conformal map; conformal isometry The last time I was reading the Zhuangzi, I got sidetracked by footnotes about the following 白馬論: • "A white horse is not a horse." This is apparently attributable to Gongsun Long 公孙龙, a Chinese philosopher of the 3rd century B.C. = Era of Warring States 战国时代. How could such a wack philosopher be worth dignifying with a mention in the 庄子? I got my answer from page 40 of James Gleick’s The Information. Like most ancient debates, this all relates back to Bill Clinton and Monica Lewinsky. • Big Bill Clinton, Rhodes Scholar: “That depends what the meaning of the word is, is.” Too true, my man. As Bill Thurston pointed out, students of mathematics regularly use the = symbol when it’s not appropriate (maybe an → or ← or ⊂ or set comprehension is what’s needed) just because it’s the only “connecting word” they know. But the meaning of “is” is too multifarious to always translate to the = symbol. For more tools of how to think about what exactly we’re saying with “is”, check out two papers I’ve linked on this site: Barry Mazur's When is a thing equal to some other thing? and John Baez + James Dolan’s From Finite Sets to Feynman Diagrams. Back to the ancient Chinese stuff. What 公孫龍 was trying to say, was that a white horse is not a horse in the sense of the = sign.
The = sign means you can freely substitute one thing for another—to the point of ridiculousness if you wish—without distorting the truth value. But using Gleick’s example, • "Lana doesn’t like white horses” does not mean “Lana doesn’t like horses” Really "white horse" ⊂ "horse", a white horse is a kind of horse, but that means in an object-oriented programming sense we’re talking about class inheritance, not ===. Now afore you go runnin afeart that the English | Chinese language is being shoehorned into mathematical symbology, read a couple sentences of Quine. In this case a bit of set theory and technical statements like “The set of all referents satisfying the criteria X also satisfy the criteria Y”, and thinking about alternatives to = like → and ⊂, actually makes our English-language thinking clearer. UPDATE: Jeremy Tran explains that: [As] I understand it. For “is ≡”, we use the phrase 一樣, while for “is ⊂”, we have 一種. These two concepts are relatively distinct from each other. The ‘proper’ translations for the two are “the same” and “a type of”, respectively. But often, those two are translated into English simply as “is”, which can lead to issues. My Chinese isn’t exactly the best, so hopefully I haven’t made any big mistakes. But this is the gist of it. :D When I was a maths teacher some curious students (Fez and Andrew) asked, “Does i, √−1, exist? Does infinity ∞ exist?” I told this story. You explain to me what 4 is by pointing to four rocks on the ground, or dropping them in succession — Peano map, Peano map, Peano map, Peano map. Sure. But that’s an example of the number 4, not the number 4 itself. So is it even possible to say what a number is? No, let’s ask something easier. What a counting number is. No rationals, reals, complexes, or other logically coherent corpuses of numbers. Willard van Orman Quine had an interesting answer. He said that the number seventeen “is” the equivalence class of all sets with 17 elements. Accept that or not, it’s at least a good try. Whether or not numbers actually exist, we can use math to figure things out. The concepts of √−1 and ∞ serve a practical purpose just like the concept of ⅓ (you know, the obvious moral cap on income tax). For instance • if power on the power line is traveling in the direction +1 then the wire is efficient; if it travels in the direction √−1 then the wire heats up but does no useful work. (Er, I guess alternating current alternates between +1 and −1.) • ∞ allows for limits and therefore derivatives and calculus. Just one example apiece. Do 6-dimensional spheres exist? Do matrices exist? Do power series exist? Do vector fields exist? Do eigenfunctions exist? Do 400-dimensional spaces exist? Do dynamical systems exist? Yes and no, in the same way.
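As a coda to the white-horse point: the inheritance-versus-identity distinction is easy to state in code. A tiny Python sketch (class names mine):

class Horse:
    pass

class WhiteHorse(Horse):
    # "A white horse is a kind of horse": subclassing is the ⊂ sense of "is".
    pass

snowy = WhiteHorse()
print(isinstance(snowy, Horse))   # True: "is" as ⊂, class inheritance
print(WhiteHorse == Horse)        # False: "is" as =, the classes are not identical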
{"url":"http://isomorphismes.tumblr.com/tagged/equivalence+class","timestamp":"2014-04-18T00:28:40Z","content_type":null,"content_length":"213578","record_id":"<urn:uuid:42f42aad-1e2a-4267-8b0a-a28bb16dab64>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
pengtoh's blog This post was from before the big reset. I keep coming back to this post to recompute the magic number for various lenses. I hope somebody finds this useful. The magic number for Canon 5D mk II with 40mm 2.8 STM is 53 which is twice that of X100 with 23mm. Once upon a time, I stumbled upon this concept called hyperfocal distance . For each given focal length, aperture and size of the circle of confusion , you can calculate a special number called the hyperfocal distance. When you focus the lens at the hyperfocal distance, everything from half the distance to infinity will be acceptably sharp . This maximizes the depth of field. This is useful for landscapes or groups of people. There is only one problem. I have to memorize a table of values for each focal length and aperture. Yucks! Sure, I can carry a little pre-calculated table of values. Better but still yucks! Or use a phone app that will calculate it on the fly. That's too slow. Still yucks. The X100 only has one focal length, 23mm. That's easy. I only need to memorize one chart: f/2 13.2m f/2.8 9.37m f/4 6.64m f/5.6 4.7m f/8 3.33m Easier, but I'm no Tiger mom trained memorizing machine. Oh, wait. If I multiply the aperture number by the hyperfocal distance, I get a number slightly above 26. All I need to remember is the number 26! That I can do! It can't be a coincidence. So, I plugged in numbers into DOFmaster for 5D Mk II at 50mm. The product is about 83. That's when I finally went for the formula. Yes, I should have done it years ago. Stupid me. H = f * f / N / c (ignoring the irrelevant +f) H is the hyperfocal distance f is the focal length N is the aperture number c is the size of the circle of confusion Using 0.02mm for circle of confusion, 23mm for focal length, I reproduced the table for X100. That checked out. Since f and c are constant, the formula reduces to: H = C / N C = f * f / c For the X100, C = 23mm * 23mm / 0.02mm = 26450mm = 26.45m! So, to find the hyperfocal length for any aperture, just divide 26.45m by the aperture number! Piece of cake! Also notice that if you double the focal length, the hyperfocal distance goes up by 4. For 5D Mk II with a 24mm wide angle lens, the magic number is 19.2m. For 50mm normal, you won't be too far off if you guessed it is about 80m (50 * 50 / 0.03 = 83.33m). Interestingly, if I'm shooting at f/5.6 on the X100, everything from 2.3m will be in focus if I focus at 4.7m. I can just pre-focus at 4.7m, switch to manual focus to lock it and forget about focusing altogether! This will also be great for videos. The X100's movie mode does not track faces and it tends to shift the focus unnecessarily especially if the background has some high-contrast items like lights.
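If you'd rather not memorize even one magic number, the whole trick reduces to a few lines of Python (my sketch; 0.02mm and 0.03mm are the circle-of-confusion values used above):

def magic_number_m(focal_mm, coc_mm):
    # C = f * f / c, converted to metres; divide by the f-number to get H.
    return focal_mm ** 2 / coc_mm / 1000.0

def hyperfocal_m(focal_mm, f_number, coc_mm):
    # H = f * f / N / c, still ignoring the irrelevant +f.
    return magic_number_m(focal_mm, coc_mm) / f_number

print(magic_number_m(23, 0.02))     # 26.45 -- the X100's magic number
print(hyperfocal_m(23, 5.6, 0.02))  # ~4.72, matching the f/5.6 row in the table
print(magic_number_m(40, 0.03))     # ~53.3 -- the 5D Mk II with the 40mm STM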
{"url":"http://pengtoh.blogspot.com/","timestamp":"2014-04-16T07:26:06Z","content_type":null,"content_length":"60522","record_id":"<urn:uuid:54991841-e9b0-4051-bc57-be4ef25db47e>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Posted by Nickie on Saturday, January 26, 2013 at 2:35pm. A population has a mean of 30 and a standard deviation of 5. a. If 5 points were added to every score in the population, what would be the new values for the mean and standard deviation? b. If every score in the population were multiplied by 3 what would be the new values for the mean and standard deviation? • Statistics - MathGuru, Saturday, January 26, 2013 at 5:17pm a. mean = 35; standard deviation of 5 stays the same. b. mean = 90; standard deviation = 15 • Statistics - Nickie, Saturday, January 26, 2013 at 6:43pm A distribution has a standard deviation of 12. Find the z-score for each of the following locations in the distribution. a. Above the mean by 3 points. b. Above the mean by 12 points. c. Below the mean by 24 points. d. Below the mean by 18 points. For the following population of N = 6 scores: 3, 1, 4, 3, 3, 4 a. Sketch a histogram showing the population b. Locate the value of the population mean in your sketch, and make an estimate of the standard deviation (as done in Example 4.2). c. Compute SS, variance, and standard deviation for the population. (How well does your estimate compare with the actual value of σ?) In a population of exam scores, a score of X = 48 corresponds to z = 1.00 and a score of X = 36 corresponds to z = –0.50. Find the mean and standard deviation for the population. (Hint: Sketch the distribution and locate the two scores on your sketch.) A sample consists of the following n = 6 scores: 2, 7, 4, 6, 4, and 7. a. Compute the mean and standard deviation for the sample. b. Find the z-score for each score in the sample. c. Transform the original sample into a new sample with a mean of M = 50 and s = 10.
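For anyone who would rather verify MathGuru's rules than memorize them, a short Python check (my sketch, reusing the N = 6 population from the thread):

import statistics as st

scores = [3, 1, 4, 3, 3, 4]                  # the N = 6 population above
print(st.mean(scores), st.pstdev(scores))    # 3, 1.0

shifted = [x + 5 for x in scores]            # adding a constant shifts the mean...
print(st.mean(shifted), st.pstdev(shifted))  # 8, 1.0 ...but leaves the SD unchanged

scaled = [x * 3 for x in scores]             # multiplying scales mean and SD alike
print(st.mean(scaled), st.pstdev(scaled))    # 9, 3.0

print(3 / 12)   # z = 0.25: a score 3 points above the mean, when the SD is 12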
{"url":"http://www.jiskha.com/display.cgi?id=1359228928","timestamp":"2014-04-17T20:54:24Z","content_type":null,"content_length":"10033","record_id":"<urn:uuid:eb73367b-fbca-4823-9f62-4a64d437e521>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
what is factoring What is factoring? It is the process by which one tries to make a mathematical expression look like a multiplication problem by looking for factors. There are three concepts you will need to understand very well before you attempt to factor an expression. Start by studying the topics in the introduction. Factoring integers Important to understand in order to grasp the meaning of factor Finding the greatest common factor Important to understand before factoring polynomials or expressions with at least two terms Multiplying binomials Important to understand before factoring trinomials Factoring algebraic expressions Learn how to factor a polynomial or an algebraic expression with two or more terms Factoring trinomials Learn to factor a trinomial that has the form x^2 + bx + c Factoring by grouping Learn how to factor a trinomial of the form ax^2 + bx + c by grouping terms Factoring perfect square trinomials Learn to factor perfect square trinomials Factoring using the quadratic formula Learn to factor x^2 + bx + c using the quadratic formula Factoring radicals Learn how to factor and simplify radicals Factoring calculator Use this calculator to factor a trinomial Quadratic equation solver Use this quadratic equation calculator to solve any quadratic equation.
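If you have Python handy, the sympy library will factor a trinomial for you and let you check the answer by multiplying it back out (a quick sketch, separate from the calculators linked above):

from sympy import symbols, factor, expand

x = symbols('x')
print(factor(x**2 + 5*x + 6))    # (x + 2)*(x + 3): the multiplication-problem form
print(expand((x + 2)*(x + 3)))   # x**2 + 5*x + 6: multiplying undoes the factoring
print(factor(x**2 - 9))          # (x - 3)*(x + 3), a difference of squares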
{"url":"http://www.basic-mathematics.com/what-is-factoring.html","timestamp":"2014-04-20T20:56:52Z","content_type":null,"content_length":"39037","record_id":"<urn:uuid:a985c9c4-1879-41b0-b075-107384a011d2>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00045-ip-10-147-4-33.ec2.internal.warc.gz"}
East Somerville, MA Math Tutor Find an East Somerville, MA Math Tutor ...From these perspectives I can help you grasp case studies and paper topics at a more fundamental level. I've been training public speakers for in-person, on radio, and on television occasions for decades. I especially value helping speakers master speaking in sound bites and reviewing with them videos of their speaking. 55 Subjects: including trigonometry, ACT Math, reading, algebra 1 ...As noted above, trigonometry is usually encountered as a part of a pre-calculus course. In my view, much of the traditional material associated with trigonometry should be replaced by an introduction to the linear algebra of vectors, which provides alternative methods of solving many of the prob... 7 Subjects: including algebra 1, algebra 2, calculus, trigonometry My name is Derek H. and I recently graduated from Cornell University's College of Engineering with a degree in Information Science, Systems, and Technology. I have a strong background in Math, Science, and Computer Science. I currently work as software developer at IBM. 17 Subjects: including algebra 2, geometry, prealgebra, precalculus I am a certified math teacher (grades 8-12) and a former high school teacher. Currently I work as a college adjunct professor and teach college algebra and statistics. I enjoy tutoring and have tutored a wide range of students - from middle school to college level. 14 Subjects: including algebra 1, algebra 2, Microsoft Excel, geometry ...As a fundamental component of the biological sciences, I have a comprehensive understanding of genetics. I have studied genetics at both the undergraduate and graduate level and have taken courses in cellular biology, molecular biology, medical genetics, genetic counseling, genetic case studies,... 22 Subjects: including ACT Math, SAT math, reading, writing
{"url":"http://www.purplemath.com/East_Somerville_MA_Math_tutors.php","timestamp":"2014-04-16T19:00:35Z","content_type":null,"content_length":"24265","record_id":"<urn:uuid:0527704a-de5c-48a9-9180-f29233c106f1>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00564-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on: whats pie • 5 months ago
{"url":"http://openstudy.com/updates/526dede6e4b0c12158626a13","timestamp":"2014-04-18T23:43:17Z","content_type":null,"content_length":"34646","record_id":"<urn:uuid:88e1075d-fa6a-4c60-ad9e-c897c5ae1c08>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
A. Subjects B. Stimuli C. Procedure D. Comparison with spatial tuning curve bandwidths E. Results and discussion F. Experiment 1b: Effects of spectral edges on ripple discrimination A. Rationale B. Methods C. Results and discussion A. Rationale B. Methods C. Results D. Discussion A. Rationale B. Methods C. Results 1. Broadband ripple discrimination and speech recognition 2. STC bandwidth and speech recognition D. Discussion A. Spectral ripple discrimination as a measure of spectral resolution B. Relationships between spatial tuning curves, spectral ripple discrimination, and speech perception
{"url":"http://scitation.aip.org/content/asa/journal/jasa/130/1/10.1121/1.3589255","timestamp":"2014-04-16T14:25:49Z","content_type":null,"content_length":"87471","record_id":"<urn:uuid:5d4671a2-58d7-47f6-a49b-963f91497d52>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
The Logic behind the Magic of DAX Cross Table Filtering Automatic cross filtering between columns of the same table or related tables is a very powerful feature of DAX. It allows a measure to evaluate to different values for different cells in a pivot table even though the DAX expression for the measure does not change. Filter context is the underlying mechanism that enables this magic behavior. But it is also a very tricky concept that even befuddles some DAX experts. Marco Russo and Alberto Ferrari have introduced DAX filter context in Chapter 6 of their book Microsoft PowerPivot for Excel 2010. Marco has also blogged about how Calculate function works. Recently I have run into many questions from advanced DAX users which tell me that people are still confused about how filter context works exactly. And this will be the subject of today's post. This post assumes that you already have basic knowledge about measures, row context, filter context, and DAX functions Calculate, Values, All, etc. A level 200 pop quiz on DAX If you think you already know how filter context works, let me ask you a couple of level 200 questions on DAX to see if you can explain the nuances of some DAX expressions. If you don't feel like being challenged now, it is still beneficial to read the questions so you have some examples to better understand the following sections. The questions are based on the data model inside the publicly available sample PowerPivot workbook Contoso Samples DAX Formulas.xlsx. You can download the sample workbook to try out the formulas yourself if you want to, but it is not required to answer the questions. Question #1. People have heard that fact tables are automatically filtered by slices on dimension tables, but not the other way around, or in more general terms, if there is a relationship from table A to table B, A is automatically filtered by any slices on columns of B but B is not automatically filtered by any slices on columns of A. So if you select DimProductSubcategory[ProductSubcategoryName] = “Air Conditioners” on a pivot table slicer, measure CountRows(DimProduct) returns 62 as DimProduct is limited to air conditioners. On the other hand, if you select DimProduct[ProductLabel] = “0101001”, measure CountRows(DimProductSubcategory) returns 44 instead of just 1 although only a single product is selected. To filter DimProductSubcategory by the selected product label, you can define a measure as Calculate(CountRows(DimProductSubcategory), DimProduct) which returns 1. So it seems like when you explicitly add DimProduct as a setfilter argument of Calculate, DimProductSubcategory will be filtered by DimProduct. But if I define a measure as Calculate(CountRows(DimProductSubcategory), Values(DimProduct[ProductLabel])) to explicitly add the column that I know has a slice from the pivot table to the Calculate function, the measure formula returns 44 again. So what makes setfilter expression DimProduct work but Values(DimProduct[ProductLabel]) not work even though the filter only comes from [ProductLabel] column? If you think you have to add foreign key DimProduct[ProductSubcategoryKey] to the filter context in order for DimProductSubcategory to be filtered by DimProduct, you can try Calculate(CountRows(DimProductSubcategory), Values(DimProduct[ProductSubcategoryKey])) but it still returns 44. If you have enough patience, you can use Values function to explicitly add all 33 columns in DimProduct one by one as setfilter arguments to Calculate function and you still will get 44 back.
So what is the difference between table expression DimProduct and the enumeration of all 33 columns in that table? Question #2. There are 2556 records in DimDate table, therefore if you add a measure with expression CountRows(DimDate) to a pivot table without any filters, the measure value would be 2556. Now if you add a second measure with expression Calculate(CountRows(DimDate), FactSales) to the same pivot table, the measure value would be 1096 since DimDate table is filtered by FactSales table and only dates with sales records are included. But if you add a third measure with Calculate(CountRows(DimDate), All(FactSales)) to the pivot table, the measure value becomes 2556 again. Since this pivot table has no filters anywhere, shouldn’t FactSales and All(FactSales) return the same table? Now add a fourth measure with Calculate(CountRows(DimDate), Filter(All(FactSales), true)) to the pivot table, the measure value becomes 1096 again. All three setfilter arguments return exactly the same table, why would we get back different results? With these questions in mind, let’s examine the logic foundation upon which the magic world of DAX is built. At the end of the post, you will be able to find a logical explanation to all these seemingly inconsistent results. The expanded view of a DAX base table The best way to understand DAX cross table filtering is to think of each base table as extended by its related tables. When a relationship is created from table A to table B, the new A, which is really A left outer join B, includes both columns of A and columns of B. So in DAX, a table reference FactSales really refers to FactSales LOJ DimProduct LOJ DimProductSubcategory LOJ DimProductCategory LOJ DimStore LOJ DimGeography LOJ DimDate LOJ DimChannel LOJ DimPromotion, where LOJ means left outer join. This interpretation makes it easy to understand some other DAX syntax. For example, in DAX expression Filter(FactSales, Related(DimProduct[ProductLabel]) = “0101001”), Related(DimProduct[ProductLabel]) refers to the value of column DimProduct[ProductLabel] in the extended FactSales table. As a second example, DAX expression AllExcept(FactSales, DimProduct[ProductLabel]) returns a table with all columns of extended FactSales table except for column DimProduct[ProductLabel]. Build initial filter context DAX filter context is a stack of tables. At the beginning, the stack is empty. Given a pivot table, a filter context is initially populated by adding slicers and page filters. For each cell in a pivot table, current members of row labels and column labels also add filters to filter context. Other pivot table operations like visual totals add to initial filter context as well but I will keep things simple here. At this point, we have set up an initial filter context in which the measure expression of the current cell is to be evaluated. Measure invocation If SumOfSales is the name of a measure and Sum(Sales[Amount]) is its DAX formula, DAX expression [SumOfSales] is equivalent to Calculate(Sum(Sales[Amount])), and DAX expression [SumOfSales](Date[Year] = 2001, Store[Country] = “USA”) is equivalent to Calculate(Sum(Sales[Amount]), Date[Year] = 2001, Store[Country] = “USA”). So the syntax sugar which makes a measure name look like a function name is just a clever way to add tables to filter context before evaluating the expression associated with the measure. Since invoking a measure implicitly calls Calculate, from now on I'll just focus on Calculate function as the same rules apply equally to measures.
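To see the sugar at work, one could define measures like the following side by side in the sample workbook (a sketch; the measure names are mine, and the Sales/Date/Store tables are the ones from the running example):

SumOfSales := Sum(Sales[Amount])
SumOfSalesExplicit := Calculate(Sum(Sales[Amount]))
SumOfSalesUSA2001 := Calculate([SumOfSales], Date[Year] = 2001, Store[Country] = "USA")

In every pivot cell the first two return identical values, and the third matches Calculate(Sum(Sales[Amount]), Date[Year] = 2001, Store[Country] = "USA") cell for cell, because referencing [SumOfSales] implicitly wraps its formula in Calculate.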
Add tables to filter context Calculate function performs the following operations: 1. Create a new filter context by cloning the existing one. 2. Move current rows in the row context to the new filter context one by one and apply blocking semantics against all previous tables. 3. Evaluate each setfilter argument in the old filter context and then add setfilter tables to the new filter context one by one and apply blocking semantics against all tables that exist in the new filter context before the first setfilter table is added. 4. Evaluate the first argument in the newly constructed filter context. If a new table is added to filter context and it has blocking semantics against some tables already in the filter context, the affected tables are checked one by one, and all common columns with the new table are marked as blocked on the existing tables. Let’s look at an example. Assume the current filter context has two filters: one filter is Date[Year] = 2011, the other filter is Store[Country] = “Canada”. We want to evaluate the following expression in that context: AverageX(Distinct(Date[Month]), Calculate(Sum(Sales[Amount]), Store[Country] = “USA”)). The first argument of AverageX sets a month in row context. When it comes to Calculate, it first removes the month from row context and adds it to filter context; it does not block anything since there is no [Month] column in existing filters. Next Calculate adds Store[Country] = “USA” to filter context which blocks existing filter Store[Country] = “Canada”. When Sum(Sales[Amount]) is evaluated, Sales table is filtered by the current month in 2011 and stores in USA. Targets of filter context After so much effort populating and modifying a filter context, when will the filters be applied? In DAX, the filters in a filter context apply to the following DAX table expressions: 1. A table expression that is simply a table reference, such as FactSales. 2. Values(Table[Column]). 3. Distinct(Table[Column]). In cases of 2 and 3, the Table is filtered by filter context and then distinct values of [Column] are extracted from the filtered table. So if your expression is Calculate(SumX(Filter(FactSales, [SalesQuantity] > 1000), [SalesAmount]), Date[Year] = 2011), the filter context only restricts FactSales and has no effect whatsoever on other parts of the formula. If you imagine every DAX formula is represented as a tree of parent and child function calls, a filter context is built at the top or in the middle of the tree but takes effect at leaf level table nodes. Note that DAX function Sum(T[C]) is just a shorthand for SumX(T, [C]), the same is true for other aggregation functions which take a single column reference as argument. Therefore the table in those aggregation functions is filtered by filter context. Apply filters to a target table Finally we have identified a target table and are ready to apply filters from filter context. For each filter table in the filter context, we check to see if there are any common columns between the target table and the unblocked columns of the filter table. If there is at least one common column, the target table is semi-joined with the filter table, or in SQL-like terms SELECT * FROM TargetTable AS t WHERE EXISTS (SELECT * FROM FilterTable AS f WHERE t.CommonColumns = f.CommonColumns) Each filter table is applied to the target table independently, so the target table is filtered by all relevant filters.
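The blocking rule in step 3 is easy to test against the example model (a sketch; the measure name is mine):

SalesUSA := Calculate(Sum(Sales[Amount]), Store[Country] = "USA")

Drop [SalesUSA] into a pivot cell that a slicer has already restricted to Store[Country] = "Canada": it still returns the USA amount, because the setfilter table blocks the existing Country filter rather than intersecting with it. A filter on a different column, such as Date[Year] = 2011, shares no column with the setfilter table, so it survives and combines with the USA filter.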
All, AllExcept, AllNoBlankRow So far I have said that each setfilter argument of Calculate function returns a table which is added to filter context. Well, that is true as long as the setfilter is not one of the All functions. The All functions should really be renamed as BlockColumns when they are used as setfilter arguments. If one of the All functions is used as the top-level function of setfilter, it only blocks common columns of earlier tables but does not add itself to filter context. In all other places, including as a sub-expression of a setfilter but not at the top level, All functions behave like any other DAX table expressions and always return a table. One special feature of All functions is that the Table argument inside All(Table), All(Table[Column]), AllExcept(Table, …), AllNoBlankRow(Table), etc. is not filtered by the current filter context. Pop quiz answers Answer to question #1. When the initial filter context contains column DimProduct[ProductLabel], table DimProductSubcategory is not filtered as it does not have that column. Now look at the next formula Calculate(CountRows(DimProductSubcategory), DimProduct). The setfilter argument DimProduct is filtered by [ProductLabel], and then table DimProductSubcategory is filtered by table DimProduct since they both share the columns from table DimProductSubcategory and table DimProductCategory. Move on to the next two formulas Calculate(CountRows(DimProductSubcategory), Values(DimProduct[ProductLabel])) Calculate(CountRows(DimProductSubcategory), Values(DimProduct[ProductSubcategoryKey])) Both setfilter arguments are single-column tables and the column comes from table DimProduct. Since table DimProductSubcategory does not have any column from DimProduct, it is not filtered by filter context. For the same reason, you can add any columns from DimProduct to the filter context and none of them would impact DimProductSubcategory. Answer to question #2. In the first formula Calculate(CountRows(DimDate), FactSales) Both table DimDate and table FactSales share columns from DimDate, so DimDate is filtered by FactSales. In the second formula Calculate(CountRows(DimDate), All(FactSales)) All(FactSales) blocks any columns from FactSales, but since the filter context is empty, it has no effect. When DimDate is evaluated, filter context is still empty. In the third formula Calculate(CountRows(DimDate), Filter(All(FactSales), true)) The All function is not at the top level of the setfilter argument, so table Filter(All(FactSales), true) is added to filter context, and table DimDate is filtered by filter context for the same reason as in the first formula. 15 comments: 1. Jeffrey, this is a very interesting post - thank you so much for sharing this knowledge on DAX internals I have a question regarding the first example. For the following DAX expression Calculate(CountRows(DimProductSubcategory), Values(DimProduct[ProductLabel])) You said that [Since table DimProductSubcategory does not have any column from DimProduct, it is not filtered by filter context] Does that mean that because VALUES return a 1 column table, it does not have the same base table + extended table configuration as a regular table reference would have?
In other words, VALUES would only have one column, hence it won't block any other table column [except the one that is returning]? - Javier Guillen 2. The statement is a bit confusing. Let me clarify. The extended version of DimProductSubcategory does not have any common columns with the non-extended part of DimProduct. Column DimProduct[ProductLabel] comes from the non-extended part of DimProduct, therefore does not filter DimProductSubcategory directly. You are right that the Values() function returns a single-column table and therefore can only block one column in the filter context. 1. Can you please tell me why Calculate(CountRows(DimProductSubcategory), Values(DimProduct[ProductSubcategoryKey])) doesn't have a common shared column? I think they both contain ProductSubcategoryKey. I guess the reason why it doesn't filter is because DimProduct[ProductLabel] has no common column with Values(DimProduct[ProductSubcategoryKey]), so DimProduct[ProductLabel] = “0101001” will not filter Values(DimProduct[ProductSubcategoryKey]). 2. never mind, I think I understand now. 3. Hi Jeffrey, Firstly amazing post! I have a question WRT to Filter functions in a filter context of a pivot table. Why is it an error if we write Calculate(Sum(Quota),Filter(Budget,Budget[Month]=Date[Month])) when I have Date[Month] on my rows? Is the filter unable to equate Budget[Month]=Date[Month] to the existing Date[Month] on Rows? Secondly, if I replace Date[Month] with Max(Date[Month]) it works. Can you share your thoughts? 4. A column added to Rows on a pivot table is in the filter context, but not the row context. A column reference like Date[Month] in DAX is used to get a value in the current row context; you have to use Values(Date[Month]) to retrieve values from the current filter context. Max(Date[Month]) is equivalent to MaxX(Date, Date[Month]); here the MaxX function starts a new scan over the Date table, hence establishing a new row context from which the second argument Date[Month] can retrieve a value. 1. Thank you 5. Hi Jeffrey, Could you please explain what the term "blocking semantic" means? 6. The more recent terminology should be overwrite semantics. That means an inner filter will overwrite an outer filter on the same column. For example, in the following DAX expression with nested Calculate functions Calculate(Calculate(Sum(Sales[Amount]), Customer[Name]="Ian"), Customer[Name]="Oliver") the outer filter on Oliver is overwritten by the inner filter, therefore the expression returns the sum of sales amount by Ian. The outer filter on Oliver has no effect (because it's overwritten or blocked) when the DAX engine calculates Sum(Sales[Amount]). 7. I have a question regarding this sentence: "DAX filter context is a stack of tables." Is a filter like Store[Country] = "USA", either as a pivot filter or as a filter in a Calculate function, "converted" into the full table from where the column is taken and then added to the filter context? 8. Store[Country] = "USA" is equivalent to Filter(All(Store[Country]), Store[Country] = "USA"). Therefore, the filter added to the filter context is a table of one column Store[Country] and one row "USA". This is different from Filter(All(Store), Store[Country] = "USA") which is a table containing all columns for the underlying physical table 'Store'.
{"url":"http://mdxdax.blogspot.com/2011/03/logic-behind-magic-of-dax-cross-table.html","timestamp":"2014-04-17T18:23:35Z","content_type":null,"content_length":"131021","record_id":"<urn:uuid:4cf85ce6-be17-4ffc-be39-79339df85703>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
TitlesOfConstructivistMathCurricula 19 Jul 2005 - 01:46 CatherineJohnson Jo Anne Cobasko has taken the time to construct a complete list of NCTM standards-based math programs. update: Department of Corrections This list is David Klein's handiwork, not Jo Anne's. Thank you, David! (For everything you do.) All of us should keep this handy, because none of these programs ever calls itself constructivist, and schools don't seem to advertise this piece of information, either. When I first raised the issue of TRAILBLAZERS being a constructivist curriculum with a teacher on the textbook selection committee, she looked at me blankly. I got a number of those blank looks before I discovered that everyone in the school knows what the word constructivism means, and knows what a constructivist curriculum is. The reason I know this is that I finally read the original committee report, which states explicitly that the new curricula must have a constructivist approach with modeling. I was a little behind the curve there. Elementary school Everyday Mathematics (K-6) TERC's Investigations in Number, Data, and Space (K-5) Math Trailblazers (TIMS) (K-5) Middle school Connected Mathematics (6-8) Mathematics in Context (5-8) MathScape: Seeing and Thinking Mathematically (6-8) MATHThematics (STEM) (6-8) Pathways to Algebra and Geometry (MMAP) (6-7, or 7-8) High school Contemporary Mathematics in Context (Core-Plus Mathematics Project) (9-12) Interactive Mathematics Program (9-12) MATH Connections: A Secondary Mathematics Core Curriculum (9-11) Mathematics: Modeling Our World (ARISE) (9-12) SIMMS Integrated Mathematics: A Modeling Approach Using Technology (9-12) Programs explicitly denounced by over 220 Mathematicians and Scientists: Cognitive Tutor Algebra College Preparatory Mathematics (CPM) Connected Mathematics Program (CMP) Core-Plus Mathematics Project Interactive Mathematics Program (IMP) Everyday Mathematics Middle-school Mathematics through Applications Project (MMAP) Number Power The University of Chicago School Mathematics Project (UCSMP) Thanks, Jo Anne, for taking the time to do this! key words: constructivist textbooktitles WhatIsConstructivism 14 May 2006 - 17:18 CarolynJohnston AndyJoy asked on this thread Can someone explain extreme constructivism to me? Is the problem that proponents never want to introduce the standard algorithm for a problem or make children memorize facts? The short answer is yes, but for the record, here is a fuller explanation. I think the best quick introduction to constructivism and its recent history in U.S. educational practice is Barry Garelick's An A-maze-ing Approach To Math, which appeared in Education Next this year. I'll excerpt a little piece of it to answer Andy's question, entirely without Barry's permission (but hopefully with his blessing). Discovery learning has always been a powerful teaching tool. But constructivists take it a step beyond mere tool, believing that only knowledge that one discovers for oneself is truly learned. There is little argument that learning is ultimately a discovery. Traditionalists also believe that information transfer via direct instruction is necessary, so constructivism taken to extremes can result in students' not knowing what they have discovered, not knowing how to apply it, or, in the worst case, discovering (and taking ownership of) the wrong answer.
Additionally, by working in groups and talking with other students (which is promoted by the educationists), one student may indeed discover something, while the others come along for the ride. Texts that are based on NCTM's standards focus on concepts and problem solving, but provide a minimum of exercises to build the skills necessary to understand concepts or solve the problems. Thus students are presented with real-life problems in the belief that they will learn what is needed to solve them. While adherents believe that such an approach teaches "mathematical thinking" rather than dull routine skills, some mathematicians have likened it to teaching someone to play water polo without first teaching him to swim. The Standards were revised in 2000, due in large part to the complaints and criticisms expressed about them. Mathematicians felt that the revised standards, called The Principles and Standards for School Mathematics (PSSM 2000), were an improvement over the 1989 version, but they had reservations. The revised standards still emphasize learning strategies over mathematical facts, for example, and discovery over drill and kill. So how does this fine-sounding idea play out in the classroom? Kids tend to spend too much time deriving everything from first principles. What gets sacrificed is time spent learning advanced skills, as Barry shows: Concept still trumps memorization. Textbooks often make sure students understand what multiplication means rather than offering exercises for learning multiplication facts. Some texts ask students to write down the addition that a problem like 4 x 3 represents. Most students do not have a difficult time understanding what multiplication means. But the necessity of memorizing the facts is still there. Rather than drill the facts, the texts have the students drill the concepts, and the student misses out on the basics of what she must ultimately know in order to do the problems. I've seen 4th and 5th graders, when stumped by a multiplication fact such as 8 x 7, actually sum up 8, 7 times. Constructivists would likely point to a student's going back to first principles as an indication that the student truly understood the concept. Mathematicians tend to see that as a waste of time. Another case in point was illustrated in an article that appeared last fall in the New York Times. It described a 4th-grade class in Ossining, New York, that used a constructivist approach to teaching math and spent one entire class period circling the even numbers on a sheet containing the numbers 1 to 100. When a boy who had transferred from a Catholic school told the teacher that he knew his multiplication tables, she quizzed him by asking him what 23 x 16 equaled. Using the old-fashioned method (one that is held in disdain because it uses rote memorization and is not discovered by the student) the boy delivered the correct answer. He knew how to multiply while the rest of the class was still discovering what multiples of 2 were. Now, consider the constructivists' argument for allowing this lack of 'domain knowledge' to persist -- kids develop deeper understanding, 21st century skills, bla bla bla -- after having read KDeRosa's "Terminator essay" on math education. That essay just puts this nonsense to death, don't you think? p.s. from Catherine I found the smart constructivism post. Here are the 2 best passages.
Smart constructivism says: A common misconception regarding 'constructivist' theories of knowing (that existing knowledge is used to build new knowledge) is that teachers should never tell students anything directly but, instead, should always allow them to construct knowledge for themselves. This perspective confuses a theory of pedagogy (teaching) with a theory of knowing. Constructivists assume that all knowledge is constructed from previous knowledge, irrespective of how one is taught (e.g., Cobb, 1994)--even listening to a lecture involves active attempts to construct new knowledge. Radical constructivism says: It is possible for students to construct for themselves the mathematical practices that, historically, took several thousand years to evolve. TwoWaysOfTeachingMath 19 May 2006 - 21:12 CatherineJohnson HowToGetParentBuyInPart2 27 May 2006 - 02:30 CatherineJohnson Getting Your Math Message Out to Parents how to get parent buy in, part 1 newsletter excerpt Getting Your Math Message Out to Parents (pdf file) - 26 May 2006 NctmReformsAgain 14 Sep 2006 - 16:52 CatherineJohnson In today's Wall Street Journal Arithmetic Problem New Report Urges Return to Basics In Teaching Math Critics of 'Fuzzy' Methods Cheer Educators' Findings; Drills Without Calculators Taking Cues From Singapore By JOHN HECHINGER September 12, 2006; Page A1 The nation's math teachers, on the front lines of a 17-year curriculum war, are getting some new marching orders: Make sure students learn the basics. In a report to be released today, the National Council of Teachers of Mathematics, which represents 100,000 educators from prekindergarten through college, will give ammunition to traditionalists who believe schools should focus heavily and early on teaching such fundamentals as multiplication tables and long division. The council's advice is striking because in 1989 it touched off the so-called math wars by promoting open-ended problem solving over drilling. Back then, it recommended that students as young as those in kindergarten use calculators in class. Those recommendations horrified many educators, especially college math professors alarmed by a rising tide of freshmen needing remediation. The council's 1989 report influenced textbooks and led to what are commonly called "reform math" programs, which are used in school systems across the country. The new approach puzzled many parents. For example, to solve a basic division problem, 120 divided by 40, students might cross off groups of circles to "discover" that the answer was three. Infuriated parents dubbed it "fuzzy math" and launched a countermovement. The council says its earlier views had been widely misunderstood and were never intended to excuse students from learning multiplication tables and other fundamentals. Nevertheless, the council's new guidelines constitute "a remarkable reversal, and it's about time," says Ralph Raimi, a University of Rochester math professor. Francis Fennell, the council's president, says the latest guidelines move closer to the curriculum of Asian countries such as Singapore, whose students tend to perform better on international tests. So maybe it wasn't such a great idea after all for IUFSD to ban my Singapore Math course new timeline
By fourth grade, the report says, students should be fluent with "multiplication and division facts" and should start working with decimals and fractions. By fifth, they should know the "standard algorithm" for division -- in other words, long division -- and should start adding and subtracting decimals and fractions. By sixth grade, students should be moving on to multiplication and division of fractions and decimals. By seventh and eighth grades, they should use algebra to solve linear equations. Here's the Singapore sequence Lutherans turning into Catholics A recent study by the Thomas B. Fordham Foundation, a Washington nonprofit group, found that only two dozen states specified that students needed to know the multiplication tables. Many allowed calculators in early grades. Chester E. Finn Jr., the foundation's president and a former top official at the U.S. Department of Education, blamed the earlier math-council guidelines for state standards that neglect the basics. He described the new advice as a "sea change," saying that "it's a little bit like Lutherans deciding to become Catholics after the Reformation." Understanding math, rather than parroting answers to poorly understood equations, was the goal of the council's controversial 1989 standards. Those guidelines called on teachers to promote estimation, rather than precise answers. For example, an elementary-school student tackling the problem 4,783 divided by 13 should instead divide 4,800 by 12 to arrive at "about 400," the 1989 report said. The council said this approach would enable children using calculators to "decide whether the correct keys were pressed and whether the calculator result is reasonable." "The calculator renders obsolete much of the complex pencil-and-paper proficiency traditionally emphasized in mathematics courses," the council said then. In 2000, in another report, the council backed away somewhat from that position. Still, in response to the earlier recommendations, many school systems required children to describe in writing the reasoning behind their answers. Some parents complained that students ended up writing about math, rather than doing it. As the debate heated up, concern grew about U.S. students' math competence. In 2003, Trends in International Mathematics and Science Study, a test that compares student achievement in many countries, ranked U.S. students just 15th in eighth-grade math skills, behind both Australia and the Slovak Republic. Singapore ranked No. 1, followed by South Korea and Hong Kong. Fueling concern about the quality of elementary and high-school instruction: one in five U.S. college freshmen now need a remedial math course, according to the National Science Board. low-income students This is very exciting. The AIR report (pdf file) led me to believe that Singapore Math had been a flop in low-income schools because the student mobility is so high (and see on this subject, too): If school systems adopt the math council's new approach, their classes might resemble those at Garfield Elementary School in Revere, Mass., just north of Boston. Three-quarters of Garfield's students receive free and reduced lunches, and many are the children of recent immigrants from such countries as Brazil, Cambodia and El Salvador. Three years ago, Garfield started using Singapore Math, a curriculum modeled on that country's official program and now used in about 300 school systems in the U.S. 
Many school systems and parents regard Singapore Math as an antidote for "reform math" programs that arose from the math council's earlier recommendations. According to preliminary results, the percentage of Garfield students failing the math portion of the fourth-grade state achievement test last year fell to 7% from 23% in 2005. Those rated advanced or proficient rose to 43% from 40%.

Last week, a fourth-grade class at Garfield opened its lesson with Singapore's "mental math," a 10-minute warm-up requiring students to recall facts and solve computation questions without pencil and paper. "In your heads, take the denominator of the fraction three-quarters, take the next odd number that follows that number. Add to that number, the number of ounces in a cup. What is nine less than that number?" asked teacher Janis Halloran. A sea of hands shot up. (The answer: four.)

Ms. Halloran then moved on to simple pencil-and-paper algebra problems. "The sum of two numbers is 63," one problem reads. "The smaller number is half the bigger number. What is the smaller number? What is the bigger number?" (The answers: 21 and 42.)

In this class, the students didn't use the lettered variables that are so prevalent in standard algebraic equations. Instead, they arrived at answers using Cuisenaire rods, sticks of varying colors and lengths that they manipulate into patterns on the tops of their desks. The children use the rods to learn about the relationship between multiplication and geometry. The goal: a visceral and deep understanding of math concepts. "It just makes everything easier for you," says fifth-grader Jailene Paz, 10 years old.

Cuisenaire rods for bar models! That's so cool!

TERC time

The Singapore Math curriculum differs sharply from reform math programs, which often ask students to "discover" on their own the way to perform multiplication and division and other operations, and have come to be known as "constructivist" math. One reform math program, "Investigations in Number, Data and Space," is used in 800 school systems and has become a lightning rod for critics. TERC, a Cambridge, Mass., nonprofit organization, developed that program, and Pearson Scott Foresman, a unit of Pearson PLC, London, distributes it to schools.

parents don't get it, part 1

Ken Mayer, a spokesman for TERC, says many parents have a "misconception" that Investigations doesn't value computation. He says many school systems, such as Boston's, have seen gains in test scores using the program. "Fluency with number facts is critical," he says.

parents don't get it, part 2

Polle Zellweger and her husband, Jock Mackinlay, both computer scientists, moved to Bellevue, Wash., from Palo Alto, Calif., two years ago so their two children could attend its highly regarded public schools. She and her husband grew suspicious of the school's Investigations program. This summer, they had both children take a California grade-level achievement test, and both answered only about 70% of the questions correctly. Ms. Zellweger and her husband started tutoring their children an hour a day to catch up.

"It was a really weird feeling," says their daughter, Molly Mackinlay, 15. "I do really well in school. I am getting A-pluses in math classes. Then, I take a math test from a different state, and I'm not able to finish half the questions."

Eric McDowell, who oversees Bellevue's math curriculum, says parents misunderstand Investigations.

If it weren't for the parents, teaching would be a great job.
math wars and war wars

In the Alpine School District in Utah, parent Oak Norton, an accountant, has gathered petitions from 1,000 families to protest the use of Investigations. His complaints began more than two years ago, when he discovered at a parent conference that his oldest child, then in third grade, wasn't being taught the multiplication tables.

Barry Graff, a top Alpine school administrator, says the system has added more traditional computation exercises. Over the next year, Alpine plans to give each school a choice between Investigations or a more conventional approach. Mr. Graff, who says Alpine test scores tend to be at or above state averages, expects critics to keep up the attacks and welcomes the national math council's efforts to provide grade-by-grade guidance on what children should learn. "Other than the war in Iraq, I don't think there's anything more controversial to bring up than math," he says. "The debate will drive us eventually to be in the right..."

wow

I bet things are hopping over at

hmm

No action thus far. Once Wayne Bishop posts this baby, we'll be in a shooting war.

update: Bishop's got it! let the fun begin

what Singapore students can do at the end of 7th grade

-- CatherineJohnson - 12 Sep 2006

NationalMathAdvisoryPanelLinks 21 Nov 2006 - 18:07 CatherineJohnson

email updates about the panel
where you can find links

I'm posting links to the Math Panel homepage, transcripts, & ktm posts here: You can find both pages on the menu to the left. If all else fails you can search posts using the keyword nationalmathematicsadvisorypanel with no spaces between words. (Works pretty well with spaces, too.) I'm thinking this is about as findable and redundant as I can make the links now... unfortunately, you will have to remember some constellation of the words "national mathematics advisory panel" to find these links (that could be iffy for me these days....) But I think I've just raised the odds of re-finding the transcript links considerably.

panel members w/links
Polite agreement or something we can use?
National Math Panel announcement
National Math Panel update
short story by Vern Williams

-- CatherineJohnson - 07 Nov 2006

LindaMoranListserv 11 Dec 2006 - 19:25 CatherineJohnson

I think everyone here knows about Linda Moran's Teens and Tweens blog. I've recently (re)discovered that she has a listserv attached to the blog. I joined last week, and I think some of you might like to join as well. There have been some very interesting posts to the listserv that I don't believe have been posted to the blog itself, and that I don't expect to see posted there.

-- CatherineJohnson - 09 Dec 2006
{"url":"http://www.kitchentablemath.net/twiki/bin/view/Kitchen/ImpCurriculum","timestamp":"2014-04-20T20:58:24Z","content_type":null,"content_length":"45535","record_id":"<urn:uuid:bf051a8e-7fa2-43bf-b7fa-7685c94ed49b>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00323-ip-10-147-4-33.ec2.internal.warc.gz"}
20th ANNIVERSARY YEAR
Centre de recherches mathematiques
Fields Institute
Pacific Institute for the Mathematical Sciences

September 6, 2012 at 3:30 p.m.
2012 CRM-Fields-PIMS Prize Lecture
Stevo Todorcevic, University of Toronto, at the Fields Institute

WALKS ON ORDINALS AND THEIR CHARACTERISTICS (presentation)

Discovering new canonical structures in set theory, such as, for example, the Hausdorff gap or the Aronszajn tree, is a rare phenomenon. This is particularly true when the structures are not countable, due to the fact that set-theoretic independence results are much more frequent in that realm. Characteristics associated to walks on ordinals have shown to be a powerful technology for discovering and describing canonical structures of set theory. This lecture will present some of the best known such characteristics together with the canonical structures they lead to.

Professor Todorcevic obtained his Ph.D. in 1979 in Belgrade and currently holds a Canada Research Chair at the University of Toronto. His contributions to set theory have made him a world leader in this topic, with a particular impact on combinatorial set theory and its connections with topology and analysis. His work is recognized for its striking originality and technical brilliance. He was an invited speaker at the 1998 ICM in Berlin for his work on rho-functions. He made major contributions to the study of S- and L-spaces in topology, proved a remarkable classification theorem for transitive relations on the first uncountable ordinal, and made a deep study of compact subsets of the Baire class 1 functions, thus continuing work of Bourgain, Fremlin, Talagrand, and others in Banach space theory. Together with P. Larson he completed the solution of Katetov's old compact spaces metrization problem. Among the most striking recent accomplishments of Todorcevic (and co-authors) are major contributions to the von Neumann and Maharam problems on Boolean algebras, the theory of non-separable Banach spaces, including the solution of an old problem of Davis and Johnson, the solution of a long-standing problem of Laver, and the development of a duality theory relating finite Ramsey theory and topological dynamics. Todorcevic is an organizer of the Fall 2012 Fields Thematic Program on Forcing and its Applications.

The Fields Institute, located in Toronto, is recognized as one of the world's leading independent mathematical research institutions. With a wide array of pure, applied, industrial, financial and educational programs, the Fields Institute attracts over 1,000 visitors annually from every corner of the globe to collaborate on leading-edge research programs in the mathematical sciences. The Fields Institute is funded by the Natural Sciences and Engineering Research Council, the Ontario Ministry of Training, Colleges and Universities, seven principal sponsoring universities, sixteen affiliate universities and several corporate sponsors.

CRM-Fields-PIMS Prize

The CRM-Fields-PIMS prize is intended to be the premier mathematics prize in Canada. The winner receives a monetary award and an invitation to present a lecture at each institute during the semester when the award is announced. The prize recognizes exceptional achievement in the mathematical sciences.

CRM-Fields-PIMS Prize - Call for Nominations
{"url":"http://www.fields.utoronto.ca/programs/scientific/12-13/crm-fields-pims/index.html","timestamp":"2014-04-19T12:28:42Z","content_type":null,"content_length":"15381","record_id":"<urn:uuid:680d97d4-3afc-4cc1-add5-4df851dbd4ee>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
Math past, present and future

Math has never been one of my strongest subjects in school, period. For some people in my classes the numbers just seemed to always jump off the page and they could always find the solution quickly. Me, well, that's a different story. It's always difficult for me to grasp the concept of whatever I am learning as I am taking notes. Heck, sometimes even when I am doing my homework I have no clue what I am doing. However, nowadays I've learned to become more disciplined, whether that means going to weekly math help sessions or meeting with a tutor. I think the only way one can be successful is if you put in the time and effort and don't take short cuts.

Now, I may not have the best grade in this class but for some reason, I feel like more of the concepts are beginning to click. For the first time, I find myself excited to sit down and start to do a problem. I like the challenge of knowing that a problem for evaluating a limit using direct substitution only has one solution. I LOVE DIRECT SUBSTITUTION!! There is no gray area, no theorem you have to explain, but just simply find the answer. There's nothing better than getting down to the last step of the problem, plugging in the value for x and getting the correct answer! Plus, I think that it's nice to have a teacher that is a grad student because then I think it's less intimidating to ask a question because they were in our shoes not that long ago.

I like taking a new concept that I've learned for evaluating a limit and applying it. Limits have a lot to do with all of chapter 2. So I feel like once you've learned the initial concept you're set because it never goes away; it'll come back later in the chapter. Often times, it's difficult for me to retain information if it just all of a sudden goes away after a test or quiz. So, it's been interesting to see how limits and derivatives have continued to play a role in daily notes. I am excited and interested to learn the more complex concepts because I feel as though I already have a grasp of what was originally taught. For example, today in class we started learning Chapter 3 and derivatives came up once again. I am also interested to see how complex and complicated a limit or derivative could get. I feel like once we've learned how to do it one way, that's the only thing you could do with a limit or derivative. But in reality, it's just a small piece of the puzzle, so I'm curious to see how many pieces of the puzzle there are as we learn all the complex concepts of limits and derivatives.
{"url":"https://mth151.wordpress.com/2009/02/05/math-past-present-and-future/","timestamp":"2014-04-16T16:39:14Z","content_type":null,"content_length":"54312","record_id":"<urn:uuid:3cd4ed4f-edcc-48b4-af3f-9560e0e0d6f7>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help

May 15th 2005, 10:02 AM #1

Perhaps matrices seem difficult to you at first. Solving for determinants of larger systems is a little time-consuming. How long would it take you to compute a 4 by 4 determinant? How many two by two determinants would you need to evaluate the determinant?

Last edited by paultwang; May 16th 2005 at 12:30 AM.
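As an added illustration (not part of the original thread): expanding a 4 by 4 determinant by cofactors along the first row produces four 3 by 3 determinants, and each of those expands into three 2 by 2 determinants, so twelve 2 by 2 determinants are evaluated in all. A small Python sketch of that expansion:

    def det(m):
        # Determinant by cofactor (Laplace) expansion along the first row.
        n = len(m)
        if n == 1:
            return m[0][0]
        if n == 2:
            return m[0][0] * m[1][1] - m[0][1] * m[1][0]
        total = 0
        for j in range(n):
            # Minor: delete row 0 and column j.
            minor = [row[:j] + row[j + 1:] for row in m[1:]]
            total += (-1) ** j * m[0][j] * det(minor)
        return total

    # A 4x4 call reaches the 2x2 base case 4 * 3 = 12 times.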
{"url":"http://mathhelpforum.com/math-topics/243-matrices.html","timestamp":"2014-04-17T14:07:04Z","content_type":null,"content_length":"30455","record_id":"<urn:uuid:78dc79f5-ba2e-40cc-a757-db57a016c473>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00592-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US8126260 - System and method for locating a three-dimensional object using machine vision

This invention relates to machine vision systems, and more particularly to machine vision systems that locate viewed objects having three-dimensional characteristics.

The use of advanced machine vision systems and their underlying software is increasingly employed in a variety of manufacturing and quality control processes. Machine vision enables quicker, more accurate and repeatable results to be obtained in the production of both mass-produced and custom products. Typical machine vision systems include one or more cameras (typically having solid-state charge couple device (CCD) or CMOS-based imaging elements) directed at an area of interest, frame grabber/image processing elements that capture and transmit CCD images, a computer or onboard processing device, a user interface for running the machine vision software application and manipulating the captured images, and appropriate illumination on the area of interest.

Many applications of machine vision involve the inspection of components and surfaces for defects that affect quality, or determination of the relative position of a part in multiple degrees of freedom with respect to the field of view. Machine vision is also employed in varying degrees to assist in manipulating manufacturing engines in the performance of specific tasks. A particular task using machine vision is visual servoing of robots, in which a robot end effector is guided to a target using machine vision feedback. Other applications also employ machine vision to locate a stationary and/or moving pattern.

The advent of increasingly faster and higher-performance computers has enabled the development of machine vision tools that can perform complex calculations in analyzing the pose of a viewed part in multiple dimensions. Such tools enable a previously trained/stored image pattern to be acquired and registered/identified regardless of its viewed position. In particular, existing commercially available search tools can register such patterns transformed by at least three degrees of freedom, including at least three translational degrees of freedom (x and y-axis image plane and the z-axis) and two or more non-translational degrees of freedom (rotation, for example) relative to a predetermined origin. An object in 3D can be registered from a trained pattern using at least two discrete images of the object with dissimilar poses.

There are a number of challenges to registering an object in three dimensions from trained images using this approach. For example, when non-coplanar object features are imaged, different features of the acquired image undergo different transformations, and thus a single affine transformation can no longer be relied upon to provide the registered pattern. In addition, the object's highlights and shadows "move" as the part is rotated in 3D, and the features undergo true perspective projection. Also, any self-occlusions in the acquired image will tend to appear as boundaries in the simultaneously acquired images. This fools the 2D vision system into assuming an acquired image has a different shape than the trained counterpart.

A powerful machine vision tool for registering objects in 2D is the well-known PatMax® system available from Cognex Corporation of Natick, Mass. This system allows the two-dimensional pose of a subject part to be registered and identified quite accurately from a trained pattern, regardless of rotation and scale.
Advanced machine vision search tools such as PatMax® also have the ability to take advantage of the previously known position of a search subject or target. This narrows the search area to positions relatively near the last known location. Therefore, searching is relatively faster on the next cycle since a smaller area is searched. In addition, these search tools can tolerate partial occlusion of a pattern and changes in its illumination, adding further to their robustness with respect to less-advanced machine vision approaches.

PatMax® operates by first acquiring a coarse image of the object, and then refining this image using an iterative, reweighted least-squares approximation of the acquired image with respect to the trained image that progressively reduces the relative error therebetween with each iteration. That is, the original coarse pose is initially compared to the trained data. The data that appears to match best is reweighted to give that data a higher value in the overall calculation. This serves to remove portions of the image that may represent occlusions, shadows or other inconsistencies, focusing the analysis upon more-reliable portions of the feature. After reweighting, a least-squares statistical approximation is performed on this data to refine the approximation of location and position of the object in the field of view. After (for example) approximately four iterations, the final position and location is determined with high accuracy.

Two-dimensional (2D) machine vision tools, such as PatMax®, are highly robust. It is highly desirable to adapt the techniques employed by such tools to provide a machine vision system that can resolve a trained pattern in three dimensions (3D), characterized by as many as six degrees of freedom (x, y and z translational degrees and three corresponding rotational degrees), or more.

This invention overcomes the disadvantages of the prior art by providing a system and method for determining the position of a viewed object in three dimensions by employing 2D machine vision processes on each of a plurality of planar faces (or other portions of the object image that can rotate with the origin and be divided into edgelets) of the object, and thereby refining the location of the object. The object can be at a previously unknown location and orientation or can be at a relatively known location (e.g. on the end effector of a robot that includes position feedback).

First, a rough pose estimate of the object is derived. This rough pose estimate can be based upon (for example) location data for an end effector holding the object, a comparison of the acquired object image to a table of possible silhouettes of the trained object in different orientations, or acquisition of a plurality of planar face poses of the object (using, for example, multiple cameras) and correlation of the corners of the trained image pattern, which have known coordinates relative to the origin, to the acquired patterns.

Once the rough pose is achieved, it is refined by defining the pose as a quaternion (a, b, c and d) for rotation and three variables (x, y, z) for translation, and employing a weighted least squares calculation to minimize the error between the defined edgelets of the trained model image and the acquired runtime edgelets. In the illustrative embodiment, the overall refined/optimized pose estimate incorporates data from each of the cameras' (or a single camera, acquiring multiple views) acquired images. Thereby, the estimate minimizes the total error between the edgelets of each camera's/view's trained model image and the associated camera's/view's acquired runtime edgelets.
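Stated as a formula (the summation notation below is added here for clarity and does not appear in the patent text), the quantity being minimized over the pose is

$$E(a,b,c,d,\;x,y,z)\;=\;\sum_{k}\sum_{i} w_{k,i}\,\bigl(\mathbf{n}_{k,i}\cdot\bigl(R(a,b,c,d)\,\mathbf{p}_{i}+\mathbf{t}\bigr)-d_{k,i}\bigr)^{2}$$

where k ranges over cameras/views, i over their runtime edgelets, the pair (n_k,i, d_k,i) describes the plane through a runtime edgelet and its camera's origin, p_i is the matched traintime edgelet center, t = (x, y, z) is the translation, and w_k,i are the correspondence weights developed further below.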
In the illustrative embodiment, the pose estimate is used to determine which 3D model point of the trained image corresponds to each acquired runtime image edgelet. 3D model points are collected into discrete coplanar sets, and a look-up table of coordinate values is constructed to find the closest model point corresponding to each image edgelet within each lookup table plane. Each plane of model features is then intersected by an imaginary ray that extends from the camera through the runtime image and the plane of model points. This provides the 2D position on the plane of model features. Using a correspondence lookup table, the model feature closest to the ray is determined. The model edgelet's angle is estimated through the pose. The system then confirms that the edges of the model are consistent with those of the runtime image.

The error between the plane through the runtime edgelet and the camera origin versus the center point of the trained model edgelet is considered. The error is then characterized as a dot product between a plane normal (normal to the runtime edgelet plane) and the runtime edgelet point. A transformation of the trained edgelets relative to the runtime edgelets (as a whole) is derived using a refinement procedure. This refinement procedure computes a minimum sum-squared error between the edgelets and associated center points of training edgelets for the planes. The error is further refined by weighting the results for each edgelet and recomputing the sum-squared error for each trained/runtime edgelet with the new weight, over a given number of iterations (e.g. using an iterative reweighted least squares technique). Upon completion of a predetermined number of iterations, a final, highly accurate transformation of the trained features relative to the runtime features is obtained from the error results.

In an illustrative embodiment, the system can be trained using the machine vision procedures described above that are used to establish the rough pose of at least two discrete views of an appropriate feature or face. In various implementations, two views of a single, discrete, planar face can be acquired using two or more different cameras. Alternatively, the two or more cameras can acquire views of different planar faces, or one camera (using, for example, a fish-eye lens) can acquire two different views of two different planar faces or a single planar face.

In an alternate embodiment, the system and method can take advantage of object silhouette data, whereby a plurality of silhouettes of object features is stored with position data from images of a training object acquired at a number of differing orientations. Silhouettes of the runtime object are acquired and compared to the training silhouettes to derive the best matching pose.

A method for registering an object in three dimensions according to an embodiment of this invention includes acquiring training images of an object used for training with one or more cameras at training time and, at runtime, subsequently acquiring runtime images of an object to be registered with the one or more cameras. A three-dimensional pose transformation between the pose of the object used at training time and the pose of the object to be registered at runtime is then determined.
This determination includes (a) defining features in each of the runtime images as three-dimensional rays through the origin of each of the one or more cameras, respectively, (b) matching the three-dimensional rays to corresponding features from the training images, and (c) computing an optimal pose estimate which maps the training features onto the corresponding three-dimensional rays of runtime features using iterative, reweighted least squares analysis. The acquiring of training images can be based upon views of at least two planar faces, or like features, on the object.

The invention description below refers to the accompanying drawings, of which:

FIG. 1 is a schematic perspective view of an exemplary machine vision arrangement with one or more imaging devices (cameras) interfaced to a processor, and showing an exemplary calibration procedure according to an illustrative embodiment of this invention;

FIG. 2 is a schematic perspective view of the machine vision arrangement of FIG. 1 showing an exemplary training procedure for recognizing a three-dimensional object and/or subsequent runtime registration procedure according to an illustrative embodiment of this invention;

FIG. 3 is a schematic diagram showing the registration of discrete, affinely transformed planar patterns from the object using the 2D machine vision process for training and subsequent runtime location of the object;

FIG. 4 is a schematic diagram showing a graphical representation of the combination of a plurality of 2D poses of planar faces of the object to generate an overall 3D pose of the object;

FIG. 5 is a schematic diagram showing a graphical representation of a procedure for corresponding a set of trained 3D model data, grouped into planes, with edgelets of the acquired image of a runtime object, and a ray projected between the camera, plane and runtime edgelet;

FIG. 6 is a flow diagram of an exemplary system training procedure in accordance with an illustrative embodiment;

FIG. 7 is a flow diagram of an exemplary runtime object registration procedure in accordance with an illustrative embodiment;

FIG. 8 is a schematic diagram showing a plane containing a runtime edgelet and its relationship to an associated center point of a traintime edgelet for use in refinement of the registered runtime pose; and

FIG. 9 is a flow diagram of a refinement procedure for deriving a final transformation in six degrees of freedom for the training features versus the runtime features using an iterative, weighted least squares error computation.

I. Calibration of the Cameras

FIG. 1 details an exemplary setup for a machine vision system 100 capable of registering objects in three dimensions (3D) using six degrees of freedom (three translational degrees and three rotational degrees). The system 100 includes a plurality of cameras 110, 112 and 114 in this embodiment that are each oriented to derive a different view of an underlying subject. The number of cameras and their positioning with respect to a subject is highly variable. In alternate embodiments, only two cameras can be employed, or even a single camera with a sufficiently wide field of view and some inherent distortion (for example, a camera equipped with a fish-eye lens), so that either at least two discrete poses of a subject can be derived from a single image (if no coarse pose estimate is supplied) or at least four features on the image occur along non-parallel rays (described below) from the camera (if a coarse pose estimate is supplied).
For the purposes of this description, the term "imaging system" can be used to describe any acceptable camera arrangement. The cameras 110, 112, 114, or imaging system, are operatively connected with a data processor, such as the depicted personal computer 120. The computer 120 includes appropriate interfaces (not shown) for converting electronic, pixelized image data from each of the cameras 110, 112, 114 to image pixel data that can be stored in the online memory (RAM) (not shown) and/or storage (disk) 124. The computer runs various operating system processes that are well-known and, among other functions, support a display 126 and associated user interface (keyboard 128, mouse 130, etc.). The user interface allows the operator of the system 100 to interact with it during setup (calibration) and runtime. The interface and machine vision processes are contained in software processes 132 that are loaded into the computer's online memory as needed. The working data from the cameras is also stored and handled by the computer's online memory in association with the machine vision application (132). The processes employed by the application are described further below.

It should be clear that a variety of interfaces and processors can be employed in alternate embodiments. For example, disconnectable and/or portable interfaces can be used for setup and/or runtime operation. Likewise, the processor can be an onboard processor within the imaging system and can include a solid state, non-volatile storage memory and processes that are (in whole or in part) resident on a specialized chipset, including, for example, one or more application-specific integrated circuit(s) (ASIC(s)). Thus, as defined herein, the term machine vision "processor" should be taken broadly to include a variety of systems that are interfaced with the imaging system and can perform the machine vision processes described herein.

As shown in FIG. 1, the system 100 is initially calibrated so that the cameras are properly registered with respect to a common three-dimensional (3D) coordinate system, including the position of the frame origin and the direction and scale of the orthogonal frame axes. As shown, a calibration plate 140 is placed in a fixed position while each camera 110, 112, 114 acquires an image of the plate. The plate in this embodiment defines a square boundary with four corners 142. Within the plate is a plurality of tessellated light and dark squares 144, 146. The center of the plate includes a fiducial 148 that can be any acceptable shape capable of resolution in multiple dimensions. In this embodiment, the fiducial 148 is an L-shaped region that encompasses a plurality of individual tessellated squares 144, 146. The fiducial typically defines the origin 160 of three orthogonal axes, X, Y and Z. Each axis defines a direction of translation within the space of the cameras. Likewise, a rotational degree of freedom RX, RY and RZ is disposed about each respective axis X, Y and Z.

Each camera 110, 112, 114 is oriented along a different, respective optical axis 150, 152, 154 (a line perpendicular to the camera's image plane), and is registered with respect to the common origin 160 of the axes X, Y, and Z. The system application thus stores calibration data 170 in the memory 124 and online. This data is used by the system to allow any image derived by a given camera in the imaging system to be registered within the common three-dimensional coordinate system as shown.

The system employs well-known calibration techniques to provide such a common coordinate system based upon a single, viewed calibration plate and fiducial. By way of further background, a discussion of camera calibration and the use of calibration plates can be found in the CVL Library under the general heading Multiple Field of View Camera Calibration and/or "Checkerboard" calibration, commercially available from Cognex Corporation of Natick, Mass. In addition, a discussion of mutually calibrated cameras using the same calibration plate is provided in commonly assigned U.S. patent application Ser. No. 11/246,024, entitled METHODS AND APPARATUS FOR PRACTICAL 3D VISION SYSTEM by Aaron Wallack, et al., the teachings of which are expressly incorporated herein by reference.
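By way of illustration only (the patent relies on the commercial tools cited above, not on any particular library), a plate-based registration of this kind can be sketched with OpenCV's checkerboard routines; the grid size and square pitch below are assumptions of this sketch:

    import numpy as np
    import cv2

    PATTERN = (7, 7)    # interior corners of the tessellated plate (assumed)
    SQUARE = 10.0       # square pitch in mm (assumed)

    # 3D corner positions in the plate's (common) frame: Z = 0 on the plate,
    # with the fiducial fixing the origin and the axis directions.
    obj = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

    def register_camera(views):
        """Calibrate one camera from grayscale views of the plate. With the
        plate held fixed, the rvec/tvec of that shared view place the camera
        relative to the common origin."""
        obj_pts, img_pts = [], []
        for img in views:
            found, corners = cv2.findChessboardCorners(img, PATTERN)
            if found:
                obj_pts.append(obj)
                img_pts.append(corners)
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_pts, img_pts, views[0].shape[::-1], None, None)
        return K, dist, rvecs, tvecs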
II. Training and Runtime Image Acquisition

Referring now to FIG. 2, an arrangement 200 used for registration of a viewed object 210 for training and subsequent runtime operation is shown in further detail. For the purposes of the illustrative embodiment, to be effectively registered the object 210 must undergo a single rigid transform in all applicable degrees of freedom. That is, the object cannot be deformable, nor have any internal links. In addition, the object should have at least one surface that can be rotated in one or more degrees about the origin in a manner that is registerable. This surface should be defined by planar boundaries or texture that can be resolved by the system 100 into a plurality of edgelets. While the object may include self-occluding boundary curves, these are ignored for the purposes of registration. Such surfaces are generally referred to by the simplified term "planar pattern" or "planar face." In this embodiment, at least two planar patterns should be acquired to allow registration of the object in the desired six degrees of freedom. In alternate embodiments, a single planar pattern, viewed from different vantage points or two different orientations, can be acquired. This can be accomplished using two or more cameras, or a single camera with an ability to create two images with differing viewpoints (for example, a fish-eye lens).

As shown in FIG. 2, the subject object includes two exposed planar faces 212, 214 of interest for registration purposes. These faces are somewhat complex and can result in some self-occlusion of features for certain acquired poses. Note that the viewing of two different faces with two discrete cameras is only illustrative of a variety of possible arrangements. In alternate embodiments, the system may acquire multiple images from a single planar face.

In training, the object image is acquired by the imaging system (110, 112 and 114). Again, the number of cameras is highly variable, and the images of cameras 110 and 114 are discussed herein without particular reference to the image of camera 112. The techniques for registering a pattern in three dimensions (3D) described hereinbelow can be employed at training time to derive the three-dimensional shape of the object 210. In an automated training embodiment, the object 210 is imaged to derive at least two simultaneously acquired images while the object remains fixed with respect to the origin and axes X, Y and Z. It can be assumed that the training object 210 generates an affinely transformed image with respect to the model image, and that there exists a weak perspective projection of the image; i.e. the system assumes a linearized approximation of the perspective image.
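The weak-perspective assumption can be checked numerically. The sketch below (added here for illustration; none of it comes from the patent) projects a planar patch through a full perspective camera and fits the best affine map to the result; the size of the worst residual indicates how much the 2D tool's affine model is being asked to absorb:

    import numpy as np

    def worst_affine_residual(P, X):
        """P: 3x4 perspective camera matrix. X: Nx3 points on a planar face,
        parameterized so the first two coordinates span the plane. Returns
        the largest pixel error of the best-fit affine approximation."""
        Xh = np.hstack([X, np.ones((len(X), 1))])
        uvw = Xh @ P.T
        uv = uvw[:, :2] / uvw[:, 2:]                     # true perspective projection
        A = np.hstack([X[:, :2], np.ones((len(X), 1))])  # affine design matrix
        M, _, _, _ = np.linalg.lstsq(A, uv, rcond=None)  # least-squares affine fit
        return np.abs(A @ M - uv).max()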
Alternate training techniques can also be employed. Such techniques can include the use of well-known search tools for resolving edges and surfaces. Conversely, in a manual training embodiment, the user manually inputs the position coordinates for the object's edges and surfaces of interest. This training information is stored in the memory (124) for subsequent use during the runtime registration process. Note that, during training, the user can alternatively manually specify which training edgelets are on the same plane, and then specify that plane. The individual training edgelet positions can then be determined automatically by intersecting imaginary rays (described further below) from the camera origin through each training edgelet with the specified plane.

III. Estimation of Object Pose in Training and Runtime

Referring to FIG. 3, a multi-part view illustrates how each of the two exemplary cameras 110, 114 derives a different image of the object 210 for both automated training (according to one embodiment) and runtime operation (described below). The camera 110 acquires the image 310. In this image the most prominent planar face is the face 214, which appears as image face 314. The adjoining object face 212 appears as the corresponding oblique image face 312. The system may automatically choose the face that provides the least oblique projection. Likewise, the differently oriented camera 114 acquires the image 320, in which the object face 212 is more clearly viewed as image face 322. Note that it is also contemplated that the camera or cameras herein may acquire different images of a single planar face in another implementation of the invention. Both of these images undergo a degree of perspective projection.

The above-described PatMax 2D, and other available vision systems that use similar techniques, are very effective and robust tools for finding affinely transformed patterns in images. In this case, each depicted planar image pattern 314 and 322 is a 3D-rotated version of the corresponding model 334 and 342, which appears as an affine transformation of the model. Thus, the system assumes that the apparent affine approximation sufficiently approximates the perspective projection which is actually occurring. Note that the system need not choose the most prominent face for registration. However, this may render the process more reliable.

In both automated and manual training modes, the training model images of the object 210 are bounded by rectangular/square subwindows 350 and 352, having respective corners 360, 362, that are a subset of each camera's overall field of view. The corners 360, 362 define a known area of this overall camera field. The spatial coordinates of each subwindow are known by the system via the calibration.

After trained model image data for the object 210 is generated by manual or automated processes, the system can undertake the runtime registration process. First, as again shown in FIG. 2, the (now) runtime object 210 is imaged to derive at least two simultaneously acquired runtime images 310, 320, while the object remains fixed with respect to the origin and axes X, Y and Z. Perspective effects are again linearized as described above.

IV. Registration Computation

As described above, the system has knowledge of the 3D positions corresponding to the corners 360, 362 of the training pattern subwindows 350, 352. Referring to FIG. 4, which shows the acquired runtime poses 410, 414 and their associated subwindow corners 420, 424, the system can extrapolate the 2D positions corresponding to the corners 360, 362 of the training pattern subwindows. The system can then compute the object pose from sets of 2D image points (the corners 360, 362 of the training subwindows) and corresponding 3D points (the 3D positions corresponding to the corners 360, 362 of the training subwindows in the object's home position). The system can then proceed to compute the runtime image's 3D poses from 2D image points which correspond to 3D model positions.
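The patent does not name a particular 2D-3D pose solver for this step; as a stand-in, the sketch below recovers the same kind of coarse pose from the four subwindow corners with OpenCV's solvePnP:

    import numpy as np
    import cv2

    def coarse_pose(corners_2d, corners_3d, K, dist):
        """Coarse object pose from subwindow corners: 2D runtime image
        positions versus the corners' known 3D home positions. K and dist
        come from the calibration of Section I."""
        ok, rvec, tvec = cv2.solvePnP(
            np.asarray(corners_3d, np.float32),
            np.asarray(corners_2d, np.float32), K, dist)
        R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation of the pose estimate
        return R, tvec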
As will be described further below with reference to FIG. 5, the 2D image positions correspond to rays through each camera's origin. Thus, finding a pose which maps the 2D image points to the corresponding 3D model points is analogous to finding a pose which maps the 3D model points onto the rays corresponding to the 2D image points. The computation procedure employs a function which maps from 3D pose to sum squared error, and then finds the pose with minimum error relative to the training model image.

The 3D poses are characterized by quaternions, which are four numbers (a, b, c, d) that characterize a rigid 3D rotation by quadratic expressions (rather than trigonometric expressions). This form of expression is more efficiently employed by a computer processor. Note that quaternions require that a^2+b^2+c^2+d^2=1. The rotation matrix corresponding to the quaternions (a, b, c, d) is shown below:

$$\begin{bmatrix} a^2+b^2-c^2-d^2 & 2bc-2ad & 2ac+2bd \\ 2ad+2bc & a^2+c^2-b^2-d^2 & 2cd-2ab \\ 2bd-2ac & 2ab+2cd & a^2+d^2-b^2-c^2 \end{bmatrix}$$

Using quaternions, the system constructs a polynomial expression characterizing the error in terms of (a, b, c, d, X, Y, Z), where a, b, c, d represent the rotation and X, Y, Z (also referred to as the respective translation variables tx, ty, tz) represent the translation. The Levenberg-Marquardt Algorithm (LMA) is used in this embodiment to find the configuration which induces minimum error.

Note that the system only solves for six variables at a time, while there exist a total of seven variables: rotation variables a, b, c, d, and translation variables tx, ty, tz. Since the above process has yielded the approximate, coarse pose estimate, the system can enforce that one of the four (a, b, c, d) variables is fixed at 1 (for example, the system can choose the one of the a, b, c, or d variables which has maximal magnitude at the coarse pose estimate). The solution is mainly concerned with the ratios between a, b, c, and d; i.e., the result remains constant if all of a, b, c, and d are multiplied by the same constant value. Thus, by fixing one of the a, b, c, d values, the primary restriction is that the value that is fixed does not correspond to 0 in the optimal solution. For this reason, the variable having the maximum magnitude in the coarse pose candidate (if one exists) is selected to be set equal to 1. If an initial coarse pose estimate is unavailable, the system can alternatively run four separate solvers where a, b, c, and d are each set to 1, respectively. This is because it is difficult to numerically minimize an error function while simultaneously enforcing the constraint (a^2+b^2+c^2+d^2=1). Rather, either a, b, c, or d is set to 1, and then the remaining six variables are solved for. The computation is described further below.
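In code, the quadratic form above is direct to evaluate. This sketch (an addition, not the patent's implementation) follows the unnormalized-quaternion convention used here, dividing by a^2+b^2+c^2+d^2 so the matrix remains a rotation even when one component has been fixed at 1:

    import numpy as np

    def quat_to_matrix(a, b, c, d):
        """Rotation matrix for quaternion (a, b, c, d)."""
        s = a*a + b*b + c*c + d*d
        return np.array([
            [a*a + b*b - c*c - d*d, 2*(b*c - a*d),         2*(a*c + b*d)],
            [2*(a*d + b*c),         a*a + c*c - b*b - d*d, 2*(c*d - a*b)],
            [2*(b*d - a*c),         2*(a*b + c*d),         a*a + d*d - b*b - c*c],
        ]) / s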
The system can now determine which runtime image featurelet/edgelet corresponds to which 3D model image point. Given a pose estimate, the system can efficiently employ lookup tables to determine which 3D model point corresponds to each image featurelet or edgelet. A featurelet/edgelet is a small segment of a defining boundary of a planar face or other registered feature on the training model image and runtime object image. The 3D model points can be collected into distinct coplanar sets, from which a lookup table is constructed which finds the "closest" model point to every point in the plane. Then, given the pose, the projection corresponding to each image edgelet is intersected with each lookup table plane, and the system performs a lookup from the table for the "closest" model feature.

As discussed above, and referring to FIG. 5, for each model plane 510 of model features 511, the system intersects an imaginary ray through the runtime image featurelet/edgelet 514, the camera (110), and that plane 516. This projection provides a 2D position (within a 2D coordinate system, symbolized by the grid lines 520) on the plane 516 of model features. The system can employ a conventional correspondence lookup table to determine the model feature closest (symbolized by arrows 522) to this ray. Such a 2D lookup table can be constructed as a Voronoi table based on the model points, similar to the manner described by D. W. Paglieroni in "Distance Transforms," Computer Vision, Graphics, and Image Processing, 54:56-74, 1992, or the manner described by D. W. Paglieroni in "A Unified Distance Transform Algorithm and Architecture," Machine Vision and Applications, Volume 5, Number 1, pp. 47-55, December 1992, the teachings of which are herein incorporated by reference. Each entry in the 2D lookup table should record the closest model feature as computed/determined in the Voronoi table.

There may be multiple candidate matches (i.e. one closest point for each model plane), in which case all the identified matches (on various model planes) satisfy the feature orientation constraint and any distance constraint (e.g. the maximum allowable distance that the system recognizes for determining closeness). The feature orientation constraint relates to the orientation match between the model feature viewed at its current pose and the orientation of the runtime feature; i.e. the system is aware of the 3D direction of the edge at each model point, because it knows the edgelet orientation at each model point and it knows the plane that the model feature resides in. Alternatively, the system can choose to use only the model feature (from all of the candidate planes) which "best matches" the runtime feature. It should be noted that every edgelet can be characterized by a discrete ray, thereby allowing each edgelet to be matched readily with the closest 3D position/model feature for that ray.
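A brute-force sketch of this step (illustrative only: the patent's lookup is the precomputed Voronoi-style table described above, not the linear search used here, and all names are this sketch's own):

    import numpy as np

    def closest_model_point(cam_origin, ray_dir, plane_n, plane_d, plane_pts):
        """Intersect the ray from the camera through a runtime edgelet with
        one (pose-transformed) plane of model features, n . x = d, then
        return that plane's nearest model point to the intersection."""
        denom = plane_n @ ray_dir
        if abs(denom) < 1e-12:
            return None                    # ray parallel to the plane
        t = (plane_d - plane_n @ cam_origin) / denom
        if t <= 0:
            return None                    # plane lies behind the camera
        hit = cam_origin + t * ray_dir     # 2D position on the model plane
        i = np.argmin(np.linalg.norm(plane_pts - hit, axis=1))
        return plane_pts[i]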
V. Training and Registration Procedure Overview

Reference is now made to FIGS. 6 and 7, which respectively show a summary of the procedure 600 for training model features and the basic procedure 700 for subsequently registering objects in accordance with the above description. In the training procedure 600 (FIG. 6), the 3D positions of selected object model features are first determined by manual or automated techniques as described above (step 610). The 3D model features are then grouped into appropriate planes (step 620). Finally, the planes and their associated features are placed into an associated 2D correspondence lookup table that can be used at runtime to register the runtime pose (step 630).

During the runtime procedure 700 (FIG. 7), the system first obtains a rough pose estimate of the runtime image in accordance with the procedures described above (step 710). Each runtime edgelet in the acquired image is then associated with an intersecting imaginary ray from the camera (step 720). Each plane of model features is then transformed in accordance with the pose estimate and intersected with the ray (step 730). Next, the runtime procedure 700 employs the 2D correspondence lookup table established in training procedure step 630 to locate the closest-positioned 3D model feature/point to that ray on a given plane (step 740). Various procedures can be employed for deciding the best candidate point where there is a plurality of planes with a "closest" point. The procedure 700 estimates the model feature's angle through the runtime pose and confirms that the feature's edges are consistent with those of the pose (step 750). If the orientations and distances of the respective edges of the runtime pose and the training features are consistent (decision step 760 and branch 762), then the procedure 700, in step 770, includes the present orientations and distances in the runtime image refinement procedure (steps 910-940 in FIG. 9 below) to determine a more accurate pose. If the edges are not sufficiently consistent (e.g. they do not meet the angular orientation and/or distance constraints), then the procedure 700 ignores the subject edgelet from which the orientation and distance measurements are derived, and moves to the next edgelet to again perform procedure steps 720-760 on the new edgelet. When all edgelets have been analyzed by the procedure, the procedure 700 ends and any consistent edgelets have been refined.

VI. Refinement of Registration Through Error Contributions of Data

In general, an illustrative refinement procedure for the coarse pose derived above relies on knowledge of the 3D position of each 2D edgelet in the training model. Also, in this embodiment, the refinement procedure may incorrectly register a discrete image edgelet corresponding to a projected 3D position even though that position is self-occluded. In other words, the 3D registration procedure does not concern itself with knowing occluding faces and checking each registered edgelet for occlusion. Rather, the feature as a whole is registered.

Referring now to FIG. 8, the system refines the registration of runtime images relative to the trained object by taking into consideration the error between the plane 810 through the runtime edgelet 820 (and the camera origin 830) and the center point 840 of the traintime edgelet. This error can be characterized as a distance 850, and is derived for each runtime edgelet corresponding to a traintime edgelet. Notably, by characterizing the error in this manner, it reduces to a dot product between a plane normal (normal to the runtime edgelet plane) 860 and the mapped traintime point. Thus, the sum squared error can be characterized algebraically as:

(RuntimePlaneNormal . [Matrix for mapping Traintime Points] * TraintimePosition - d)^2
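Read concretely (with this sketch's own variable names; the plane is expressed in the common frame, so its offset d is generally nonzero):

    import numpy as np

    def edgelet_error(plane_n, plane_d, R, t, train_center):
        """Signed distance from the mapped traintime edgelet center point
        to the plane through the runtime edgelet and the camera origin.
        The refinement minimizes the weighted sum of squares of these."""
        mapped = R @ train_center + t      # traintime point under the pose
        return plane_n @ mapped - plane_d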
VII. Weighted Least Squares Error Computation

The error corresponding to a particular set of data can be characterized by the monomial coefficients of the error expression. As a simplified example, if the error expression were 2x^2+3x+7, then the system need track only the coefficient values 2, 3, and 7. As discussed above, quaternions are used to characterize the 3D transformation; i.e. the 3D rotation component of the transformation is characterized by degree two in a, b, c, d, and by degree one in tx, ty, and tz (also denoted simply as x, y and z). Thereby, the total error between 3D lines and 3D points can be characterized as a rational algebraic expression in terms of a, b, c, d, x, y, and z.

Given a set of points and corresponding planes, the system can compute the coefficients of the sum squared error polynomial. Furthermore, the system can incorporate weights for each edgelet and corresponding normal plane to compute the coefficients (including the weight data). In particular, the system employs the above-described Levenberg-Marquardt Algorithm (LMA) to determine the pose which induces the minimum squared error. An iterative reweighted least squares technique is applied to compute the optimal pose estimate. In general, at each iteration, following the computation of a pose estimate, the model features corresponding to the runtime image features are redetermined. In particular, for a given pose, the system estimates the distance between each runtime edgelet and the corresponding normal plane, and then reweights each pair according to how well the transformed model points (as a whole) match the planes. Then, the system reestimates the pose based on these new weights, and in a subsequent step recomputes the weights based on the reestimated pose, and continues reestimating the pose and recomputing the weights until finally converging on a pose (or performing a maximum number of iterations).

While the present embodiment employs only weighting based on image feature distance, in alternate embodiments weighting schemes based on more comprehensive parameters can be employed. In general, the individual runtime edgelet/model edgelet (normal plane/point) correspondences can be weighted according to certain match criteria, including:

(a) The image distance between each individual runtime edgelet/model edgelet pair. If the image distance discrepancy exceeds twice the overall root-mean-square (RMS) image distance discrepancy of all runtime/model edgelet pairs, then the system weights this correspondence as 0; if the image distance discrepancy is between 1*RMS and 2*RMS, then the system linearly weights this correspondence as (1 - (discrepancy - RMS)/RMS); and if the image distance discrepancy is less than the RMS image distance, then the weight is 1.

(b) The image angular discrepancy between each individual runtime edgelet/model edgelet pair. For example, if the angular discrepancy is greater than 15 degrees, then the weight is 0; if the angular discrepancy is between 7.5 degrees and 15 degrees, then the weight is (1 - (angular discrepancy - 7.5)/7.5); and if the angular discrepancy is less than 7.5 degrees, then the weight is 1.

(c) The distance from the camera. For example, model edgelets which are twice as far from their respective cameras are weighted one-half (1/2).

For any of the above (or other) weighting schemes, all of the weight parameters are combined. For example, a weighting of 0.64 due to image distance discrepancy, a weighting of 0.5 due to image angular discrepancy, and a weighting of 0.5 due to distance from the camera combine to form an overall weight of (0.64*0.5*0.5)=0.16.
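These rules translate directly into code; the sketch below mirrors them (the variable names and the reference camera distance used to normalize rule (c) are this sketch's assumptions):

    def correspondence_weight(dist, rms_dist, ang_deg, cam_dist, ref_cam_dist):
        """Combined weight for one runtime/model edgelet correspondence."""
        # (a) image distance: 1 below the RMS, ramping to 0 at twice the RMS
        if dist <= rms_dist:
            w_dist = 1.0
        elif dist >= 2.0 * rms_dist:
            w_dist = 0.0
        else:
            w_dist = 1.0 - (dist - rms_dist) / rms_dist
        # (b) angular discrepancy: 1 below 7.5 degrees, 0 beyond 15 degrees
        if ang_deg <= 7.5:
            w_ang = 1.0
        elif ang_deg >= 15.0:
            w_ang = 0.0
        else:
            w_ang = 1.0 - (ang_deg - 7.5) / 7.5
        # (c) camera distance: a feature twice as far weighs half as much
        w_cam = ref_cam_dist / cam_dist
        return w_dist * w_ang * w_cam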
To summarize, FIG. 9 details a simplified flow diagram of the generalized procedure 900 for refining the transform between the trained model features and the runtime features using a weighted least squares computation in accordance with an embodiment of this invention. First, in step 910, for each runtime feature the procedure establishes a dot product result between the coordinates of the normal to the runtime edgelet-containing plane and the associated, closest traintime feature center point for that plane. Next, the minimum least squares error (or any other acceptable statistical operation) for each result is computed using appropriate computational techniques (step 920). The results are then each weighted based upon the image distance discrepancy between the runtime and traintime edgelet versus the average (RMS) image discrepancy across all features/edgelets (step 930). As noted above, other weighting schemes can be employed. In step 940, the minimum least squares error is recomputed using the weighting of each point result from step 930.

In decision step 950, the procedure 900 determines whether sufficient iterations have occurred. Typically four iterations should be sufficient, but the number of iterations is highly variable. If a sufficient number of iterations have not occurred, then the procedure branches back to step 720 in the edgelet correspondence and consistency procedure 700 of FIG. 7. All runtime edgelets are analyzed to determine their corresponding training features, and are also analyzed for relative consistency with respect to the training features. When all edgelets have been included as consistent, or ignored, procedure step 770 branches back to refinement procedure (900) step 910 and the next iteration of the procedure 900 occurs. When sufficient iterations have occurred, decision step 950 branches to step 960, and the final six-degree-of-freedom transformation of the training pose is derived, with which the runtime image is registered.

Given the complex and voluminous nature of the weighted sum square calculation, the system typically employs a publicly available symbolic mathematics software package (such as the popular Maple package from Waterloo Maple, Inc. of Canada) to compute the symbolic weighted error contributions automatically from the corresponding points and associated planes, and also to generate source code which can be compiled to run quickly on a computer. Then, at runtime, the expressions are instantiated from the sets of points and corresponding planes via the compiled computer-generated code, and then solved numerically using numerical routines which also include code that has been automatically generated from symbolic mathematics software. In operation, approximately four iterations will generate a transform of the runtime pose that is highly registered with respect to the training data.
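The loop as a whole can be sketched numerically (an illustration only: it substitutes scipy's Levenberg-Marquardt solver for the compiled generated code the text describes, holds the correspondences fixed for brevity, and applies only the distance-based reweighting of rule (a)):

    import numpy as np
    from scipy.optimize import least_squares

    def rot(a, b, c, d):
        # Quaternion rotation matrix, as in Section IV.
        s = a*a + b*b + c*c + d*d
        return np.array([[a*a+b*b-c*c-d*d, 2*(b*c-a*d),     2*(a*c+b*d)],
                         [2*(a*d+b*c),     a*a+c*c-b*b-d*d, 2*(c*d-a*b)],
                         [2*(b*d-a*c),     2*(a*b+c*d),     a*a+d*d-b*b-c*c]]) / s

    def refine(pose0, normals, ds, centers, iters=4):
        """Iteratively reweighted least squares over point-to-plane errors.
        pose0 = (a, b, c, d, tx, ty, tz); 'a' is held at its coarse value to
        fix the quaternion scale, and b, c, d, tx, ty, tz are solved for.
        Assumes at least six edgelet correspondences."""
        pose = np.asarray(pose0, float)
        w = np.ones(len(ds))
        for _ in range(iters):
            def resid(p, w=w):
                q = np.concatenate([[pose0[0]], p[:3]])
                pts = centers @ rot(*q).T + p[3:]
                return w * ((normals * pts).sum(axis=1) - ds)
            p = least_squares(resid, pose[1:], method='lm').x
            pose = np.concatenate([[pose0[0]], p])
            err = np.abs((normals * (centers @ rot(*pose[:4]).T
                                     + pose[4:])).sum(axis=1) - ds)
            rms = np.sqrt((err ** 2).mean())
            w = np.clip(2.0 - err / max(rms, 1e-12), 0.0, 1.0)  # rule (a)
        return pose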
The following is a more-detailed description of the procedure for estimating the pose of a set of 3D model edgelets corresponding to a set of runtime image edgelets viewed from multiple cameras. Stated again, each runtime image edgelet viewed from a previously calibrated camera (110, 112, and/or 114) corresponds to a plane in physical space. A point in the image corresponds to a line in physical space, e.g. all of the points along the line which project to the same image coordinate. The edgelet in the image can be considered two endpoints, wherein each endpoint corresponds to a ray through the camera's origin and that endpoint on the image plane. The edgelet thereby corresponds to two discrete rays in physical space, and the three points (i.e. the camera's origin and the two edgelet endpoints) span a plane.

The distance between a plane and a point is a linear function. The system characterizes the plane by the normal thereto and the distance from the origin, as (ax+by+cz=d). The squared distance between a given point and an associated normal plane is defined by the relationship (ax+by+cz-d)^2. The planes are fixed/determined by the runtime edges as described above, and the (x, y, z) positions are the model positions transformed by the six-degree-of-freedom pose. The squared error contribution (ax+by+cz-d)^2 can be weighted according to the distance between the point and the normal plane, and/or according to the orientation discrepancy between the model and runtime features. The discrepancy can also be scaled by the inverse of the distance from the camera (to characterize the perspective projection).

The general form of a sum of squared errors between fixed planes and corresponding points is a fourth order equation in a, b, c, d and a second order equation in tx, ty, tz. The pose with minimum error can be solved by finding the configuration for which the partial derivatives of the error function (with respect to the variables) are zero. In particular, the pose can be solved using numerical methods. In order to improve runtime solver performance, the equation is made generic in terms of the monomial coefficients of the sum squared equation. As such, each model point (Mx, My, Mz) mapped by the six-degree-of-freedom transform (a, b, c, d, tx, ty, tz) can be computed as follows:

$$\begin{bmatrix} \mathrm{Mapped}_x \\ \mathrm{Mapped}_y \\ \mathrm{Mapped}_z \end{bmatrix} = \begin{bmatrix} a^2+b^2-c^2-d^2 & 2bc-2ad & 2ac+2bd \\ 2ad+2bc & a^2+c^2-b^2-d^2 & 2cd-2ab \\ 2bd-2ac & 2ab+2cd & a^2+d^2-b^2-c^2 \end{bmatrix} \begin{bmatrix} M_x \\ M_y \\ M_z \end{bmatrix} + \begin{bmatrix} t_x(a^2+b^2+c^2+d^2) \\ t_y(a^2+b^2+c^2+d^2) \\ t_z(a^2+b^2+c^2+d^2) \end{bmatrix}$$

Then, the sum squared error between a point and a fixed normal plane is:

$$\left(\begin{bmatrix} \mathrm{PlaneNorm}_x & \mathrm{PlaneNorm}_y & \mathrm{PlaneNorm}_z \end{bmatrix} \begin{bmatrix} \mathrm{Mapped}_x \\ \mathrm{Mapped}_y \\ \mathrm{Mapped}_z \end{bmatrix} - d\right)^2$$

Note that this expression is a rational polynomial in tx, ty, tz, and a, b, c, d. In particular, it is quadratic in the variables tx, ty, and tz, and quartic in the variables a, b, c, d. In practice, a solution for this expression requires it to be divided by the sum of the squares of a, b, c and d. However, the solution in the illustrative embodiment computes only the numerator, and the denominator is taken into account when partial derivatives are computed for the expression, by employing the quotient rule of differentiation. For a further description, refer to the COMMENT in the exemplary Maple code listing below. Consequently, this polynomial can be characterized by keeping track of the coefficients, and the equation can be generalized (made generic) using the tools in the publicly available MARS library (a multi-polynomial system-solver environment), implemented using the Maple-format computer-readable code described further below. The MARS library is publicly available via the World Wide Web from the Department of Computer Science at the University of North Carolina at the Web address www.cs.unc.edu/~geom/MARS/.
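The generic-coefficient bookkeeping can be reproduced with any symbolic package. A small sympy sketch standing in for the Maple workflow below (the symbol names follow the listing; treating pt as playing the role of -d is this sketch's reading of the listing, not stated in the patent):

    import sympy as sp

    a, b, c, d, tx, ty, tz = sp.symbols('a b c d tx ty tz')
    x, y, z, px, py, pz, pt = sp.symbols('x y z px py pz pt')

    s = a**2 + b**2 + c**2 + d**2
    R = sp.Matrix([
        [a**2+b**2-c**2-d**2, 2*(b*c-a*d),         2*(a*c+b*d)],
        [2*(a*d+b*c),         a**2+c**2-b**2-d**2, 2*(c*d-a*b)],
        [2*(b*d-a*c),         2*(a*b+c*d),         a**2+d**2-b**2-c**2]])
    mapped = R * sp.Matrix([x, y, z]) + s * sp.Matrix([tx, ty, tz])

    # Numerator of the point-to-plane error; pt plays the role of -d here.
    err = (sp.Matrix([[px, py, pz]]) * mapped)[0] + pt * s
    sumsq = sp.expand(err**2)

    # One coefficient per monomial in (a, b, c, d, tx, ty, tz): these are
    # the generically named values accumulated over all point/plane pairs.
    coeffs = sp.Poly(sumsq, a, b, c, d, tx, ty, tz).coeffs()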
Next, the system computes C code functions from a standard library to update the generically-named coefficients (named, for example, atx0ty0tz0a1b1c1d1 to characterize the coefficient corresponding to the tx^0ty^0tz^0a^1b^1c^1d^1 term). Then, the system solves for the pose with minimum error by finding the configuration for which the partial derivatives of the error function (with respect to each of the variables) are zero. An iterative numerical technique, such as the Levenberg-Marquardt Algorithm (LMA), can be used to solve for the roots of the system of partial derivatives. An exemplary Maple-based program for deriving the weighted sum-squared error at each point is as follows:

    quatRot := ...
    val := matrix(1,4,[x,y,z,1]);
    dot := matrix(1,4,[px,py,pz,pt]);
    pos := multiply(quatRot,transpose(val));
    unit := (a*a+b*b+c*c+d*d);
    weightMat := multiply(dot,pos);
    readlib(`C`);
    read(`genericpoly2.map`);

COMMENT: Note that genPoly corresponds to the numerator of the rational algebraic expression, and that it is supposed to be divided by unit*unit. We use the quotient rule of differentiation to compute the overall derivative, which is (bottom*derivative(top) - top*derivative(bottom))/(bottom*bottom), but since derivative(unit*unit) = unit*2*derivative(unit), and since we are only interested in roots, we can factor out one power of unit and ignore the denominator.

    genPolyDA := simplify(expand(unit*diff(eval(genPoly[1]),a)-eval(genPoly[1])*4*a));
    genPolyDB := simplify(expand(unit*diff(eval(genPoly[1]),b)-eval(genPoly[1])*4*b));
    genPolyDC := simplify(expand(unit*diff(eval(genPoly[1]),c)-eval(genPoly[1])*4*c));
    genPolyDD := simplify(expand(unit*diff(eval(genPoly[1]),d)-eval(genPoly[1])*4*d));
    genPolyDTX := simplify(expand(diff(eval(genPoly[1]),tx)));
    genPolyDTY := simplify(expand(diff(eval(genPoly[1]),ty)));
    genPolyDTZ := simplify(expand(diff(eval(genPoly[1]),tz)));

The following function ptRayGenericPoly_addToValsWeighted() is used to update the coefficients based upon a 3D model point (x, y, z) and a plane (corresponding to an image edgelet viewed by a camera: px, py, pz, pt), with the contribution corresponding to this 3D model point and corresponding plane weighted by the given weight:

    void ptRayGenericPoly_addToValsWeighted(ptRayGenericPoly *vals,
                                            double x, double y, double z,
                                            double px, double py, double pz, double pt,
                                            double weight)

The resulting code generated by Maple based upon the above program expression is used by the processor to solve the weighted least squares computation in accordance with an embodiment of this invention. The code is conventional and is omitted for brevity.

VIII. Coverage Measurement

PatMax 2D provides a coverage measure for each result. This coverage expresses how much of the traintime model is explained by the runtime data. It is contemplated that the illustrative embodiment can also provide a coverage measurement using conventional techniques applicable in a 2D vision system. Generally, coverage refers to the fraction of traintime edges that are matched by the runtime edges. However, the "coverage" of a particular traintime edge is defined as the computed weighting of the correspondence between that traintime edge and the corresponding runtime edge. Additionally, it is contemplated that the illustrative embodiment (like PatMax 2D) computes the coverage by "shmearing" the coverage along the traintime chains of points/edgelets.
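The next paragraph gives the intuition behind this "shmearing"; as a minimal sketch of the neighbor-max rule (Python; the function and the example weights are illustrative, not taken from the patent):

    def shmear_coverage(weights):
        # Neighbor-max "shmearing" along one traintime chain: each edgelet's
        # coverage becomes the max of itself and its immediate chain neighbors
        n = len(weights)
        return [max(weights[max(i - 1, 0):min(i + 2, n)]) for i in range(n)]

    chain = [1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0]
    shmeared = shmear_coverage(chain)
    print(sum(shmeared) / len(shmeared))   # coverage fraction after shmearing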
In other words, if a traintime edgelet along a chain has a weight of 1.0, then both traintime edgelet neighbors (along the chain) are granted a weight of 1.0 (or, more generally, the maximum weight of either of the traintime edgelet's neighbors). The reason for this "shmearing" is that edges are not always sampled evenly—a line segment at traintime might correspond to 10 traintime edges, but at runtime, that edge may be further from the camera, and may only correspond to 8 runtime edges. Without "shmearing", the coverage would be 0.8 (because there are only 8 runtime edges to match 10 traintime edges), even though the line was completely seen in the runtime image. Note that the amount of "shmearing" applied to the coverage analysis can be analytically determined from the angle between the model features and the camera, such that if the object's pose significantly foreshortens a particular set of model features, then a single model feature in that set may shmear its coverage to neighboring model features which are more than one edgelet removed from the covered model feature.

IX. Other Considerations

Unlike certain competing approaches to 3D registration, it should be clear that this approach offers increased efficiency, accuracy and robustness in that it is based on the use of edges in the training and runtime images, rather than upon triangulation of computed 2D image locations, particularly where the 2D image locations were computed by registering a pattern or model in the image. This approach also employs direct computational methods such as least squares optimization. Moreover, the illustrative invention can make use of silhouette-based approaches, as discussed briefly above. Such silhouette-based approaches involve acquiring multiple (possibly hundreds or thousands of) images of the part at different poses, from which the image silhouettes are extracted and stored in memory as traintime image silhouettes in association with relevant position information. At runtime, the runtime silhouette is acquired and is matched to all of the traintime silhouettes. The best-matching training silhouette provides coarse alignment, with that traintime silhouette's pose information used to provide the coarse pose from which transforms are derived as described above. The foregoing has been a detailed description of illustrative embodiments of the invention. Various modifications and additions can be made without departing from the spirit and scope of this invention. Each of the various embodiments described above may be combined with other described embodiments in order to provide multiple features. Furthermore, while the foregoing describes a number of separate embodiments of the apparatus and method of the present invention, what has been described herein is merely illustrative of the application of the principles of the present invention. For example, the use of an iterative, weighted least squares calculation for refinement of the pose is only one of a variety of possible techniques for refining the transformation between the trained features and the runtime features. Other statistical techniques for minimizing error are expressly contemplated. Likewise, it should be clear that the procedures described herein can be implemented in hardware, in software consisting of program instructions executing on a computer processor, or in a combination of hardware and software.
Accordingly, this description is meant to be taken only by way of example, and not to otherwise limit the scope of this invention.
{"url":"http://www.google.ca/patents/US8126260?ie=ISO-8859-1","timestamp":"2014-04-16T07:38:50Z","content_type":null,"content_length":"193825","record_id":"<urn:uuid:8b79de34-ac5d-4959-899b-f050f2feef7e>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
Does this method have a name? Function Approximation by Polynomial Sum

I knew about it, thanks. I think I thought of this when I heard about Taylor series and I was guessing what it was. It looks bad with functions like sin(πx) and x^n; it looks too symmetric; its limit is almost always + or − infinity; and it completely ignores the negative of the function. But some problems would be partially solved if I could find a way to take the limit as n goes to infinity. Help would be appreciated if possible.

This math is messy, unfortunately. I'll have a look at those other methods. Maybe they'll be helpful, thank you.
{"url":"http://www.physicsforums.com/showthread.php?p=3902325","timestamp":"2014-04-19T12:32:15Z","content_type":null,"content_length":"31150","record_id":"<urn:uuid:4bafb4df-fbcc-419f-a505-5e1be0ef79ff>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
Method and apparatus for performing channel equalization in communication systems - Patent # 7535954

Inventors: Kim; Min-Ho (Suwon, KR); Park; Jae-Hong (Seoul, KR); Chung; Jung-Wha (Seoul, KR)
Assignee: Samsung Electronics Co., Ltd. (Gyeonggi-do, KR)
Date Issued: May 19, 2009
Application: 10/659,286
Filed: September 11, 2003
Primary Examiner: Ghebretinsae; Temesghen
Attorney Or Agent: Harness, Dickey & Pierce, P.L.C.
U.S. Class: 375/229; 375/232; 708/322
Field Of Search: 375/229; 375/232; 375/350; 708/322; 708/323
International Class: H03H 7/30
Patent Documents: 2001-196978

Abstract: A method and apparatus for performing channel equalization in a communication system may potentially reduce power consumption in channel equalizers of communication systems. A filtering circuit filters a received data sequence as a plurality of data values to be stored in a plurality of filter cells. Each filter cell may store at least one data value and may contain a coefficient related to the stored data value. A coefficient updating circuit may update the coefficients based on at least one parameter, and may compare the updated coefficients to a threshold. Based on the comparison, filter cells of selected coefficients may be selected for restoring the received data sequence to its original state.

Claim: What is claimed is:

1. A method of reducing a number of filter cells that require updating in a channel equalizer of a communication system, the method comprising: filtering a data sequence into a plurality of data values for storage in a plurality of filter cells having a plurality of adjustable coefficients; deriving an optimum value of at least one coefficient among the coefficients; updating the at least one coefficient with the derived optimum value to provide at least one updated coefficient value; comparing the at least one updated coefficient value to a threshold to eliminate at least one of the filter cells from having to be updated; and repeating, at least once, the filtering, deriving, updating, and comparing such that the updating and comparing do not occur for the at least one of the filter cells that is eliminated from having to be updated.

2. The method of claim 1, further comprising: setting an updated coefficient value to zero, if the updated coefficient value is less than the threshold.

3. The method of claim 1, wherein deriving an optimum value is performed during filtering a data sequence.

4.
A channel equalizing method, comprising: filtering a data sequence into a plurality of data values for storage in a plurality of filter cells having a plurality of adjustable coefficients; deriving an optimum value of at least one coefficient among the coefficients based on a training sequence associated with the data sequence currently being filtered and a known training sequence; updating the at least one coefficient based on the derived optimum value, a Kalman gain, and a difference between the training sequence associated with the data sequence currently being filtered and the known training sequence to provide an updated coefficient value; comparing the updated coefficient value to a threshold; reducing a number of filter cells with coefficients to be updated, based on the comparison; and repeating, at least once, the filtering, deriving, updating, and comparing, such that the updating and comparing do not occur for the filter cells that do not have coefficients to be updated.

5. The method of claim 4, wherein reducing a number of filter cells includes setting an updated coefficient value to zero, if the updated coefficient value is less than the threshold.

6. A coefficient updating circuit of a channel equalizer in a communication system, the circuit comprising: storage means storing coefficients related to data values of a received data sequence, at least one data value of the received data sequence received in one of a plurality of filter cells, each filter cell having a coefficient related to the received at least one data value; update means updating the coefficients based on at least one parameter; compare means comparing the updated coefficients to a threshold; and selecting means selecting filter cells of selected coefficients based on the comparison; wherein the coefficients of filter cells that are not selected are not updated by the update means, and wherein the coefficients of the filter cells that are not selected are not compared to the threshold.

7. The circuit of claim 6, wherein the received data sequence includes an associated training sequence, the circuit further comprising: deriving means determining an optimum value for each coefficient based on the associated training sequence and a known training sequence.

8. The circuit of claim 7, wherein the update means updates the coefficients based on one or more of the optimum values, a Kalman gain value, and a difference value between the associated training sequence and the known training sequence.

9. The circuit of claim 6, wherein the compare means: sets an updated coefficient to zero, if a value of the updated coefficient is less than the threshold; else selects filter cells to be updated, for updated coefficients equal to or exceeding the threshold.

10.
A channel equalizer, comprising: a filtering circuit filtering a data sequence and having a plurality of filter cells to receive data values of the filtered data sequence, each filter cell having an adjustable coefficient; and a coefficient updating circuit deriving an optimum value of at least one coefficient among a plurality of coefficients during the filtering, determining an updated coefficient value based on the optimum value, comparing the updated coefficient value to a threshold, and setting the updated coefficient value to zero, if the updated coefficient value is less than the threshold; wherein the coefficient updating circuit does not determine the updated coefficient value or compare the updated coefficient value for filter cells in which the updated coefficient value is set to zero.

11. A channel equalizer, comprising: a filtering circuit filtering a data sequence and having a plurality of filter cells to receive data values of the filtered data sequence, each filter cell having an adjustable coefficient; and a coefficient updating circuit deriving an optimum value for at least one coefficient among a plurality of coefficients during the filtering based on a training sequence associated with the data sequence that is currently being filtered and a known training sequence, determining an updated coefficient value based on the optimum value, a Kalman gain, and a difference between the associated training sequence and the known training sequence, comparing the updated coefficient value to a threshold, and reducing a number of the filter cells having coefficients to be updated, based on the comparison; wherein the coefficient updating circuit does not determine the updated coefficient value or compare the updated coefficient value for filter cells that do not have coefficients to be updated.

12. An apparatus which implements channel equalization in a communication system in accordance with the method of claim 1.

13. An apparatus which implements channel equalization in a communication system in accordance with the method of claim 4.

14. A channel equalizer in a communication system operating in accordance with the method of claim 1.

15. A channel equalizer in a communication system operating in accordance with the method of claim 4.

16. The method of claim 1, wherein a training sequence is associated with the data sequence.

17. The method of claim 16, wherein the channel equalizer includes a known training sequence.

18. The method of claim 17, wherein the associated training sequence is the same as the known training sequence.

19. The method of claim 17, wherein the associated training sequence is not the same as the known training sequence.

20. The method of claim 4, wherein repeating, at least once, the filtering, deriving, updating, and comparing comprises: repeating, at least once, the filtering, deriving, updating, comparing, and reducing, such that the updating, comparing, and reducing do not occur for the filter cells that do not have coefficients to be updated.

Description:

PRIORITY STATEMENT

This application claims the priority of Korean Patent Application No. 2002-73325, filed 23 Nov. 2002 in the Korean Intellectual Property Office, the disclosure of which is incorporated herein in its entirety by reference.

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a channel equalizing method and channel equalizer in a digital communication system.

2.
Description of the Related Art

Channel equalization is a signal processing technique typically used in a digital communication system. Channel equalization is typically performed in a digital communication system in order to prevent the occurrence of channel noise, channel distortion, multi-path error and multi-user interference, thereby improving system performance. Channel equalizers may typically be found in household appliances such as digital TVs and personal communication systems. The use of a variety of different types of channel equalizers in household appliances such as described above may increase a carrier interference to noise ratio, known as a signal-to-noise ratio (SNR), and reduce a symbol error rate (SER) of an input signal.

The Advanced Television Systems Committee (ATSC) provides standards for digital high-definition television (HDTV). The ATSC document A53, dated Sep. 16, 1995, describes an approved standard for digital TV. This standard specifies specific training sequences that are incorporated into video signals transmitted over a terrestrial broadcast, cable and/or satellite channel, etc. ATSC document A54, dated Oct. 4, 1995, describes general implementation of this standard.

ATSC document A54 discloses a method of adapting an equalizer's filter response to adequately compensate for channel distortion. This method may be disadvantageous, however, in that there is a higher probability that coefficients which are set in the equalizer are not set so as to adequately compensate for channel distortion which may be present as the equalizer first operates (i.e., upon start-up or initialization of the equalizer).

To force a convergence of the equalizer coefficients, a well-known `original training sequence` is transmitted. An error signal is formed by subtracting a locally generated copy of the training sequence from the output of the equalizer. The coefficients are set so as to minimize the error signal. After adaptation of the equalizer with the training signal, the equalizer may be used for filtering a video signal, for example.

In general, linear filters are used for channel equalization. Feedback-type non-linear filters are also commonly used in order to effectively remove impulse noise and non-linear distortion present in a communication channel, so as to improve equalizer performance. Further, a least mean square algorithm, which has a simple structure and requires a small amount of calculation, may be used as a `tap coefficient updating algorithm` in the equalizer. However, coefficients typically converge slowly when using the least mean square algorithm, which means the convergence time increases. Thus, this algorithm is typically unsuitable for a multi-path communication environment, in which the speed of data transmissions, and transmission delays, are increased. Accordingly, an equalizer is required which is capable of converging coefficients as fast as possible, during a short duration such as a period of a training signal, for example.

A `Kalman algorithm` is one of a group of algorithms having fast converging characteristics. However, the Kalman algorithm requires a substantial amount of calculation; thus there are difficulties in applying this algorithm to a communication system. Although substantial advances in hardware have enabled the use of the Kalman algorithm in digital communication systems, the increased processing power needed for these substantial calculations is a problem to be addressed.
SUMMARY OF THE INVENTION

Exemplary embodiments are directed to a method and apparatus for performing channel equalization in a communication system. A filtering circuit filters a received data sequence as a plurality of data values to be stored in a plurality of filter cells. Each filter cell may store at least one data value and may contain a coefficient related to the stored data value. A coefficient updating circuit may update the coefficients based on at least one parameter, and may compare the updated coefficients to a threshold. Based on the comparison, filter cells of selected coefficients may be selected for restoring the received data sequence to its original state.

BRIEF DESCRIPTION OF THE DRAWINGS

The above and other aspects and advantages of the present invention will become more apparent by describing, in detail, exemplary embodiments thereof with reference to the attached drawings, in which:

FIG. 1 is a schematic block diagram of a channel equalizer according to an exemplary embodiment of the present invention.
FIG. 2 is a flowchart illustrating a method of equalizing a channel according to an exemplary embodiment of the present invention.
FIG. 3 is a detailed block diagram of a channel equalizer according to an exemplary embodiment of the present invention.
FIG. 4 is a block diagram illustrating the functions of a coefficient updating circuit in FIG. 1.

DETAILED DESCRIPTION OF THE EXEMPLARY EMBODIMENTS

The present invention will now be described more fully with reference to the accompanying drawings, in which exemplary embodiments of the present invention are shown. However, exemplary embodiments of the present invention may be embodied in many different forms and should not be construed as being limited to the exemplary embodiments set forth herein. Rather, these exemplary embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concept of the invention to those skilled in the art. The same reference numerals represent the same elements throughout the drawings.

To provide a context for understanding a method and apparatus for performing channel equalization according to the exemplary embodiments of the present invention, a method of realizing a filter cell (or tap) of a filter using the Kalman algorithm is briefly described. A data signal D(t) received at an instant of time t (i.e., at time t) can be expressed by Expression (1):

D(t) = [D_1(t), D_2(t), ..., D_N(t)]^T. (1)

In Expression (1), T denotes a transpose matrix. The received data signal D(t) is a signal that is distorted when passing through a multi-path environment, which causes interference between signals and multi-path distortion. In a digital communication system such as a DTV, for example, a channel equalizer receives data through a channel, compares the received data with a training signal, determines the characteristics of the channel based on the comparison result, and restores the received data to its original data. At time t, a filter cell coefficient vector C(t), which is known as a tap coefficient or a coefficient for the equalizer, may be expressed by Expression (2):

C(t) = C(t-1) + K(t)e(t). (2)

In Expression (2), e(t) denotes a difference between a signal output from the channel equalizer and a training signal that is known to a receiving side (or the channel equalizer), at time t, and K(t) denotes a Kalman gain that is expressed by Expression (3):

K(t) = [λ⁻¹P(t-1)D(t)] / [1 + λ⁻¹D(t)^T P(t-1)D(t)]. (3)
In Expression (3), 0.9 < λ < 1, and P(t-1) denotes an error covariance matrix. The error covariance matrix P(t) is expressed by Expression (4):

P(t) = λ⁻¹P(t-1) - λ⁻¹K(t)D(t)^T P(t-1). (4)

A coefficient for a filter cell (or tap) may be updated using the Kalman algorithm, as shown by the following equations of Expression (5), annotated with the dimensions of each factor:

C(t) = (N×1) + (N×1)C
K(t) = [C(N×N)(N×1)] / [1 + C(1×N)(N×N)(N×1)]
P(t) = C(N×N) - C(N×1)(1×N)(N×N). (5)

In Expression (5), C denotes a constant, N denotes the sum of the number of feedforward filter cells and the number of feedback filter cells, and (N×N) denotes a matrix consisting of N columns and N rows (N being that sum). Expression (5) reveals that the amount of calculation by a filter using the Kalman algorithm increases when calculating (N×N) and (N×1)(1×N). The amount of calculation by the filter using the Kalman algorithm may be expressed as O(N²).

FIG. 1 is a schematic block diagram of a channel equalizer 100 according to an exemplary embodiment of the present invention. The channel equalizer 100 can be used in a receiver in a digital communication system such as a terrestrial digital television (DTV), high definition television (HDTV), etc., for example. The channel equalizer 100 may include a filtering circuit 200 which receives and filters input data D_i and outputs a filtering result to a coefficient updating circuit 300. The filtering circuit 200 may include a plurality of filter cells. Each filter cell may have a plurality of adjustable coefficients. For each filter cell, the coefficient updating circuit 300 adjusts at least one of the plurality of coefficients.

Respective received input data sets D_i (i is a natural number) may contain respective training sequences TS_i and data sequences DS_i. The coefficient updating circuit 300 equalizes the received training sequences TS_i and updates a given coefficient from the plurality of adjustable coefficients. Filtering circuit 200 thus contains a plurality of filter cells having adjustable coefficients. Coefficient updating circuit 300 derives an optimum value for at least one coefficient for each filter cell. This may be done using an already-known training sequence and a received training sequence (TS_i) transmitted in association with the data sequence (DS_i) that is currently being filtered. Thus, channel equalizer 100 may update a corresponding coefficient based on a number of parameters. The parameters may include the obtained optimum value, a Kalman gain, and a difference between the known training sequence and the transmitted training sequence. The channel equalizer 100 compares the updated coefficient with a given (or specified) threshold value, and may reduce the number of filter cells having a coefficient requiring an update, based on the comparison result.

FIG. 2 is a flowchart illustrating a method of equalizing a channel according to an exemplary embodiment of the present invention. Referring to FIG. 2, initially a received data sequence (DS_i) is filtered (function 410). An optimum value of at least one of the coefficients may be derived (function 420) using a data sequence which is already known to a receiving side (or to the equalizer 100) and a training sequence (TS_i) that is transmitted with a data sequence (DS_i) that is currently being filtered.
The optimum value represents an adjusted value of a selected coefficient that can be retrieved from a memory in equalizer 100 at the end of an equalization process, for example. The filtering and deriving functions 410, 420 may overlap in time. The value of the coefficient may be updated (function 430) based on at least one of the plurality of parameters. The parameters, as discussed above, may include the derived optimum value, a Kalman gain, and a difference between the transmitted training signal and the training signal which is already known to the receiving side. The updated coefficient may be compared (function 440) with a given threshold value, and filter cells (or taps) to be updated may be selected based on the comparison result. For example, filter cells having coefficients less than the threshold value may be set to zero. Those filter cells with coefficients equal to or exceeding the threshold may thus be selected, thereby potentially reducing the number of filter cells that need to be updated. After the selected coefficients are updated, a new data sequence is filtered.

FIG. 3 is a detailed block diagram of a channel equalizer 100 according to an exemplary embodiment of the present invention. Referring to FIG. 3, the channel equalizer 100 receives continuously input data sets D_i, and outputs an output signal X_out. A filtering circuit 200 includes a plurality of filter cells 210_1 through 210_m, a plurality of filter cells 220_1 through 220_n (m and n may be natural numbers greater than 2), and an adder 290. Each of the filter cells 210_1 through 210_m may be configured as a feedforward filter; each of the plurality of filter cells 220_1 through 220_n may be configured as a feedback filter. As shown in FIG. 3, filter cells 210_1 through 210_m may include corresponding data registers 230_1 through 230_m, coefficient registers 240_1 through 240_m, and multipliers 250_1 through 250_m. Filter cells 220_1 through 220_n may include corresponding data registers 260_1 through 260_n, coefficient registers 270_1 through 270_n, and multipliers 280_1 through 280_n. Each of the data registers 230_1 through 230_m stores a current data value for a data sequence that is related to each coefficient CK. Similarly, each coefficient register 240_1 through 240_m and each coefficient register 270_1 through 270_n stores current values of CK. Each data register 260_1 through 260_n stores a data value for a feedback data sequence currently related to the CK. A training sequence TS_i may be stored in a training sequence memory 350. The training sequence TS_i may be composed of m continuous data having values T_i(1), T_i(2), ..., T_i(m), for example. When a data sequence DS_i is transmitted to the filtering circuit 200, the filtering circuit 200 receives values DS_i(t) of a data sequence DS_i at a time t. That is, the data sequence DS_i is transmitted to the filtering circuit 200, and the data sequence values DS_i(t) are stored in the filter cells 210_1 through 210_m and the filter cells 220_1 through 220_n. At time t, the data sequence value DS_i(t) is transmitted to filtering circuit 200 and stored in data register 230_1 of the first filter cell 210_1.
At time t+1, the data sequence value DS_i(t) is transmitted to the data register (not shown) of the second filter cell 210_2 from the data register 230_1 of the first filter cell 210_1. During the transmission of the data sequence value DS_i(t), a next data sequence value DS_i(t+1) is also transmitted to the filtering circuit 200, received by the filtering circuit 200, and stored in the data register 230_1 of the first filter cell 210_1. At time t+2, the data sequence value DS_i(t) is transmitted to the data register (not shown) of the third filter cell 210_3 from the data register of the second filter cell 210_2, and the data sequence value DS_i(t+1) is transmitted to the data register of the second filter cell 210_2 from the data register 230_1 of the first filter cell 210_1. At the same time, a next data sequence value DS_i(t+2) is transmitted to the filtering circuit 200 and stored in the data register 230_1 of the first filter cell 210_1.

At time t, each of the multipliers 250_1 through 250_m, of the respective filter cells 210_1 through 210_m, receives a value of a coefficient CK stored in the corresponding coefficient register 240_1 through 240_m, and receives a data value stored in the corresponding data register 230_1 through 230_m. Also, each of the multipliers 280_1 through 280_n, of the respective filter cells 220_1 through 220_n, receives a value of a coefficient CK stored in the corresponding coefficient register 270_1 through 270_n, and receives a data value stored in the corresponding data register 260_1 through 260_n. Each of the multipliers 250_1 through 250_m and the multipliers 280_1 through 280_n multiplies the received two values and provides the multiplication result to the adder 290. The adder 290 calculates a difference Z_out between the sum of the multiplication results received from the multipliers 250_1 through 250_m and the sum of the multiplication results received from the multipliers 280_1 through 280_n, and outputs Z_out to a digital signal processor (DSP) 370.

In coefficient updating circuit 300, a coefficient memory 310 stores the coefficients CK for the filter cells 210_1 through 210_m and 220_1 through 220_n. A Kalman gain memory 320 stores a Kalman gain (or vector) K(t). An error covariance memory 330 stores an error covariance matrix P(t); and a data memory 340 stores received values of the training sequence TS_i. The original training sequence, discussed previously above, is stored in a training sequence memory 350. A comparator 360 receives signals output from the coefficient memory 310, compares these signals with a given threshold value in response to a given (or specified) control command, and sets values for the output signals (corresponding to CKs) to zero when these signals are less than the threshold value. The DSP 370 receives signals output from comparator 360, Kalman gain memory 320, error covariance memory 330, data memory 340, training sequence memory 350, and adder 290. The DSP 370 processes a received training sequence TS_i, calculates the optimum value of a particular one of the coefficients CK for the filter cells, and outputs the calculation result to a decision feedback equalizer (DFE) input data memory 380.
The DFE input data memory 380 receives signal (X_out)' from the DSP 370 and outputs output signal X_out to the data registers 260_1 through 260_n of the filter cells 220_1 through 220_n. The DSP 370 performs an equalization of the training sequence TS_i. During the equalization of the training sequence TS_i, the DSP 370 also rearranges the coefficients CK and allocates a Kalman gain value, an error covariance value, a data value, and a training sequence value for each coefficient, in order to equalize all CKs except for the CKs set to zero by comparator 360. The equalization of the received training sequence TS_i includes filtering the received training sequence TS_i by the DSP 370, based on the values of the coefficients CK stored in the coefficient memory 310. During the filtering of the training sequence TS_i, the stored CKs are repeatedly adjusted by DSP 370, so that the received training sequence TS_i emulates the original training sequence after equalization. As discussed above, the original training sequence is stored in training sequence memory 350. As previously discussed, the optimum values for CK represent adjusted values for the CKs that can be retrieved from coefficient memory 310 at the end of the equalization process.

FIG. 4 is a block diagram of the coefficient updating circuit 300 of FIG. 1, and repeats many of the elements of FIG. 3. Referring to FIG. 4, in general, coefficient updating circuit 300 uses values of coefficients CK stored in the coefficient memory 310, and outputs values obtained by multiplying the values of data sequences DS_i, which are stored in the data memory 340, and the values of training sequences TS_i, which are stored in the training sequence memory 350, respectively. Also, coefficient updating circuit 300 receives a signal Z_out output from the adder 290 and outputs an output signal X_out. A controller 385 maintains the number of filter cells (the count of the sum of feedforward and feedback filters) as an initial value and controls the comparator 360 at an instant of time when comparator 360 starts to operate. As the comparator 360 starts to operate, and during a training period, the controller 385 calculates identification numbers of the filter cells stored in a tap address memory 383, estimates the delay time of a delayed multi-path of a channel, and operates the comparator 360 again after an interval corresponding to the estimated time. The comparator 360 receives the coefficient values of the filter cells output from coefficient memory 310, compares the received coefficient values of the filter cells with a given threshold value, selects certain coefficient values for comparing and updating, and sets the selected coefficients which are less than the threshold to zero. Then, comparator 360 stores identification numbers of the selected filter cells having the coefficient values which are not set to 0 (i.e., greater than or equal to the threshold value) in a tap allocation memory 381, and stores the coefficient values corresponding to the numbers of the filter cells in a coefficient allocation memory 387. The numbers of the filter cells are stored in a tap address memory 383, so as to correspond to the initially allocated numbers (initial count) of the filter cells. The numbers of the filter cells stored in tap allocation memory 381 are input to a Kalman gain memory 320, an error covariance memory 330, and data memory 340.
The Kalman gain memory 320 applies the input numbers of the filter cells to the Kalman gain vector stored in the Kalman gain memory 320, so as to generate a new Kalman gain vector. The error covariance memory 330 generates a new matrix using the numbers of the filter cells received from the tap allocation memory 381, as will be described in further detail below. As a result, a new coefficient vector consisting only of the coefficients passing through comparator 360 is generated. A Kalman gain calculator 325 calculates a Kalman gain to update the current Kalman gain, using a Kalman gain vector output from Kalman gain memory 320, an error covariance matrix output from error covariance memory 330, a data vector received from the data memory 340 at time t, and a value λ (0.9 < λ < 1). The Kalman gain calculator 325 stores the updated Kalman gain in Kalman gain memory 320. An error covariance calculator 335 updates the error covariance matrix based on the error covariance matrix output from the error covariance memory 330, the updated Kalman gain vector output from the Kalman gain calculator 325, the data vector received from the data memory 340 at time t, and the value λ (0.9 < λ < 1). The error covariance calculator 335 stores the updated error covariance matrix in error covariance memory 330.

A multiplexer 397 receives a training sequence from the training sequence memory 350 at time t (during a training period), and inputs the training sequence to an error calculator 393 and the DFE input data memory 380. After the training period, the multiplexer 397 receives Z_out passing through the slicer 395, and provides Z_out to error calculator 393 and DFE input data memory 380. The error calculator 393 calculates a difference, e(t), between the output of the multiplexer 397 and the output of adder 290, and transmits the difference e(t) to a multiplier 391. The slicer 395 changes the output of the adder 290 into a value approximately equal to the original transmission signal. The DFE input data memory 380 receives the output of the multiplexer 397 and provides it to data registers 260_1 through 260_n of corresponding filter cells 220_1 through 220_n, for example. A multiplier 391 multiplies the difference e(t) output from the error calculator 393 and an output of the Kalman gain calculator 325, and sends the multiplication result to an adder 389. The adder 389 adds the multiplication result output from the multiplier 391 to a new coefficient vector stored in the coefficient allocation memory 387, and outputs the addition result to the coefficient memory 310. In this case, the coefficient memory 310 receives a signal output from the adder 389, updates the coefficient vector using the signal, and stores the updated coefficient vector. In accordance with the exemplary embodiments, modified Kalman algorithms may be used to minimize the values of error signals when updating coefficients.

Accordingly, coefficient updating circuit 300 is designed to provide the number of a filter cell, which passes through comparator 360 and is input to tap allocation memory 381, to Kalman gain memory 320 and to error covariance memory 330. Coefficient updating circuit 300 may also update a Kalman gain and an error covariance matrix. Therefore, a channel equalizer according to the exemplary embodiments of the present invention may use only a small number of filter cells to calculate a Kalman gain and an error covariance matrix, thereby potentially reducing power consumption.
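Expressions (2) through (4) above are the standard recursive least squares ("Kalman") recursion that these blocks implement in hardware/DSP form. A minimal Python/NumPy sketch of one update step, plus the comparator's thresholding (variable names are mine, not the patent's):

    import numpy as np

    def rls_update(C, P, D, desired, lam=0.95):
        # One coefficient update per Expressions (2)-(4).
        # C: (N,) tap coefficients; P: (N,N) error covariance;
        # D: (N,) data vector; desired: known training-sequence value.
        PD = P @ D
        K = (PD / lam) / (1.0 + (D @ PD) / lam)      # Expression (3)
        e = desired - C @ D                          # e(t): output vs. training value
        C = C + K * e                                # Expression (2)
        P = (P - np.outer(K, D @ P)) / lam           # Expression (4)
        return C, P

    def prune_taps(C, threshold):
        # Comparator 360: zero small taps; return indices of survivors
        C = np.where(np.abs(C) < threshold, 0.0, C)
        return C, np.flatnonzero(C)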
Additionally, tap allocation memory 381 is capable of generating a new coefficient vector using only necessary filter cells, and may thus effectively update only a distorted portion of a channel which is adversely affecting the performance of the original signal. For this reason, the method and apparatus for performing channel equalization according to the exemplary embodiments of the present invention may be more advantageous than a conventional channel equalizing method of updating coefficients using all filter cells.

Modified Kalman Algorithms

Assuming that a channel equalizer according to the exemplary embodiments of the present invention uses ten filter cells, a filter cell coefficient vector C(t) = [10×1] at time t may be calculated by the following Expression (6):

C(t) = [C1, C2, C3, C4, C5, C6, C7, C8, C9, C10]^T. (6)

A Kalman gain K(t) = [10×1] can be expressed by Expression (7):

K(t) = [K1, K2, K3, K4, K5, K6, K7, K8, K9, K10]^T. (7)

An error covariance matrix P(t) = [10×10] can be expressed by Expression (8):

$$P(t)=\begin{bmatrix}P_{1,1}(t)&P_{1,2}(t)&\cdots&P_{1,10}(t)\\ P_{2,1}(t)&P_{2,2}(t)&\cdots&P_{2,10}(t)\\ \vdots&\vdots&\ddots&\vdots\\ P_{10,1}(t)&P_{10,2}(t)&\cdots&P_{10,10}(t)\end{bmatrix}\qquad(8)$$

Also, data D(t) = [10×1] can be expressed by Expression (9):

D(t) = [D1, D2, D3, D4, D5, D6, D7, D8, D9, D10]^T. (9)

If initial conditions under which the comparator 360 operates are satisfied, and a coefficient vector C(t) = [10×1], which passes through the comparator 360, is equivalent to a value obtained by Expression (10), the numbers of the filter cells (or a memory address) having values other than zero, e.g., 1, 2, 7, and 10, may be input to the tap allocation memory 381, and a new filter cell coefficient vector C_n(t) = [4×1], which can be expressed by Expression (11), may be stored in coefficient allocation memory 387.

C(t) = [C1, C2, 0, 0, 0, 0, C7, 0, 0, C10]^T (10)

C_n(t) = [C1, C2, C7, C10]^T (11)

A Kalman gain K_n(t) = [4×1], which is generated in response to an address output from the tap allocation memory 381, can be expressed by Expression (12) as follows:

K_n(t) = [K1, K2, K7, K10]^T. (12)

An error covariance matrix P_n(t) = [4×4], which is generated in response to the address output from the tap allocation memory 381, can be expressed by Expression (13) as follows:

$$P_n(t)=\begin{bmatrix}P_{1,1}(t)&P_{1,2}(t)&P_{1,7}(t)&P_{1,10}(t)\\ P_{2,1}(t)&P_{2,2}(t)&P_{2,7}(t)&P_{2,10}(t)\\ P_{7,1}(t)&P_{7,2}(t)&P_{7,7}(t)&P_{7,10}(t)\\ P_{10,1}(t)&P_{10,2}(t)&P_{10,7}(t)&P_{10,10}(t)\end{bmatrix}\qquad(13)$$

In this case, data signal D(t) = [4×1] can be described by Expression (14):

D(t) = [D1, D2, D7, D10]^T. (14)

During a training period, the delay time of a multi-path can be estimated with reference to a value stored in the tap allocation memory 381, and it is possible to continuously reduce the number of filter cells to be updated by operating the comparator 360 again after a lapse of time corresponding to the delay time of the multi-path. In this way, the amount of calculation spent on updating coefficients may be reduced, thereby potentially reducing total power consumption.
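Expressions (10) through (14) amount to plain index selection on the full vectors and matrix. A minimal NumPy sketch (my naming, not the patent's) of building the reduced quantities from the surviving tap numbers:

    import numpy as np

    def allocate_active(C, K, P, D, active):
        # Build C_n, K_n, P_n, D_n of Expressions (11)-(14) from the taps
        # that survived comparator 360, e.g. taps 1, 2, 7, 10 (0-based below)
        idx = np.asarray(active)
        return C[idx], K[idx], P[np.ix_(idx, idx)], D[idx]

    # Example: ten taps, of which only taps 1, 2, 7 and 10 survive thresholding
    # C_n, K_n, P_n, D_n = allocate_active(C, K, P, D, active=[0, 1, 6, 9])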
While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the exemplary embodiments of the present invention as defined by the following claims.
{"url":"http://www.patentgenius.com/patent/7535954.html","timestamp":"2014-04-19T02:40:00Z","content_type":null,"content_length":"54575","record_id":"<urn:uuid:7e9e2b54-0be7-40ef-97c1-0c6dffc4f3f0>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00355-ip-10-147-4-33.ec2.internal.warc.gz"}
PopEcol Lect 20

Lecture notes for ZOO 4400/5400 Population Ecology

Lecture 20 (Wed. 6-Mar-13)

Analysis of interspecific competition by means of paired continuous logistic growth equations. The Lotka-Volterra competition equations.

When I discussed competitive exclusion I said that the most interesting case is when two competitors manage to coexist. We will now look at models for competitive coexistence using systems of paired logistic equations. To work with a simple model of competition between two species we will return to a modified form of the continuous logistic equation (compare and contrast this with Eqn 5.1 of Lecture 5):

dN[1]/dt = r[1]N[1](K[1] - N[1] - g[12]N[2]) / K[1]    Eqn 20.1

where the subscript 1 refers to Species 1 and the 2 to Species 2. The g symbol (the Greek letter gamma) describes the nature of the competition between the species -- we will get to that in more detail shortly. Notice that if we took out the g[12]N[2] term and removed the subscripts, we'd have the simple form of continuous logistic equation that we had in Eqn 5.1. Eqns 20.1 and 5.1 are forms of logistic growth that cannot overshoot the carrying capacity because the density dependence [= response to crowding] is instantaneous -- as opposed to the discrete logistic [Eqn 16.1] that we analyzed for the contest vs. scramble competition section. With the discrete logistic, overshooting and chaotic dynamics were possible because of time lags. Whereas Eqn 20.1 describes the dynamics of Species 1, a very similar equation describes the dynamics of Species 2:

dN[2]/dt = r[2]N[2](K[2] - N[2] - g[21]N[1]) / K[2]    Eqn 20.2

Let's look back at the interaction term g[ij]. What the interaction term tells us is how strong an effect Species j has on growth of Species i, relative to the effect of i on itself. For example, if g[12] = 0.1, then every additional individual of Species 2 has one tenth as much effect on Species 1 as would adding another individual of Species 1 -- that means that Species 2 doesn't have all that big a detrimental effect on Species 1.

Possible outcomes of the paired logistic equation model:

It turns out that the interactions described by Eqns 20.1 and 20.2 have three possible outcomes:

1) Stable coexistence (both species persist, as in Fig. 20.1)
2) Unstable coexistence (one species will "win", but which wins depends upon initial conditions or the direction of even the slightest perturbation from an unstable equilibrium for coexistence)
3) Competitive exclusion
   a) Species 1 always wins
   b) Species 2 always wins

What we will see is that the outcome depends on ratios of the carrying capacities, K[i], and the interaction coefficients g[ij]. The way we will describe those ratios is by inequality equations such as those shown in Eqns 20.6. To derive those inequality conditions, we will conduct a stability analysis. As is often the case, we do so by setting the rate of change (dN/dt) to zero and solving the resulting equations. The additional complication here (compared to, say, the stability analysis of the single-species continuous logistic equation) is that we will be solving for the case where neither species has a tendency to change. Once we have established the stability criteria, we will look at some "cartoons" of population trajectories under various different initial conditions and differing combinations/ratios of the K[i] and g[ij] terms.

Stability analysis -- where does the system have no tendency to change?
Let's say we have the following values for the parameters r and K: r[1] = r[2] = 0.5, and K[1] = K[2] = 1,000. Assume also that g[12] = g[21] = 0.67. First, we'll set both equations to zero -- in other words, we will ask "under what conditions will each of the species have no tendency to change population size?" (we'll be doing a stability analysis). Setting dN/dt in Eqns 20.1 and 20.2 to zero, we can divide out the r[1]N[1] term (start with Eqn 20.1; zero divided by r[1]N[1] is still zero) to yield:

0 = (K[1] - N[1] - g[12]N[2]) / K[1]    Eqn 20.3

which simplifies to (the double arrows in the middle of the line mean that we are moving from one form of the equation to a rearranged or simplified version of the same equation):

K[1] - N[1] - g[12]N[2] = 0  <==>  N[1] = K[1] - g[12]N[2]  <==>  N[2] = (K[1] - N[1]) / g[12]    Eqn 20.4

From equations to graphs -- plotting isoclines (meaning lines of equilibrium or zero change).

Eqn 20.4 describes a straight line on a plot of N[1] against N[2], as shown in Fig. 20.1. [That is, the right-most version of Eqn 20.4 has the general form Y = b + mX, with a negative slope m = -1/g[12].] The line describes the value of N[2] for which N[1] is at equilibrium (has no tendency to change -- but note that along most of the line Species 2 will not be at equilibrium, so the size of N[1] can be changed by changes in N[2]). Let's look at the extremes. If N[1] is zero, then N[2] will be at a Y-intercept value of K[1]/g[12] = 1,500. Along the other axis, N[2] will equal zero when N[1] = 1,000 (look at the right-most version of Eqn 20.4 to convince yourself of that -- only when N[1] = K[1] will that numerator go to zero). So we will have a line from a Y-intercept of N[2] = 1,500 (= K[1]/g[12]) to an N[1] value of 1,000, as shown below. This line is called the N[1] isocline.

1) What is happening at {N[1], N[2]} = {1,000, 0}? We SHOULD be at equilibrium because with no N[2] around, the system boils down to Species 1 all by itself, which (because it is just the "regular" logistic) has an equilibrium at K = 1,000.

2) Why an equilibrium at {N[1], N[2]} = {0, 1,500}, which is the Y-intercept? Think of it this way. Every 1.5 individuals of Species 2 "hurts" Species 1 as much as 1 individual of Species 1 would "hurt" Species 1 (1/g[12] = 1/0.667 = 1.5). All along the line connecting those two points we are just substituting one-and-a-half individuals of Species 2 for every one individual of Species 1. {All along the red line in Fig. 20.1 we are just substituting more and more N[2] as we move up and left from the X-axis equilibrium point at K[1]}. The end point ({0, 1,500}) is just the place where an all-N[2] group has been substituted for the 1,000 = K[1] individuals that we had along the X-axis. That is, it takes 1.5 * 1,000 = 1,500 N[2] to "make up" the effect produced at the other end of the line by the 1,000 N[1]. Connect those X- and Y-intercept dots and we have a line. That (equilibrium) line is the red N[1] isocline -- a linear equation describing N[2] (Y-axis) as a function of N[1] (X-axis), as shown in Fig. 20.1.

Now do the same thing for Eqn 20.2 (i.e., set dN[2]/dt to zero) to get:

N[2] = K[2] - g[21]N[1]    Eqn 20.5

(You should do the algebra to show that this equation is correct, by setting Eqn 20.2 to zero and then solving for N[2].) This will be a line from N[2] = 1,000 on the Y-intercept to N[1] = 1,500 on the X-axis. This line is called the N[2] isocline. In this case we move from a "pure" single-species logistic (N[2] only) at K[2] = 1,000 on the Y-axis to a spot where we have 1.5K on the X-axis (just Species 1 individuals). {The reason I solved for N[2] again is that I want it to be in the form Y = b - mX}. Now plot the two linear equations on a graph that shows the density of N[1] against N[2].
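As a quick numerical check on these isoclines (not part of the original notes; Python with SciPy), integrating Eqns 20.1 and 20.2 from one of the starting points used below shows the trajectory heading for the isocline intersection near {600, 600}:

    import numpy as np
    from scipy.integrate import solve_ivp

    r1, r2, K1, K2, g12, g21 = 0.5, 0.5, 1000.0, 1000.0, 0.667, 0.667

    def lv_competition(t, N):
        # Paired continuous logistic equations (Eqns 20.1 and 20.2)
        N1, N2 = N
        dN1 = r1 * N1 * (K1 - N1 - g12 * N2) / K1
        dN2 = r2 * N2 * (K2 - N2 - g21 * N1) / K2
        return [dN1, dN2]

    sol = solve_ivp(lv_competition, [0, 100], [800, 20])
    print(sol.y[:, -1])              # approaches the equilibrium near (600, 600)

    N1_grid = np.linspace(0, 1500, 4)
    iso1 = (K1 - N1_grid) / g12      # N[1] isocline (Eqn 20.4)
    iso2 = K2 - g21 * N1_grid        # N[2] isocline (Eqn 20.5)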
Let's look at the plot of the isoclines on a population size graph.

Fig. 20.1. Population plots for paired equations of continuous logistic population growth under interspecific competition. The red line depicts the isocline for Species 1 (dN[1]/dt = 0 from Eqn 20.1, as solved in Eqn 20.4), while the blue line depicts the isocline for Species 2 (dN[2]/dt = 0 from Eqn 20.2, as solved in Eqn 20.5). The isoclines are given by Eqns 20.4 and 20.5. Note that the isoclines intersect at approximately N[1] = N[2] = 600. The thin black line connects the two carrying capacities and shows a total N[1] + N[2] = 1,000 (K[1] = K[2] = 1,000) -- I will refer to this line connecting the two carrying capacities as the "K-connector". The fact that the isoclines cross above the K-connector line means that we have a greater TOTAL number of animals at equilibrium (1,200) than we would have if just one or the other species were present (1,000). Parameter values inserted into Eqns 20.1 and 20.2: r[1] = r[2] = 0.5; K[1] = K[2] = 1,000; g[12] = g[21] = 0.67. The arrows represent a vector field analysis showing the size and directionality of two-dimensional population change.

Vector field analysis. What are the arrowheads on this graph? They show the direction and strength of change for the two species analyzed jointly -- mathematically we do this by putting values into both equations at the same time, getting the change in the population sizes, and then repeating that as many times as necessary to have the populations move to an equilibrium (stable coexistence or a "win" by one of the species). Start with the N[1] axis. Given that N[2] is zero, this reduces to the single-species continuous logistic equation (Eqn 1.18). For N[1] < K[1] the population will grow; for N[1] > K[1] the population will decline, leading to an equilibrium at K[1] = 1,000. On the N[2] axis we see the same pattern. Near the origin, both populations will be increasing, so the vector points with an essentially 45° orientation. Anywhere above the red isocline, Species 1 will be tending to decline -- when we are near the bottom right, that decline will be the dominant force (with a slight tendency for Species 2 to increase when we are under its blue isocline). All together, the arrows point in toward the intersection of the two isoclines -- the stable equilibrium where neither species has a tendency to change size.

Conclusion from Fig. 20.1 -- with values such that:

K[1]/g[12] > K[2]  and  K[2]/g[21] > K[1]    Eqns 20.6

stable coexistence is possible despite the interspecific competition (i.e., despite the fact that the niches overlap). {We will add Eqns 20.6 to a set of inequality "rules" in Eqns 21.1 that will delimit the major possible outcomes of stable coexistence, unstable coexistence and competitive exclusion}.

Total carrying capacity. Note that the equilibrium point (intersection of the red and blue isoclines) is above the black line that shows a combined N[1] + N[2] of 1,000. That means that with this kind of competition the total number of animals in the habitat is larger than the total if only one of the two species were present. {With curved isoclines, we could get a combined carrying capacity that was below the black combo line -- we'll see examples of that later}.

We'll begin by redoing Fig. 20.1 as an animated cartoon. Below is an example of several starting points for Species 1/Species 2 plots, and their trajectories over time toward the equilibrium for the case of stable coexistence.
The yellow dots, for example, represent the path along which one of the dots in the "movie" moved towards the equilibrium point where the two isoclines intersect. Fig. 20.2 Stable coexistence case. Trajectories through time for three different starting points of competition equations based on modified continuous logistic equations (Eqns 20.1 and 20.2). The three starting points were: yellow -- N[1 ]= 800, N[2 ]= 20; green -- N[1 ]= 1,300, N[2 ]= 500; blue -- N[1 ]= 200, N[2] = 700. Dots that are far apart represent very rapid movement; dots closer together represent slower rates of change (the time interval between dot "paintings" is constant, so fast rates make jumps, whereas slow change makes a smooth band). Note that the first green dot jumps very quickly to a value on the red isocline, then moves more and more slowly toward the equilibrium. Note also that the yellow trajectory takes a sharp corner. At first it moves quickly toward the red isocline (and N[1] grows rapidly), then it follows the N[1] isocline fairly closely toward the equilibrium point (so that N[1] is decreasing). Note the characteristic that the "visiting team" (N[1]) has a higher intersection with the Y-axis than the "home team" (N[2]). The "visiting team" (N[2]) is also further out the X-axis than the "home team's"K. The inequalities of Eqns 20.6 satisfy the conditions for stable coexistence. Parameter values: r[1 ]= r[2 ]= 0.5; K[1]=K[2]= 1,000; g[12 ]= g[21] = 0.667. Go to "movie" of stable coexistence starting with low numbers of each species. Go to "movie" of stable coexistence starting with 800 N[1] and 50 N[2] (the yellow arc in Fig. 20.2). Here's a similar set of trajectories for the unstable equilibrium case. Fig. 20.3. Unstable coexistence case. Trajectories through time for three different starting points of competition equations based on modified continuous logistic equations (Eqns 20.1 and 20.2). The three starting points were: yellow -- N[1 ]= 500, N[2 ]= 1,200; green -- N[1 ]= 1,300, N[2 ]= 400; blue green -- N[1 ]= 200, N[2] = 600. Dots that are far apart represent very rapid movement, those closer together represent slower rates of change. Note that the first yellow dot jumps very quickly to a value between the isoclines, then moves more and more slowly toward the equilibrium extinction of N[1] and persistence of N[2] at K[2]. Note also that the green trajectory takes a corner. At first it moves quickly toward the red isocline (and N[1 ]decreases rapidly), then it follows the N[1] isocline fairly closely toward extinction of N[2 ]and persistence of N[1 ]at K[1]. Note the characteristic that the "visiting team" (N[1]) has a lower intersection with the Y-axis than the "home team" (N[2]). Likewise, the visitors (N[2]) have a less-than-K[1] isocline crosspoint on the X-axis. Parameter values: r[1 ]= r[2 ]= 0.5; K[1]=K[2]= 1,000; g[12 ]= g[21 ]= 1.5. Go to "movie" of unstable coexistence starting with 500 N[1] and 1,200 N[2]. Note that for the unstable case, if either species exceeds some threshold population size it will drive the other species to (local) extinction. N.B. that if the parameters (r, K, g) are exactly equal as in our simplest case then the species with higher population size "wins". If the parameters are not exactly equal, then the threshold ratio may be different from 50:50. Conclusion from Fig 20.3: if K[1 ]/g[12] < K[2] and K[2] /g[21] < K[1 ] Eqns 20.7 then the coexistence is UNSTABLE (the only parameters we have changed between Fig. 
{We will add Eqns 20.7 to a set of inequality "rules" in Eqns 21.1 that will delimit the major possible outcomes of stable coexistence, unstable coexistence and competitive exclusion}.

What we have done in the material covered here is to find conditions under which different outcomes will occur. We have done so by means of a combination of equation-based and graphical analyses. Although we need to use the equations to solve the stability analysis, it is easier (at least for me) to use the graphs and some fairly simple rules-of-thumb to decide which of the three outcomes we have for a particular combination of r, K and g values.
{"url":"http://www.uwyo.edu/dbmcd/popecol/marlects/lect20.html","timestamp":"2014-04-18T20:50:56Z","content_type":null,"content_length":"26653","record_id":"<urn:uuid:84373764-a4d4-47a2-bcad-294d187dbb9f>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
The Fisher Exact Test

Matthew Healy is involved in "proprietary" biomedical research (probably biological weapons, if you ask me, but for now it appears he's on our side, so we're cool). He stumbled across the main hypergeometric distribution probability calculator here at nerdbucket.com, and wanted to share some tips and tricks for real-world use. His application of the hypergeometric distribution is to determine if statistical results are significant, via the Fisher Exact Test.

Curious where the hypergeometric distribution is used in the real (non-gambling) world? Of course you are! You're a nerd! Knowledge of obscure things is your calling! Well, the Fisher Exact Test is a way of basically determining if some results that were measured are likely to be random chance, or if there's really some significance to the data. Its best use is when the data has a small sample size, which is apparently common in medical research. Matthew sums it up nicely:

Out of N1 patients given drug A, M1 get better (and N1 - M1 don't). Out of N2 patients given drug B, M2 get better (and N2 - M2 don't). From these data can we conclude there is a significant difference between these drugs?

I won't try to explain the formula, as I don't fully understand it myself. But suffice it to say, it's very important in all kinds of medical research situations, even those that don't involve biological weapons. The important thing here isn't so much the Fisher Exact Test, and not really even the real-world use of hypergeometric distributions, as the fact that Mr. Healy gave me a piece of his code and agreed to let me put it up on my site. The code may be hard to read if you don't know what's going on, but the most interesting function is pretty simple:

    my @cache = (0);    # seeds the recurrence with ln(0!) = 0
    sub LnFactorial {
        my $n = shift;
        return undef if $n < 0;   # checked first: a negative index would
                                  # otherwise read from the end of @cache
        return $cache[$n] if defined $cache[$n];
        for (my $i = scalar(@cache); $i <= $n; $i++) {
            $cache[$i] = $cache[$i-1] + log($i);
        }
        return $cache[$n];
    }

I've made a few minor formatting changes, but for all intents and purposes, the code is the same as he sent it to me. There are a couple of things I really liked about his approach here.

First, he built this LnFactorial function to compute huge factorial values in a "compressed" form. By storing the natural log of each value, he is able to use addition instead of multiplication, and can make use of relatively small floating-point numbers - which modern CPUs can deal with very efficiently - instead of the typical approach for factorials of doing multiplication on extremely large integers - something that modern CPUs don't inherently support. The trick is that the natural logarithm of any number returns a value such that raising E to the power of that value will return the original number. Once you're dealing with exponents, you can add them together to simulate multiplication or subtract them to simulate division - once you raise E to the power of the new value, it's as if you'd multiplied or divided the original values, only it's much more CPU-friendly. Here's a simplistic example:

Method 1 - normal math: 25 * 48 = 1200
Method 2 - logarithmic method: exp(ln(25) + ln(48)) => exp(3.21888 + 3.87120) => exp(7.09008) = 1200.00380

Clearly there's a bit of error if you round the values too much, but on a computer, where a lot of precision is kept, this isn't a large enough issue to be concerned with. Additionally, when you're looking at the factorial of a large number like 10,000, the CPU savings are incredible.
When your language of choice is Perl, but you really need speedy computations of this nature, using the logarithmic method is not just clever or "cool", but absolutely necessary.

The second thing Healy did in a clever way was his memoization. If you're anything at all of a programmer, you know about memoization. But in a typical situation, a memoization system just memoizes values one at a time. For Perl's Memoize module, this is the case - you call a function with the input of X, and it will know that every time you call with X, the result is pre-computed. But what if, in computing X, you've also computed the result for X-1, X-2, etc.? In the case of factorials, this is obviously what happens.

The obvious solution is to use recursion. If calling factorial(x) internally calls factorial(x-1), you get free memoization! But recursion has a huge price tag associated with it. On large values of x, recursion will simply crash due to lack of stack space. Even if it doesn't crash, the overhead of calling a function isn't trivial - hundreds of thousands of function calls within CPU-sensitive code will kill performance. So while Matthew's "roll your own" memoization solution isn't likely to win any major awards, I think it shows he was thinking about the problem instead of just grabbing up the first pre-made solution available.

Today, we're jaded when it comes to programming - we know that if we have a problem, somebody has likely solved it before us, and we can probably just find a library to do what we need. But if we don't pay attention, we could use a really great library in just the wrong situation, saving us time writing code, but hurting our applications in other, unexpected ways.
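The post stops short of the test itself, so here is a minimal sketch of how the cached log-factorials get used, assuming the standard hypergeometric form of Fisher's test (my reconstruction, not Healy's actual code; in the drug example above, a = M1, b = N1 - M1, c = M2, d = N2 - M2):

    import math

    def ln_fact(n):
        # math.lgamma(n + 1) == ln(n!): the same quantity LnFactorial caches
        return math.lgamma(n + 1)

    def ln_table_prob(a, b, c, d):
        """Log hypergeometric probability of one 2x2 table with fixed margins."""
        return (ln_fact(a + b) + ln_fact(c + d) + ln_fact(a + c) + ln_fact(b + d)
                - ln_fact(a + b + c + d)
                - ln_fact(a) - ln_fact(b) - ln_fact(c) - ln_fact(d))

    def fisher_two_sided(a, b, c, d):
        """Sum the probabilities of every table as or less likely than observed."""
        row1, row2, col1 = a + b, c + d, a + c
        p_obs = ln_table_prob(a, b, c, d)
        total = 0.0
        for x in range(max(0, col1 - row2), min(row1, col1) + 1):
            lp = ln_table_prob(x, row1 - x, col1 - x, row2 - col1 + x)
            if lp <= p_obs + 1e-9:        # tolerance for float round-off
                total += math.exp(lp)
        return total

    # e.g. 9 of 10 improve on drug A, 3 of 10 on drug B -> p ~ 0.0198
    print(fisher_two_sided(9, 1, 3, 7))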
{"url":"http://www.nerdbucket.com/statistics/hypergeometric/hypergeometric_biomedical_fisher_exact_test.php","timestamp":"2014-04-21T09:42:05Z","content_type":null,"content_length":"12784","record_id":"<urn:uuid:404add41-601a-43ae-9cad-2408fd06166e>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
Hello, can someone help me? I have a reaction: 2Mg + O2 -> 2MgO. Also, I know that the mass of Mg is 9 g and the mass of O2 is 8 g. What is the mass of MgO?

Best Response: Find the moles of each, Mg and O2, and check which is the limiting reagent; that many moles of MgO will be formed. Also note that x moles of Mg will give x moles of MgO, but x moles of O2 will give 2x moles of MgO.

Best Response:
1. Find moles of Mg and O2: Mg → 9/24.3 = 0.37 mol Mg; O2 → 8/32.0 = 0.25 mol O2 (the molar mass of O2 is 2 × 16.0 = 32.0 g/mol).
2. Find the limiting reactant (moles of each substance divided by its coefficient): Mg → 0.37 mol ÷ 2 = 0.19; O2 → 0.25 mol ÷ 1 = 0.25. Limiting reactant (smallest number) = Mg.
3. Use moles of Mg to find how many moles of MgO are formed: \[\frac{ 2\ mol\ Mg }{ 2\ mol\ MgO } = \frac{ 0.37\ mol\ Mg }{ x }\] x = 0.37 mol MgO formed.
4. Convert moles of MgO to grams: MgO → 0.37 × (24.3 g + 16.0 g) ≈ 14.9 g.

Answer: about 14.9 grams of MgO.
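A quick numeric check of the steps above (molar masses assumed: Mg 24.3 g/mol, O 16.0 g/mol):

    m_Mg, m_O2 = 9.0, 8.0
    n_Mg = m_Mg / 24.3                  # 0.370 mol
    n_O2 = m_O2 / 32.0                  # 0.250 mol (O2 = 2 x 16.0 g/mol)
    # 2 Mg + O2 -> 2 MgO: divide by coefficients to find the limiting reactant
    limiting = "Mg" if n_Mg / 2 < n_O2 / 1 else "O2"
    n_MgO = n_Mg if limiting == "Mg" else 2 * n_O2
    print(limiting, n_MgO * (24.3 + 16.0))   # -> Mg, ~14.9 g of MgO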
{"url":"http://openstudy.com/updates/50bb70e8e4b0bcefefa0374c","timestamp":"2014-04-20T06:35:40Z","content_type":null,"content_length":"30650","record_id":"<urn:uuid:536a3e31-ea0a-493c-a010-cac28a398da1>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://openstudy.com/users/jhannybean/answered","timestamp":"2014-04-16T17:39:10Z","content_type":null,"content_length":"122582","record_id":"<urn:uuid:626199ba-4d19-4425-acce-2eaded378f57>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
Digit sum of extremely large numbers

I figured it out; can't figure out how to delete the post, sorry to make a post about nothing.

Roughly speaking, n(n+1) will have twice as many digits as n, and the resulting rapid growth of the digit length of n! is going to be a problem sooner or later. Perhaps some number theory is called for. Some people make a distinction between digit sums and reduced digit sums (i.e. 456 -> 15 vs. 456 -> 6): which do you mean?

[Edit] The question was to find the digit sum of 50000! I ended up figuring it out by using BigInteger's toString method, then I just looped through the produced string, converting each character to an int with parseInt, and finally adding it to the sum.
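For reference, the approach the poster describes (done in Java with BigInteger there) fits in a few lines; sketched here in Python, where arbitrary-precision integers are built in:

    import math

    # Render 50000! as a decimal string and sum its digits.
    digit_sum = sum(int(ch) for ch in str(math.factorial(50000)))
    print(digit_sum)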
{"url":"http://www.java-forums.org/new-java/37928-digit-sum-extremely-large-numbers-print.html","timestamp":"2014-04-21T15:04:33Z","content_type":null,"content_length":"4686","record_id":"<urn:uuid:606a5f50-ff24-487d-b7c1-36aa9f516549>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00229-ip-10-147-4-33.ec2.internal.warc.gz"}
Numerical Methods

Contents: Designing an Airfoil · Some Remarks on Experiments · Comparison between Theory and Experiment

Designing an Airfoil

The design of an airfoil usually starts with the definition of the desired or required characteristics. These can be a certain range of lift coefficients, Reynolds or Mach numbers, where the airfoil should perform best, stall characteristics, moment coefficient, thickness, low drag, high lift, cavitation (for hydrofoils), insensitivity with regard to dust and dirt, easy to build (flat bottom) or any combination of such requirements. When these requirements have been written down, the next step would be to look around at what's available. If there is an airfoil available which perfectly fits the desired conditions, why create a new one? Often there is no existing airfoil which fulfills all requirements, or the designer believes that he can design something new with improved performance.

Starting from this point, each designer has his own way and his preferred tools to proceed. Some like to use an inverse design code (like the Eppler code) to prescribe flow parameters and get the resulting geometry (airfoil) from the code. Others like to use a starting airfoil and use analysis codes (or a wind tunnel) to continue in a trial-and-error style (albeit with a lot of experience) to find a better airfoil shape. This second method is often used in combination with a numerical optimization code: a computer tries hundreds or even thousands of different airfoil shape modifications until it cannot find further improvements.

A drawback of the numerical trial-and-error optimization process is that it can take a long time, and that the optimization programs tend to move into a corner of the requirements: the resulting airfoil might indeed have a low drag and high lift, but maybe only in a very small operating range, and it may have catastrophic stall characteristics. It is very difficult to tell a computer what the desired stall characteristics should look like, or what you expect from different flap settings. Currently a good designer is still necessary to get good results, but he can use the computer and numerical optimization as a tool to perform time-consuming polishing work or to get hints on possible improvements.

The selection of an airfoil for a model aircraft depends mainly on the lift and drag characteristics of the airfoil. If no experimental data are available, theoretical methods can be used to get an approximation of these data. Whereas the lift can be calculated reasonably well from the frictionless pressure or velocity distribution on the airfoil surface, the friction drag can be determined by an analysis of the boundary layer with a lesser degree of accuracy. There are several methods for the design and analysis of airfoils available. This section briefly presents two of the most popular computer codes suitable for low Reynolds number airfoils:

• PROFIL by Professor Richard Eppler, University of Stuttgart, Germany, and
• XFOIL by Professor Mark Drela, Massachusetts Institute of Technology, USA.

The data sheets of my airfoils show results which were calculated by these two codes. My page «analyze an airfoil» uses a proprietary code, whose boundary layer module is a rewrite of Eppler's integral boundary layer method.

PROFIL

Professor Richard Eppler is a pioneer in the field of computational aerodynamics. He wrote his first codes using punched paper strips; the high-speed main memory at that time was a magnetic drum, which could hold several bytes!
- Nah, not mega- or kilo-, just bytes.

Eppler developed a very fast and elegant design method, based on conformal mapping, which is the heart of his computer code. Because an airfoil also has to operate outside of its design point(s), a fast integral boundary layer method and (for the analysis of given airfoils) an accurate third-order panel method (parabolic velocity variation) were added. Furthermore the code offers possibilities to modify the geometry, to calculate drag polars, and various plotting options. Due to its early roots, the computer code has been developed as a batch code. Textual and graphical output is directed to files, which makes the FORTRAN 77 code easily portable and system independent. On the other hand, the input files are quite cryptic and hard to handle for beginners. The elaborate description of theory and code [14] even contains a (now outdated) version of the FORTRAN-IV program.

The strength of the code is the design part and the fast analysis part, which makes it very well suited for the design task. The results of the integral boundary layer method agree astonishingly well with experiments if the Reynolds numbers are above 500'000. The design module can be used to design very smooth airfoil shapes, including the leading edge region, which is often difficult with other codes. On the other hand, the design method is quite abstract and difficult to handle for beginners.

The boundary layer analysis is performed using the calculated, inviscid (without friction) velocity distributions as input; there is no direct coupling between boundary layer flow and the external flow field. Transition prediction is performed by testing the boundary layer parameters against a set of empirically derived transition relations, which work quite well for attached flow in a wide range of Reynolds numbers. In the low Reynolds number regime the results are usually not very accurate if a laminar separation bubble or larger separated flow regions occur. This is a result of the integral boundary layer method, which simply cannot model separation (this would require some sort of coupling between boundary layer analysis and the calculation of the external flow). The code has an option to perform a displacement iteration in order to take the displacement effects of the boundary layer into account, but there is no direct interaction, as, for example, in XFOIL. Recent (2007) additions to the code, however, are an improved model of laminar separation bubbles and turbulent separation.

The code itself is available for a fee directly from Prof. Richard Eppler in Germany or from his US distributor Dan Somers.

XFOIL

This code also includes design as well as analysis tools, which have been combined very comfortably, with ease of use in mind. The code is intended as an interactive application, but because it does not directly make use of a proprietary window system, it is quite portable and can be used in batch mode by input redirection. The design module of the FORTRAN 77 code has to be used as a redesign method, i.e. you need a starting geometry, whose velocity distribution can be modified, and you have to be very careful to avoid waves and wiggles when adding changes to the leading edge region. It is practically impossible to design the leading edge region as smooth as with Eppler's method. The analysis module consists of a 2nd-order panel method (linearly varying velocity), which cannot achieve the accuracy of Eppler's panel module.
But in contrast to Eppler's simple boundary layer method, Drela has implemented a more sophisticated method, which takes the boundary layer into account while solving for the flow field. Thus the interaction between boundary layer and external flow is modeled quite realistically, and the code can also handle small to medium-sized separated regions. When the separation grows larger or extends into the wake, the results get worse, but they are usually still acceptable for an impression of the airfoil behavior. The transition prediction, which is of utmost importance for low Reynolds number airfoils, is based on a so-called e^n method, which is used as a simplified envelope method. In some cases, the errors introduced by this envelope method can be quite large, though. The disadvantage of the more complex model is the much longer calculation time, which exceeds that of Eppler's analysis method by a factor of about 20. Most calculations shown on this web site were performed with XFOIL version 5.4, which produced more realistic results when compared with the more recent versions. Some of the later versions, like 5.9, seem to give good and realistic results, though. Since December 2000, XFOIL can be downloaded from its WEB page.

Some Remarks on Experiments

Some people have the impression that an experiment, where one can measure everything to umpteen digits behind the decimal point, is the only truth. The people who are doing the real work, i.e. performing measurements at an experimental facility like a wind tunnel, know better. It is extremely difficult to provide well-defined test conditions and to make accurate measurements, especially on airfoils at Reynolds numbers below 500'000. Some important points are that:

• the flow at low Reynolds numbers is very sensitive to external influences (turbulence, noise),
• spanwise flow may occur in airfoil tests, especially when separation occurs,
• the wind tunnel changes the flow field around the airfoil,
• the forces and pressures to acquire are very small, and
• the wind tunnel model must be built to high standards to closely represent the desired shape.

Unfortunately not all wind tunnel experiments are documented as well as the results from the tests at the University of Illinois Urbana-Champaign (UIUC). Usually only one drag polar per Reynolds number is given, and the user has no idea about fluctuations or errors in the system. The following plots show the published data for the E 374, as it has been investigated at the UIUC. The symbols represent the data, as sampled at four different spanwise locations; the line connects the mean values for each angle of attack (this is what you will find in the final publication). I can assure you that the bandwidth of these results at the lowest Reynolds numbers is not unusually large and is not a sign of poor measurement techniques. It is a matter of the physics of the flow. At this low Reynolds number of 61'500 the data points are scattered in a wide band. At medium Reynolds numbers the scatter is considerably reduced. Increasing the Reynolds number moves the scatter to the corners of the laminar bucket. As you can see, the bandwidth at Reynolds numbers above 150'000 is quite narrow, as long as you stay around medium lift coefficients. Increasing or decreasing the lift coefficient can lead to separation, usually starting at the trailing edge and moving more or less slowly towards the leading edge. The flow in separated regions is neither stationary nor two-dimensional, which introduces additional scatter into the forces and pressures.
These spanwise variations of lift and drag can occur due to wind tunnel deficiencies, non-uniform model accuracy and local separation bubbles. In most experiments the lift coefficient is determined by a force measurement, giving a mean value for the whole model, but the drag coefficient is measured locally. Most modern drag measurements are performed at several spanwise locations, but not all of them. For example, the drag coefficients published in [20] were sampled at a single spanwise location. The reader is referred to [12], [13] and [17] for further discussion of the problems of tests at low Reynolds numbers, as well as to the section «MH 32: Wind Tunnel Results».

Something which seems to be never (?) published is the time history of the measurements, which might yield an even wider scatter band. Drag measurements can be performed by a single sensor, moving through the wake and sampling the total and the static pressure at different locations at different times, or by using a wake rake, sampling at different locations at the same time (depending on the equipment). Thus the sampling rate and sampling time will have some influence on the results, especially when periodic fluctuations of the flow occur.

The purpose of this section is not to blame the people who contribute to the excellent work at UIUC or other facilities - these guys are doing excellent and hard work, which is appreciated very much by so many all around the world. The scatter of the experimental data is just a matter of fact, caused by the underlying physics.

Comparison between Theory and Experiment

When comparing experimental and theoretical results, you should bear in mind that neither wind tunnels nor computers perfectly represent reality. As discussed above, wind tunnels have shortcomings like fixed walls, turbulence, noise and model quality, whereas theory is always based on a mathematical model of the real world - assuming this and neglecting that. A typical comparison is given below; at lower Reynolds numbers the discrepancies can be even larger.

Comparison of theoretical with experimental results.

The figure above shows a typical comparison between both theoretical analysis methods and experimental results at low Reynolds numbers. The experiment shows the strong influence of a laminar separation bubble, which roughly doubles the drag coefficient. The Eppler code (version of 2000) cannot cope with laminar separation bubbles, but issues a warning for all the calculated data points, saying that there might be a separation bubble. Drela's XFOIL (version 5.4) takes the laminar separation bubble into account, but underpredicts the drag. Thus both methods cannot predict the drag values at Reynolds numbers below 200'000 if laminar separation occurs, but they give the user a hint that something might be happening. My experiences with XFOIL indicate that it tends to shift the polars to higher lift coefficients and that the simple panel method (in conjunction with the spline method) has some problems with leading edges, which often results in jaggy velocity distributions even for perfectly smooth airfoils.

It is very, very difficult to make a fair comparison of airfoils at low Reynolds numbers. On the one hand, numerical methods have serious shortcomings, but can be used to judge airfoils under comparable conditions. On the other hand, experiments are not only difficult and time consuming, but can also yield very different results even in the same wind tunnel, due to the three-dimensional nature of laminar separation bubbles and spanwise variations in lift and drag.
These variations can be much bigger than the differences between different airfoils, thus making a comparison of two-dimensional polars, as measured by a single wake rake, questionable.
{"url":"http://www.mh-aerotools.de/airfoils/methods.htm","timestamp":"2014-04-17T00:48:29Z","content_type":null,"content_length":"20538","record_id":"<urn:uuid:c1f4d095-a8e8-47e0-9cfd-703955b6e4fe>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00155-ip-10-147-4-33.ec2.internal.warc.gz"}
A Cyclotron Used To Accelerate Alpha Particles | Chegg.com

A cyclotron used to accelerate alpha particles (m = 6.55 × 10^-27 kg, q = 3.2 × 10^-19 C) has a radius of 0.50 m and a magnetic field of 1.8 T.
(a) What is the period of revolution of the alpha particles?
(b) What is their kinetic energy?
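The question is posted without an answer; one way to work it (the charge exponent above is read as 10^-19, since 3.2 × 10^-19 C is twice the elementary charge, as expected for an alpha particle):

    import math

    m, q, r, B = 6.55e-27, 3.2e-19, 0.50, 1.8
    T = 2 * math.pi * m / (q * B)    # (a) cyclotron period, ~7.1e-8 s
    v = q * B * r / m                # speed when the orbit radius reaches r
    KE = 0.5 * m * v**2              # (b) kinetic energy, ~6.3e-12 J (~40 MeV)
    print(T, KE)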
{"url":"http://www.chegg.com/homework-help/questions-and-answers/p-cyclotron-used-accelerate-alpha-articles-m-655-x-10-27kg-q-32x10-19c-radius-pf-50-magnet-q4022439","timestamp":"2014-04-18T14:06:09Z","content_type":null,"content_length":"20801","record_id":"<urn:uuid:2e0f7da8-9c27-46fc-bcd5-65f905c22e35>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00246-ip-10-147-4-33.ec2.internal.warc.gz"}
Testing Einstein's famous equation E=mc² in outer space

University of Arizona physicist Andrei Lebed has stirred the physics community with an intriguing idea yet to be tested experimentally: The world's most iconic equation, Albert Einstein's E=mc^2, may be correct or not depending on where you are in space.

With the first explosions of atomic bombs, the world became witness to one of the most important and consequential principles in physics: Energy and mass, fundamentally speaking, are the same thing and can, in fact, be converted into each other. This was first demonstrated by Albert Einstein's Theory of Special Relativity and famously expressed in his iconic equation, E=mc^2, where E stands for energy, m for mass and c for the speed of light.

Although physicists have since validated Einstein's equation in countless experiments and calculations, and many technologies including mobile phones and GPS navigation depend on it, University of Arizona physics professor Andrei Lebed has stirred the physics community by suggesting that E=mc^2 may not hold up in certain circumstances.

The key to Lebed's argument lies in the very concept of mass itself. According to the accepted paradigm, there is no difference between the mass of a moving object that can be defined in terms of its inertia, and the mass bestowed on that object by a gravitational field. In simple terms, the former, also called inertial mass, is what causes a car's fender to bend upon the impact of another vehicle, while the latter, called gravitational mass, is commonly referred to as "weight." This equivalence principle between the inertial and gravitational masses, introduced in classical physics by Galileo Galilei and in modern physics by Albert Einstein, has been confirmed with a very high level of accuracy.

"But my calculations show that beyond a certain probability, there is a very small but real chance the equation breaks down for a gravitational mass," Lebed said. If one measures the weight of quantum objects, such as a hydrogen atom, often enough, the result will be the same in the vast majority of cases, but a tiny portion of those measurements give a different reading, in apparent violation of E=mc^2. This has physicists puzzled, but it could be explained if gravitational mass were not the same as inertial mass, which is a paradigm in physics.

"Most physicists disagree with this because they believe that gravitational mass exactly equals inertial mass," Lebed said. "But my point is that gravitational mass may not be equal to inertial mass due to some quantum effects in General Relativity, which is Einstein's theory of gravitation. To the best of my knowledge, nobody has ever proposed this before."

Lebed presented his calculations and their ramifications at the Marcel Grossmann Meeting in Stockholm last summer, where the community greeted them with equal amounts of skepticism and curiosity. Held every three years and attended by about 1,000 scientists from around the world, the conference focuses on theoretical and experimental General Relativity, astrophysics and relativistic field theories. Lebed's results will be published in the conference proceedings in February. In the meantime, Lebed has invited his peers to evaluate his calculations and suggested an experiment to test his conclusions, which he published in the world's largest collection of preprints at Cornell University Library.
"The most important problem in physics is the Unifying Theory of Everything -- a theory that can describe all forces observed in nature," said Lebed. "The main problem toward such a theory is how to unite relativistic quantum mechanics and gravity. I try to make a connection between quantum objects and General Relativity." The key to understand Lebed's reasoning is gravitation. On paper at least, he showed that while E=mc^2 always holds true for inertial mass, it doesn't always for gravitational mass. "What this probably means is that gravitational mass is not the same as inertial," he said. According to Einstein, gravitation is a result of a curvature in space itself. Think of a mattress on which several objects have been laid out, say, a ping pong ball, a baseball and a bowling ball. The ping pong ball will make no visible dent, the baseball will make a very small one and the bowling ball will sink into the foam. Stars and planets do the same thing to space. The larger an object's mass, the larger of a dent it will make into the fabric of space. In other words, the more mass, the stronger the gravitational pull. In this conceptual model of gravitation, it is easy to see how a small object, like an asteroid wandering through space, eventually would get caught in the depression of a planet, trapped in its gravitational field. "Space has a curvature," Lebed said, "and when you move a mass in space, this curvature disturbs this motion." According to the UA physicist, the curvature of space is what makes gravitational mass different from inertial mass. Lebed suggested to test his idea by measuring the weight of the simplest quantum object: a single hydrogen atom, which only consists of a nucleus, a single proton and a lone electron orbiting the Because he expects the effect to be extremely small, lots of hydrogen atoms would be needed. Here is the idea: On a rare occasion, the electron whizzing around the atom's nucleus jumps to a higher energy level, which can roughly be thought of as a wider orbit. Within a short time, the electron falls back onto its previous energy level. According to E=mc^2, the hydrogen atom's mass will change along with the change in energy level. So far, so good. But what would happen if we moved that same atom away from Earth, where space is no longer curved, but flat? You guessed it: The electron could not jump to higher energy levels because in flat space it would be confined to its primary energy level. There is no jumping around in flat space. "In this case, the electron can occupy only the first level of the hydrogen atom," Lebed explained. "It doesn't feel the curvature of gravitation." "Then we move it close to Earth's gravitational field, and because of the curvature of space, there is a probability of that electron jumping from the first level to the second. And now the mass will be different." "People have done calculations of energy levels here on Earth, but that gives you nothing because the curvature stays the same, so there is no perturbation," Lebed said. "But what they didn't take into account before that opportunity of that electron to jump from the first to the second level because the curvature disturbs the atom." "Instead of measuring weight directly, we would detect these energy switching events, which would make themselves known as emitted photons -- essentially, light," he explained. 
Lebed suggested the following experiment to test his hypothesis: Send a small spacecraft with a tank of hydrogen and a sensitive photo detector onto a journey into space. In outer space, the relationship between mass and energy is the same for the atom, but only because the flat space doesn't permit the electron to change energy levels.

"When we're close to Earth, the curvature of space disturbs the atom, and there is a probability for the electron to jump, thereby emitting a photon that is registered by the detector," he said.

Depending on the energy level, the relationship between mass and energy is no longer fixed under the influence of a gravitational field. Lebed said the spacecraft would not have to go very far. "We'd have to send the probe out to two or three times the radius of Earth, and it will work."

According to Lebed, his work is the first proposition to test the combination of quantum mechanics and Einstein's theory of gravity in the solar system. "There are no direct tests on the marriage of those two theories," he said. "It is important not only from the point of view that gravitational mass is not equal to inertial mass, but also because many see this marriage as some kind of monster. I would like to test this marriage. I want to see whether it works or not."

Story Source: The above story is based on materials provided by the University of Arizona. The original article was written by Daniel Stolte.
{"url":"http://www.sciencedaily.com/releases/2013/01/130108162227.htm?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+sciencedaily%2Fspace_time+%28Space+%26+Time+News+--+ScienceDaily%29","timestamp":"2014-04-21T09:47:18Z","content_type":null,"content_length":"91656","record_id":"<urn:uuid:44fa1156-f685-492d-b996-754ccfa23d68>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
Having trouble simplifying this integral

October 1st 2010, 11:42 AM
So I have to evaluate the integral and simplify it. I'm stuck near the end after applying F(b) - F(a) and can't simplify it further.

October 1st 2010, 12:59 PM
$\displaystyle \int (y^2-\sin y)\,dy = \frac{y^3}{3}+\cos y \neq \frac{y^3}{3}-\cos y$

October 1st 2010, 01:06 PM
Wait, so it's + cos y and not - cos y? But to get -sin y it has to be -cos y.

October 3rd 2010, 12:09 PM
I'm having trouble simplifying this. I got: [(4pi/5)^3 / 3] - [cos(4pi/5)] - [(-4pi/5)^3 / 3] - [-cos(-4pi/5)]. I think the cos terms will cancel out and then we get 2[(4pi/5)^3 / 3] as the answer. Things I am not sure about: the signs. I'm not sure about - [-cos(-4pi/5)] *** does this become positive? What about the -4pi/5?

October 3rd 2010, 12:16 PM
Okay, I think I got it: the cos parts cancel out, leaving just the 4pi/5 parts. My end result is then: [(4pi/5)^3 / 3] + [(4pi/5)^3 / 3]. I am trying to simplify this and have some trouble. I turned it into: 2 * [(4pi/5)^3 / 3]. Can it be simplified further?

October 3rd 2010, 12:16 PM
cosine is an even function ... $\cos(-x) = \cos(x)$

October 3rd 2010, 12:36 PM
Skipping the details here, the result would then be the first pi term - cosine term + second pi term + second cosine term. This is the most I can simplify it: 2 * [(4pi/5)^3 / 3]. How does this look? Basically the cosine terms cancel out and leave just 2 times the pi terms.
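For completeness (the limits are -4pi/5 and 4pi/5, judging from the numbers in the thread), the evaluation can be finished like this; the cosine terms cancel because cosine is even:

\[
\int_{-4\pi/5}^{4\pi/5}\left(y^{2}-\sin y\right)\,dy
=\left[\frac{y^{3}}{3}+\cos y\right]_{-4\pi/5}^{4\pi/5}
=\frac{2}{3}\left(\frac{4\pi}{5}\right)^{3}
=\frac{128\pi^{3}}{375}\approx 10.58
\]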
{"url":"http://mathhelpforum.com/calculus/158114-having-trouble-simplifying-integral-print.html","timestamp":"2014-04-20T09:05:51Z","content_type":null,"content_length":"8843","record_id":"<urn:uuid:1bdb8440-98c5-4ea1-a113-56fb97ca9c8f>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00360-ip-10-147-4-33.ec2.internal.warc.gz"}
Contents: Introduction · Parallel Robot Design · Two-DOF Designs · Three-DOF Designs · Four-DOF Designs · Five-DOF Designs · Six-DOF Designs · Sensor Application of Parallel Mechanisms · Kinematic Model · Classical methods or computational intelligence? · Calibration · Principles · Calibration Procedure · Joint Calibration · Kinematic Calibration · What type of errors can appear and how do they influence? · How many parameters are necessary in the kinematic model? · Data acquisition · How can we measure? · Self-calibration · Constrained calibration methods · External calibration methods · How many sensors are necessary? · How many configurations must be measured? · Define the objective function · Objective function defined in terms of position and orientation of the end-effector (DKM) · Objective function defined in terms of distances (IKM) · Conclusions · References and Notes · Figures and Table

Kinematic Calibration

As is known, the calibration procedures present three well-differentiated levels: level one, or joint-level calibration; level two, or kinematic calibration; and level three, or dynamic calibration [94,95]. The kinematic calibration consists of determining the kinematic geometry of the mechanism and the correct joint angle relationship.

(a) Kinematic model

Just like in the joint calibration, at this level the first phase is to determine the kinematic model. This model allows us to obtain equations which relate the joint variables of the mechanism to the position and orientation of the end-effector. It is not an easy task to obtain a suitable model that ensures the optimal accuracy of the system, and one of the unsolved problems in this working area is that it is very difficult to obtain a generalizable model: the model is specific to the design under study.

In a parallel mechanism, the end-effector position is limited by certain restrictions. Calibration will obtain this position for several poses of the end-effector, or configurations. The constraint equations (c of them per pose) are a function of the m geometric parameters of the robot, of the measurements obtained at the position, and of the n pose parameters of the calibration process. For N calibration poses, there are (N × c) constraint equations with (m + N × n) unknowns, and the number of constraint equations must be greater than or equal to the number of unknowns. The solution to this non-linear system is usually obtained by means of numerical methods, such as minimizing the sum of squares of the constraint equations.

Everett divided kinematic calibration models into two categories [97]. Models belonging to the first category assume that all the joints in the mechanism can be modeled as revolute or prismatic joints. To be able to make this assumption, the kinematic model should fulfill three characteristics: it should be complete, equivalent and proportional. In these models an objective function is usually minimized, and the characteristic of proportionality is very important to guarantee numerical stability. The second category includes those models in which it is considered that some of the joints can contain higher pairs. In these models, besides revolute and prismatic joints, some additional movements can appear and must be expressed as a function of the joint variables. These models add an offset to each joint, adding three new parameters to each one. This gives rise to a multitude of possible functions to model the joint, and the concepts of equivalent and complete model are not applied to this category. To attain a high level of accuracy, the model must consider the most significant geometric and non-geometric parameters for the mechanism designed.
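Before turning to the error sources, the shape of this least-squares identification is worth seeing in code. A deliberately tiny, illustrative instance (my toy example, not a mechanism from the references cited here): identify the base anchor points of a planar three-legged mechanism from measured leg lengths, with the platform point known at each calibration pose:

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    a_true = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])  # anchors (m = 6 params)
    poses = rng.uniform(0.2, 0.8, size=(20, 2))              # N = 20 known poses
    measured = np.linalg.norm(poses[:, None, :] - a_true, axis=2)
    measured += rng.normal(0.0, 1e-4, measured.shape)        # leg-length sensor noise

    def residuals(x):
        a = x.reshape(3, 2)
        model = np.linalg.norm(poses[:, None, :] - a, axis=2)
        return (model - measured).ravel()    # N*c = 60 constraint equations

    sol = least_squares(residuals, x0=np.full(6, 0.5))
    print(sol.x.reshape(3, 2))   # ~a_true: solvable since N*c >= m. In the general
                                 # problem above the N poses are unknowns too,
                                 # giving m + N*n unknowns in total.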
Geometric errors

Geometric errors may appear from manufacturing errors or from the deviation of the offsets of the components. Joints in the links are not perfect, so the axes may not be exactly perpendicular to each other, and they may not intersect at the exact center of the joint. Errors when assembling actuators can cause the axis of each actuator not to pass through the center of the joint. Other errors can appear when measuring the offset of the components at the location of the mechanism's joint.

One of the most widely used geometric methods for modelling an open-loop or a closed-loop mechanism is the well-known Denavit-Hartenberg method [98]. This method allows us to model the joints with four parameters. One of the limitations of this method appears when it is applied to those mechanisms that present two consecutive parallel joint axes. In this case, an infinite number of common normals of the same length exist, and the location of the axis coordinate system may be made arbitrary. In [97], Everett mentioned the most relevant publications which propose some solutions for this limitation. In [99–102], the authors developed methods to obtain a complete, equivalent and proportional model.

Some research has been carried out on obtaining an accuracy model [103,104], in which manufacturing tolerances, assembly errors and offsets are studied to develop an algorithm for the identification of the kinematic parameters of the Stewart platform. In each joint-link chain three types of parameters appear: measurable variables for describing the extension of the prismatic joints, un-measurable variables describing joint angles and geometric parameters describing the dimensions of the platform. Once the kinematic model has been determined the number of parameters will be fixed, and will depend on the selected method. Wang [103] determined that the number of geometric parameters to define a kinematic chain is 22, and this number can be reduced to 7 if passive joints are considered as perfect joints. Kinematic parameters usually correspond to the position of spherical, universal or revolute joints. Therefore, each spherical ball joint has three kinematic parameters, and each prismatic joint adds a new parameter, corresponding to its elongation [105]. The method employed to calibrate the Stewart platform establishes the orientation constraint by maintaining two attitude angles of the end-effector constant.

Non-geometric errors

The non-geometric errors can appear from backlash, gear transmission, friction, gravity, temperature or compliance [106]. The non-geometric models try to predict and compensate for these errors. Some authors [107] have developed non-geometric models to achieve this. In [108], Renders gathered the influence of non-geometric errors. The errors that have the most significant effect on accuracy are joint flexibility, link flexibility, gear transmission error, backlash in gear transmission, and temperature effects. Flexibility in joints and in links causes 8–10% of the position and orientation errors of the end-effector, and link flexibility is usually 5%. Joint flexibility errors can be reduced by mounting the joint encoders directly on the joint after the transmission units, instead of mounting them on the motor shaft. Gear transmission errors are mainly due to runout and orientation errors. The contribution of backlash is 0.5–1%. The error due to temperature effects causes 0.1% of the total error.
Calibration is usually performed in an environment where the temperature is controlled. Next, a correction model that considers the working temperature is applied. Judd [109] developed a model to correct problems with robot accuracy resulting from imperfections in the main spur and encoder pinion gears, errors in the link and joint parameters and structural deformations. Hollerbach [110] introduced a calibration index that considers sensed and unsensed joints and single and multiple loops. In [111], Gong developed an algorithm for non-geometric error identification and compensation by means of the inverse calibration of the system, analyzing the effect of geometric errors with temperature variation and compliance. These methods separated the influence of the errors due to geometric and non-geometric parameters, in order to optimize geometric parameters by means of a traditional static calibration, and to model and correct non-geometric errors by means of a dynamic calibration. However, they are not generalizable and they do not allow us to know the individual influence of each non-geometric component.

To consider all the possible errors in the same kinematic model is a laborious task that, due to the complexity of the model, does not always increase the accuracy of the result. Moreover, it can add errors in the resolution of the problem, for example in the case where the model adds discontinuous functions for backlash or gearing errors, or when parameters are a function of joint variables instead of being a function of constants. It is not possible to know a priori what parameters must be used in the calibration process to obtain the desired accuracy, but the mechanism repeatability must be considered in order to predict the order of magnitude of the accuracy that can be reached.

Everett [112] analyzed models based on forward kinematics and explained that there is a maximum number of parameters that can be identified, and that the model accuracy cannot be improved by adding extra parameters. The author explained that unless all joints are moved, not all parameters can be identified, because if one or more joints do not move, some unknowns can be decomposed, shifted and absorbed into others. He determined that four parameters must be considered for each revolute joint, two of which must be orientational. For a prismatic joint, two orientational parameters are necessary, applied about the non-collinear axes before and perpendicular to the translational joint axes. Thus, it is more frequent to add parameters to compensate for non-geometric errors in the geometric model [113].

Regardless of the method selected, calibration can be solved by means of inverse or forward kinematics. The calibration problem can be formulated in terms of residual measurements, for example, differences between the joint variable measurements and the values obtained by the inverse kinematic model. This model offers significant advantages compared to the one based on forward kinematics, since calculations in the latter case are more complex and require more time to be solved. Besides, the solution in the inverse model is unique, unlike the forward one, in which several possible solutions can appear. The inverse model allows us to decouple the calibration problem for every kinematic chain, and the constraints can be expressed by analytical equations. This method gives numerical efficiency, but the measurement of the positions has to be very precise.
Another possibility is to perform partial measurements of the position, posing the problem in terms of errors between measured values and computed values via forward kinematics. In [114], the kinematic model was solved using this method. In [103], Wang presented a method for the calibration of a 6-UPS robot that uses parallel kinematics according to the lengths of kinematic chains and positioning parameters of the platform. Zhuang [115] developed a model in terms of residual measurements of the difference between the measured length of the kinematic chain and the one obtained from the model, without using forward kinematics. The author solved the problem by means of minimizing the sum of squares of the constraint equations. Parameter errors are mainly due to errors in the assembly of spherical and universal joints, and solutions obtained for some parameters are out of the range provided by the method, which means that some constraint equations are not satisfied. In [116], Daney developed an algorithm to perform the calibration by means of partial measurements of the position, thanks to the elimination of the rotation parameters. This algorithm endeavours to obtain the advantages of the two models, inverse and forward, combining symbolic variable elimination with numerical optimization, which allows us to obtain numerical stability. The method is applied to three different cases in which the number of restrictions and equations to perform the calibration varies. The conclusions are that the methods that obtain the best results are not those with the highest algebraic computational cost, and that the elimination of the rotation parameters improves the accuracy of initial estimations. In [117], the kinematic problem was solved by setting the constraint equations equal to a value ε[i] to obtain a solution.

Data acquisition

This is probably one of the steps with the most unresolved questions, mainly due to the difficulty in finding a general methodology. Therefore researchers try to find the best procedure, usually applied to a specific mechanism. This subsection shows how different authors have dealt with these unsolved problems.

Any measurement error of the external instrument is propagated to the results of the identified parameters. Therefore, it is recommended to use an instrument for data acquisition that is, at least, one order of magnitude more accurate than the mechanism whose parameters are going to be identified.

The ability to measure the global reference system of the mechanism which is going to be calibrated usually determines the different options of data acquisition and the sensors that are going to be used at this step. If the global reference system can be measured by means of an external measurement instrument (for example a laser tracker or a coordinate measuring machine), a direct geometric transformation can be established. This transformation obtains the coordinates of the measured points in the global reference system of the mechanism. In this case, direct comparisons in the objective function, between measured data (or their geometric composition) and mechanism model nominal data, can be made, provided both are expressed in the same reference system. Unfortunately this relation is not usually easy to obtain through a direct measure. In these cases, least-squares methods can be used with a finite set of data. These methods allow us to obtain an approximation of this transformation, which depends on the mechanism error in the points and configurations used in data acquisition.
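In the rigid case, that least-squares fit is the classical point-set registration problem. A minimal sketch of how such a fit is typically computed (the SVD-based Kabsch solution, offered here as an illustration rather than the procedure of any specific reference above): find the rotation R and translation t that best map measurement-frame points P onto mechanism-frame points Q.

    import numpy as np

    def fit_frame(P, Q):
        """Least-squares rigid transform with Q ~ R @ p + t (Kabsch/SVD method).
        P and Q are (n, 3) arrays of corresponding points in the two frames."""
        Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
        U, _, Vt = np.linalg.svd(Pc.T @ Qc)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid a reflection
        R = (U @ D @ Vt).T
        t = Q.mean(axis=0) - R @ P.mean(axis=0)
        return R, t

    # Any mechanism error in the measured points is averaged into R and t here,
    # which is exactly the approximation discussed in the surrounding text.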
Moreover, this approximation is absorbed by the objective function. For that reason, it has a direct influence on the value of the identified parameters. This method is therefore not suitable for parameter identification procedures in which positioning accuracy is mandatory, or when it is necessary to generalize the positioning accuracy obtained in the identification process to other areas of the workspace. For all these reasons, the geometric relation between the reference system of the measurement instrument and the mechanism global reference system must be established accurately. Otherwise, the objective function should be obtained starting from a reference position and evaluating Euclidean distances between datasets.

Classical robot calibration methods use additional sensors to measure the position and orientation of the end-effector and the joint variables of the ball joints, where the calibration process optimizes the error between the measured and computed variables. The type of sensors used in a parallel mechanism affects not only the design process but also the calibration procedure. Sensors can be used to measure the variables of the mechanism, usually the active ones, in order to obtain the necessary data to solve the kinematic problem. In the calibration procedure, mechanism internal sensors are used to obtain information about the system. These data will be the input to the mathematical model. The output of the forward model will be the calculated position and orientation of the end-effector. In the calibration procedure, the nominal and the calculated position and orientation of the end-effector are compared, and the mechanism geometric parameters are obtained. On the other side, external devices having measurement systems allow us to measure the nominal position and orientation of the end-effector. It is important to note that every measurement error of the measurement device will be propagated to the calibration results. Therefore, measurement devices must be more accurate than the desired accuracy of the mechanism that is going to be calibrated.

In parallel mechanisms, the most used internal sensors are linear optical sensors (for measuring the elongation of the actuator), rotary optical sensors (for measuring the motor rotation of the actuator), linear variable differential transformers (LVDT) and force-torque sensors (for the dynamic calibration). The accuracy of linear and rotary optical sensors is highly dependent on the method used to couple the encoder to a shaft. This value can commonly reach ±0.5 μm and ±1 arcsecond, respectively, with resolutions of 1 nm and 0.02 arcseconds, respectively. LVDTs offer very high reliability. Their accuracy and resolution are limited only by the signal conditioning electronics and the analog-to-digital converters; resolution can reach the nanometer range. These types of sensors are used to measure relative motion between objects whose surfaces only move a little bit with respect to each other. Besides, their measurement range is limited (about 0.5 m). By contrast, the measurement range of linear optical sensors is up to 30 m, and rotary sensors have no rotation limit for incremental encoders and cover several turns for absolute encoders. Force-torque sensors are commonly used to measure the applied forces on the mechanism links.
These devices frequently present a force sensor accuracy of 6 mN and a torque sensor accuracy of 30 mN·mm.

External devices typically used in the calibration procedure to improve the mechanism accuracy are cameras, laser trackers, coordinate measuring machines (CMM) or autocollimators. Cameras and autocollimators are non-contact measurement instruments. These devices are therefore more suitable when the influence of measurement forces can affect the results. Cameras and 3D imaging sensors offer compactness, robustness and flexibility [118]. The rapid development of these devices in the last decades has significantly improved their accuracy. Another advantage of these sensors is their portability. Moreover, the recent development in this technology allows massive data acquisition. CMMs offer high resolution and accuracy (of a few micrometers). The laser tracker volumetric accuracy is about tens of micrometers. A typical measurement range is about 900 mm × 1,200 mm × 700 mm in a CMM and up to a 40 m radius in a laser tracker. Therefore, these last devices are suitable for large parallel mechanisms. Although autocollimator resolution (0.1 arcsecond) and accuracy (0.2 arcseconds) are very high too, the measurement range is very small (from arcseconds to a few degrees). By contrast, measurements with a CMM take a lot of time.

In order to obtain high accuracy and efficiency, optical and contact sensors are used in combination. A visual sensor provides global information on the object surface. By contrast, force and tactile sensors obtain local information. In recent years, several studies have focused on the combination of visual and force/tactile sensors to obtain a detailed knowledge of the mechanism behavior [119].

Everett [120] designed a sensor for measurements in the calibration of mechanisms. This sensor did not apply external constraint forces on the mechanism. The sensor used three LED/phototransistor pairs as optical switches. Each switch used an LED that shone through an optical fiber. The light left the fiber and passed through a gap. On the other side, another fiber received the beam, and a phototransistor received the light. This work described the sensor construction, the sensor performance and calibration. The sensor was installed in the gripper of a robot as a tool. Precision spheres, having a diameter of 12.7 mm, were mounted in the workspace of the robot. The spheres were located within an area of 1,600 cm^2. Firstly, the relative positions of the spheres were measured with a coordinate measuring machine. Secondly, the sensor was positioned over these spheres automatically. Three light beams defined position with respect to the sensor. Two beams were separated by 10 mm and the third beam was mounted 30° away from them. The robot was programmed to find the trip point of the sensor. This point defined the origin of the sensor coordinate frame. Then, the sensor input was examined to determine which light beams were broken. The test obtained 100 trip points and the measurement error was calculated as the standard deviation of the position errors. The measurement error was 0.06 mm and the repeatability was between 0.06 and 0.08 mm.
To calibrate the mechanism by means of the developed sensor, the phases are the following:

- Develop a model that relates measurable joint positions to the mechanism pose
- Measure a sufficient set of joint positions and their corresponding mechanism poses
- Identify the parameters of the model
- Determine the sphere centres relative to the fixture datum (for example, with the coordinate measuring machine)
- Collect calibration data with the sensor

A widespread classification in the kinematic calibration of parallel robots is the one presented by Merlet in [13], in which calibration methods are classified into three types: external calibration, constrained calibration, and auto-calibration or self-calibration. Although the simplest way of obtaining the necessary data is by using internal sensors, their assembly is difficult in most systems. With external measurement systems, it is usually necessary to establish, in an approximate way, the relation between the measurement system and the reference system of the end-effector, a procedure that suffers from the problems described above.

In self-calibration methods, additional sensors are added to passive joints, and each pose of the mechanism can be used as a calibration pose. These methods require that the number of internal sensors be greater than the number of DOF of the mechanism. Self-calibration methods are usually low-cost and can be performed on-line. They can be divided into two groups: (a) the mechanism has more internal sensors than necessary; (b) a passive chain is added to the mechanism. In [121], Yang used built-in sensors in the passive joints, and the parameter errors were identified by a least-squares algorithm. In [122], Hesselbach developed algorithms to determine the sensor resolution required to reach the desired accuracy. These algorithms can be easily adapted to any 6-DOF parallel mechanism consisting of kinematic chains with 6 DOF, provided the chains include at least one ball joint. The author designed an absolute angle-measuring micro sensor system to be added to the passive joints of parallel mechanisms; one of the objectives was therefore to obtain a robust, compact sensor. The use of passive joint sensors usually simplifies the optimization procedure: in self-calibration, the measured passive joint angles can be compared with the joint angles calculated by means of the model. Hollerbach [123] used nine precision potentiometers to calibrate a 6-DOF parallel mechanism. The identified kinematic parameters were the joint angle offsets and the joint angle gains, which related the raw analog input data from the potentiometers to the joint angles. The mechanism was placed in a number of poses and the joint angles were read. The potentiometer readings were converted to predicted joint angles and to predicted poses by solving the forward kinematics; these values were then compared with the theoretical joint angles obtained by means of the inverse kinematics.

Developments based on constraining the mobility do not require extra sensors [124,125]. In [124], Ryu analyzed a design consisting of a link of fixed length with spherical ball joints at its ends. The measurement data are obtained from the internal measurement systems of the six actuators, which measure the motion of the 5-DOF moving platform, and this information is used in the calibration procedure without any extra sensing device.
The results show that the position and orientation errors are of the same order as the measurement noise and the link inaccuracy; the calibration accuracy therefore depends on the accuracy of both the sensors and the constraint link.

Constrained calibration methods decrease the number of DOF of the mechanism by restricting the movement of the end-effector or the mobility of a joint. In these methods the mechanism mobility is constrained during the calibration, so some geometric parameters remain constant in the process. In [126], the mechanism was constrained in such a way that only platform rotations around a fixed point were allowed, and the constraint equations were solved by means of the forward model. Chiu [127] limited the movement of a 6-UPS mechanism by adding a seventh leg, connected to the base through a universal joint and linked to the end-effector. However, this design considerably restricts the working range and causes interference between links; moreover, some parameter errors related to the immobilized joints can go unobserved. In [128], Ren kept two attitude angles of the end-effector constant at different measurement configurations using a biaxial inclinometer. These methods have lower costs than external calibration but are more complex than self-calibration; in addition, not all of the workspace is available due to the constraints, and they are usually less accurate than external and self-calibration.

In practice, however, it is not easy to add extra redundant sensors or restrictions, so the most frequently used method is external calibration, in which the necessary information is obtained by means of external devices such as theodolites [129], inclinometers [130,131], vision devices [117,132], laser trackers [133,134] or coordinate measuring machines [135–137]. Whitney [129] developed a forward calibration method that defined link lengths and joint sensor offsets as parameters. The theodolite measurements of the tool position and the readings of the robot joint sensors were introduced into the model to perform the calibration; the results show that the calibration model predicts theodolite readings with an error of 0.13 mm. Daney [117] developed a vision-based measurement method. Although the measurement data (for example, the measured poses of the mechanism) are given by the sensor, it is necessary to consider the noise of this device. The mechanism poses were measured, and sensors measured the six leg lengths for every pose. Besnard [130] developed a method for the kinematic calibration of a 6-DOF parallel mechanism in which the calibration model considered the error in the angle between the inclinometer axes. The two inclinometers were fixed to the platform and were used to measure the platform orientation, while the prismatic joint variables were read from the joint sensors. Each inclinometer measured its orientation with respect to the terrestrial horizontal, and both inclinometers read zero when the platform was horizontal. The prismatic joint values and the inclinometer values were used to calibrate the geometric parameters for a number of configurations, and the calibration procedure minimized the residual between the measured and calculated inclinometer values. The results show that inclinometers with a precision of 0.001° and motorized joint sensors with a precision of 0.02 mm are necessary to obtain a position accuracy of about 0.4 mm.
Rauf [131] developed a calibration method for parallel mechanisms with partial pose measurements, measuring the rotation of the end-effector along with its position. The device consists of a linear variable differential transformer and a biaxial inclinometer to measure the position, and an optical encoder to measure the rotation. In [105], inclinometers were not used to measure precise values but rather to indicate whether the values measured in one configuration were equal to those measured in a different configuration, thereby making the calibration method independent of the range of the measurements and of the positioning accuracy of the inclinometer. Although it is not easy to obtain high accuracy over a large workspace with conventional inclinometers, the calibration results are usually satisfactory. The inclinometers were installed on the end-effector, and the calibration was performed by keeping two attitude angles of the end-effector constant. The results show that the position and orientation accuracy after calibration can be 0.1 mm and 0.01° when the inclinometer repeatability is 0.001° and the precision of the leg length measurements is 0.002 mm; before calibration the errors ranged from 4 to 8 mm, and after calibration from −2 to 2 mm.

Renaud [132] developed a monocular high-precision measuring device based on a vision sensor. A calibration target was placed in different positions and images were taken with the vision sensor; a mathematical model then obtained the pose of the target with respect to the sensor. The device presents an accuracy on the order of 10 μm in position and 5 × 10^−4° in orientation. In [138], the author performed a calibration for a mechanism having linear actuators on the base. The forward model offers more stability, although it may fail to converge in the presence of noise, and it makes it possible to perform the calibration by means of partial measurements of the end-effector pose.

In [133], Koseki used a laser-tracking coordinate measuring machine. This device consisted of four laser stations and a wide-angle retro-reflector. When a laser beam was incident upon the retro-reflector, the reflected beam returned parallel to the incident beam; the retro-reflector had a lens whose focus coincided with its spherical surface, and this surface mirrored the incident beam. The interferometer incrementally measured the change in distance from the intersection of the pan and tilt axes to the retro-reflector. Advantages of this device include non-contact measurement, a wide measuring range and the ability to measure high-speed objects. The results show that the distance between the measured and calculated positions presents an average error of 1.63 mm before calibration and 0.30 mm after calibration.

In [135], three methods were compared: a method using external measurement, a method using additional redundant sensors, and a method using both. The author affirms that with implicit calibration the basic system of equations may be obtained using only the sensor information. Implicit calibration considers the basic system of equations arising from the closed-loop nature of parallel robots, with the system specified as a function of the available data; the kinematic parameters can be identified by solving this system.
The results show that the kinematic parameters were well identified as a function of the dimension of the redundant information on the state of the robot, and that the accuracy was higher for the method that used both external measurements and additional redundant sensors.

In [139], Last classified the calibration techniques according to the degree of automation and the data-acquisition method. Calibration techniques can thus be divided into four groups:

- Type 1: Non-autonomous methods with data acquisition by additional sensors, such as calibration by means of a laser tracker, camera systems or an extensible ball bar
- Type 2: Non-autonomous methods with data acquisition by kinematic constraints, such as calibration by contour tracking or by passive joint clamping
- Type 3: Autonomous methods with data acquisition by additional sensors, such as calibration by passive joint sensors or with actuation redundancy
- Type 4: Autonomous methods with data acquisition by kinematic constraints

To ensure that the number of equations is not smaller than the number of unknowns, the minimum number of measurements m is given by Equation 3:

m ≥ ρ + ζ    (Equation 3)

where, defining η as the coefficients that relate the transducer signal of each joint to the real displacement of that joint and a as the coefficients of the kinematic model, ρ is the number of elements of the vector η and ζ is the number of elements of the vector a [94]. Moreover, the positions chosen for data acquisition in the optimization process should guarantee that all the parameters to be identified have an influence, so that the parameters obtained generalize over the whole workspace [110]. In [140], Driels analyzed the optimum positions for data acquisition and concluded that the full variation range of the joints of the mechanism should be covered. Nahvi [141] concluded that, to perform a calibration, the number of joint sensors must be higher than the mobility of the mechanism. The author defined the noise amplification index and demonstrated that it indicates the amplification of sensor noise and unmodeled errors; moreover, the results show that the effects of sensor noise and unmodeled sources of error dominate the effects of length and other kinematic parameter variations of the mechanism. Merlet [13] explained that the number of constraint equations must be greater than or equal to the number of unknowns. In practice the number of equations is usually greater, which reduces the sensitivity of the calibration to the uncertainty associated with the measurements, usually caused by measurement device noise; it is thus usual to develop an over-constrained system [142]. Huang [143] identified the parameter errors with an endpoint sensor and a dial indicator by measuring the flatness of a fictitious plane, the straightness and squareness of two orthogonal axes, and the orientation error of the end-effector.

These methods are usually based on data acquisition at fixed positions similar to the working positions. It should also be possible to generalize the parameter identification to positions different from those used in the identification process. The determination of the optimal number of configurations for data acquisition, in order to perform a successful calibration, is still an unsolved problem, and different criteria and opinions can be found in the specialized literature.
Decisions are therefore made without a specific methodology for obtaining the configurations of a calibration process. According to Zhuang [144], the number of necessary configurations is given by n + 1, n being the number of DOF of the mechanism. In [145], Borm defined an observability index based on the non-zero singular values of the Jacobian matrix, which represents the data scatter; by maximizing this index, the parameter errors can be better observed. In [146], Sun related five observability indices and analyzed how to minimize the variance of the parameters and the uncertainty of the end-effector position. Agheli [147] showed that the boundaries of the workspace should be examined for the maximum observability errors. In [121], Yang illustrated the effect of measurement noise and robot repeatability on the calibration results: if measurement noise exists, more measured end-effector poses must be considered. The author simulated 100 end-effector poses (50 to calibrate the robot and 50 to verify the results) and concluded that the quantified orientation and position deviations, and the calibrated initial poses of the module frames, become stable when the number of poses used for calibration is greater than 20. The results show that the quantified orientation deviation stabilizes at 0.004 radians and the quantified position deviation at 0.09 mm. Bai [148] recommended that the calibration consider more than 10 measured poses to improve the calibration accuracy. Moreover, in [105] Ren concluded that selecting an optimal set of configurations is more efficient at decreasing the influence of measurement noise. Increasing the number of measurement configurations also decreases the pose error, but only up to a point: beyond a certain amount the improvement is not clear while the runtime increases considerably, so the number of configurations must be adjusted to reach a balance between accuracy and efficiency. By contrast, Horne [149] studied the effectiveness of five pose selection criteria: the geometric mean of the singular values, the inverse condition number, the minimum singular value, the noise amplification index, and the inverse of the sum of the reciprocals of all of the singular values. The results show that the pose selection criteria did not significantly improve the calibration process for the 4-DOF parallel mechanism studied; moreover, some of the results obtained using the criteria were worse than those obtained without any criterion.

(b) Optimization procedure

The calibration problem can be solved in two ways [93]: (a) by multiplying the constraints by Lagrange multipliers and using a modified objective; or (b) by solving the constraint equations and substituting them into the objective function. In [93,114] the authors applied Lagrange multipliers to perform the calibration; the model can then be solved by means of Newton's method. The second approach leads to non-linear equations whose inverse matrix is not easy to obtain, but nowadays powerful computers allow us to perform this type of procedure. The objective of the parameter identification, or optimization, is to search for the optimum values of all parameters included in the model that minimize the position error of the platform. The objective function to minimize can be formulated in terms of a linear least-squares problem.
This function is usually defined as the quadratic difference of the error between the measured value of the end-effector pose and the value computed by the kinematic model. The increment applied to the parameters must be defined at each iteration, and its value depends on the optimization method chosen. In most cases, numerical optimization techniques are used to minimize the end-effector error. The equation that relates the joint variables to the final pose of the end-effector by means of the forward method is given by Equation 4:

y = f(θ[i], p)    (Equation 4)

where y is the vector that expresses the position and orientation of the end-effector according to the Euler angles, y = [x, y, z, α, β, γ]; θ[i] are the joint variables, with i from 1 to the number of DOF of the mechanism; and p = [p[1], p[2], …, p[j]]^T is the vector of model parameters, whose number j depends on the model chosen. It is thus possible to identify the geometric parameters of vector p by iterative optimization methods that minimize the difference between the coordinates obtained by the model and the nominal values measured at the same position. These differences are called residues (see Equation 5):

ϕ = ∑_{i=1}^{n} (y[i] − f(θ[i], p))^T (y[i] − f(θ[i], p))    (Equation 5)

In this equation, y[i] is the vector of nominal position and orientation values for the n configurations utilized in the parameter identification. In each configuration, the position and orientation of the end-effector is obtained by means of the mechanism model f, evaluated at the joint variables θ[i] corresponding to that configuration. This equation represents the objective function to minimize, whose value is obtained as the sum of squares over the n poses used in the parameter identification of the mechanism. A common way to express this equation is shown in Equation 6:

ϕ = ∑_{i=1}^{n} [(x[mi] − x[pi])² + (y[mi] − y[pi])² + (z[mi] − z[pi])² + (α[mi] − α[pi])² + (β[mi] − β[pi])² + (γ[mi] − γ[pi])²]    (Equation 6)

where the values with subindex mi are the externally measured values and those with subindex pi are the values computed by means of the mathematical model, for the n identification poses. The optimum values are given by the minimum of the objective function ϕ. Traditionally, this formulation has been widely used for open-chain mechanisms [150], since in these systems it is easier to obtain the forward kinematics, but in recent years it has also been widely used in parallel mechanisms, such as in [93,95,104,105,136,143,147,148], where the position and orientation of the end-effector is measured and compared with the value given by the forward kinematic model as a function of the model parameters and the joint variables. Another way to perform the calibration is by comparing the joint variables given by the inverse kinematic model [115,117,123,138,141,151,152], by means of Equation 7:

ϕ = ∑_{i=1}^{n} (q[i] − g(X[i], p))^T (q[i] − g(X[i], p))    (Equation 7)

where q[i] are the externally measured joint variables and g(X[i], p) are the joint variables obtained by means of the inverse model, as a function of the pose of the platform and the model parameters.
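To make Equations 5 and 6 concrete, the objective function can be sketched in a few lines of code. This is an illustrative sketch only, not an implementation from any of the cited works; the forward model is passed in as a placeholder:

```python
import numpy as np

def objective(p, thetas, y_measured, forward_model):
    """Sum-of-squares calibration residual (Equations 5 and 6).

    p             : candidate vector of geometric model parameters
    thetas        : iterable of joint-variable vectors, one per pose
    y_measured    : iterable of measured poses [x, y, z, alpha, beta, gamma]
    forward_model : function (theta, p) -> predicted 6-component pose
    """
    phi = 0.0
    for theta_i, y_i in zip(thetas, y_measured):
        r = np.asarray(y_i) - forward_model(theta_i, p)  # residual for pose i
        phi += r @ r  # (y_i - f)^T (y_i - f), the squared pose error
    return phi
```

The inverse-model objective of Equation 7 has exactly the same shape, with the measured joint variables q[i] and an inverse model g(X[i], p) in place of y[i] and f.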
Another widely used formulation expresses the error as the difference between the measured and computed distances between two points, d[mi] and d[i] respectively, where d[i] is obtained from the kinematic model, as shown in Equation 8:

ϕ = ∑_{i=1}^{n} (d[mi] − d[i])²    (Equation 8)

where the subindex mi indicates the externally measured values and the subindex i the values obtained from the kinematic model.

Once the most suitable calibration model has been selected for the mechanism and the objective function has been defined, the next step is to solve the system. These systems are non-linear, so it is not possible to obtain an analytical solution to the parameter identification problem; non-linear iterative optimization techniques are usually used to obtain the optimum parameters that minimize the error at the identification poses. For these systems, the most suitable resolution techniques are those based on least squares, which are especially suited to fitting a parametric model to a set of data. A usual approach to the optimization problem consists of linearizing the equations of the model in a neighborhood of the parameters to be identified by means of a Taylor series expansion; a suitable formulation of the optimization problem and a good approximation of the function are achieved in a small interval around the current parameter values. In parallel kinematics it is usual to use Taylor series expansions for every parameter p[i]; the expansion can be first or second order. The optimization problem can be solved by several methods:

(1) Gradient optimization methods

The simplest methods are those based on the gradient, also known as line search methods. They are usually used when the objective function to be minimized is approximated by a first-order Taylor series expansion. The next step is to apply a stop criterion, based on the convergence of the method or on small increments of the parameters between iterations, to obtain the set of parameters that minimizes the function.

(2) Least-squares optimization methods

The second group comprises methods that consider second-order approximations in the Taylor series expansion. In the second-order term the Hessian matrix appears, whose components are the second derivatives of the objective function with respect to the vector of parameters. This matrix should be invertible; problems derived from its singularity in numerical optimization procedures must be solved by choosing a suitable mathematical model and data set for the optimization, or by employing optimization methods that avoid the singularity. Least-squares optimization methods differ from the previous ones in how they obtain the search direction and how they define the increment the parameters should take along this direction. One of the most frequently used least-squares methods that takes second-order terms into account is the Gauss-Newton method. In this method the Hessian matrix should be positive definite or positive semi-definite, that is, all its eigenvalues should be positive, or positive and zero; this will not always hold for every mathematical model and every set of values of the objective function. Moreover, convergence is not always guaranteed.

(3) Levenberg-Marquardt optimization method

The methods above can present problems when processing the objective function and are therefore not always suitable for the parameter identification process.
The gradient method ensures that a local minimum of the function is found, but it usually requires more iterations to find it, and it needs the objective function to be continuous in every parameter, which is not always the case. The numerical problems that appear in the two methods above are solved by the algorithm developed by Levenberg and Marquardt [153,154]. The method adds a positive value λ to the elements of the main diagonal of the Hessian, yielding a non-singular and therefore invertible matrix; the main problem of the Gauss-Newton method in processing the second-order components is thereby solved. To choose λ, a compromise between the speed of convergence of the method and the invertibility of the matrix must be reached at every iteration. This is why the Levenberg-Marquardt algorithm behaves as a combination of the two methods presented above: when the parameters are far from the optimum solution, the value of λ increases and the algorithm behaves similarly to the gradient method; when the parameters approach their optimum values, λ decreases and the algorithm behaves in a similar way to the Gauss-Newton method in both search direction and parameter increment. This method is widely used in the calibration of parallel mechanisms, for example in [105,110,115,123–126,130,132,135,143,151,155–158].

In the specialized literature we can find other alternatives to the problems of the Gauss-Newton method, for example the singular value decomposition (SVD) [114,116,147,148,156,159] or the QR decomposition for parameter identification [125]. In [160], an algorithm allows range limits of the joints to be introduced into a configuration selection process and avoids the problem of local minima, although it is computationally expensive. In [161,162] the problem is solved by means of genetic algorithms; Yu applies the inverse kinematic model and improves the positioning accuracy of the parallel robot by means of an artificial neural network. In [163], Stan performs the optimization of a 2-DOF parallel robot using genetic algorithms, considering a transmission quality index, manipulability, stiffness and workspace. In [164], Liu obtains residual measures through the inverse kinematics and develops a calibration method using genetic algorithms. Techniques based on least squares usually present a lower computational cost, provided the initial value is a solution close to the optimum for the set of parameters. In methods based on genetic algorithms and neural networks, however, this premise is not usually so significant; these algorithms are typically used for parameter optimization and identification when it is not known whether the initial values are close to the optimum solution. Furthermore, the combinatorial nature of these methods is purely stochastic, which avoids the problems of defining the search direction found in traditional least-squares methods.

(c) Evaluation of the identified parameters

The evaluation of the identified parameters consists of evaluating the mechanism behaviour with the set of identified parameters obtained in the previous step. This procedure is performed at configurations different from those utilized in the optimization process, and must assess the degree of compliance of the error values obtained at other positions of the workspace. In [133], Koseki utilized a laser-tracking coordinate measuring system to evaluate the accuracy of a parallel mechanism.
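As an illustration of how a Levenberg-Marquardt identification might be set up in practice, the following sketch uses SciPy's least_squares with method='lm' (a MINPACK Levenberg-Marquardt implementation). It is a toy example of my own, not taken from any of the cited works; the one-joint "forward model" and its two geometric parameters are hypothetical placeholders:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical toy forward model: planar position of a point as a function
# of one joint variable theta and two geometric parameters
# p = [link length, angular offset].
def forward_model(theta, p):
    L, offset = p
    return np.array([L * np.cos(theta + offset), L * np.sin(theta + offset)])

def residuals(p, thetas, y_measured):
    # Stacked residuals y_i - f(theta_i, p) over all identification poses
    return np.concatenate(
        [y_i - forward_model(t, p) for t, y_i in zip(thetas, y_measured)]
    )

rng = np.random.default_rng(0)
p_true = np.array([1.02, 0.03])              # "real" (unknown) parameters
thetas = np.linspace(0.0, np.pi, 15)         # identification poses
y_measured = np.array([forward_model(t, p_true) for t in thetas])
y_measured += 1e-4 * rng.standard_normal(y_measured.shape)  # sensor noise

p0 = np.array([1.0, 0.0])                    # nominal (design) parameters
fit = least_squares(residuals, p0, method="lm", args=(thetas, y_measured))
print(fit.x)  # identified parameters, close to p_true
```

Starting from the nominal design parameters p0 reflects the recommendation above that least-squares techniques be initialized close to the optimum.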
Cheng [165] analyzed the relationship between the original errors and the pose error of the moving platform by means of complete differential-coefficient theory and evaluated the error model. The conclusion is that improving manufacturing and assembly techniques reduces the moving platform error, and that the small variation of the pose error across different kinematic positions shows that software error compensation can considerably improve the precision of parallel mechanisms.

(d) Correction model

To end the calibration process, a correction model can be obtained to improve the accuracy of the mechanism. Huang [166] performed the calibration of a parallel mechanism and compensated the geometric and position errors in the x and y coordinates. Gong [111] identified non-geometric errors and developed a method to compensate them with a laser tracker by means of the inverse calibration model. An extensive guide to error compensation methods can be found in [167]; in this paper, Oiwa explained in detail how to compensate joint errors, link length errors, forces in the measurement loop and frame deformation using a coordinate measuring machine. The method considers thermal effects and external forces. The results show that the deflection of the measured Z-coordinates is not completely eliminated by the compensation system, but that compensation using displacement sensors built into the spherical joints improves the motion accuracy of parallel kinematic mechanisms when the mechanism moves in a large working space. The thermal and elastic deformations of the limb can be compensated by connecting the scale unit to the joints through rods of low-expansion material; bringing the rod end into contact with the ball of the spherical joint enables the scale unit to measure the joint error and the limb deformations. Finally, the measured position and orientation of the base platform are used to compensate the thermal and elastic deformations of the frame.
{"url":"http://www.mdpi.com/1424-8220/10/11/10256/xml","timestamp":"2014-04-19T08:13:35Z","content_type":null,"content_length":"231979","record_id":"<urn:uuid:1450fe7b-61e5-4b48-ad08-713d5159e19e>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
Symmetry and Conservation of Charge

...isn't that a contradiction in terms? To "gauge" a symmetry means to make it local...

No, global gauge symmetries are independent of space; local gauge symmetries depend on spatial coordinates.

I might have this wrong (it's been a while), but I seem to recall that gauge symmetries in general are symmetries of a potential field, such as the electric potential field, the derivatives of which give you the electric field.

EDIT: You know, as I stir up my old memories of this, I now seem to recall that people use "gauge" to refer to local gauge symmetries, especially in gauge field theory. What is confusing me now is that global choices of gauge, like the Lorentz or Coulomb gauge in Classical E&M, also reflect a gauge symmetry.
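To make the global/local distinction concrete, here is the standard textbook illustration (an editorial addition, not part of the original thread; sign and unit conventions vary by source):

$$\psi(x)\;\to\;e^{i\alpha}\,\psi(x) \qquad \text{(global: the phase } \alpha \text{ is the same constant everywhere)}$$

$$\psi(x)\;\to\;e^{i\alpha(x)}\,\psi(x), \qquad A_\mu(x)\;\to\;A_\mu(x)-\tfrac{1}{q}\,\partial_\mu\alpha(x) \qquad \text{(local: } \alpha \text{ depends on the spacetime point)}$$

By Noether's theorem, the global phase symmetry is what yields charge conservation, while "choosing a gauge" (Coulomb, Lorentz, ...) means fixing the leftover freedom $A_\mu \to A_\mu + \partial_\mu\chi$ that the local symmetry leaves in the potentials.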
{"url":"http://www.physicsforums.com/showthread.php?t=209416","timestamp":"2014-04-21T07:13:52Z","content_type":null,"content_length":"33896","record_id":"<urn:uuid:de316c1c-06aa-4286-b8db-f36949673e9e>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

3 • 5 = 5 • 3 is an example of which algebraic property?
- Distributive Property
- Associative Property of Multiplication
- Reflexive Property
- Commutative Property of Multiplication
{"url":"http://openstudy.com/updates/5088669de4b004fc96eb6a0b","timestamp":"2014-04-18T20:48:22Z","content_type":null,"content_length":"37238","record_id":"<urn:uuid:f569a13b-9ff2-4076-bc59-140d2b1152f6>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: arXiv:1006.3147v1 [math.SP] 16 Jun 2010

Karlin Theory on Growth and Mixing Extended to Linear Differential Equations
Lee Altenberg
June 17, 2010

Karlin's (1982) Theorem 5.2 shows that linear systems alternating between growth and mixing phases have lower asymptotic growth with greater mixing. Here this result is extended to linear differential equations that combine site-specific growth or decay rates and mixing between sites, showing that the spectral abscissa of a matrix D + mA decreases with m, where D ≠ cI is a real diagonal matrix, A is an irreducible matrix with non-negative off-diagonal elements (an ML-matrix, or essentially non-negative matrix), and m ≥ 0. The result is based on the inequality u^⊤Av < r(A), where u and v are the left and right Perron vectors of the matrix D + A, and r(A) is the spectral abscissa and Perron root of A. The result gives an analytic solution to prior work that relied on two-site or numerical simulation of models of growth and mixing, such as source and sink ecological models, or multiple tissue compartment models of microbe growth. The result has applications to the Lyapunov stability of perturbations in nonlinear systems.
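A quick numerical check of the claimed monotonicity is easy to run (an illustration of my own, not from the paper; the matrices below are an arbitrary example in which A is a symmetric mixing matrix with zero row sums, one simple case covered by the result):

```python
import numpy as np

def spectral_abscissa(M):
    """Largest real part among the eigenvalues of M."""
    return np.real(np.linalg.eigvals(M)).max()

# D: real diagonal and not a scalar multiple of the identity
D = np.diag([1.0, 0.4, -0.3])

# A: irreducible, non-negative off-diagonal entries, zero row sums
# (essentially non-negative, i.e. an ML-matrix)
A = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -2.0,  1.0],
              [ 1.0,  1.0, -2.0]])

for m in [0.0, 0.5, 1.0, 2.0, 4.0]:
    print(m, round(spectral_abscissa(D + m * A), 4))
# The printed spectral abscissa decreases as the mixing rate m grows,
# approaching the average of the diagonal of D.
```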
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/581/2271218.html","timestamp":"2014-04-23T21:34:52Z","content_type":null,"content_length":"8358","record_id":"<urn:uuid:64128035-555c-4c92-ac7b-a1e534490d95>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00426-ip-10-147-4-33.ec2.internal.warc.gz"}
9.61% annualized return high for risk for Baytex Energy

In analyzing a company's cost of debt, current tax rate, cost of equity and WACC, we gain an understanding of the company's financial risk and future ratings. Based on this analysis, a shareholder can calculate an expected long-term return for their investment. In this article I will look at Baytex Energy's (BTE:TSX) cost of debt, tax rate, cost of equity and WACC to calculate an expected return from this point.

Cost of Debt

The cost of debt is the effective rate that a company pays on its total debt. As a company acquires debt through various bonds, loans and other forms of debt, the cost-of-debt metric is useful because it gives an idea of the overall rate being paid by the company to use debt financing. This measure is also useful because it gives investors an idea of the riskiness of the company compared with others: the higher the cost of debt, the higher the risk.

8. Cost of debt (before tax) = corporate bond rate for the company's bond rating.

• S&P rated Baytex Energy bonds "BB"
• Baytex Energy Tr 9.15% bond; rate for "BB" = 9.15%
• Current cost of debt as of January 17th, 2013 = 9.15%

According to the S&P rating guide, the "BB" rating means "Less vulnerable in the near-term but faces major ongoing uncertainties to adverse business, financial and economic conditions." Baytex Energy has a rating that meets this description.

9. Current tax rate (total income tax / income before tax)

• 2008 – $26 million / $289 million = 9.00%
• 2009 – $(19) million / $68 million = −27.94%
• 2010 – $(24) million / $154 million = −15.58%
• 2011 – $52 million / $270 million = 19.26%
• 2012 TTM – $135 million / $420 million = 32.14%

Because the variation over the past 5 years is so great, I will use the tax rate of the years in which Baytex paid income tax.

2008, 2011 and 2012 TTM average tax rate = 20.13%

Baytex Energy has averaged a tax rate of 20.13%.

10. Cost of debt (after tax) = (cost of debt before tax) × (1 − tax rate)

This is the effective rate that a company pays on its current debt after tax.

• 0.0915 × (1 − 0.2013) = 0.0731

The cost of debt after tax for Baytex Energy is 7.31%.

Cost of equity, or R equity = risk-free rate + beta × (average market return − risk-free rate)

The cost of equity is the return a firm theoretically pays to its equity investors (for example, shareholders) to compensate for the risk they undertake by investing in the company.

• Risk-free rate = U.S. 10-year bond = 1.88% (Bloomberg)
• Average market return 1950–2012 = 7%
• Beta (Google Finance): Baytex Energy's beta = 1.51

Risk-free rate + beta × (average market return − risk-free rate)

• 1.88 + 1.51 × (7 − 1.88)
• 1.88 + 1.51 × 5.12
• 1.88 + 7.73 = 9.61%

Currently, Baytex Energy has a cost of equity, or R equity, of 9.61%, so investors should expect an average return of 9.61% per year over the long term on their investment to compensate for the risk they undertake by investing in this company.

(Please note that this is the CAPM approach to finding the cost of equity. Inherently, there are some flaws with this approach, and the numbers are very "general." It is based on the S&P average return from 1950–2012 of 7%, the U.S. 10-year bond for the risk-free rate, which is susceptible to daily change, and the Google Finance beta.)
Weighted Average Cost of Capital, or WACC

The WACC is a calculation of a company's cost of capital in which each category of capital is proportionately weighted. All capital sources, such as common stock, preferred stock, bonds and all other long-term debt, are included in this calculation. As the WACC of a firm increases, along with its beta and required rate of return on equity, valuation decreases and risk is higher. By taking the weighted average, we can see how much interest the company has to pay for every dollar it finances. For this calculation, you will need to know the following:

Tax rate = 20.13% (Baytex Energy 2008, 2011 and 2012 TTM tax rate)
Cost of debt (before tax), or R debt = 9.15%
Cost of equity, or R equity = 9.61%
Debt (total liabilities) for 2012 TTM, or D = $1.347 billion
Stock price = $44.40 (January 17th, 2013)
Outstanding shares = 121.22 million
Equity = stock price × outstanding shares, or E = $5.382 billion
Debt + equity, or D + E = $6.729 billion

WACC = R = (1 − tax rate) × R debt × (D/(D+E)) + R equity × (E/(D+E))

(1 − 0.2013) × 0.0915 × ($1.347/$6.729) + 0.0961 × ($5.382/$6.729)
0.7987 × 0.0915 × 0.2001 + 0.0961 × 0.7998
0.0146 + 0.0769 = 0.0915

Based on the calculations above, we can conclude that Baytex Energy pays about 9.15% on every dollar that it finances, or 9.15 cents on every dollar. From this calculation, we understand that for every dollar the company spends on an investment, the company must make $0.0915 plus the cost of the investment for the investment to be feasible.

Currently Baytex Energy's bond rating stands at "BB," which states that the company's bonds are "Less vulnerable in the near-term but faces major ongoing uncertainties to adverse business, financial and economic conditions." Please note that a bond rating of "BB" is considered "speculative." The CAPM approach for cost of equity states that shareholders need an average return of 9.61% per year over a long period of time on their equity to make it worthwhile to invest in the company; this calculation is based on the average market return between 1950 and 2012 of 7%. The WACC calculation reveals that the company pays about 9.15% on every dollar that it finances. As the current WACC of Baytex Energy is about 9.15% and the beta is above average at 1.51, the company needs a return of at least 9.15% on future investments and will have above-average volatility moving forward.

The above analysis of Baytex Energy indicates a significant amount of risk for the investor. The bond rating of "BB" by Standard & Poor's indicates that the company's bonds are "speculative." The WACC reveals that Baytex Energy can add future investments and assets at around 9.15%. As the CAPM indicates, the investor needs a 9.61% return, so if you believe that you can get 9.61% year over year over the long term on this investment, then you will get good value on your investment.

Chart provided by Finviz

To read more on Baytex Energy, please read: BTE – Baytex Energy Corp. (BTE-TSX, BTE-NYSE) Stock Price Target 2012
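The arithmetic above is easy to reproduce with a short script (a sketch of my own; the inputs are the figures quoted in the article, with debt and equity in billions of dollars):

```python
def capm_cost_of_equity(risk_free, beta, market_return):
    """CAPM: risk-free rate plus beta times the equity risk premium."""
    return risk_free + beta * (market_return - risk_free)

def wacc(cost_of_debt, tax_rate, cost_of_equity, debt, equity):
    """After-tax weighted average cost of capital."""
    total = debt + equity
    return ((1 - tax_rate) * cost_of_debt * debt / total
            + cost_of_equity * equity / total)

r_e = capm_cost_of_equity(risk_free=0.0188, beta=1.51, market_return=0.07)
print(round(r_e, 4))  # 0.0961, the cost of equity above

r = wacc(cost_of_debt=0.0915, tax_rate=0.2013, cost_of_equity=r_e,
         debt=1.347, equity=5.382)
print(round(r, 4))    # about 0.0915, the WACC above
```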
{"url":"http://www.stockresearching.com/2013/01/18/9-61-annualized-return-high-for-risk-for-baytex-energy/","timestamp":"2014-04-18T18:28:42Z","content_type":null,"content_length":"92262","record_id":"<urn:uuid:43f40370-2e47-468b-8dc8-26cdbdd27e02>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Calculate Expected Values

In statistics and probability, the formula for expected value is E(X) = Σ X · P(X): the sum of all gains, each multiplied by its individual probability. The expected value comprises two components: how much you can expect to gain and how much you can expect to lose. It is the sum of both your expected losses and expected gains in any situation, from gambling on the stock market to buying a lottery ticket.

Step 1
Draw or sketch a probability chart with two rows and two columns. Label the first row "Gain" and the second row "Probability." Label the first column "Win" and the second column "Lose."

Step 2
Work out how much money you could win or lose. For example, you buy a lottery ticket for $10 and have one chance of winning $10,000. Your potential loss is the cost of the ticket: $10. Your potential gain is $10,000 less the cost of the $10 ticket: ($10,000 − $10) = $9,990.

Step 3
Place the values from Step 2 in the relevant spaces on the probability chart: $9,990 goes at the top left and −$10 at the top right. Note the negative sign on the $10 to denote a loss.

Step 4
Write your odds of winning on the bottom row. If the lottery has sold 1,000 tickets, your odds of winning are 1 in 1,000, or 1/1000. Place this at the bottom left of the probability chart.

Step 5
Write your odds of losing on the bottom row. If your odds of winning are 1/1000, then your odds of losing are 999 out of 1,000, or 999/1000.

Step 6
Multiply the figure at the top of each column by the figure at the bottom of each column. In the above example, the calculations for the expected values are:
−$10 × 999/1000 = −9.99
$9,990 × 1/1000 = 9.99

Step 7
Add all of the values together to compute the expected value. In the above example, −9.99 + 9.99 = 0, so your expected value from purchasing one ticket in this lottery is zero.

• For more than one item, just extend the probability chart to the right, adding more columns for more items. Remember to add all of the probabilities multiplied by their gains.
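The whole computation is one line of code. Here is a minimal sketch (my own illustration) that reproduces the lottery example from the steps above:

```python
def expected_value(gains, probabilities):
    """Sum of each gain multiplied by its probability: E(X) = sum x*P(x)."""
    return sum(g * p for g, p in zip(gains, probabilities))

# Lottery example: win $9,990 with probability 1/1000,
# lose the $10 ticket price with probability 999/1000.
print(expected_value([9990, -10], [1 / 1000, 999 / 1000]))
# 0.0 (up to floating-point rounding)
```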
{"url":"http://classroom.synonym.com/calculate-expected-values-2764.html","timestamp":"2014-04-17T12:30:26Z","content_type":null,"content_length":"32790","record_id":"<urn:uuid:16dcb112-040e-42d1-baa9-414964306595>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
Some math about resistances

I thought a bit about resistances... I don't like that you can only use two approaches:
1. don't take resist at all
2. take all you find

This is mainly because if you have 60% resistance and find another 10%, it is MUCH stronger than when you had 0% resistance before... If the resistance curve were altered, this would not happen anymore.

Here is my idea: instead of resistance percentages, every item/god boon gives you a resistance rating that is cumulative. The real resistance is computed as rating / (rating + 40). Here is a list of how that would effectively go:

0 rating -> 0% resistance
10 rating -> 20% resistance
20 rating -> 33% resistance
30 rating -> 43% resistance
40 rating -> 50% resistance
50 rating -> 56% resistance
60 rating -> 60% resistance
70 rating -> 64% resistance
80 rating -> 67% resistance
90 rating -> 69% resistance
100 rating -> 71% resistance

With this formula, the arbitrary border of 65% would no longer be necessary. Let the stackers stack as high as they want... but to get to 80%, a rating of 160 is needed, and so on... reaching 90% is nearly impossible (a rating of 360). It would be worth it to use even one item, and everyone would be happy.

What do you think?

Re: Some math about resistances

Was discussed before, and I seem to recall there was some talk that while most of DD's math is pretty straightforward, having this sort of "armor rating" system that most MMOs or MOBAs use would be against the spirit of the game or something. I mean, I support this kind of change because I think straight resistances are a problem, but that's the argument as I see it.

Re: Some math about resistances

Happy as is, actually. I even use the tower shield more often than not. I mean, 10% less damage taken from (probably) most of the dungeon mobs? Being able to withstand 2 shots instead of 1 from higher level mobs? Count me in. I understand ratings would somewhat even it out, but that would make it into just another -damage-taken stat and convolute it (I don't want to count/convert ratings).

Re: Some math about resistances

This was discussed to death, and while I think what you suggest has merit (and I liked your comment on monks), this has always been a very touchy subject. I think it has something to do with the asymptotic nature of %-based damage resistance coupled with the "small numbers" nature of DD. A long while ago it was "too late" to do anything about it, if there actually is a solution. I'm generally staying out of these discussions on principle. I just wanted to say it's one thing where a lot of effort went into striking a compromise, such as it is, and as it currently stands nobody really wants to open this particular can of worms. Or at least I don't...

The compromise, however, seems to be: Berserker is balanced by a crippling glyph penalty so he doesn't just rule, Monk is balanced by... becoming boring after a while..., Paladin is balanced by not being as fun because of forced monotheism, and Crusader is balanced by not having any starting resists and a misleading first ability. And Gorgon is balanced by being difficult to unlock and frustrating to play, apparently... Everybody else can't stack resists just anywhere, Body Pact can't be prepped, and Binlor is there as a catch-all noob-tube so anyone at all can deal with all the magic damage bosses in the generally cramped VICIOUS dungeons (as far as %res are concerned). For everybody else, picking up just enough to survive a boss hit earlier is probably how you're supposed to look at %res, more or less...
It's a bit of a cynical view, I know, but when was the last time anyone saw me whining about it, even if I see it that way?

Re: Some math about resistances

OK, I understand that you want to keep your math simple... but if the actual percentage is displayed, with the latest changes, the exact numbers are shown by the combat prediction. So for a new player, you only need to know "better rating, better resistances!" And if you are a vet, the game is SO complex that this simple computation will not really disturb you... but OK, if this was discussed a thousand times, I will speak no more about it.
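A tiny script makes the rating-to-percentage conversion from the opening post easy to check (my own sketch; the formula is the one that the table above implies):

```python
def resistance(rating, scale=40):
    """Convert a cumulative resistance rating to an effective percentage."""
    return 100 * rating / (rating + scale)

# Reproduces the table from the opening post, then the high-end claims.
for rating in list(range(0, 101, 10)) + [160, 360]:
    print(rating, round(resistance(rating)))  # e.g. 160 -> 80, 360 -> 90
```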
{"url":"http://www.qcfdesign.com/forum/viewtopic.php?f=3&t=2474&p=21627","timestamp":"2014-04-18T06:14:01Z","content_type":null,"content_length":"23444","record_id":"<urn:uuid:83a4f2df-3e14-40bd-a259-446f7f89a4bd>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
Leeward CC Math Lab

Basic Services

Tutoring is available for students currently enrolled in a Leeward CC MATH or QM discipline course, at no charge, on a first-come, first-served basis. Please be aware that tutors are not able to help students with any work for which they will receive credit. This includes take-home exams or quizzes, extra-credit homework, or bonus problems. Tutors are located in MS 204.

Basic, scientific, and graphing calculators are available for student use in the Math Lab. Please see a tutor to borrow a calculator. A picture ID is required to borrow a calculator: NO ID -- NO CALCULATOR. All calculators must be returned on the same day borrowed, by the Math Lab closing time.

Textbooks & Solutions

Current math textbooks and solutions manuals are available to borrow. Please see a tutor to borrow a textbook or solutions manual. All current math textbooks and solutions manuals must be returned on the same day borrowed, by the Math Lab closing time.

Reference Textbooks

Reference textbooks are available to borrow. Please see a tutor to borrow a reference textbook. Reference textbooks may be taken home and used for the entire semester if necessary.

Math Computer Lab

19 computer stations are available for students to use for MATH and QM discipline courses; they are located in the back room of the Math Lab. Be sure to check in with a tutor, with your picture ID, before entering the Computer Lab.
{"url":"http://www.leeward.hawaii.edu/mathlab-services","timestamp":"2014-04-17T15:38:00Z","content_type":null,"content_length":"16047","record_id":"<urn:uuid:396af694-4cc6-47cf-acc5-b8b6bda3308d>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
A National Prediction Model for PM2.5 Component Exposures and Measurement Error–Corrected Health Effect Inference

• Background: Studies estimating health effects of long-term air pollution exposure often use a two-stage approach: building exposure models to assign individual-level exposures, which are then used in regression analyses. This requires accurate exposure modeling and careful treatment of exposure measurement error.
• Objective: To illustrate the importance of accounting for exposure model characteristics in two-stage air pollution studies, we considered a case study based on data from the Multi-Ethnic Study of Atherosclerosis (MESA).
• Methods: We built national spatial exposure models that used partial least squares and universal kriging to estimate annual average concentrations of four PM[2.5] components: elemental carbon (EC), organic carbon (OC), silicon (Si), and sulfur (S). We predicted PM[2.5] component exposures for the MESA cohort and estimated cross-sectional associations with carotid intima-media thickness (CIMT), adjusting for subject-specific covariates. We corrected for measurement error using recently developed methods that account for the spatial structure of predicted exposures.
• Results: Our models performed well, with cross-validated R^2 values ranging from 0.62 to 0.95. Naïve analyses that did not account for measurement error indicated statistically significant associations between CIMT and exposure to OC, Si, and S. EC and OC exhibited little spatial correlation, and the corrected inference was unchanged from the naïve analysis. The Si and S exposure surfaces displayed notable spatial correlation, resulting in corrected confidence intervals (CIs) that were 50% wider than the naïve CIs, but that were still statistically significant.
• Conclusion: The impact of correcting for measurement error on health effect inference is concordant with the degree of spatial correlation in the exposure surfaces. Exposure model characteristics must be considered when performing two-stage air pollution epidemiologic analyses because naïve health effect inference may be inappropriate.

• Citation: Bergen S, Sheppard L, Sampson PD, Kim SY, Richards M, Vedal S, Kaufman JD, Szpiro AA. 2013. A national prediction model for PM[2.5] component exposures and measurement error–corrected health effect inference. Environ Health Perspect 121:1017–1025; http://dx.doi.org/10.1289/ehp.1206010

Address correspondence to A.A. Szpiro, Department of Biostatistics, University of Washington, Health Sciences Building, Box 357232, 1705 NE Pacific St., Seattle, WA 98195-7232 USA. Telephone: (206) 616-6846. E-mail: aszpiro@u.washington.edu

We thank the three reviewers for their helpful comments. Research in this publication was supported by grants T32ES015459, P50ES015915, and R01ES009411 from the National Institute of Environmental Health Sciences of the National Institutes of Health (NIH). Additional support was provided by an award to the University of Washington under the National Particle Component Toxicity initiative of the Health Effects Institute and by the U.S. Environmental Protection Agency (EPA), Assistance Agreement RD-83479601-0 (Clean Air Research Centers). This publication was developed under a STAR (Science to Achieve Results) program research assistance agreement, RD831697, awarded by the U.S. EPA. The views expressed in this document are solely those of the University of Washington, and the U.S.
EPA does not endorse any products or commercial services mentioned in this publication. The Multi-Ethnic Study of Atherosclerosis (MESA) is conducted and supported by the National Heart, Lung, and Blood Institute (NHLBI) in collaboration with MESA investigators. Support for MESA is provided by NHLBI contracts N01HC-95159 through N01HC95169 and UL1RR024156. MESA Air is funded by the U.S. EPA's STAR grant RD831697. The content of this work is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. The authors declare they have no actual or potential competing financial interests.

Received: 13 September 2012; Accepted: 7 June 2013; Advance Publication: 11 June 2013; Final Publication: 1 September 2013

The relationship between air pollution and adverse health outcomes has been well documented (Pope et al. 2002; Samet et al. 2000). Many studies focus on particulate matter, specifically particulate matter ≤ 2.5 μm in aerodynamic diameter (PM[2.5]) (Kim et al. 2009; Miller et al. 2007). Health effects of PM[2.5] may depend on characteristics of the particles, including shape, solubility, pH, or chemical composition (Vedal et al., in press), and a deeper understanding of these differential effects could help inform policy. One of the challenges in assessing the impact of different chemical components of PM[2.5] in an epidemiologic study is the need to assign exposures to study participants based on monitoring data from different locations (i.e., spatially misaligned data). When doing this for many components, the prediction procedure needs to be streamlined in order to be practical. Whatever the prediction algorithm, using the estimated rather than true exposures induces measurement error in the subsequent epidemiologic analysis. Here we describe a flexible and efficient prediction model that can be applied on a national scale to estimate long-term exposure levels for multiple pollutants, and we implement existing methods of correcting for measurement error in the health model.

Current methods for assigning exposures include land-use regression (LUR) with geographic information system (GIS) covariates (Hoek et al. 2008) and universal kriging, which also exploits residual spatial structure (Kim et al. 2009; Mercer et al. 2011). Often hundreds of candidate correlated GIS covariates are available, necessitating a dimension reduction procedure. Variable selection methods that have been considered in the literature include exhaustive search, stepwise selection, and shrinkage by the "lasso" (Mercer et al. 2011; Tibshirani 1996). However, variable selection methods tend to be computationally intensive: feasible perhaps when considering a single pollutant, but quickly becoming impractical when developing predictions for multiple pollutants. A more streamlined alternative is partial least squares (PLS) regression (Sampson et al. 2009), which finds a small number of linear combinations of the GIS covariates that most efficiently account for variability in the measured concentrations. These linear combinations reduce the covariate space to a much smaller dimension and can then be used as the mean structure in a LUR or universal kriging model in place of individual GIS covariates. This provides the advantages of using all available GIS covariates and eliminating potentially time-consuming variable selection processes.
Using exposures predicted from spatially misaligned data rather than true exposures in health models introduces measurement error that may have implications for β̂[x], the estimated health model coefficient of interest (Szpiro et al. 2011b). Berkson-like error, which arises from smoothing the true exposure surface, may inflate the SE of β̂[x]. Classical-like error results from estimating the prediction model parameters and may bias β̂[x] in addition to inflating its SE. Bootstrap methods to adjust for the effects of measurement error have been discussed by Szpiro et al. (2011b).

Here we present a case study to illustrate a holistic approach to two-stage air pollution epidemiologic modeling, which includes exposure modeling in the first stage and health modeling that incorporates measurement error correction in the second stage. We build national exposure models using PLS and universal kriging, and employ them to estimate long-term average concentrations of four chemical species of PM[2.5] (elemental carbon (EC), organic carbon (OC), silicon (Si), and sulfur (S)), selected to reflect a variety of different PM[2.5] sources and formation processes (Vedal et al., in press). After developing the exposure models, we derive predictions for the Multi-Ethnic Study of Atherosclerosis (MESA) cohort. These predictions are used as the covariates of interest in health analyses assessing associations between carotid intima-media thickness (CIMT), a subclinical measure of atherosclerosis, and exposure to PM[2.5] components. We apply measurement error correction methods to account for the fact that predicted rather than true exposures are used in these health models. We discuss our results and their implications with regard to the effect of spatial correlation in exposure surfaces on estimated associations between exposures and health outcomes.

Monitoring data. Data on EC, OC, Si, and S were collected to build the national models. These data consisted of annual averages for 2009–2010 as measured by the Interagency Monitoring of Protected Visual Environments (IMPROVE) network and the Chemical Speciation Network (CSN) of the U.S. Environmental Protection Agency (U.S. EPA 2009). The IMPROVE monitors form a nationwide network located mostly in remote areas, while the CSN monitors are located in more urban areas. Together, the two networks provide data that are evenly dispersed throughout the lower 48 states (Figure 1).

Figure 1. Locations of IMPROVE and CSN monitors and predicted national average PM[2.5] component concentrations from the final prediction models. (A) EC, (B) OC, (C) Si, and (D) S. Insets show predictions for St. Paul, MN.

All IMPROVE and CSN monitors that had at least 10 data points per quarter and a maximum of 45 days between measurements were included in our analyses. Si and S measurements were averaged over 1 January 2009–31 December 2009. The EC/OC data set consisted of measurements from 204 IMPROVE and CSN monitors averaged over 1 January 2009–31 December 2009, and measurements from 51 CSN monitors averaged over 1 May 2009–30 April 2010. We used the latter period because the measurement protocol used by the CSN monitors prior to 1 May 2009 was incompatible with the IMPROVE network protocol; comparing values averaged over 1 May 2009–30 April 2010 with those averaged over 1 January 2009–31 December 2009 indicated little difference between the time periods (data not shown). The annual averages were square-root transformed prior to modeling.
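The monitor inclusion rule just described is simple to express in code. Here is a minimal sketch of my own, assuming a hypothetical long-format table with site, date, and value columns:

```python
import pandas as pd

def eligible_sites(df: pd.DataFrame) -> list:
    """Sites with >= 10 measurements in every observed quarter and never
    more than 45 days between consecutive measurements (the stated
    criteria); df needs 'site' and datetime 'date' columns."""
    def meets_criteria(g: pd.DataFrame) -> bool:
        g = g.sort_values("date")
        per_quarter = g.groupby(g["date"].dt.to_period("Q")).size()
        gaps = g["date"].diff().dt.days.dropna()
        return bool((per_quarter >= 10).all() and (gaps <= 45).all())
    return [site for site, g in df.groupby("site") if meets_criteria(g)]
```

Note that the 45-day gap condition also rules out sites with entirely empty quarters, which the per-quarter count alone would miss.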
Geographic covariates. Approximately 600 LUR covariates were available for all monitor and subject locations. These included distances to A1, A2, and A3 roads [census feature class codes (CFCCs; U.S. Census Bureau 2013)]; land use within a given buffer; population density within a given buffer; and Normalized Difference Vegetation Index (NDVI; National Oceanic and Atmospheric Administration 2013), which measures the level of vegetation in a monitor’s vicinity. CFCC A1 roads are limited-access highways; A2 and A3 roads are other major roads such as county and state highways without limited access (Mercer et al. 2011). For NDVI, a series of 23 monitor-specific, 16-day composite satellite images were obtained, and the pixels within a given buffer were averaged for each image. PLS incorporated the 25th, 50th, and 75th percentiles of these 23 averages. The medians of “high-vegetation season” image averages (defined as 1 April–30 September) and “low-vegetation season” averages (1 October–31 March) were also included.

The geographic covariates were pre-processed to eliminate LUR covariates that were too homogeneous or outlier-prone to be of use. Specifically, we eliminated variables with > 85% identical values, and those with the most extreme standardized outlier > 7. We log-transformed and truncated all distance variables at 10 km, and computed additional “compiled” distance variables such as minimum distance to major roads and distance to any port. These compiled variables were then subject to the same inclusion criteria. All selected covariates were mean-centered and scaled by their respective SDs.

MESA cohort. MESA is a population-based study that began in 2000, with a cohort consisting of 6,814 participants from six U.S. cities: Los Angeles, California; St. Paul, Minnesota; Chicago, Illinois; Winston-Salem, North Carolina; New York, New York; and Baltimore, Maryland. Four ethnic/racial groups were targeted: white, Chinese American, African American, and Hispanic. All participants were free of physician-diagnosed cardiovascular disease at time of entrance. [For additional details about the MESA study, see Bild et al. (2002).] These participants were also utilized in the Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air), an ancillary study to MESA funded by the U.S. EPA to study the relationship between chronic exposure to air pollution and progression of subclinical cardiovascular disease (Kaufman et al. 2012). Both the MESA and MESA Air studies were approved by the institutional review board (IRB) at each site, including the IRBs at the University of California, Los Angeles (Los Angeles, CA), Columbia University (New York, NY), Johns Hopkins University (Baltimore, MD), the University of Minnesota (Minneapolis–St. Paul, MN), Wake Forest University (Winston-Salem, NC), and Northwestern University (Evanston, IL). All subjects gave written informed consent.

We selected the CIMT end point in MESA as the health outcome for our case study. CIMT, a subclinical measure of atherosclerosis, was measured by B-mode ultrasound using a GE Logiq scanner (GE Healthcare, Wauwatosa, WI), and the end point was quantified as the right far wall CIMT measurements conducted during MESA exam 1, which took place during 2000–2002 (Vedal et al., in press). We considered the 5,501 MESA participants who had CIMT measures during exam 1; our analysis was based on the 5,298 MESA participants who had CIMT measures during exam 1 and complete data for all selected model covariates.
The first stage of the two-stage approach included building the exposure models, with PLS scores used as the covariates in universal kriging models. We used cross-validation (CV) to select the number of PLS scores, determine how reliable predictions from each exposure model were, and assess the extent to which spatial structure was present for each pollutant. The health modeling stage of the two-stage approach included the health models we fit and the measurement error correction methods we employed. [For more detailed technical exposition, see Bergen et al. (2012).]

Spatial prediction models. Notation. Let X[t]* denote the N* × 1 vector of observed square-root transformed concentrations at monitor locations; R* the N* × p matrix of geographic covariates at monitor locations; X[t] the N × 1 vector of unknown square-root transformed concentrations at the unobserved subject locations; and R the N × p matrix of geographic covariates at the subject locations. Note that for our exposure models, X[t]* and X[t] are dependent variables, and R* and R are independent variables.

We used PLS to decompose R* into a set of linear combinations of much smaller dimension than R*. Specifically, R*H = T*. Here, H is a p × k matrix of weights for the geographic covariates, and T* is an N* × k matrix of PLS components or scores. These scores are linear combinations of the geographic covariates found in such a way that they maximize the covariance between X[t]* and all possible linear combinations of R*. One might notice similarities between PLS and principal components analysis (PCA). Although the two methods are similar in that they are both dimension reduction methods, the scores from PLS maximize the covariance between X[t]* and linear combinations of R*, whereas the scores from PCA are chosen to explain as much as possible of the variance of R*. [For more details see Sampson et al. (2013).] PLS scores at unobserved locations are then derived as T = RH.

Once the PLS scores T and T* were obtained for the subject and monitoring locations, respectively, we assumed the following joint model for unobserved and observed exposures:

X[t] = Tα + η and X[t]* = T*α + η*. [1]

Here α is a vector of regression coefficients for the PLS scores, and η and η* are N × 1 and N* × 1 vectors of errors, respectively. Our primary exposure models assumed that the error terms exhibited spatial correlation that could be modeled with a kriging variogram parameterized by a vector of parameters θ = (τ^2, σ^2, ϕ) (Cressie 1992). The nugget, τ^2, is interpretable as the amount of variability in the pollution exposures that is not explained by spatial structure; the partial sill, σ^2, is interpretable as the amount of variability that is explained by spatial structure; and the range, ϕ, is interpretable as the maximum distance between two locations beyond which they may no longer be considered spatially correlated. We estimated these parameters and the regression coefficients α via profile maximum likelihood. Once these parameters were estimated, we obtained predictions at unobserved locations by taking the mean of X[t] conditional on X[t]* and the estimated exposure model parameters.

Because our measurement error correction methods rely on a correctly specified exposure model, we took care to choose the best-fitting kriging variogram to model our data. We initially fit exponential variograms for all four pollutants and investigated whether plots of the estimated variogram appeared to fit the empirical variogram well.
If they appeared to fit poorly, we investigated spherical and cubic variograms. The exponential variogram fit well for EC, OC, and S, but provided a poor fit for Si (data not shown). We therefore examined cubic and spherical variograms, found that the spherical variogram provided a much better fit, and used it to model Si in our exposure models.

As a comparison to our primary kriging models, we also derived predictions from PLS alone without fitting a kriging variogram. This is analogous to a pure LUR model but using the PLS scores instead of actual geographic covariates. For this analysis η and η* were assumed to be independent, and α was estimated using a least-squares fit to the regression of X[t]* on T*. PLS-only predictions at the unobserved locations were then derived as the fitted values from this regression using the PLS scores at the subject locations.

CV and model selection. We used 10-fold CV (Hastie et al. 2001) to assess the models’ prediction accuracy, to select the number of PLS components to use in the final prediction models, and to compare predictions generated using PLS only to our primary models, which used both PLS and universal kriging. Data were randomly assigned to 1 of 10 groups. One group (a “test set”) was omitted, and the remaining groups (a “training set”) were used to fit the model and generate test set predictions. Each group played the role of test set until predictions were obtained for the entire data set. At each iteration, the following steps were taken to cross-validate our primary models (similar steps were followed to derive cross-validated predictions that used PLS only):

• PLS was fit using the training set, and K scores were computed for the test set, for K = 1,…,10.
• Universal kriging parameters θ and coefficients α were estimated via profile maximum likelihood using the training set. The first K PLS scores correspond to T* in Equation 1, for K = 1,…,10.
• Predictions were derived using the first K PLS components and the corresponding universal kriging model, using kriging parameters estimated from the training set.

We used the R package pls to fit the PLS; universal kriging was performed using the R package geoR. The best-performing models were selected out of those that used both PLS and kriging based on their cross-validated root mean squared error of prediction (RMSEP) and corresponding R^2. For a data set with N* observations X[t]i* and corresponding predictions X̂[t]i*, the formulae for these performance metrics are given by

RMSEP = √[(1/N*) Σi (X[t]i* – X̂[t]i*)^2] [2]

R^2 = max(0, 1 – RMSEP^2/Var(X[t]*)). [3]

These metrics are sensitive to scale; accordingly, they are useful for evaluating model performance for a given pollutant but not for comparing models across pollutants.

Health modeling. Disease model. Multivariable linear regression models were used to estimate the effects of each individual PM[2.5] component exposure on CIMT. Each model included a single PM[2.5] component along with a vector of subject-specific covariates. Let Y be the 5,298 × 1 vector of health outcomes for the 5,298 MESA participants included in the analysis, W the 5,298 × 1 vector of exposure predictions on the untransformed scale, and Z a matrix of potential confounders. We assumed linear relationships between Y, the true exposures X, and Z, and fit the following equation via ordinary least squares (OLS):

Y = β[0] + β[x]X + Zβ[z] + ε. [4]
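To make the first-stage machinery concrete, the following is a minimal sketch of PLS dimension reduction followed by universal kriging prediction. It is not the authors' code: the data are simulated, the variogram parameters (τ^2, σ^2, ϕ) are plugged in rather than estimated by profile maximum likelihood, and Python's sklearn/scipy stand in for the R packages pls and geoR used in the actual analysis.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Simulated stand-ins: 200 monitors, 50 subject sites, 30 GIS covariates.
n_mon, n_sub, p, k = 200, 50, 30, 2
xy_mon = rng.uniform(0, 100, (n_mon, 2))     # monitor coordinates
xy_sub = rng.uniform(0, 100, (n_sub, 2))     # subject coordinates
R_star = rng.normal(size=(n_mon, p))         # GIS covariates at monitors
R_new = rng.normal(size=(n_sub, p))          # GIS covariates at subjects
x_star = R_star[:, :3].sum(axis=1) + rng.normal(size=n_mon)  # sqrt-scale conc.

# PLS: R*H = T*, scores chosen to covary maximally with the concentrations.
pls = PLSRegression(n_components=k).fit(R_star, x_star)
T_star = pls.transform(R_star)               # N* x k scores at monitors
T_new = pls.transform(R_new)                 # N x k scores at subjects (T = RH)

# Universal kriging with an exponential covariance:
# cov(h) = sigma2 * exp(-h/phi), plus nugget tau2 on the diagonal.
tau2, sigma2, phi = 0.3, 1.0, 20.0           # assumed fixed here, not ML fits
Sigma = sigma2 * np.exp(-cdist(xy_mon, xy_mon) / phi) + tau2 * np.eye(n_mon)
Sigma_inv = np.linalg.inv(Sigma)

# Generalized-least-squares estimate of alpha (coefficients on PLS scores).
A = np.column_stack([np.ones(n_mon), T_star])
alpha = np.linalg.solve(A.T @ Sigma_inv @ A, A.T @ Sigma_inv @ x_star)

# Kriging predictor: conditional mean of X_t given X_t* under Equation 1.
C = sigma2 * np.exp(-cdist(xy_sub, xy_mon) / phi)
A_new = np.column_stack([np.ones(n_sub), T_new])
pred_sqrt = A_new @ alpha + C @ Sigma_inv @ (x_star - A @ alpha)
print(pred_sqrt[:5] ** 2)                    # back-transform to raw scale
```

Setting C to zero in the last step recovers the PLS-only comparison model, in which η and η* are treated as independent.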
Measurement error correction. The model in Equation 4 was fit using the predicted exposures W instead of the true exposures as the covariate of interest. Using predictions rather than true exposures in health modeling introduces two sources of measurement error that potentially influence the behavior of β̂[x]. Berkson-like error arises from smoothing the true exposure surface and could inflate the SE of β̂[x]. Classical-like error arises from estimating the exposure model parameters α and θ. The classical-like error potentially inflates the SE of β̂[x] and could also bias the point estimate.

We implemented the parameter bootstrap, an efficient method to assess and correct for the effects of measurement error. [See Szpiro et al. (2011b) for additional background and details.] We used the parameter bootstrap in the context of predictions that use both PLS and universal kriging; the approach would be very similar if PLS alone was used (although we did not implement that correction here). The steps are as follows:

1. Estimate a sampling density for α̂ and Θ̂ with a multivariate normal distribution.
2. For j = 1,…,B bootstrap samples:
   a. Simulate new “observed” bootstrap exposures at monitoring locations from Equation 1 and health outcomes from Equation 4.
   b. Sample new exposure model parameters (α̂[j], Θ̂[j]) from the sampling density estimated in step 1, using a constant covariance matrix multiplied by a scalar λ ≥ 0. λ controls the variability of (α̂[j], Θ̂[j]): the larger λ is, the greater the variability of (α̂[j], Θ̂[j]).
   c. Use the simulated health outcomes and newly sampled exposure model parameters to derive W[j].
   d. Calculate β̂[x,j] using W[j] by OLS.
3. Let E[λ](β̂[x]^B) denote the empirical mean of the β̂[x,j]. The estimated bias is defined as Bias[λ](β̂[x]) = E[λ](β̂[x]^B) – E[0](β̂[x]^B), with corresponding bias-corrected effect estimate β̂[x,λ]^corrected = β̂[x] – Bias[λ](β̂[x]).
4. Estimate the bootstrap SE as the empirical SD of the β̂[x,j]: SE[λ](β̂[x]) = √{[1/(B – 1)] Σj [β̂[x,j] – E[λ](β̂[x]^B)]^2}.

For our implementation of the parameter bootstrap, we set B = 30,000 and λ = 1. The goal of the parameter bootstrap is to approximate the sampling properties of the measurement error-impacted β̂[x] that would be estimated if we performed our two-stage analysis with many actual realizations of monitoring observations and subject health data sets. Accordingly, step 2(a) gives us B new “realizations” of our data. For λ = 1, step 2(b) accounts for the classical-like error by resampling the exposure model parameters. Step 2(c) accounts for the Berkson-like error by smoothing the true exposure surface. Step 2(d) then calculates B new β̂[x,j]’s, the sampling properties of which have incorporated all sources of measurement error. Comparing these to the mean of bootstrapped β̂[x,j] derived using fixed exposure model parameters (i.e., λ = 0) gives us an approximation of the bias induced by the classical-like error (step 3), and the empirical SD approximates the SE that accounts for both sources of measurement error (step 4).

We also implemented the parameter bootstrap for λ = 0. This is equivalent to the “partial parametric bootstrap” described by Szpiro et al. (2011b), which accounts for the Berkson-like error only because the exposure surface is still smoothed, but with fixed parameters. A desirable trait of the parameter bootstrap is the ability to “tune” the amount of the classical-like error by varying λ, which allows us to investigate how variability in the sampling distribution of (α̂[j], Θ̂[j]) affects the bias of β̂[x]. This can be useful in refining our bootstrap bias estimates by simulation extrapolation (SIMEX) (Stefanski and Cook 1995).
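To illustrate the logic of steps 1–4, here is a stripped-down parameter bootstrap in which a toy one-covariate exposure model stands in for the full PLS + universal kriging fit. Everything here is invented for illustration (the data, parameter values, and B), so this is a sketch of the λ-tuning idea, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy generative model: exposure x = r*alpha + eta at any site, and health
# outcome y = b0 + bx*x + eps; r plays the role of the PLS scores.
alpha, bx, b0, s_eta, s_eps = 1.0, 0.5, 0.2, 0.7, 0.5
n_mon, n_sub, B = 150, 400, 2000
r_mon, r_sub = rng.normal(size=n_mon), rng.normal(size=n_sub)

# Step 1: fit the exposure model once; alpha-hat is approximately normal
# with the usual least-squares sampling variance.
x_mon = r_mon * alpha + rng.normal(0, s_eta, n_mon)
a_hat = (r_mon @ x_mon) / (r_mon @ r_mon)
var_a = s_eta**2 / (r_mon @ r_mon)

def boot(lam):
    betas = np.empty(B)
    for j in range(B):
        # 2a. simulate "observed" exposures (Eq. 1 analogue) and outcomes (Eq. 4)
        x_sub = r_sub * alpha + rng.normal(0, s_eta, n_sub)
        y = b0 + bx * x_sub + rng.normal(0, s_eps, n_sub)
        # 2b. sample exposure model parameters; lam scales their covariance
        a_j = rng.normal(a_hat, np.sqrt(lam * var_a))
        # 2c. derive smoothed predicted exposures W_j
        w = r_sub * a_j
        # 2d. OLS health-effect estimate using W_j instead of the true x_sub
        X = np.column_stack([np.ones(n_sub), w])
        betas[j] = np.linalg.lstsq(X, y, rcond=None)[0][1]
    return betas

b_fixed, b_full = boot(0.0), boot(1.0)
print("estimated bias:", b_full.mean() - b_fixed.mean())   # step 3
print("bootstrap SE:  ", b_full.std(ddof=1))               # step 4
```

The λ = 0 run corresponds to the partial parametric bootstrap described above: the exposure surface is still smoothed, but the parameters stay fixed.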
(For additional information on our approach to SIMEX and the results of applying it to the MESA data, see Supplemental Material, pp. 2–3 and Figure S1.)

Data. Monitoring data. Mean concentrations of the four pollutants according to monitoring network are shown in Table 1. EC and OC concentrations measured by CSN monitors tended to be higher than concentrations measured by IMPROVE monitors. Average Si and S concentrations measured by CSN monitors were also higher than the IMPROVE averages; however, relative to their SDs, the differences between CSN and IMPROVE monitors in Si and S concentrations were not as great as the differences in EC and OC concentrations.

Table 1. Summary data for observed pollution concentrations (mean ± SD) at monitoring networks, predicted concentrations (mean ± SD) for the MESA cohort at exam 1, and summaries of selected LUR covariates.

Geographic covariates. The geographic variables that we used are listed in Table 2. Most of these variables were used for modeling all four pollutants, but not all. The following variables were used for modeling Si and S but not EC and OC: PM[2.5] and PM[10] emissions, streams and canals within a 3-km buffer, other urban or built-up land use within a 400-m buffer, lakes within a 10-km buffer, industrial and commercial complexes within a 15-km buffer, and herbaceous rangeland within a 3-km buffer. On the other hand, the following variables were used for modeling EC and OC but not Si and S: industrial land use within 1- and 1.5-km buffers.

Table 2. LUR covariates (Figure 2 abbreviations) and (where applicable) covariate buffer sizes that made it through preprocessing and were considered by PLS.

The distributions of selected geographic covariates are shown according to monitoring network and MESA locations in Table 1. Although relatively few monitors belonging to either IMPROVE or CSN were within 150 m of an A1 road, there was a larger proportion of CSN monitors within 150 m of an A3 road (44%) than IMPROVE monitors (19%), consistent with the placement of CSN monitors in more urban locations compared with IMPROVE monitors (Table 1). The median distance to commercial and service centers was much smaller for CSN monitors (127 m vs. 4,696 m), and the median population density was much larger for CSN monitors (805 persons/mi^2) than for IMPROVE monitors (only 3 persons/mi^2). Median summer NDVI values within 250 m were slightly smaller for CSN monitors than for IMPROVE monitors, consistent with the placement of IMPROVE monitors in greener areas. Geographic covariate distributions among MESA participant locations were more consistent with the CSN monitors, as is especially evident for the number of sites < 150 m from an A3 road and median population density (Table 1). Density plots of the geographic covariates for monitoring and subject locations indicated noticeable overlap for all geographic covariates (data not shown), suggesting that differences in geographic covariates between monitor and MESA locations reflected the concentration of MESA subjects in urban locations, not extrapolation beyond our data.

MESA cohort. Distributions of health model covariates among MESA cohort participants are summarized in Table 3. Mean CIMT was 0.68 ± 0.19 mm and mean age was 62 ± 10 years; 52% of participants were female; 39% were white, 12% Chinese American, 27% African American, and 22% Hispanic; 44% had hypertension and 15% used statins. Covariate information was determined by questionnaire (Bild et al. 2002).
The highest percentage of participants resided in Los Angeles (19.7%), but the distribution across the six cities was quite homogeneous. Only the 5,298 participants with complete data for all the selected model covariates listed in Table 3 were included in the analysis.

Table 3. Subject-specific covariates for the MESA cohort used in health modeling.

Spatial prediction models. Model evaluation. The selected models (those with the lowest cross-validated RMSEP and hence the highest R^2) all used PLS and universal kriging. For all four PM[2.5] components and for all numbers of PLS scores, kriging improved prediction accuracy, as indicated by the R^2 and RMSEP statistics for the selected prediction models corresponding to the best-performing PLS-only and PLS + universal kriging models (Table 4). Comparing the R^2 with and without universal kriging indicates that EC and OC were not much improved by kriging, whereas universal kriging improved prediction accuracy for Si and even more so for S. The ratio of the nugget to the partial sill (i.e., τ^2/σ^2) also supports improved predictions with spatial smoothing by kriging. For a fixed range, smaller values of this ratio indicate that concentrations at nearby locations receive greater weight when kriging. We see this relationship in Table 4, where τ^2/σ^2 was large when universal kriging did little to improve prediction accuracy, and very small when universal kriging helped improve prediction accuracy.

Table 4. Cross-validated R^2 and RMSEP for each component of PM[2.5], for both primary models and comparison PLS-only models, and the estimated kriging parameters from the likelihood fit on the entire data set for each pollutant.

As a sensitivity analysis we also carried out CV using nearest-monitor exposure estimates. This method performed very poorly for EC and OC (R^2s of 0 and 0.06, respectively), relatively poorly for Si (R^2 = 0.36), but performed well for S (R^2 = 0.88).

Interpretation of PLS. Figure 2 illustrates the geographic covariates that were most important for explaining pollutant variability. Specifically, Figure 2 summarizes the p × 1 vector m, the vector such that Rm equals the 5,298 exposures predicted with PLS only. Each element of m is a weight for a corresponding geographic covariate. Positive elements in m (i.e., values > 0 in Figure 2) indicate that higher values of the geographic covariate were associated with higher predicted exposure; the larger the absolute value of an element in m, the more the corresponding geographic covariate contributed to exposure prediction.

Figure 2. Coefficients of the PLS fit, where the coefficients describe the associations of each geographic covariate with exposure for (A) EC, (B) OC, (C) Si, and (D) S. The size of each circle represents covariate buffer size, with larger circles indicating larger buffers. Each closed circle for “distance to feature” represents a different feature (listed in Table 2): A1 road, nearest road, airport, large airport, port, coastline, commercial or service center, railroad, and rail yard. Variable abbreviations and buffer sizes are indicated in Table 2.

Most of the variables shown here were used for modeling all four pollutants, but not all. Variables used for modeling Si and S but not EC and OC were PM[2.5] and PM[10] emissions, streams and canals within a 3-km buffer, other urban or built-up land use within a 400-m buffer, lakes within a 10-km buffer, industrial and commercial complexes within a 15-km buffer, and herbaceous rangeland within a 3-km buffer.
The variables used for modeling EC and OC but not Si and S were industrial land use within 1- and 1.5-km buffers. Population density was associated with larger predicted values of all pollutants, particularly for EC, OC, and S. Industrial land use within the smallest buffer was very predictive of EC and OC, and evergreen forest land within a given buffer was strongly predictive of decreases in S. NDVI, industrial land use, emissions, and line-length variables were positively associated with all exposures except Si, whereas all the distance-to-feature variables were negatively associated with all exposures except Si. The NDVI variables were more important for prediction of OC and S than they were for EC. For Si, the NDVI and transitional land use variables appeared to be the most informative for prediction, with NDVI negatively and transitional land use positively associated with Si exposure. Distance to features appeared to be informative for all four pollutants.

Exposure predictions. Figure 1 shows predicted concentrations across the United States, with finer detail illustrated for St. Paul, Minnesota. The EC and OC predictions were much higher in the middle of urban areas, and quickly dissipated farther from urban centers. S predictions were high across the midwestern and eastern states and in the Los Angeles area, and lower in the plains and mountains. Si predictions were low in most urban areas, and high in desert states. Mean EC and OC exposure concentrations predicted for MESA participants were 0.74 ± 0.18 and 2.17 ± 0.36 μg/m^3, respectively (Table 1). Mean predicted Si and S exposure concentrations were 0.09 ± 0.03 μg/m^3 and 0.78 ± 0.15 μg/m^3, respectively.

Health models. The results from the naïve health model that did not include any measurement error correction, as well as the results from the health model that included bootstrap-corrected point estimates and SEs of β̂[x], are displayed in Table 5. The naïve analysis indicated significant positive associations (p < 0.05) of CIMT with OC, Si, and S. There was also a positive but nonsignificant association between CIMT and EC. SEs for the EC and OC health effects were virtually unchanged when measurement error correction was implemented, whereas the bootstrap-corrected SEs for Si and S were about 50% larger than their respective naïve estimates. The estimated biases resulting from the classical-like measurement error were so small as to be uninteresting from an epidemiologic perspective: the point estimates for all four pollutants were unchanged out to three decimal places after measurement error correction.

Table 5. Point estimates ± SEs and 95% CIs for the different pollutants, using naïve analysis and with bootstrap correction for measurement error in the covariate of interest.

Summary. Our comprehensive two-stage approach to estimating long-term effects of air pollution exposure includes a national prediction model to estimate exposures to individual PM[2.5] components and corrects for measurement error in the epidemiologic analysis using a methodology that accounts for differing amounts of spatial structure in the exposure surfaces.
In a case study of four components of PM[2.5] and measurement error–corrected associations between these components and CIMT in the MESA cohort, corrected SEs corresponding to pollutants that exhibited significant spatial structure (i.e., Si and S) were about 50% larger than naïve estimates, whereas corrected SE estimates for EC and OC were very similar to the naïve estimates.

National exposure models. We find that a national approach to exposure modeling is reasonable and performs well in terms of prediction accuracy. Our primary PLS + universal kriging models resulted in cross-validated R^2 values ranging from 0.62 (for predicting Si concentrations) to 0.95 (for predicting S) across the four PM[2.5] components. Use of kriging improved the cross-validated R^2 for all four pollutants compared with models that used PLS only, although the improvement was not equal across all four pollutants. These results are useful in terms of understanding the spatial nature of our exposure surfaces. For EC and OC, the R^2 improved by ≤ 0.09 when kriging was used compared to when PLS alone was used, indicating little large-scale spatial structure in these pollutants. For Si, the R^2 improved from 0.36 to 0.62; and for S, from 0.63 to 0.95. This indicates that S (and to a lesser extent Si) had substantial large-scale spatial structure that kriging was able to exploit. For all models, using kriging improved R^2, indicating that no prediction accuracy was lost (and quite a bit stood to be gained, when spatial structure was present) by using PLS + universal kriging as opposed to using PLS alone.

Our results also suggest that exposure models such as the ones we have built may be preferable in many cases to simpler approaches such as nearest-monitor interpolation. Our models produced cross-validated R^2 values that were higher than the nearest-monitor approach, and our results indicate that unless there is considerable spatial structure in the exposure surface, a substantial amount of prediction accuracy may be lost when the nearest-monitor approach is used.

We used two-stage modeling instead of joint modeling of exposure and health for a variety of reasons. One is pragmatic: joint modeling is computationally intensive, so our two-stage approach is especially desirable when modeling multiple pollutants. Joint modeling may also be more sensitive to outliers in the health data. Two-stage modeling also appeals more intuitively in the context of modeling multiple health outcomes because it assigns one exposure per participant that can then be used to model a number of different health outcomes. Joint modeling, on the other hand, would assign different levels of the same pollutant depending on what health outcome was being modeled.

Epidemiologic case study. In this case study, we focused on four PM[2.5] components selected to gain insight into the sources or features of PM[2.5] that might contribute to the effects of PM[2.5] on cardiovascular disease. EC and OC were chosen as markers of primary emissions from combustion processes, with OC also including contributions from secondary organic aerosols formed from atmospheric chemical reactions; Si was chosen as a marker of crustal dust; and S was chosen as a marker of sulfate, an inorganic aerosol formed secondarily from atmospheric chemical reactions (Vedal et al., in press). The mechanisms whereby exposures to PM[2.5] or PM[2.5] components produce cardiovascular effects such as atherosclerosis are not well understood, although several mechanisms have been proposed (Brook et al. 2010).
[For discussion of other studies examining the effects of these pollutants, see Vedal et al. (in press).]

The relatively poor performance of nearest-monitor interpolation for EC, OC, and Si raises concerns about epidemiologic inferences based on predictions derived from that method. For S, the only pollutant for which our models and nearest-monitor interpolation performed comparably, the estimated increase in CIMT for a 1-unit increase in exposure based on nearest-monitor interpolation was 0.074 ± 0.018, comparable to the naïve inference made using predictions from our exposure models (0.055 ± 0.017). However, nearest-monitor interpolation offers no way to correct for measurement error, which is another significant advantage of our models.

Naïve health analyses based on exposure predictions from our national models indicated significant associations of CIMT with 1-unit increases in average OC, Si, and S, but not EC. Using the parameter bootstrap to account and correct for measurement error led to noticeably larger SEs and wider CIs for Si and S; however, OC, Si, and S were still significantly associated with CIMT even after correcting for measurement error.

Measurement error correction. For EC and OC, using PLS alone was sufficient to make accurate predictions, whereas the spatial smoothing from universal kriging substantially improved prediction accuracy for Si and S. It is accordingly no coincidence that the bootstrap-corrected SE estimates for EC and OC were unchanged from the naïve estimates, whereas the corrected SE estimates for Si and S were about 50% larger (and the resulting 95% CIs 50% wider) than their respective naïve estimates. The fact that the EC and OC exposure predictions were derived mostly from the PLS-only models, which assumed independent residuals, implies that the Berkson-like error was almost pure Berkson error (i.e., independent across location), which was correctly accounted for by naïve SE estimates. On the other hand, much more smoothing took place for Si and S, which induced spatial correlation in the residual difference between true and predicted exposure. Accordingly, SEs that correctly account for the Berkson-like error in these two pollutants are inflated because the correlated errors in the predictions translate into correlated residuals in the disease model that are not accounted for by naïve SE estimates (Szpiro et al. 2011b). The fact that the SE estimates from the parameter bootstrap using λ = 1 (which accounts for both Berkson-like and classical-like error) and using λ = 0 (which accounts only for Berkson-like error) were so similar further indicates that the larger corrected SE estimates were most likely a result of the Berkson-like error. None of our measurement error analyses indicated that any important bias was induced by the classical-like error.

Limitations and model considerations. Although our exposure models performed well, there is still room for improvement in prediction accuracy, especially for the EC, OC, and Si models, which had cross-validated R^2 that could be improved upon. For these models it is possible that inclusion of additional geographic covariates in the PLS would help improve model performance. Examples include wood-burning sources within a given buffer for EC and OC concentrations, or dust and sand sources for Si. These covariates are currently not available in our databases.
Furthermore, although it is possible to interpret the individual covariates in PLS components (Figure 2), such interpretations need to be regarded with caution because inclusion of many correlated covariates can lead to apparent associations that are counter-intuitive and the opposite of what might be expected scientifically. Finally, PLS does not consider interactions or nonlinear combinations of the geographic covariates, factors which could improve model performance.

Implications and future directions. Our results show that careful investigation of the exposure model characteristics can help to clarify the implications for the subsequent epidemiologic analyses that use the predicted exposures. As noted by Szpiro et al. (2011a), an overarching framework that considers the end goal of health modeling seems more appealing than treating exposure models as if they exist for their own sake. This analysis serves as an example that will inform ongoing efforts by our group and others to construct and utilize exposure prediction models that are most suitable for epidemiologic studies.

Our epidemiologic inference was based on one health model per pollutant. One might reasonably be interested in how multiple pollutants jointly affect health. However, current literature for measurement error correction does not address models that use multiple predicted pollutants as exposures. Our group is currently working on methods to address this challenge.

References

Bergen S, Sheppard L, Sampson PD, Kim SY, Richards M, Vedal S, et al. 2012. A National Model Built with Partial Least Squares and Universal Kriging and Bootstrap-Based Measurement Error Correction Techniques: An Application to the Multi-Ethnic Study of Atherosclerosis. Berkeley, CA:Berkeley Electronic Press, UW Biostatistics Working Paper Series, Working Paper 386. Available: http://biostats.bepress.com/uwbiostat/paper386/ [accessed 16 July 2013].

Bild DE, Bluemke DA, Burke GL, Detrano R, Diez-Roux AV, Folsom AR, et al. 2002. Multi-Ethnic Study of Atherosclerosis: objectives and design. Am J Epidemiol 156(9):871–881; doi:10.1093/aje/kwf113.

Brook RD, Rajagopalan S, Pope CA III, Brook JR, Bhatnagar A, Diez-Roux AV, et al. 2010. Particulate matter air pollution and cardiovascular disease: an update to the scientific statement from the American Heart Association. Circulation 121(6):2331–2378; doi:10.1161/CIR.0b013e3181dbece1.

Cressie N. 1992. Statistics for spatial data. Terra Nova 4(5):613–617; doi:10.1111/j.1365-3121.1992.tb00605.x.

Hastie T, Tibshirani R, Friedman J. 2001. The Elements of Statistical Learning: Data Mining, Inference and Prediction, Vol 1. Springer Series in Statistics. New York:Springer Publishing.

Hoek G, Beelen R, de Hoogh K, Vienneau D, Gulliver J, Fischer P, et al. 2008. A review of land-use regression models to assess spatial variation of outdoor air pollution. Atmos Environ 42(33):7561–7578; doi:10.1016/j.atmosenv.2008.05.057.

Kaufman JD, Adar SD, Allen RW, Barr RG, Budoff MJ, Burke GL, et al. 2012. Prospective study of particulate air pollution exposures, subclinical atherosclerosis, and clinical cardiovascular disease: the Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air). Am J Epidemiol 176(9):825–837; doi:10.1093/aje/kws169.

Kim SY, Sheppard L, Kim H. 2009. Health effects of long-term air pollution: influence of exposure prediction methods. Epidemiology 20(3):442–450; doi:10.1097/EDE.0b013e31819e4331.

Mercer LD, Szpiro AA, Sheppard L, Lindström J, Adar SD, Allen RW, et al. 2011. Comparing universal kriging and land-use regression for predicting concentrations of gaseous oxides of nitrogen (NOx) for the Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air). Atmos Environ 45(26):4412–4420; doi:10.1016/j.atmosenv.2011.05.043.

Miller KA, Siscovick DS, Sheppard L, Shepherd K, Sullivan JH, Anderson GL, et al. 2007. Long-term exposure to air pollution and incidence of cardiovascular events in women. N Engl J Med 356(5):447–458; doi:10.1056/NEJMoa054409.

National Oceanic and Atmospheric Administration. 2013. NOAA, Office of Satellite and Product Operations. GVI—Normalized Difference Vegetation Index. Available: http://www.ospo.noaa.gov/Products/land/gvi/NDVI.html [accessed 9 July 2013].

Pope CA III, Burnett RT, Thun MJ, Calle EE, Krewski D, Ito K, et al. 2002. Lung cancer, cardiopulmonary mortality, and long-term exposure to fine particulate air pollution. JAMA 287(9):1132–1141.

Samet JM, Dominici F, Curriero FC, Coursac I, Zeger SL. 2000. Fine particulate air pollution and mortality in 20 US cities, 1987–1994. N Engl J Med 343(24):1742–1749; doi:10.1056/NEJM200012143432401.

Sampson PD, Richards M, Szpiro AA, Bergen S, Sheppard L, Larson TV, et al. 2013. A regionalized national universal kriging model using partial least squares regression for estimating annual PM[2.5] concentrations in epidemiology. Atmos Environ 75:383–392; doi:10.1016/j.atmosenv.2013.04.015.

Sampson PD, Szpiro AA, Sheppard L, Lindström J, Kaufman JD. 2009. Pragmatic estimation of a spatio-temporal air quality model with irregular monitoring data. Atmos Environ 45(36):6593–6606.

Stefanski LA, Cook JR. 1995. Simulation-extrapolation: the measurement error jackknife. J Am Stat Assoc 90(432):1247–1256.

Szpiro AA, Paciorek CJ, Sheppard L. 2011a. Does more accurate exposure prediction necessarily improve health effect estimates? Epidemiology 22(5):680–685; doi:10.1097/EDE.0b013e3182254cc6.

Szpiro AA, Sheppard L, Lumley T. 2011b. Efficient measurement error correction with spatially misaligned data. Biostatistics 12(4):610–623; doi:10.1093/biostatistics/kxq083.

Tibshirani R. 1996. Regression shrinkage and selection via the lasso. J R Stat Soc B 58(1):267–288.

U.S. Census Bureau. 2013. Census Feature Class Codes (CFCCs). Available: http://www.maris.state.ms.us/pdf/cfcccodes.pdf [accessed 16 July 2013].

U.S. EPA (U.S. Environmental Protection Agency). 2009. Integrated Science Assessment for Particulate Matter. EPA/600/R-08/139F. Available: http://www.epa.gov/ncea/pdfs/partmatt/Dec2009/PM_ISA_full.pdf [accessed 1 July 2013].

Vedal S, Kim SY, Miller KA, Fox JR, Bergen S, Gould T, et al. In press. NPACT Epidemiologic Study of Components of Fine Particulate Matter and Cardiovascular Disease in the MESA and WHI-OS Cohorts. Research Report 178. Boston, MA:Health Effects Institute.
{"url":"http://ehp.niehs.nih.gov/1206010/","timestamp":"2014-04-21T09:54:57Z","content_type":null,"content_length":"103141","record_id":"<urn:uuid:a7c13bfb-fe7b-4732-a7ec-38fb9ad50b3d>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
13. Digital Control

Models for discrete time systems have been described in Chapter 1. There we saw that digital and continuous systems were actually quite close. Hence it is often true that digital responses approach the corresponding continuous response as the sampling period goes to zero. For this reason, in the remainder of the book we will present continuous and discrete ideas in parallel. The purpose of the current chapter is to provide a smooth transition to this latter work by highlighting the special issues associated with digital control. In particular, the chapter covers:

• why one cannot simply treat digital control as if it were exactly the same as continuous control, and
• how to carry out designs for digital control systems so that the at-sample response is exactly treated.

• There are a number of ways of designing digital control systems:
  □ design in continuous time and discretize the controller prior to implementation;
  □ model the process by a digital model and carry out the design in discrete time.
• Continuous time design which is discretized for implementation:
  □ Continuous time signals and models are utilized for the design.
  □ Prior to implementation, the controller is replaced by an equivalent discrete time version (a small discretization sketch follows this summary).
  □ "Equivalent" means that the design simply maps s to a discrete approximation, for example the bilinear (Tustin) map s → (2/Δ)(z − 1)/(z + 1), where Δ is the sampling period.
  □ Caution must be exercised since the analysis was carried out in continuous time and the expected results are therefore based on the assumption that the sampling rate is high enough to mask sampling effects.
  □ If the sampling period is chosen carefully, in particular with respect to the open and closed loop dynamics, then the results should be acceptable.
• Discrete design based on a discretized process model:
  □ First the model of the continuous process is discretized.
  □ Then, based on the discrete process, a discrete controller is designed and implemented.
  □ Caution must be exercised with so-called intersample behavior: the analysis is based entirely on the behavior as observed at discrete points in time, but the process has a continuous behavior also between sampling instants.
  □ Problems can be avoided by refraining from designing solutions which appear feasible in a discrete time analysis, but are known to be unachievable in a continuous time analysis (such as removing non-minimum phase zeros from the closed loop!).
• The following rules of thumb will help avoid intersample problems if a purely digital design is carried out:
  □ Sample at about 10 times the desired closed loop bandwidth.
  □ Use simple anti-aliasing filters to avoid excessive phase shift.
  □ Never try to cancel or otherwise compensate for discrete sampling zeros.
  □ Always check the intersample response.
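As a small illustration of the first route (design in continuous time, then discretize), the sketch below maps a continuous-time PI controller to discrete time with the bilinear (Tustin) transform. The controller gains, the closed loop bandwidth, and hence the sampling period are all invented for the example, and scipy merely stands in for whatever design tool is actually used.

```python
from scipy import signal

# Continuous PI controller C(s) = Kp + Ki/s = (Kp*s + Ki)/s   (gains assumed)
Kp, Ki = 2.0, 8.0
num, den = [Kp, Ki], [1.0, 0.0]

# Rule of thumb from above: sample at ~10x the desired closed loop bandwidth.
bw_hz = 1.0                      # desired closed loop bandwidth (assumed)
dt = 1.0 / (10.0 * bw_hz)        # sampling period (Delta)

# Bilinear map s -> (2/Delta) * (z - 1) / (z + 1)
numd, dend, _ = signal.cont2discrete((num, den), dt, method='bilinear')
print("C(z) numerator coefficients:  ", numd.ravel())
print("C(z) denominator coefficients:", dend)

# With c = numd.ravel() and d = dend, the implementable difference equation is:
#   u[k] = -d[1]*u[k-1] + c[0]*e[k] + c[1]*e[k-1]
```

Checking the frequency response of C(z) against C(s) up to the intended bandwidth is a cheap way to confirm that the sampling rate is high enough to mask the discretization, in line with the cautions listed above.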
{"url":"http://csd.newcastle.edu.au/chapters/chapter13.html","timestamp":"2014-04-20T23:28:09Z","content_type":null,"content_length":"6351","record_id":"<urn:uuid:df737157-0d39-4278-bbf6-d8d7a442b229>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
Periodic Function problem

Let $f(x)=\cos x+\cos \pi x$.

a) Show that $f(x)=2$ has a unique solution.
b) Show that $f(x)$ is not periodic.

My proof so far: Now, I realize that f(x) = 2 iff x = 0 in this case, but will that be enough for a proof? And since f(x) = 2 only once, then it cannot be periodic.

Last edited by tttcomrader; August 26th 2007 at 03:17 PM.

i would think that's enough. how did you show that f(x) = 2 iff x = 0?

Really, I only know that cos x = 1 and cos πx = 1 when x = 0, and I just can't find anything else x can equal to in order to get the same result.

well, that won't work as a proof! use the sum-to-product formula for cosine: $\cos \alpha + \cos \beta = 2 \cos \left( \frac {\alpha + \beta }{2} \right) \cos \left( \frac {\alpha - \beta}{2} \right)$ ...or maybe, you can continue with your line of thought, but add that since the range of cosine is [-1,1], the only way we can get two is if both terms are 1, since neither can be greater than 1, and if one of them is less than 1, the other would have to be greater than 1 to add up to 2, etc.

Last edited by Jhevon; August 26th 2007 at 03:53 PM.

The following is true: $a \le 1,\;b \le 1,\quad a + b = 2\quad \Rightarrow \quad a = 1\;\& \;b = 1$. $\cos (t) = 1\quad \Rightarrow \quad t = 2k\pi$. $\cos (\pi t) = 1\quad \Rightarrow \quad \pi t = 2k\pi \quad \Rightarrow \quad t = 2k$. $2k\pi = 2k\quad \Rightarrow \quad k = 0$. Thus the only solution is $x=0$!

This is not quite right. The two k's need not be the same, so we have $t=2k_1 \pi$ and $t=2 k_2$ for some $k_1, k_2 \in \bold{Z}$; then eliminating $t$ between these we have $k_1 \pi=k_2$, but as $\pi$ is irrational this is impossible unless $k_1=k_2=0$. (PS: it is obvious that the proof must hinge on $\pi$ being irrational, so any proof that does not use this fact must be deficient in some manner.)

Plato solved this problem the best way. There is a puzzle which asks if $f(x) = \sin x + \sin x^{\circ}$ is a periodic function. The solution is to note that $f'(x)$ is also periodic and its maximum value is at $x=0$. And argue like above that there is no other maximum value.

If the proof that I posted contained all the symbols that I had intended, then it would be correct. I do understand that we need two different possible values of K. I thought that I had used a K’ for the second, but several edits must have dropped some symbols; what was posted above is flawed. The function is $f(x) = \cos (x) + \cos (\pi x)$ and as noted both of those terms must be 1 if the sum is two. We get $\cos (x) = 1\quad \Rightarrow \quad x = 2K\pi$. Because it is the same x in the sum we get the following: $\cos \left[ {\pi \left( {2K\pi } \right)} \right] = \cos \left( {2\pi ^2 K} \right) = 1\quad \Rightarrow \quad 2\pi ^2 K = 2\pi K'$. This means that $\pi K = K'$, and $\pi$ being irrational means $K = K' = 0$ or $x = 0$.
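A note for anyone reading along, sketching how part (b) falls straight out of part (a):

$$f(0) = \cos 0 + \cos 0 = 2, \qquad \text{so if } f(x+T) = f(x) \text{ for all } x \text{ with } T > 0, \text{ then } f(T) = f(0) = 2.$$

By part (a), $x = 0$ is the only solution of $f(x) = 2$, so $T = 0$, a contradiction; hence $f$ is not periodic.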
{"url":"http://mathhelpforum.com/calculus/18079-periodic-function-problem.html","timestamp":"2014-04-16T10:35:11Z","content_type":null,"content_length":"68235","record_id":"<urn:uuid:fc3e1e5e-4250-48b7-bf6b-ad2f82ea4d4a>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00095-ip-10-147-4-33.ec2.internal.warc.gz"}
Hi, what's the better measure when the capital is scarce: NPV or IRR?

Best response: NPV is in general a better measure. But here it is most emphatically better because it will tell you precisely how much cash the project will throw off. For example, suppose a company has $1,000K capital and the choice of only two non-replicable projects:

A: investment $100K, IRR = 50%, NPV = $1 million
B: investment $1,000K, IRR = 10%, NPV = $1.2 million

Even though project A has a higher IRR, project B is preferable because it will give you more cash.

Reply: Thanks. Moreover, if you use the MIRR you will have the same answer given by the NPV, i.e., project B.
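A quick numeric illustration of the accepted answer's point. The cash flows below are invented for the example (they are not the figures quoted above), and the 8% cost of capital is an assumption:

```python
from scipy.optimize import brentq

def npv(rate, cashflows):
    # cashflows[0] is the time-0 investment (negative)
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows):
    # internal rate of return: the rate at which NPV crosses zero
    return brentq(lambda r: npv(r, cashflows), 1e-6, 10.0)

A = [-100, 60, 60, 60]             # small project, $K (hypothetical)
B = [-1000, 380, 380, 380, 380]    # project using all capital, $K (hypothetical)
r = 0.08                           # assumed cost of capital

for name, cf in (("A", A), ("B", B)):
    print(f"{name}: IRR = {irr(cf):.1%}, NPV at 8% = {npv(r, cf):.0f}K")
```

Project A wins on IRR (roughly 36% vs. 19% here), but B adds more absolute value (about $259K vs. $55K at 8%), which is the quantity a capital-constrained firm should care about when projects cannot be replicated.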
{"url":"http://openstudy.com/updates/4e76bce60b8b7d4f6d082daa","timestamp":"2014-04-20T13:49:02Z","content_type":null,"content_length":"30489","record_id":"<urn:uuid:aae7af68-0af6-4fbc-a95c-4e2a5b3426fa>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
Following the analytical model of Marcel Duchamp's With Hidden Noise, we have counted down, by the numbers, beginning from the six external faces of the cubic volume occupied by the piece of sculpture. The aspect of "fiveness" was counted in the measurement of each edge of that cube by the standard of the English inch. The piece is held together materially by four nut/bolt combinations. It was made in an edition of three, and there are three lines of ciphered letters inscribed on each exterior face of the two brass plaques, which contain the one ball of twine. This process of counting down has brought us to the dark--initially empty--space, contained and defined when the brass plaques were bolted over the once-open ends of the toroidal twine ball. To this space we accord the number "zero"; or, let us call it by the name, the "zero-space."

This zero-space, like the ball of twine itself, manifests certain unitary qualities. For one thing, it has been precisely defined, in the sense of being physically contained by elements of the sculpture. In a way, number theory also bolsters this notion because, in the set of natural numbers, zero is considered to be a form of unity.

For the primitive reckoner number is always a number, a quantity, and only a number can have a symbol. Thus it is easy to see how the zero came to be the great stumbling block for the medieval arithmeticians in the West. They found it very hard to give up the old principle of ordering and grouping, in which every unit has its own symbol, which appears when that unit is present and is lacking when that unit is absent. [Karl Menninger, Number Words and Number Symbols: A Cultural History of Numbers, The MIT Press, Cambridge, Mass. (1969), p. 400. See also, pp. 399, 398.]

The other numbers are either odd or even, and it would seem that zero is even, but it does have some properties not shared with the other natural numbers. Zero is neutral with respect to sign; that is, all the other numbers are either positive or negative, but zero is neither. In the "group" of natural numbers, zero is the identity element for the operations of addition and subtraction, but not so for multiplication and division, in which operations the identity element is one. Ambiguity about zero arises in the power series, or exponentiation, since a number raised to the zeroth (0th) power equals unity: X to the 0 = 1. But by another rule, any number multiplied by zero is zero: X times 0 = 0. So, some ask, does 0 to the 0 = 1 or 0?

There is, perhaps, something to be said here for keeping it simple.

The word SIMPLE < Latin sim-plex, "once-folded" (plicare, "to fold") is one of a large group of words whose original stem is the Indo-European root sem....The German word for zero, Null, contains the other Indo-European one-form, oins > Latin unus. The Latin diminutive ending equivalent to "-ling," or the German -chen is -ulus: mus, "mouse"--musc-ulus, "little mouse." Thus from unulus > ullus, "little one, one-ling, any at all." The Indo-European negative symbol n combined with this to form the Latin word n-ullus, "none, not any." The numeral 0 obtained its name "null" because in the medieval view it was "no (numeral)," nulla (figura)....Other members of the same tribe are the English words ON-LY < "one-like," and AL-ONE < "all-one." [Menninger, Number Words, p. 171.]

The two modern numerical glyphs we use to represent the values of zero and one are: for "zero" a closed curve or circle, and for "one" a straight, vertical line segment.
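A side note on the exponentiation puzzle raised above, offered as a gloss rather than as part of the sculpture's arithmetic: modern mathematical practice settles the question by convention, not by theorem. Combinatorics and algebra generally set

$$0^0 = 1,$$

because an empty product must equal one, and because basic identities fail otherwise; for instance, the binomial theorem $(1+x)^n = \sum_{k=0}^{n} \binom{n}{k} x^k$ evaluated at $x = 0$ reads $1 = \binom{n}{0}\,0^0$, which forces $0^0 = 1$. In analysis, on the other hand, the limit of $x^y$ as $(x, y) \to (0, 0)$ depends on the path taken, so there $0^0$ is treated as an indeterminate form. Both answers to "1 or 0?" are thus less telling than the fact that one must simply choose.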
The figure eight, as a double closed curve, is the graphic symbol which, when rotated ninety degrees, conventionally indicates "infinity." Viewed simply as graphic symbols, or theoretically, the line segment of the glyph for "one" also could be extended to infinity in either direction; that is, it may be read as a token of the name indicating infinite extension in space. And, with the glyph for "zero," in the form of a circle (or ellipse) one could go around and around forever; that is, it may be read as a token of the name indicating infinite extension in time. For that matter, we could imagine, say, the numeral "seven," with the two "arms" of the symbol being extended infinitely. Nevertheless, the glyphs for "zero" and "one" represent the simplest illustrations of the closed curve, and the open line (which may be considered an open, "not-closed" curve). This leads us to the famous Jordan Curve Theorem: A simple closed curve in the plane separates the plane into two regions, one finite and one infinite. In other words: the inside of the curve facing the finite region = IN; the distinguishing FUNCTION of the curve itself = AROUND; and the outside of the curve facing the space in which the curve has been drawn = OUT.

In all the cosmological models of early civilizations, a wide, four-cornered planar Earth was surrounded by infinitely extensive waters, surmounted by ever higher mountain pinnacles....This flat conceptioning is manifest right up to the present in such every-day expressions as "the wide, wide world" and "the four corners of the Earth."..."Up" and "down" are the parallel perpendiculars impinging upon this flat-out world. Only a flat-out world could have a Heaven to which to ascend or a Hell into which to descend. Both Christ and Mohammed, their followers said, ascended into Heaven from Jerusalem. Scientifically speaking (which is truthfully speaking), there are no directions of "up" or "down" in Universe--there are only the angularly specifiable directions "in," "out," and "around." Out from Earth and into the Moon, or into Mars. IN is always a specific direction--IN is point-to-able. Out is any direction....Around the world nothing has ever been formally instituted in our educational systems to gear the human senses into spontaneous accord with our scientific knowledge. In fact, much has been done and much has been left undone by powerful world institutions that prevents such reorientation of our mis-conditioned reflexes....None of the perpendiculars to a sphere are parallel to one another. The first aviators flying completely around the Earth...having completed half their circuit, did not feel "up-side-down." They had to employ other words to correctly explain their experiences. So, aviators evolved the terms "coming-in" for a landing and "going-out," not "down" and "up." Those are the scientifically accreditable words--in and out. We can only go in, out, and around. [Fuller, Critical Path, p. 55.]

In our relatively complex world of space and time, we can easily show how two sides of a line differ. That is because what is "easy" is posterior, complex and superficial, on the surface, relative to what is anterior, simple and important, at the core and "hard." To the theoretical, mathematical line of one dimension--extension--let us add a second dimension: width.
Now the line of distinction is more like a ribbon or band, and we may label the two different edges, so that when closing upon itself the band actually forms a three-dimensional cylinder in which "Edge A" is rejoined to the other end of "Edge A," and similarly for "Edge B." But now let us add another dimension, a space in which we can perform a little experiment to see if the marked sides really are any different. If we give the band a single twist (we have to go into four-dimensional space to do this), and rejoin the ends--where each end of "Edge A" is matched up with an end of "Edge B"--the band forms what is called a Möbius strip. The twist that we can now see in the band shows not only that "Edge A" and "Edge B" are different, but also demonstrates that they have been switched, while the topological qualities of both edge and surface become unitary.

Some of the logical problems and apparent contradictions associated with the idea of zero are related to understanding its necessary function in systems of numerical computation. At the root of the issue are the inevitable difficulties of attempting to talk about orders of complexity that are simpler than, and prior to, the order represented by language: the problem of trying to describe with a system that has certain qualities, an order of being that doesn't.

In the past, some historians of science concluded...that the Babylonians used the zero only in a medial position and that their zero was therefore not functionally identical with ours. But as we now know from the work of Otto Neugebauer, Babylonian astronomers differed from Babylonian mathematicians in this respect: they used the zero at the beginning and end of written numbers, as well as in a medial position. In a Babylonian astronomical tablet from the Seleucid era, now in the British Museum, the number 60 is written in the following form (the value attributed to it is assured by a mathematical relation indicated in the context): 1;0 (= 1 X 60 + 0) in which the zero sign is used to mark the absence of units of the first order....Use of the zero in the initial position enabled Babylonian astronomers to note sexagesimal fractions (that is, fractions whose denominator is equal to a power of 60) without ambiguity....Thus, at least as early as the first half of the second millennium BC[E], Mesopotamian scholars developed a written numeration that was eminently abstract and far superior to any other system used in the ancient world; they probably devised the first strictly place-value numeration in history. Later they also invented use of the zero [the Seleucid dates do not go back beyond the third century BCE]....But to the Babylonians the zero sign did not signify "the number zero." Although it was used with the meaning of "empty" (that is, "an empty place in a written number"), it does not seem to have been given the meaning of "nothing," as in "10 minus 10," for example; those two concepts were still regarded as distinct. In a mathematical text from Susa, the scribe, obviously not knowing how to express the result of subtracting 20 from 20, concluded in this way: "20 minus 20...you see." And in another such text from Susa, at a place where we would expect to find zero as the result of a distribution of grain, the scribe simply wrote, "The grain is exhausted." These two examples show that the notion of "nothing" was not yet conceived as a number. [Georges Ifrah, From One to Zero: A Universal History of Numbers, Viking Penguin, New York (1985), p. 381 f.]
Thus, examples of the recurring contradictions about zero were apparent in ancient times. We can appreciate this fact in the old (but possibly post-Homeric) joke in the Odyssey as a failure to distinguish between Name and Form, or between how a name is CALLED and what a name IS (what it "tokens" or "represents"). When asked by Polyphemus the Cyclops what his name was, Odysseus (always sly and wily) said that his name was called "No Man." When the two-eyed Greek hero put out the single eye of Polyphemus and escaped from his cave, the howling, blinded barbarian was asked by his fellows, WHO had hurt him; he answered "No Man," simple-mindedly, and so was ignored. Though, considering the speaker, the phrase might have sounded out of character, Polyphemus should have answered "A man whose name is called 'No Man.'"

As far as the zero is concerned, the Greek astronomer Ptolemy was familiar with the symbol o--an abbreviation of the Greek word ouden, "nothing"--as a sign indicating a missing place, and used it in writing Babylonian sexagesimal fractions...[in a way that] the symbol o indicated not only the absence of a fractional group, but even the absence of an integer (a degree). This means that the Greco-Babylonian model already possessed a zero symbol which could have been the stimulus for the creation of a zero in the Indian numeral system--perhaps it even influenced the form of the Indian zero, for later on the zero was written as a small circle instead of a dot. There is documentary evidence that the Babylonian astronomical writings had a significant influence on Indian astronomy in the century following Alexander the Great, during which Hellenistic culture spread farthest to the east and deep into India. [Menninger, Number Words, pp. 399, 398.]

The ways of writing numerals in China probably evolved from the relatively late migration of Babylonian notation systems through Hellenistic intermediaries to India, then traveling as far as China with the wave of Buddhist influence.

The contacts of Chinese culture with India, its books and its numerals, were brought about by Buddhism which took root in China in the 7th century and attained great importance there. In the 13th century the zero (Chinese ling, "gap, vacancy" [but also "fragment, fraction, small change"]) first appeared in Chinese books and has continued to be used occasionally ever since. [Menninger, Number Words, p. 460 f. See, in Matthews: ling 4057.]

There were fundamental differences in the written forms of ideographic Chinese and the alphabetic Sanskrit of India. In Chinese, the line segment representing the numerical value "one" is drawn horizontally, but is otherwise apparently the same. Earlier there was no numeral corresponding to zero, because the written numerals in Chinese are ideographic, and each corresponds to a specific number word.

Chinese written numbers and their ranks (units, tens, hundreds...) are expressly written down, whereas the Indian--hence the Arabic and Western--system, in ordering these ranks does not express the value of the number, but rather indicates it by the place-value of the digits. Thus the Chinese is a "named" and the Hindu an "unnamed" or abstract place-value notation, if we equate position with rank. Although both of these systems of numerals are essentially similar in reflecting the structure of the gradational spoken number sequence, the difference between them...becomes clear in the case of a number in which one or more places are left unfilled by digits: Indian: 4 0 8 9.
Chinese: 4 thousands szu-ch'ien, 8 tens pa-shih, 9 (units) chiu. In other words, the Indian numerals need and have a zero sign, while the Chinese do not. Thus in China there was never a struggle in the popular mind against the concept of zero (and its cipher) such as there was in the West. [Menninger, Number Words, p. 459.]

The express indication of rank values in Chinese made computation extremely cumbersome; so it was always carried out on an abacus, with the aid of multiplication tables, the numerals being written in vertical columns. But when logarithmic tables came to China from India, the numerals were written in horizontal columns (running right to left). The rank values were no longer written in, so the system changed from "named" to an abstract place-value notation that required a glyph for zero, for which a round circle was employed.

Having developed symbols to express the contents of each column, [the innovative abacus-user] had to invent a symbol for the numberless content of the empty column--that symbol became known to the Arabs as the sifr; to the Romans as cifra [or, ciphra]; and to the English as cipher (our modern zero). Prior to the appearance of the cipher, Roman numerals had been invented to enable completely illiterate servants to keep "scores" of one-by-one occurring events--for example, a man would stand by a gate and make a mark every time a lamb was driven through the lamb-sized gate....Since one cannot see "no sheep" and cannot eat "no sheep," the Roman world seemingly had no need for a symbol for nothing. Only an abacus's empty column could produce the human experience that called for the invention of the ciphra--the symbol for "nothing."...The discovery of the symbol for nothing became everything to humanity. The cipher alone made possible humanity's escape from the 1700-year monopoly of all its calculating functions by the power structure operating invisibly behind the church's ordained few. [Fuller, Critical Path, p. 32]

In China, however, awareness of the special nature of emptiness can be seen in the famous initial poem of Lao Tzu's Tao Te Ching:

The Tao that can be told is not the eternal Tao.
The name that can be named is not the eternal name.
The nameless is the beginning of heaven and earth.
The named is the mother of the ten thousand things.
Ever desireless, one can see the mystery.
Ever desiring, one can see the manifestations.
The two spring from the same source but differ in name;
this appears as darkness.
Darkness within darkness.
The gate to all mystery.

[Lao Tsu, Tao Te Ching, a new translation by Gia-Fu Feng and Jane English, Vintage Books, New York (1972).]

The form itself manifests in as many ways as there are ways of distinction. As in the Tao Te Ching, we start with the first proposition, "The way, as told in this text, is not the eternal way, which may not be told." The eternal way may not be told because it is not susceptible to telling. It is too real for that. It manifests in as many different ways or different expressions as there are in the beings to which it manifests....And when one looks at a cow in a field and somebody says "What is it doing?" I say, "Well, I think it is contemplating Reality." And they say, "Don't be ridiculous, how can a cow contemplate Reality?" "Why not?" I ask. "What else does it have to do all day? What else has it to do? The being is contemplating Reality...what else COULD it be doing?
But the Form, as it is apparent to a cow--although it is the same Form, it is the Way without a Name [the "nameless Tao"]--how it manifests to a cow, is not how it manifests to me. How it is expressed to a cow is not how it is expressed to me. [Keys, AUM Conference Transcript, p. 93.]

For those of us who speak and read and write in modern American English, the origins of our Germanic language may be found in the middle ages. Special instruction is required, in addition to a mere word list, in order to be able to read Beowulf, the poems of Caedmon, the Venerable Bede, or even Chaucer's Canterbury Tales. The written form of the letters we have adopted for writing English is, with a few modifications, essentially Roman and hence considerably older than our language. The system we use for writing numbers is apparently even older. By tracking the path of "zero" we may trace the history of our numerals back through India, the Hellenistic expansion, and the later kingdoms of Babylon, who revived their traditions of numerical reckoning originally inherited from the earlier kingdoms of Babylon, who, in turn, probably got it from Sumer, which, for all we know, may have got it from Susa in Elam or from the culture-bringer Oannes from the legendary island of Dilmun that some archaeologists have sought to identify with the modern Bahrain, while others have seen in the figure of Oannes a sidereal messenger from the Once and Future Star. [See George Michanowsky, The Once and Future Star: The Mysterious Vela X Supernova and the Origin of Civilization, Barnes and Noble, New York (1979).]

The attentive reader may note that we have frequently cited an excellent study by the distinguished German scholar, Karl Menninger, which appeared as Zahlwort und Ziffer, published by Vandenhoeck & Ruprecht in 1958. As intelligently translated by Paul Broneer for the MIT Press edition, this work, having been done once with integrity and style, ought not to need doing over. Like that work by other teachers and scholars upon whose inspired and devoted accomplishments so much of our present work depends, we acknowledge with gratitude these prior contributions to knowledge and understanding, metaphorically hoping to build upon the solid base of the great unfinished pyramid by laying our own new block in the fresh course under construction. Another way to look at this process involves the metaphors of spinning yarn, tying twine, knotting webs, and weaving, by which we hope to follow some threads--with particular attention to "symbolic and mythical" interpretations--in order to suggest relationships and connections with Duchamp's exemplary work of art, With Hidden Noise. Here, a few of Karl Menninger's words from the "Preface" to the original edition seem to be pertinent, especially in relation to our present exercise:

This book is written for the lover of intellectual and cultural history, but the professional historian will discern many things in it not previously expressed. Of course the plain knowledge of the number series and number symbols...has to be cited first, but ethnology and ethnography contribute a most colorful addition: the history of language, of culture, and of politics. There are few things of this world in which these branches of research meet each other in such an exciting and fertile manner as the concept of number. The area of its symbolic and mythical interpretation is not even included.
With this overwhelming wealth of detail it became difficult to pursue the great art of following the threads in this closely woven fabric, to separate them without destroying the fabric itself....It is not often that the lover of numbers becomes acquainted with the intrinsic connection of his special area with cultural history; it is equally rare that the friend of the history of culture becomes aware of the relationship between his field and the life of numbers. I hope that the present work will serve both groups to gain the insight and the joy which comes from all knowledge of creative intellectuality in the diversity of men and peoples. [Menninger, "Preface," Number Words, p. v.]

Having chosen the concept of number as the primary organizing principle for our analysis of Marcel Duchamp's piece of sculpture--so unusual in its material and structure: hard brass and soft twine, and with its "zero space"--the concept of "zero" is clearly of central significance for us, including its symbolic and mythical aspects and the question of how its numerical representation came to Western Europe. Our numerals do not have the same origin as the phonetic alphabet we use in writing and printing. Our letters are not akin to our numerals. One would naturally be inclined to suppose, in all innocence, that the human mind, when it took the trouble to record its ideas and concepts, would have devised similar systems of writing words and numbers, "seven" and "7." But this did not happen, neither in our western culture nor anywhere else in the world. The early system of writing numerals is everywhere the older of the two sisters....Our language is Germanic, our writing [earlier] is Roman, our numerals [earlier still] are Indian. [Menninger, Number Words, p. 54 f.]

The case of zero is special; in Sanskrit it was called shunya, "void, empty" (sometimes written sunya or shunya-bindu, "empty dot") after its physical meaning: "the position (originally on the counting board) is empty." Our modern custom of indicating a missing word or line of verse by a row of dots goes back to this Indian practice. The inscription containing the earliest true zero known thus far--from a much later date than the other Indian numerals--has been identified in India and, delightfully, concerns the dedication to a temple of both flowers and the land upon which flowers may be grown in the future. This famous text inscribed on the wall of a small temple near Gvalior (near Lashkar in Central India) first gives the date (AD 870 in our reckoning) in words and in Brahmi numerals. Then it goes on to list four gifts to a temple, including a tract of land "270 royal hastas long and 187 wide, for a flower garden." Here, in the number 270, the zero first appears as a small circle...in the twentieth line of the inscription it appears once more in the expression "50 wreaths of flowers" which the gardeners promise to give in perpetuity to honor the Divinity. [Menninger, Number Words, p. 400 f.]

In 773 there appeared at the court of the Caliph al-Mansur in Baghdad a man from India who brought with him the writings on astronomy (the Siddhanta) of his compatriot Brahmagupta (fl. ca. AD 600). Al-Mansur had this book translated from Sanskrit into Arabic (in which it became known as the Sindhind). It was promptly disseminated and induced Arab scholars to pursue their own investigations of astronomy.
One of these was al-Khwarizmi...who was probably the greatest mathematician of his time, [and who] wrote among other things a small textbook on arithmetic in which he explained the use of the new Indian numerals, as he had probably learned them himself from Indian writings. This was around AD 820. [He] was also the author of a book showing how to solve equations and problems derived from ordinary life, entitled Hisab aljabr w'almuqabala, "The Book of Restoration and Equalization."....Its translation into Latin, Algebra and Almucabala, in the 12th century, then ultimately gave its name to the discipline of algebra.

The original of al-Khwarizmi's book on arithmetic is lost, but it...made its way to Spain...and it was there, at the beginning of the 12th century, that it was translated into Latin by the Englishman Robert of Chester, who "read mathematics" in Spain. Another Latin translation was produced by the Spanish Jew, John of Seville. Robert's translation is the earliest known introduction of the Indian numerals into the West. The manuscript discovered in the 19th century begins with the words, Dixit Algoritmi: laudes deo rectori nostro atque defensore dicamus dignas "Algoritmi has spoken: praise be to God, our Lord and our Defender." At about the same time (ca. 1143) an epitome of this book was written which is now in the Royal Library in Vienna. [Menninger, Number Words, p. 411.]

The Codex of the Salem Monastery--now in the University Library, Heidelberg--contains fifteen pages written in abbreviated Latin and must be reckoned, together with the lost text of al-Khwarizmi, as one of the oldest manuscripts in the West describing computations with Indian numerals. It begins, Incipit liber algorithmi...(Here begins the book of Algorithmi...), and in the opening lines we also find repeated the very phrase, Omnia in mensura et pondere et numero constituisti, which we have seen derives from the Apocryphal Book of Wisdom, expressing the idea that everything is constituted of measure, weight and number. The manuscript was composed around the year 1200, and documents the spread, by that time, of al-Khwarizmi's book in the Germanic part of Europe. The text also reveals a fascinating reference obviously derived from Plato concerning the origin of number, although Plato focussed on the Unity, and did not actually mention zero. The algorism of the Salem Monastery correctly interpreted the new numerals and used them for computations, but they still created such confusion in the mind of their author that he appended the following mystical interpretation:

Every number arises from One, and this in turn from the Zero. In this lies a great and sacred mystery--in hoc magnum latet sacramentum--: HE is symbolized by that which has neither beginning nor end; and just as the zero neither increases nor diminishes / another number to which it is added or from which it is subtracted / so does HE neither wax nor wane. And as the zero multiplies by ten / the number behind which it is placed / so does HE increase not tenfold, but a thousandfold--nay, to speak more correctly, HE creates all out of nothing, preserves and rules it--omnia ex nichillo creat, conservat atque gubernat.

In this way the zero acquired its profound "significance" and began to represent something. But the learned men too were not sure whether the zero was a symbol, a numeral, or not.
According to the name Null which they gave to it, it was not; and so the medieval writers would frequently present the "9 digits," to which they would add one more, which was called a cifra. [Menninger, Number Words, p. 423. See also, p. 411.]

One of the greatest and most prolific mathematicians of the middle ages was Leonardo of Pisa (1180 to ca. 1250), also popularly called "Fibonacci," after whom the famous number sequence is named. In order to learn computation, he traveled to a trading post in Algeria where his father was governor. As he recounts the story: Ubi ex mirabili magisterio in arte per novem figuris Indorum introductus. (There I was introduced by a magnificent teacher [perhaps an Arab instructor] to the art of reckoning with the nine Indian numerals.) Leonardo did not approach the new methods of computation superficially, as "just another procedure," but tested them thoroughly and came to regard them as a vast improvement. It was in this spirit that he wrote his great Book of Computations, the Liber Abaci of 1202, which prepared the ground for the widespread adoption of the Indian numerals and the new operations in the West. He introduced the new numerals in the following words: Novem figure Indorum he sunt 9 8 7 6 5 4 3 2 1. Cum his itaque nouem figuris, et cum hoc signo 0, quod arabice cephirum appellatur, scribitur quilibet numerus. (The nine numerals of the Indians are these: 9 8 7 6 5 4 3 2 1. With them and with this sign 0, which in Arabic is called cephirum [cipher], any desired number can be written.) [Menninger, Number Words, p. 425.]

The notion of counting, we may recall, is one of the things that we may "take as given"--together with the phenomenon of language--as primary functional expressions of human consciousness. The idea of a numeral--and the symbolic system of discrete numerals--should be distinguished from the more profound and historically much older concept of number itself. Number need not be marked at all; or, as on many Paleolithic bones, it may be reckoned by tally strokes; again, in a more sophisticated way, using an abacus, it may be indicated by the grouping and positioning of beads. The concept of a numeral belongs to the even more abstract and symbolic context of a notational system. Of course, numerals still merely represent a conventional way of making marks intended to be taken for signs indicating values that correspond to numbers. Conventions are just agreed-to terms, rules--like "rules of the road"--not necessarily determined by laws, whether of logic or nature. For example, only the laws of human institutions have determined that, in the vertical arrangement of traffic lights on the streets of one predominantly Irish suburb of Chicago, instead of being red as elsewhere, the top light is green.

When the "Arabic" (Indian) numerals were first written in the West, the sequence ran from 9 to 1, and 0--reading from left-to-right in decreasing value; this followed merely the appearance of the numerals in the Arabic manuscript, which had been written following the Semitic convention of reading right-to-left. Curiously, the direction for writing and reading numerals changed in the conventions of both languages: soon enough in the West the sequence increased in value reading left-to-right, the same way in which numerals are written in modern Arabic even though the letters and words in the language continue to be written and read from right-to-left.
This is a clue to the somewhat different way in which we read the letters making up a word or a numerical expression representing the value of a number (when the value is greater than that which can be represented by a single digit): the "reading" of the written "number" must take in the whole array of component digits all-at-once, as it were, in order to interpret the place-value, and so determine the correct value of the "first" numeral read (the one written furthest to the left) or, indeed, that of any other given numeral in the expression. Yet this may not be so different from the feeling of English-speaking students who, in learning German, must become accustomed to the syntactical convention of placing verbs at the end of a sentence.

Despite some initial reluctance, the adoption of a notational system composed of an abstract set of numerals distinct from the letters of any alphabet proved to have enormous consequences for late medieval Western Europe. The price paid in this divorce was to be reckoned in the collapse of the mystical systems of gematria, such as the Qabala, in which--since antiquity--both Greek and Hebrew accorded conventional numerical values to specific letters, inevitably sacrificing some of the poetry of interconnectedness, and undermining a whole level of meaning for the letters of each written language. Viewed another way, the introduction of Arabic numerals had the immediate and salutary effect (for those who adopted the system) of freeing NUMBER from NOUN or NAME, and VERB or WORD. That is, separate notational systems for counting or TOLLING, and for "recounting" or TELLING, liberated number reckoning from what had become a convoluted, dogmatically artificial, institutionalized set of unremittingly self-referential associations. Even though there remains to this day, in the conventions of the Arabic language, a precise correspondence between numbers and letters of the Arabic alphabet, according to the abjad system, the introduction of Arabic numerals into Europe at the end of the middle ages greatly relieved the objective function of computation from the superstitious cramps of mystical constipation.

The...Europeans' adoption of Arabic numerals and their computation-facilitating "positioning-of-numbers" altogether made possible Columbus's navigational calculations and Copernicus's discovery of the operational patterning of the solar system and its planets. Facile calculation so improved the building of the ships and their navigation that the ever-larger ships of the Mediterranean ventured out into the North and South Atlantic to round Africa and reach the Orient. With Magellan's crew's completion of his planned circumnavigation, the planet Earth's predominantly water-covered sphericity was proven. The struggle for supreme mastery of human affairs thus passed out of the Mediterranean and into the world's oceans." [Fuller, Critical Path, p. xxi.]

It is 1492. Columbus sails the ocean blue. Antonio de Nebrija presents the Gramática, the first grammar of a modern European language, to Queen Isabella. "But what is it for?" Isabella asks. "Your Majesty, language is the perfect instrument of Empire," he replies. It is 1573. King Philip II of Spain declares that the conquest will be referred to as the pacification. The cutting edge of language is a knife.

hell * heaven * hell

When the cacique Hatuey is tied to the stake, a Franciscan attempts to persuade him to convert to Christianity.
The friar explains to the cacique that if his soul is saved, he will go to heaven and be spared the eternal torments of hell. Hatuey asks if there are any Spaniards in heaven. Assured by the friar that there are, Hatuey replies: I prefer hell. From Brief Account of the Devastation of the Indies, 1542, Bartolomé de las Casas. [Deborah Small, with Maggie Jaffe, 1492: What Is It Like to Be Discovered?, Monthly Review Press, New York (1991), n.p.]
{"url":"http://www.csus.edu/indiv/v/vonmeierk/8-01ZE.html","timestamp":"2014-04-18T20:45:24Z","content_type":null,"content_length":"44807","record_id":"<urn:uuid:4805511f-c942-4fa1-8354-d2d2f5698c5e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
Measuring Player Contributions in Hockey

Hockey is a popular sport due to the continuous nature of play and the frequent changes of players on the ice that lead to a fast pace of gameplay. Adding to the excitement is the relatively high frequency of scoring opportunities, but the relatively low frequency of scoring events. However, these same intrinsic properties of the sport make it difficult to quantify the performance of individual players. Specifically, there is a lot of interest in measuring the contribution of each player to goal scoring, which is not easy given the continuity of play, frequent line changes, and the infrequency of goals.

The historical standard by which the overall contribution of a hockey player is measured is plus-minus (or +/–): each player on the ice gets a +1 when a goal is scored by their team and a –1 when a goal is scored against their team. These positives and negatives are aggregated over an entire season for each player, so that the plus-minus for a player is the total number of goals scored by their team minus the total number of goals scored by their opponents while that player was on the ice. Plus-minus is intuitive and easy to calculate from game data, but has some obvious disadvantages. First and foremost, plus-minus for any particular player does not just depend on their contribution but also on the contributions of their teammates and their opponents. Since the set of teammates and opponents that each player is matched with on-ice is different, the plus-minus measure for individual players is inherently polluted. Plus-minus is also not standardized for comparing players that spend very different amounts of time on the ice.

We can view plus-minus for a player from a statistical perspective as a marginal player effect that averages over the context experienced by that particular player. What we would rather have is a partial player effect: What does that particular player contribute to goal scoring/prevention on top of the contributions of their teammates and opponents? Hence, a regression model for estimating partial effects is a natural approach to improving upon plus-minus. Within the past 10 years, linear regression approaches have been used to estimate partial effects of individual players in basketball and hockey. Basketball is similar to hockey in the sense that both sports consist of continuous fast-paced play with frequent player substitutions. However, basketball is very different from hockey in the sense that (1) scoring events are much less frequent in hockey and (2) players tend to substitute together as "lines" in hockey. Both of these aspects complicate the estimation of individual player effects in hockey.

To address issue #1, the infrequency of scoring events, it is more appropriate to model player effects on the log-odds of a goal being scored rather than running a linear regression on total goals scored. Specifically, we can model the probability p[i] that a given goal i was scored by the home team as

log( p[i] / (1 – p[i]) ) = β[h[i1]] + · · · + β[h[i6]] – β[a[i1]] – · · · – β[a[i6]]

where β = [β[1] · · · β[np]]' is the vector of partial plus-minus effects for each of n[p] players in hockey, with {h[i1] . . . h[i6]}, {a[i1] . . . a[i6]} being the indices on β corresponding to home-team (h) and away-team (a) players on the ice for goal i. Issue #2, that players tend to substitute together as "lines" in hockey, is problematic for regression modeling since it makes it harder to separate the contributions of individual players if they are always on the ice with the same set of teammates.
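To make the design concrete, here is a minimal sketch of such a fit on simulated data, using Python and scikit-learn. Everything in it is invented for illustration: the player pool, the goal counts, the planted effect sizes, and the penalty strength C. The published analyses used their own implementations on real NHL data, and the L1 penalty applied here anticipates the regularization discussion that follows.

```python
# Toy sketch: L1-penalized logistic regression for partial plus-minus effects.
# Design matrix X has one column per player: +1 if on the ice for the home
# team when goal i was scored, -1 if on the ice for the away team, 0 otherwise.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_players, n_goals = 60, 5000
true_beta = np.zeros(n_players)
true_beta[:3] = [0.9, 0.6, -0.7]      # three planted strong/weak players

X = np.zeros((n_goals, n_players))
for i in range(n_goals):
    home = rng.choice(n_players, 6, replace=False)
    away = rng.choice(np.setdiff1d(np.arange(n_players), home), 6, replace=False)
    X[i, home] = 1.0
    X[i, away] = -1.0

p = 1.0 / (1.0 + np.exp(-(X @ true_beta)))      # P(goal scored by home team)
y = rng.binomial(1, p)

# The L1 penalty zeroes out most estimates, so only players who genuinely
# stand out from the baseline keep non-zero effects.
fit = LogisticRegression(penalty="l1", C=0.05, solver="liblinear",
                         fit_intercept=False).fit(X, y)
print("selected players:", np.flatnonzero(fit.coef_.ravel()))
```

With a penalty this strong, only a handful of columns survive the fit, which is exactly the sparse-selection behavior described below for the real data.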
In statistical terms, the indicator variables for players on the same line will be highly collinear, which could lead to unstable estimates of their partial plus-minus effects in the above equation. Regularization is a popular way of promoting stability in high-dimensional regression models with collinearity. From the classical perspective, we can include a penalty term in the regression optimization that shrinks the optimal estimates of β toward zero. Common regularization strategies are ridge regression, which places an L2 penalty, and lasso regression, which places an L1 penalty, on the partial effects β. From the Bayesian perspective, these penalty terms correspond to particular prior distributions on the parameters β. The L2 penalty corresponds to a Gaussian prior distribution centered at zero for β, whereas the L1 penalty corresponds to a Laplace prior distribution centered at zero for β. When the number of predictor variables is large, the L1 penalty (lasso regression) is especially popular, since it leads to optimal estimates β̂ where many of the β̂[i] are set to exactly zero. This characteristic eases interpretation of the regression model since we can focus attention on the small subset of selected variables with non-zero estimates β̂[i]. In the context of player performance in hockey, the L1 penalty allows a small number of highly effective players to stand out from their teammates and opponents.

I have been involved in two recent papers that take a regularized approach to estimate player performance in hockey, and I will outline some features of both approaches. The data for these papers, from www.nhl.com, consists of all games over five seasons (2007–2011), which contains approximately 18,000 even-strength goals and around 1,500 players. In both approaches, we restrict ourselves to even-strength goals to remove the difficulty of handling power-play situations in which one team is playing with fewer players than the other.

The first paper is Bobby Gramacy, Shane Jensen, and Matt Taddy's 2013 Journal of Quantitative Analysis in Sports paper, "Estimating Player Contribution in Hockey with Regularized Logistic Regression." Here, we implement the logistic regression model outlined above, but we also consider overall team effects on goal scoring. We use an L2 penalty term on the partial team effects, which encourages every team to have a small, but non-zero, effect on goal scoring. We use an L1 penalty on the partial effects of each player, so the model will pick out a subset of players who stand out relative to their team and the other players in the data set.

In Figure 1, we give all players who were estimated by our model to have a substantial effect on scoring. In other words, we examine only the players with non-zero partial player effects. This plot actually compares estimated player effects from two models: the model with both player and team effects and the model with just player effects. The black dots in the plot are the estimated player effects for the model with both player and team effects. A line comes out of each dot that links to the estimated player effect for the model without the team effects. Red lines are given to players who have decreased effects in the model that includes team effects, whereas blue lines are given to players who have increased effects in the model that includes team effects.
Overall, we see that using an L1 penalty has led to the selection of a small number, about 100 out of the original 1,500, of players who have a non-zero player effect in either model. We highlight several notable players with red text. The best player according to our model is Pavel Datsyuk, who stands far above the other players in the model with team effects, and his player effect doesn't change in the model without team effects. Craig Adams and Radek Bonk stand out as the worst players according to our model with team and player effects. However, their player effects do improve when the team effects are not included. In fact, we see many players who have substantially different player effects depending on whether team effects are included in the model. Sidney Crosby has a player effect that drops after accounting for his team, though he still has a positive contribution in both models. Zdeno Chara has a positive player effect in the model without team effects, but his player effect drops to zero after having accounted for his strong Boston Bruins team. Dwayne Roloson has a zero effect in the model without team effects, but a strong positive effect once we have accounted for the weak teams he played on (Tampa Bay Lightning, New York Islanders, Edmonton Oilers, and Minnesota Wild).

In Figure 2, we compare our partial player effects to the traditional plus-minus statistic for all players who had a non-zero player effect in our model with team effects included. The points are colored and labeled according to the position of that player (C = center, L = left wing, R = right wing, D = defense, G = goalie). We also give the estimated team effects for some teams compared to their aggregate team plus-minus. The positive association between traditional plus-minus and our partial player effects indicates general agreement between the two measures, but there are some interesting discrepancies as well. For example, we disagree with plus-minus in terms of the best player in hockey. Datsyuk stands well above all others according to our approach, but Alexander Ovechkin has the largest plus-minus. Roloson has a negative plus-minus value, but is estimated by our model to have a large positive partial effect. Roloson's negative plus-minus could be driven by his weak teams (TB, NYI, EDM, and MIN), which all have negative team plus-minus and negative team effects according to our model.

The second paper I will describe is "Competing Process Hazard Function Models for Player Ratings in Ice Hockey" by Andrew Thomas, Samuel Ventura, Shane Jensen, and Stephen Ma, which will appear in the Annals of Applied Statistics. This paper uses an additional season of data (2012) and takes a different approach to goal scoring. Goals by the home team versus goals by the away team are set up as two competing processes. Each of these processes is specified as a Cox proportional hazards model where the scoring rates in each process are functions of the particular players on the ice. One benefit of this two-process approach is that we can separate the offensive vs. defensive contributions of each player. Each process can also account for all the time on the ice in which goals are not scored, whereas the first approach ignores this non-goal portion of the game. The same issue of collinearity between players is present in this second model, and so regularization is again needed to help stabilize our estimated partial offensive and defensive effects for each player.
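To fix ideas, here is a toy sketch of that two-process setup on fabricated data. It deliberately simplifies the paper's Cox proportional hazards formulation to constant scoring rates per shift segment (a Poisson approximation with a known baseline rate), and it uses a plain quadratic penalty for stability; nothing here reproduces the paper's actual implementation, and all sizes and values are invented. Note that some penalty is needed for identifiability alone: with six players a side, adding the same constant to every offensive and every defensive effect leaves every scoring rate unchanged.

```python
# Toy competing-process sketch: home and away scoring rates per shift segment,
# each driven by the shooting side's offense and the other side's defense.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_players, n_segments = 40, 8000
off_true = rng.normal(0, 0.3, n_players)      # offensive effects
def_true = rng.normal(0, 0.3, n_players)      # defensive effects

H = np.zeros((n_segments, n_players))         # home on-ice indicators
A = np.zeros((n_segments, n_players))         # away on-ice indicators
for s in range(n_segments):
    H[s, rng.choice(n_players, 6, replace=False)] = 1
    A[s, rng.choice(n_players, 6, replace=False)] = 1
t = rng.exponential(1.0, n_segments)                    # segment lengths
rate_h = 0.05 * np.exp(H @ off_true - A @ def_true)     # home scoring rate
rate_a = 0.05 * np.exp(A @ off_true - H @ def_true)     # away scoring rate
gh, ga = rng.poisson(rate_h * t), rng.poisson(rate_a * t)

def negloglik(theta, pen=1.0):
    o, d = theta[:n_players], theta[n_players:]
    lh = np.log(0.05) + H @ o - A @ d         # log home rate (baseline known)
    la = np.log(0.05) + A @ o - H @ d         # log away rate
    ll = (gh * lh - np.exp(lh) * t).sum() + (ga * la - np.exp(la) * t).sum()
    return -ll + pen * (o @ o + d @ d)        # quadratic penalty for stability

res = minimize(negloglik, np.zeros(2 * n_players), method="L-BFGS-B")
off_hat, def_hat = res.x[:n_players], res.x[n_players:]
print("corr(offense):", np.corrcoef(off_true, off_hat)[0, 1])
```

The penalty used in the actual paper is a little different, as described next.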
In this paper, we use a combination of the L1 and L2 penalties commonly called "the elastic net." The offensive and defensive effects for each player can be combined into a "net goals contributed" over an average player. In Figure 3, we compare the net goals contributed for each player from our model to the traditional plus-minus measure. We see that Datsyuk stands out as one of the best players in hockey, which agrees with the results of the first paper. However, Datsyuk does not stand out alone in this analysis; he is joined by Henrik Sedin and goaltender Henrik Lundqvist. Lundqvist is given a substantial boost in our model compared to traditional plus-minus. Ilya Kovalchuk also sees a substantial increase in our model relative to his plus-minus, whereas Tomas Holmstrom sees a substantial decrease in our model relative to his plus-minus.

Both papers are rather complicated statistical approaches to the analysis of player performance in hockey. This sophistication was required by two unique challenges in the quantitative study of hockey: the relative infrequency of scoring events and the collinearity between teammates who play together on the same line. In both cases, our approaches were able to detect subtle and interesting effects that would be masked by the standard plus-minus metric. Much work remains to be done in the quantitative study of hockey. Power-play goals need to be added into our analysis in a principled way. Incorporating more detailed information about shots and passes as well as other on-ice events could also give a higher-resolution picture of each player's contribution. There have already been some statistics introduced for shots, with Corsi and Fenwick being the most notable. Finally, there needs to be more dialogue with the decision-makers in professional hockey who stand to benefit from these quantitative analyses of player performance.

Further Reading

Gramacy, R.B., S.T. Jensen, and M. Taddy. 2013. "Estimating player contribution in hockey with regularized logistic regression." Journal of Quantitative Analysis in Sports 9:97–111.

Thomas, A.C., S.L. Ventura, S.T. Jensen, and S. Ma. 2013. "Competing process hazard function models for player ratings in ice hockey." Accepted for publication in the Annals of Applied Statistics.

About the Author

Shane Jensen is an associate professor of statistics in the Wharton School at the University of Pennsylvania, where he has been teaching since completing his PhD at Harvard University in 2004. Jensen has published more than 40 academic papers in statistical methodology for a variety of applied areas, including molecular biology, psychology, and sports. He maintains an active research program in developing sophisticated statistical models for the evaluation of player performance in baseball and hockey.
{"url":"http://chance.amstat.org/2013/09/sports263/","timestamp":"2014-04-20T08:13:53Z","content_type":null,"content_length":"38748","record_id":"<urn:uuid:49128d18-d75e-4ebe-83f4-89840a881bec>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
Climate Sensitivity Deconstructed

Guest Post by Willis Eschenbach

I haven't commented much on my most recent posts, for the usual reasons: a day job, and the unending lure of doing more research, my true passion. To be precise, recently I've been frying my synapses trying to twist my head around the implications of the finding that the global temperature forecasts of the climate models are mechanically and accurately predictable by a one-line equation. It's a salutary warning: kids, don't try climate science at home.

Figure 1. What happens when I twist my head too hard around climate models.

Three years ago, inspired by Lucia Liljegren's ultra-simple climate model that she called "Lumpy", and with the indispensable assistance of the math-fu of commenters Paul_K and Joe Born, I made what to me was a very surprising discovery. The GISSE climate model could be accurately replicated by a one-line equation. In other words, the global temperature output of the GISSE model is described almost exactly by a lagged linear transformation of the input to the models (the "forcings" in climatespeak, from the sun, volcanoes, CO2 and the like). The correlation between the actual GISSE model results and my emulation of those results is 0.98 … doesn't get much better than that. Well, actually, you can do better than that: I found you can get 99+% correlation by noting that they've somehow decreased the effects of forcing due to volcanoes. But either way, it was to me a very surprising result. I never guessed that the output of the incredibly complex climate models would follow their inputs that slavishly. Since then, Isaac Held has replicated the result using a third model, the CM2.1 climate model. I have gotten the CM2.1 forcings and data, and replicated his results. The same analysis has also been done on the GFDL model, with the same outcome. And I did the same analysis on the Forster data, which is an average of 19 model forcings and temperature outputs. That makes four individual models plus the average of 19 climate models, and all of the results have been the same, so the surprising conclusion is inescapable—the climate model global average surface temperature results, individually or en masse, can be replicated with over 99% fidelity by a simple, one-line equation.

However, the result of my most recent "black box" type analysis of the climate models was even more surprising to me, and more far-reaching. Here's what happened. I built a spreadsheet, in order to make it simple to pull up various forcing and temperature datasets and calculate their properties. It uses "Solver" to iteratively select the values of tau (the time constant) and lambda (the sensitivity constant) to best fit the predicted outcome. After looking at a number of results, with widely varying sensitivities, I wondered what it was about the two datasets (model forcings, and model predicted temperatures) that determined the resulting sensitivity. I wondered if there were some simple relationship between the climate sensitivity and the basic statistical properties of the two datasets (trends, standard deviations, ranges, and the like). I looked at the five forcing datasets that I have (GISSE, CCSM3, CM2.1, Forster, and Otto) along with the associated temperature results. To my total surprise, the correlation between the trend ratio (temperature dataset trend divided by forcing dataset trend) and the climate sensitivity (lambda) was 1.00. My jaw dropped. Perfect correlation? Say what? So I graphed the scatterplot.
Figure 2. Scatterplot showing the relationship of lambda and the ratio of the output trend over the input trend. Forster is the Forster 19-model average. Otto is the Forster input data as modified by Otto, including the addition of a 0.3 W/m2 trend over the length of the dataset. Because this analysis only uses radiative forcings and not ocean forcings, lambda is the transient climate response (TCR). If the data included ocean forcings, lambda would be the equilibrium climate sensitivity (ECS). Lambda is in degrees per W/m2 of forcing. To convert to degrees per doubling of CO2, multiply lambda by 3.7.

Dang, you don't see that kind of correlation very often, R^2 = 1.00 to two decimal places … works for me. Let me repeat the caveat that this is not talking about real world temperatures. This is another "black box" comparison of the model inputs (presumably sort-of-real-world "forcings" from the sun and volcanoes and aerosols and black carbon and the rest) and the model results. I'm trying to understand what the models do, not how they do it.

Now, I don't have the ocean forcing data that was used by the models. But I do have Levitus ocean heat content data since 1950, poor as it might be. So I added that to each of the forcing datasets, to make new datasets that do include ocean data. As you might imagine, when some of the recent forcing goes into heating the ocean, the trend of the forcing dataset drops … and as we would expect, the trend ratio (and thus the climate sensitivity) increases. This effect is most pronounced where the forcing dataset has a smaller trend (CM2.1) and less visible at the other end of the scale (CCSM3). Figure 3 shows the same five datasets as in Figure 2, plus the same five datasets with the ocean forcings added. Note that when the forcing dataset contains the heat into/out of the ocean, lambda is the equilibrium climate sensitivity (ECS), and when the dataset is just radiative forcing alone, lambda is transient climate response. So the blue dots in Figure 3 are ECS, and the red dots are TCR. The average change (ECS/TCR) is 1.25, which fits with the estimate given in the Otto paper of ~ 1.3.

Figure 3. Red dots show the models as in Figure 2. Blue dots show the same models, with the addition of the Levitus heat content data to each forcing dataset. Resulting sensitivities are higher for the equilibrium condition than for the transient condition, as would be expected. Blue dots show equilibrium climate sensitivity (ECS), while red dots (as in Fig. 2) show the corresponding transient climate response (TCR).

Finally, I ran the five different forcing datasets, with and without ocean forcing, against three actual temperature datasets—HadCRUT4, BEST, and GISS LOTI. I took the data from all of those, and here are the results from the analysis of those 29 individual runs:

Figure 4. Large red and blue dots are as in Figure 3. The light blue dots are the result of running the forcings and subsets of the forcings, with and without ocean forcing, and with and without volcano forcing, against actual datasets. Error shown is one sigma.

So … my new finding is that the climate sensitivity of the models, both individual models and on average, is equal to the trend ratio—the trend of the resulting temperatures divided by the trend of the forcing. This is true whether or not the changes in ocean heat content are included in the calculation. It is true both for forcings run against model temperature results and for forcings run against actual temperature datasets.
It is also true for subsets of the forcing, such as volcanoes alone, or for just the greenhouse gases. And not only did I find this relationship experimentally, by looking at the results of using the one-line equation on models and model results. I then found that I can derive this relationship mathematically from the one-line equation (see Appendix D for details).

This is a clear confirmation of an observation first made by Kiehl in 2007, when he suggested an inverse relationship between forcing and sensitivity. The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy. Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available [here]) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work, and the current paper provides the ''widely circulated analysis'' referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.

However, Kiehl ascribed the variation in sensitivity to a difference in total forcing, rather than to the trend ratio, and as a result his graph of the results is much more scattered.

Figure 5. Kiehl results, comparing climate sensitivity (ECS) and total forcing. Note that unlike Kiehl, my results cover both equilibrium climate sensitivity (ECS) and transient climate response (TCR).

Anyhow, there's a bunch more I could write about this finding, but I gotta just get this off my head and get back to my day job. A final comment. Since I began this investigation, the commenter Paul_K has written two outstanding posts on the subject over at Lucia's marvelous blog, The Blackboard (Part 1, Part 2). In those posts, he proves mathematically that, given what we know about the equation that replicates the climate models, we cannot … well, I'll let him tell it in his own words:

The Question: Can you or can you not estimate Equilibrium Climate Sensitivity (ECS) from 120 years of temperature and OHC data (even) if the forcings are known? The Answer is: No. You cannot. Not unless other information is used to constrain the estimate. An important corollary to this is:- The fact that a GCM can match temperature and heat data tells us nothing about the validity of that GCM's estimate of Equilibrium Climate Sensitivity.

Note that this is not an opinion of Paul_K's. It is a mathematical result of the fact that even if we use a more complex "two-box" model, we can't constrain the sensitivity estimates. This is a stunning and largely unappreciated conclusion. The essential problem is that for any given climate model, we have more unknowns than we have fundamental equations to constrain them. Well, it was obvious from my earlier work that the models were useless for either hindcasting or forecasting the climate. They function indistinguishably from a simple one-line equation. On top of that, Paul_K has shown that they can't tell us anything about the sensitivity, because the equation itself is poorly constrained.
Finally, in this work I've shown that the climate sensitivity "lambda" that the models do exhibit, whether it represents equilibrium climate sensitivity (ECS) or transient climate response (TCR), is nothing but the ratio of the trends of the input and the output. The choice of forcings, models and datasets is quite immaterial. All the models give the same result for lambda, and that result is the ratio of the trends of the forcing and the response. This most recent finding completely explains the inability of the modelers to narrow the range of possible climate sensitivities despite thirty years of modeling. You can draw your own conclusions from that, I'm sure …

My regards to all,

Appendix A: The One-Line Equation

The equation that Paul_K, Isaac Held, and I have used to replicate the climate models is as follows:

T1 = T0 + λ ∆F1 (1 - a) + ∆T0 a      (Equation 1)

Let me break this into four chunks, separated by the equals sign and the plus signs, and translate each chunk from math into English. Equation 1 means: This year's temperature (T1) is equal to Last year's temperature (T0) plus Climate sensitivity (λ) times this year's forcing change (∆F1) times (one minus the lag factor) (1-a) plus Last year's temperature change (∆T0) times the same lag factor (a). Or to put it another way, it looks like this:

T1 = <— This year's temperature [ T1 ] equals
T0 + <— Last year's temperature [ T0 ] plus
λ ∆F1 (1-a) + <— How much radiative forcing is applied this year [ ∆F1 (1-a) ], times climate sensitivity lambda ( λ ), plus
∆T0 a <— The remainder of the forcing, lagged out over time as specified by the lag factor "a"

The lag factor "a" is a function of the time constant "tau" ( τ ), and is given by

a = exp( -1 / τ )

This factor "a" is just a constant number for a given calculation. For example, when the time constant "tau" is four years, the constant "a" is 0.78. Since 1 - a = 0.22, when tau is four years, about 22% of the incoming forcing is added immediately to last year's temperature, and the rest of the input pulse is expressed over time.

Appendix B: Physical Meaning

So what does all of that mean in the real world? The equation merely reflects that when you apply heat to something big, it takes a while for it to come up to temperature. For example, suppose we have a big brick in a domestic oven at say 200°C. Suppose further that we turn the oven heat up suddenly to 400°C for an hour, and then turn the oven back down to 200°C. What happens to the temperature of the brick? If we plot temperature against time, we see that initially the brick starts to heat fairly rapidly. However, as time goes on it heats less and less per unit of time until eventually it reaches 400°C. Figure B2 shows this change of temperature with time, as simulated in my spreadsheet for a climate forcing of plus/minus one watt/square metre. Now, how big is the lag? Well, in part that depends on how big the brick is. The larger the brick, the longer the time lag will be. In the real planet, of course, the ocean plays the part of the brick, soaking up heat as the forcing rises and giving it back slowly as the forcing falls.

The basic idea of the one-line equation is the same tired claim of the modelers: that the change in the temperature of the surface of the planet is linearly dependent on the size of the change in the forcing. I happen to think that this is only generally the rule, and that the temperature is actually set by the exceptions to the rule. The exceptions to this rule are the emergent phenomena of the climate—thunderstorms, El Niño/La Niña effects and the like.
But I digress, let's follow their claim for the sake of argument and see what their models have to say. It turns out that the results of the climate models can be described to 99% accuracy by the setting of two parameters—"tau", or the time constant, and "lambda", or the climate sensitivity. Lambda can represent either transient sensitivity, called TCR for "transient climate response", or equilibrium sensitivity, called ECS for "equilibrium climate sensitivity".

Figure B2. One-line equation applied to a square-wave pulse of forcing. In this example, the sensitivity "lambda" is set to unity (output amplitude equals the input amplitude), and the time constant "tau" is set at five years. Note that the lagging does not change the amount of energy in the forcing pulse. It merely lags it, so that it doesn't appear until a later date.

So that is all the one-line equation is doing. It simply applies the given forcing, using the climate sensitivity to determine the amount of the temperature change, and using the time constant "tau" to determine the lag of the temperature change. That's it. That's all. The difference between ECS (climate sensitivity) and TCR (transient response) is whether the slow heating and cooling of the ocean is taken into account in the calculations. If the slow heating and cooling of the ocean is taken into account, then lambda is equilibrium climate sensitivity. If the ocean doesn't enter into the calculations, if the forcing is only the radiative forcing, then lambda is transient climate response.

Appendix C. The Spreadsheet

In order to be able to easily compare the various forcings and responses, I made myself up an Excel spreadsheet. It has a couple of drop-down lists that let me select from various forcing datasets and various response datasets. Then I use the built-in Excel function "Solver" to iteratively calculate the best combination of the two parameters, sensitivity and time constant, so that the result matches the response. This makes it quite simple to experiment with various combinations of forcings and responses. You can see the difference, for example, between the GISS E model with and without volcanoes. It also has a button which automatically stores the current set of results in a dataset which is slowly expanding as I do more experiments.

In a previous post called Retroactive Volcanoes (link), I discussed the fact that Otto et al. had smoothed the Forster forcings dataset using a centered three-point average. In addition they had added a trend of 0.3 W per square metre from the beginning to the end of the dataset. In that post I said that the effect of those changes was unknown, although it might be large. My new spreadsheet allows me to determine what the effect actually is. It turns out that the effect of those two small changes is to take the indicated climate sensitivity from 2.8 degrees per doubling down to 2.3 degrees per doubling.

One of the strangest findings to come out of this spreadsheet was that when the climate models are compared each to their own results, the climate sensitivity is a simple linear function of the ratio of the trends of the forcing and the response. This was true of both the individual models, and the average of the 19 models studied by Forster. The relationship is extremely simple. The climate sensitivity lambda is 1.07 times the trend ratio for the models alone, and equal to the trend ratio when all the results are compared.
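For readers who would rather see the recursion than the spreadsheet, here is a minimal sketch of the same machinery in Python, with scipy's curve fitting standing in for Solver. The square pulse and the parameter values are toy numbers, not model output:

```python
# The one-line equation as code, plus a Solver-style refit of lambda and tau.
import numpy as np
from scipy.optimize import curve_fit

def emulate(forcing, lam, tau):
    # T1 = T0 + lam * dF1 * (1 - a) + dT0 * a, with a = exp(-1/tau)
    a = np.exp(-1.0 / tau)
    T = np.zeros(len(forcing))
    dT = 0.0
    for i in range(1, len(forcing)):
        dT = lam * (forcing[i] - forcing[i - 1]) * (1 - a) + dT * a
        T[i] = T[i - 1] + dT
    return T

forcing = np.zeros(80)
forcing[10:40] = 1.0                          # +1 W/m2 square pulse (Figure B2)

response = emulate(forcing, lam=0.5, tau=4.0)      # fake "model output"
popt, _ = curve_fit(emulate, forcing, response,    # iterative best fit, like Solver
                    p0=[1.0, 2.0], bounds=([0.01, 0.1], [10.0, 50.0]))
print("recovered lambda, tau:", popt)              # ~0.5 and ~4.0
```

Run against an actual forcing series and an actual model's global temperature output, those two recovered numbers are all the emulation needs.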
That relationship is true for all of the models without adding in the ocean heat content data, and also for all of the models including the ocean heat content data. In any case, I'm going to have to convert all this to the computer language R. Thanks to Stephen McIntyre, I learned the computer language R and have never regretted it. However, I still do much of my initial exploratory forays in Excel. I can make Excel do just about anything, so for quick and dirty analyses like the results above I use Excel. So as an invitation to people to continue and expand this analysis, my spreadsheet is available here. Note that it contains a macro to record the data from a given analysis. At present it contains the following data sets: Pinatubo in 1900 Step Change Forster No Volcano Forster N/V-Ocean Otto Forcing Otto-Ocean ∆ Levitus watts Ocean Heat Content ∆ GISS Forcing GISS-Ocean ∆ Forster Forcing Forster-Ocean ∆ CM2.1 Forcing CM2.1-Ocean ∆ GISS No Volcano GISS GHGs GISS Ozone GISS Strat_H20 GISS Solar GISS Landuse GISS Snow Albedo GISS Volcano GISS Black Carb GISS Refl Aer GISS Aer Indir Eff CCSM3 Model Temp CM2.1 Model Temp GISSE ModelE Temp BEST Temp Forster Model Temps Forster Model Temps No Volc GISS Temp. You can insert your own data as well, or make up combinations of any of the forcings. I've included a variety of forcings and responses. This one-line equation model has forcing datasets, subsets of those such as volcanoes only or aerosols only, and simple impulses such as a square step. Now, while this spreadsheet is by no means user-friendly, I've tried to make it at least not user-aggressive.

Appendix D: The Mathematical Derivation of the Relationship between Climate Sensitivity and the Trend Ratio

I have stated that climate sensitivity is equal to the ratio between the trends of the forcing and response datasets; here is the derivation. We start with the one-line equation:

T1 = T0 + λ ∆F1 (1 - a) + ∆T0 a      (Equation 1)

Let us consider the situation of a linear trend in the forcing, where the forcing is ramped up by a certain amount every year. Here are the lagged results from that kind of forcing.

Figure B1. A steady increase in forcing over time (red line), along with the situation with the time constant (tau) equal to zero, and also a time constant of 20 years. The residual is offset -0.6 degrees for clarity.

Note that the only difference that tau (the lag time constant) makes is how long it takes to come to equilibrium. After that the results stabilize, with the same change each year in both the forcing and the temperature (∆F and ∆T). So let's consider that equilibrium (steady-state) situation, in which each year's changes are identical. Subtracting T0 from both sides gives

T1 - T0 = λ ∆F1 (1 - a) + ∆T0 a      (Equation 2)

Now, T1 minus T0 is simply ∆T1. But since at equilibrium all the annual temperature changes are the same, ∆T1 = ∆T0 = ∆T, and the same is true for the forcing. So equation 2 simplifies to

∆T = λ ∆F (1 - a) + ∆T a

Dividing by ∆F gives us

∆T / ∆F = λ (1 - a) + (∆T / ∆F) a

Collecting terms, we get

(∆T / ∆F) (1 - a) = λ (1 - a)

And dividing through by (1 - a) yields

∆T / ∆F = λ

Now, out in the equilibrium area on the right side of Figure B1, ∆T/∆F is the actual trend ratio. So we have shown that at equilibrium

λ = trend ratio = ∆T / ∆F

But if we include the entire dataset, you'll see from Figure B1 that the measured trend will be slightly less than the trend at equilibrium. And as a result, we would expect to find that lambda is slightly larger than the actual trend ratio. And indeed, this is what we found for the models when compared to their own results, lambda = 1.07 times the trend ratio.
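The derivation is easy to check numerically. The sketch below (the same recursion as in the earlier sketch, with arbitrary example values of lambda and tau) ramps the forcing linearly and compares the fitted trend ratio to the lambda that was put in:

```python
# Numerical check of Appendix D: trend ratio vs lambda under a forcing ramp.
import numpy as np

lam, tau = 0.4, 5.0
a = np.exp(-1.0 / tau)
forcing = 0.03 * np.arange(200)       # steady ramp: 0.03 W/m2 per year

T = np.zeros(len(forcing))
dT = 0.0
for i in range(1, len(forcing)):
    dT = lam * (forcing[i] - forcing[i - 1]) * (1 - a) + dT * a
    T[i] = T[i - 1] + dT

years = np.arange(len(forcing))
trend_ratio = np.polyfit(years, T, 1)[0] / np.polyfit(years, forcing, 1)[0]
print(trend_ratio, "vs lambda =", lam)
# The ratio comes out a shade under lambda, because the early spin-up years
# drag the temperature trend down slightly, just as the text predicts.
```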
When the forcings are run against real datasets, however, it appears that the greater variability of the actual temperature datasets averages out the small effect of tau on the results, and on average we end up with the situation shown in Figure 4 above, where lambda is experimentally determined to be equal to the trend ratio.

Appendix E: The Underlying Math

The best explanation of the derivation of the math used in the spreadsheet is an appendix to Paul_K's post here. Paul has contributed hugely to my analysis by correcting my mistakes as I revealed them, and has my great thanks.

Climate Modeling – Abstracting the Input Signal by Paul_K

I will start with the (linear) feedback equation applied to a single capacity system—essentially the mixed layer plus fast-connected capacity:

C dT/dt = F(t) – λ*T      Equ. A1

C is the heat capacity of the mixed layer plus fast-connected capacity (Watt-years.m^-2.degK^-1)
T is the change in temperature from time zero (degrees K)
T(k) is the change in temperature from time zero to the end of the kth year
t is time (years)
F(t) is the cumulative radiative and non-radiative flux "forcing" applied to the single capacity system (Watts.m^-2)
λ is the first order feedback parameter (Watts.m^-2.degK^-1)

We can solve Equ. A1 using superposition. I am going to use timesteps of one year. Let the forcing increment applicable to the jth year be defined as f[j]. We can therefore write

F(t=k) = F[k] = Σ f[j] for j = 1 to k      Equ. A2

The temperature contribution from the forcing increment f[j] at the end of the kth year is given by

ΔTj(t=k) = f[j] (1 – exp(-(k+1-j)/τ)) / λ      Equ. A3

where τ is set equal to C/λ. By superposition, the total temperature change at time t=k is given by the summation of all such forcing increments. Thus

T(t=k) = Σ f[j] (1 – exp(-(k+1-j)/τ)) / λ for j = 1 to k      Equ. A4

Similarly, the total temperature change at time t = k-1 is given by

T(t=k-1) = Σ f[j] (1 – exp(-(k-j)/τ)) / λ for j = 1 to k-1      Equ. A5

Subtracting Equ. A5 from Equ. A4 we obtain:

T(k) – T(k-1) = f[k]*[1-exp(-1/τ)]/λ + ([1 – exp(-1/τ)]/λ) (Σ f[j]*exp(-(k-j)/τ) for j = 1 to k-1)      Equ. A6

We note from Equ. A5 that

(Σ f[j]*exp(-(k-j)/τ)/λ for j = 1 to k-1) = (Σ (f[j]/λ) for j = 1 to k-1) – T(k-1)

Making this substitution, Equ. A6 then becomes:

T(k) – T(k-1) = f[k]*[1-exp(-1/τ)]/λ + [1 – exp(-1/τ)]*[(Σ (f[j]/λ) for j = 1 to k-1) - T(k-1)]      Equ. A7

If we now set α = 1-exp(-1/τ) and make use of Equ. A2, we can rewrite Equ. A7 in the following simple form:

T(k) – T(k-1) = F[k]α/λ – α*T(k-1)      Equ. A8

Equ. A8 can be used for prediction of temperature from a known cumulative forcing series, or can be readily used to determine the cumulative forcing series from a known temperature dataset. From the cumulative forcing series, it is a trivial step to abstract the annual incremental forcing data by difference. For the values of α and λ, I am going to use values which are conditioned to the same response sensitivity of temperature to flux changes as the GISS-ER Global Circulation Model (GCM). These values are:

α = 0.279563
λ = 2.94775

Shown below is a plot confirming that Equ. A8 with these values of alpha and lambda can reproduce the GISS-ER model results with good accuracy. The correlation is >0.99. This same governing equation has been applied to at least two other GCMs (CCSM3 and GFDL) and, with similar parameter values, works equally well to emulate those model results.
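Paul_K's remark that Equ. A8 "can be readily used to determine the cumulative forcing series from a known temperature dataset" is worth seeing in practice. Rearranging Equ. A8 gives F[k] = (T(k) - (1 - α)*T(k-1))*λ/α, and a small sketch (the synthetic series below is fabricated, while α and λ are the GISS-ER-conditioned values quoted above) confirms the round trip is exact:

```python
# Invert Equ. A8 to recover cumulative forcing from a temperature series.
import numpy as np

alpha, lam = 0.279563, 2.94775            # Paul_K's GISS-ER-conditioned values
rng = np.random.default_rng(2)

F_true = np.cumsum(rng.normal(0.02, 0.05, 100))   # fabricated cumulative forcing

# forward: T(k) - T(k-1) = F[k]*alpha/lam - alpha*T(k-1)   (Equ. A8)
T = np.zeros(101)
for k in range(1, 101):
    T[k] = T[k - 1] + F_true[k - 1] * alpha / lam - alpha * T[k - 1]

# inverse: F[k] = (T(k) - (1 - alpha)*T(k-1)) * lam / alpha
F_back = (T[1:] - (1.0 - alpha) * T[:-1]) * lam / alpha
print(np.allclose(F_back, F_true))                # True: the inversion is exact
```

The same two inverse lines, applied to a real temperature dataset with assumed α and λ, are what produce the abstracted input signal that Paul_K analyzes.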
While changing the parameter values slightly modifies the values of the fluxes calculated from temperature, it does not significantly change the structural form of the input signal, nor can it change the primary conclusion of this article, which is that the AGW signal cannot be reliably extracted from the temperature series. Equally, substituting a more generalised non-linear form for Equ. A1 does not change the results at all, provided that the parameters chosen for the non-linear form are selected to show the same sensitivity over the actual observed temperature range. (See here for proof.)

291 Responses to Climate Sensitivity Deconstructed

1. Bloody brilliant. I love maths. You cannot be wrong if the answer is correct; it takes you back to the question. Willis, you are a bloody genius.

2. There's only one factor that you left out, which I think is very important: the amount of warming predicted is directly proportional to the amount of funding expected to be realized by said prediction. Which leads to the corollary: a climate modeler's income stream is inversely proportional to the amount of cooling which he allows his model to show.

3. Brilliant. Simply brilliant. (Look forward to seeing the R results, as it would be a worthwhile project for me to learn both R and this modelling issue.)

4. Without assumed water vapor feedback, CS is one degree C or less for the first CO2 doubling. Unfortunately for the Team, this key but evidence-free assumption has been shown false by the only “climate science expert” who counts, Mother Nature.

5. err, I pointed this out to you back in 2008. Nothing surprising about this. You can fit the super complicated Line by Line radiative transfer models with a simple function: delta forcing = 5.35 ln(CO2a/CO2b)

“Now, I don't have the ocean forcing data that was used by the models.”

There is no ocean forcing data. Forcings are all radiative components. The ocean is forced. The atmosphere is forced. The land is forced. They respond to this forcing.

6. I'm not a climatologist, but I've built a number of models to forecast and/or assign to specific traffics the costs of a shared transportation network. I've found, generally, that no matter how many complex factors are included, there are usually only one or two that determine the result. It's nice to know that is true for climate models as well.

7. • λ ∆F1 (1-a) + <— How much radiative forcing is applied this year [ ∆F1 (1-a) ], times climate sensitivity lambda

So, all the warmists need do is be able to predict annual radiative forcings for the next century and their predictions will be spot on, allowing for spikes like eruptions. Wait. How are they doing with SC 24? Not so good, eh. I say the warmists should release all their funding towards solar modeling.

8. It should be obvious that CO2 is among the least important potential climate forcings. Mean global T, as nearly as can be reconstructed, was about the same at 7000 ppm & 700 ppm. It was also about the same as today with carbon dioxide at 4000 to 7000 ppm during the Ordovician glaciation, although the sun was four percent cooler then.

9. Billions of dollars & hordes of researchers for something I could do w/a slide rule…

10. What is it with climate scientists? Not only do they all appear to have skipped statistics classes, they appear to have skipped the philosophy of science courses as well. “Entities must not be multiplied beyond necessity” was written by John Punch from Cork in 1639, although it is generally referred to as Occam's Razor. Excellent work, Willis.

11. [...]
So … my new finding is that the climate sensitivity of the models, both individual models and on average, is equal to the ratio of the trends of the forcing and the resulting temperatures. This is true whether or not the changes in ocean heat content are included in the calculation. It is true for both forcings vs model temperature results, as well as forcings run against actual temperature datasets. It is also true for subsets of the forcing, such as volcanoes alone, or for just GHG gases. [...]

How do you make a jaw-dropper emoticon? wOw! P.S. I want my tax money back. Meanwhile, they can pay Willis his usual day rate for the same results – what the heck, 2-3x (day rate) – and put how many billions back in our pockets?

12. “But since at equilibrium all the annual temperature changes are the same, ∆T1 = ∆T0 = ∆T, and the same is true for the forcing.”

At equilibrium, all of the temperature changes and forcing changes are 0. The first is the definition of equilibrium, and the second is one of the necessary conditions for an equilibrium to be possible.

As you wrote, you have modeled the models, but you have not modeled the climate. You have taken two time series, modeled temperature and forcing, where modeled temperature is a function of the forcing. From those two correlated time series you have written modeled T as a linear function of changed forcing and changed temperature, where the two changes are not computed on the same

13. “Energy Secretary Ed Davey is to make an unprecedented attack later on climate change sceptics.” “In a speech, the Lib Dem minister will complain that right-wing newspapers are undermining science for political ends.” Pot to kettle!!!

14. Thanks Willis! Saves a lot of time and effort and headaches to have a simple expression like this to approximate climate models; looking forward to playing around with this.

15. Steven Mosher says: June 3, 2013 at 12:04 pm

“nothing surprising about this. You can fit the super complicated Line by Line radiative transfer models with a simple function delta forcing = 5.35 ln(CO2a/CO2b)”

Yet you still think it is not a pseudoscience?

16. Matthew writes: “… you have modeled the models, but you have not modeled the climate.”

That's exactly the point. The models do not model the climate either, and are in effect just a representation of the forcing assumptions input to them.

17. This makes intuitive sense and is long overdue to be articulated in such a clear way – thanks. Climate modellers have always been quick to demonstrate how well they can hindcast, but really they're just saying 2 + 2 – 4 = 0 and congratulating each other on figuring out the third parameter was 4. Of course their colleague was solving 2 + 6 – 8 = 0, which is equally impressive and worthy of funding. I don't have the reference in front of me, but I recall at least one GCM being criticized for including an interactive “solver” type application integrated into the parameter setting process to handle just such gaming.

18. I'm happy that Willis is understanding some of the math in simple one-line climate models, but as Steve Mosher has alluded to, there is really nothing new here. Of course the global average behavior of a complex 3-D climate model can be accurately approximated with a 0-D model, mainly because global average temperature change is just a function of energy in (forcings) versus energy out (feedbacks). But that really doesn't help us know whether either one has anything to do with reality.
Willis, you are a smart guy and a quick learner, and you have a talent for writing. Try to understand what has already been done, and build upon that… rather than reinventing it.

19. It shows again, what I suspected, that the climate models only make it all look more scientific by wrapping it in super complicated programming. The models have value when you analyse parts of the climate or weather, but when you take the average of the whole earth, and on top of that average over a year or more, and then furthermore average over 30+ models, then the only part left is the forcings and sensitivity. I have seen it stated from different sources that it more or less is so.

20. Wow, respect to you for this. Are you expecting the “Big Oil” check to arrive soon?

21. Blinded me with science. You da man with the math. To paraphrase you: so many variables, so little time. (or is that dimes)

22. Steven Mosher says: June 3, 2013 at 12:04 pm

I pointed this out to you back in 2008

err … you pointed out exactly what to me in 2008? Once again, your cryptic posting style betrays you, and the citations are of little help. A précis would be useful …

nothing surprising about this. You can fit the super complicated Line by Line radiative transfer models with a simple function delta forcing = 5.35 ln(CO2a/CO2b)

If that is what you pointed out, actually it is quite surprising that the climate models can be represented by a simple equation. There are models, and there are models. The climate models are massively complex, and more importantly, iterative models designed to model a chaotic system. Those kinds of models should definitely not be able to have their outputs replicated by a simple one-line equation.

In addition, when you find the simple model that represents the complex LBL model, you don't sneer at it as you have done with my simple model. You USE it, right? Once you find the simple model “delta forcing = 5.35 ln(CO2a/CO2b)”, you often, even usually, don't need to use the complex line-by-line model in your further calculations. And that is exactly what I have done here, USED the simple model to find further insights into the relationships. Finally, none of your objections touch the really interesting result, which is that the climate sensitivity given by the climate models is simply the trend ratio …

“Now, I don't have the ocean forcing data that was used by the models.”

there is no ocean forcing data. Forcings are all radiative components. The ocean is forced. The atmosphere is forced. The land is forced. They respond to this forcing.

Sorry for the shorthand. The radiative forcing doesn't all go to warm the surface. Some of it warms the ocean as well. I have referred to the part of the radiative forcing which has gone into the secular warming trend of the ocean as “ocean forcing”, expressed as the annual change in W/m2, and subtracted it from the radiative forcing. This gives the net forcing which warms the surface … at least in their simplistic theory. Best regards,

23. Hi Willis, I use one equation & a single forcing.

24. Fascinating and intuitively convincing (from what I know about modelling) BUT a passing comment and a friendly warning – and if you are already aware of this, my apologies. Spreadsheets generally and Excel in particular can be false friends. I was involved in a UK government programme on software quality which had as one theme the dangers of dependence on the internal mathematics/functions in spreadsheets in critical calculations.
We were mainly looking at metrological (rather than meteorological!) applications, but the point was that the internal functions/algorithms in spreadsheets, particularly Excel, could not be safely depended upon if the data concerned was other than very straightforward. We found some of the more specifically science-oriented packages were much more reliable. This was a few years ago, but I think you may still find some more reliable and tested algorithms that can be plugged into Excel on the NPL website at http://www.npl.gov.uk/ssfm/ – ssfm was the software support for metrology programme (excuse the British spelling).

25. Roy Spencer says: June 3, 2013 at 12:33 pm

I'm happy that Willis is understanding some of the math in simple one-line climate models, but as Steve Mosher has alluded to, there is really nothing new here. Of course the global average behavior of a complex 3-D climate model can be accurately approximated with a 0-D model, mainly because global average temperature change is just a function of energy in (forcings) versus energy out (feedbacks). But that really doesn't help us know whether either one has anything to do with reality. Willis, you are a smart guy and a quick learner, and you have a talent for writing. Try to understand what has already been done, and build upon that… rather than reinventing it.

Roy, with all due respect, you are a smart guy as well, but first, show me anywhere that anyone has derived the relationship that the climate sensitivity of the models is merely the trend ratio of the input and output datasets. If you cannot do that, and I've never seen it, then your objection misses the point.

Second, as a long-time computer modeler of all types of systems, I assure you that there is absolutely no “of course” that the output of a complex 3-D iterative model of a chaotic system can be replicated by a simple one-line equation. Please give me some examples if you think this is a common thing. You might start by contemplating the input and output states of Conway's game of “Life”, an extremely simple 2D iterative model, and see if you can represent the state of the output with a 0-D model … good luck.

Finally, you say that this “really doesn't help us know whether either one has anything to do with reality”. In fact it does, because the simple model shows us that the climate model results have nothing to do with reality. Paul_K has shown that result mathematically for the entire simple 0-D model, and I have shown it with respect to the sensitivities shown by that model.

26. Yes, I agree, climate models based on the politically decided UNFCCC are unscientific and are with 99% certainty WRONG.

27. Hi Willis, This is fairly common in engineering (that a complex system can be modeled by a simple equation over limited conditions/time-frame). As useful as that can be, it does not demonstrate understanding of the complex system – it merely means you can predict behavior over some limited range. OK, so what? Well, often that's good enough. I'll say that again, often that's good enough. The problem is when a real event occurs that invalidates the simple equation (black box) and the black box no longer produces useful output. So there's a difference between something that's useful, and really understanding what's under the hood. In the long run, you are better off understanding the complex system, but folks should understand that's not required for something to be useful.

28. Is it their way to keep energy conservation under control?

29.
Matthew R Marler says: June 3, 2013 at 12:23 pm

“But since at equilibrium all the annual temperature changes are the same, ∆T1 = ∆T0 = ∆T, and the same is true for the forcing.”

“At equilibrium, all of the temperature changes and forcing changes are 0. The first is the definition of equilibrium, and the second is one of the necessary conditions for an equilibrium to be possible.”

Apologies for the lack of clarity, Matt, I should have said “steady state” rather than “equilibrium”, as the forcing and temperature are both continually rising (see figure B1).

30. Willis, I think you have found an underlying fact about the current status of climate science. Since the 1980′s there have really been only cosmetic changes in AGW predictions despite an order of magnitude increase in resolution and complexity of GCM models. Most of the “progress” (IMHO) seems to have been in fine tuning various “natural/anthropogenic” variations in aerosols to better fit past temperature response. Uncertainties still remain about natural feedbacks – especially clouds. Looking at your equation: Tau is the relaxation time for the climate system to react to any sudden change in forcing. This could be a volcano, a meteor, aerosols or CO2. Climate models including ocean/atmosphere interactions (for example GISS) seem to point to a value for Tau of ~10-15 years. Your equation works by simply taking direct CO2 forcing (MODTRAN) each year to be DF = 5.3 ln(C/C0), where C − C0 = the increase measured using the Mauna Loa data extrapolated back to 1750. So simply taking lambda as the climate sensitivity (hoping latex works!)

$\Delta{T} = \Delta{T}_{0}(1 - e^\frac{-t}{15})$

where $\Delta{T}$ is the transient temperature response and $\Delta{T}_{0}$ is the equilibrium temperature response. $\Delta{T}_{0} = \lambda\Delta{S}$, and then taking Stefan–Boltzmann to derive the IR energy balance, or in terms of feedbacks

$\Delta{T}_{0} = \frac{\Delta{S}}{3.75 - F}$

and for equilibrium climate sensitivity for a doubling of CO2

$ECS = \frac{F_{2x}\Delta{S}}{G_0(1 - f)}$

To calculate the CO2 forcing, take a yearly increment of $\Delta{S} = 5.3 \ln(\frac{C}{C_0})$, where C and C0 are the values before and after the yearly pulse. All values are calculated from seasonally averaged Mauna Loa data smoothed back to an initial concentration of 280 ppm in 1750. Each pulse is tracked through time and integrated into the overall transient temperature change using:

$\Delta{T}(t) = \sum_{k=1}^N (\Delta{T}_{0}(1 - e^\frac{-t}{15}))$

$\Delta{T}_{0}$ was calculated based on an assumed ECS of 2.0C. The results can then be compared to the observed HADCRUT4 temperature anomalies – see here. If we assume an underlying 60 year natural oscillation (AMO?) superimposed onto a CO2 AGW signal, then at worst ECS works out to be ~2C. More details – here.

31. One line equations may work to emulate the crude GISS models, but there is talk around town that the NCAR/UCAR CESM takes computational and algorithmic advantage of the vibrations of the massive crystal dome under the City of Boulder. There is no way to simplify and match that.

32. “which is that the climate sensitivity given by the climate models is simply the trend ratio …”

woops…..so even if they are putting in all of the other crap……all that stuff is doing is cancelling itself out

33. If 2 computer systems are functionally equivalent, that is, they produce the same outputs from the same inputs, then they are logically equivalent, that is, the operations of one can be transformed into the operations of the other.
Which means 99.99% of the code in the climate models doesn't actually do anything significant to the outputs. And buried within them is the logical equivalent of Willis' equation. The climate modellers must surely know this.

34. Roy Spencer says: June 3, 2013 at 12:33 pm

Of course the global average behavior of a complex 3-D climate model can be accurately approximated with a 0-D model, mainly because global average temperature change is just a function of energy in (forcings) versus energy out (feedbacks).

Coming from a simulation background, I've been saying for years that all they did was pick a CS they liked, and then adjusted the other knobs until they liked the results. But what's really telling is that their 3D results aren't very good; it's only when they average them all together that they have something they can print while not hiding their faces.

Figure 2 is a difference map showing the difference between the SST model and observations of total cloud cover for July in the continental United States.

We examined the SST model vs. the observations for the summer season in the region of the continental and coastal United States (lower 48 states); the most dramatic differences between the simulated model data and the observation data are seen in the variable of cloud cover. It seems that the misrepresentations of total cloud cover in the model tend to cause simulations of other variables to be off target as well. For example, in the Gulf of Mexico, the total cloud cover is overestimated by 8.2% to nearly 40% (Figure 2). Consequently, precipitation is also overestimated, by 0.5 mm to over 10 mm. In addition, too much cloud cover would lead to lower net solar radiation; in a large portion of the Gulf, it is underestimated by over 100 W/m2, because intuitively if there are more clouds, less sunlight can penetrate to the surface. Model images of the Pacific Coast region show highly underestimated cloud cover, by approximately 25% to 75%. This leads to a higher net solar radiation than expected (by 12 W/m2 to almost 130 W/m2), because solar radiation is not reflecting off of clouds, and therefore more radiation can get through into the atmosphere. More radiation being absorbed, and then re-radiated, would lead to higher than expected surface air temperatures, shown in model images as much as 16.2°C higher than the observation data (Figure 3). There is no significant difference, however, in modeled vs. observed precipitation, perhaps because July is a dry month for this region. East of the Himalaya Mountains, the model underestimates the Surf_Temp by between 3°C and 30°C. Moving eastward, however, it is found that the difference between the model and the observations decreases. West of the Himalayas, towards India, Pakistan, Afghanistan, and Iran, the model overestimates the surface air temperature. Surface air temperature is directly influenced by insolation (incoming sunlight) reaching the surface. One would expect that the absorbed solar radiation at the surface should be less as well. This was confirmed by the difference plot below (see Fig. 3). For the most part, where solar radiation was overestimated, the surface air temperature was overestimated, and where solar radiation was underestimated, the temperature was underestimated. Moving towards the west away from the Himalayas, the model tends to underrepresent precipitation in India by 8 to 16 mm/day. Over the Tibetan Plateau region itself, the model estimates too much precipitation (3 mm to 6 mm).
In addition, in the Bay of Bengal, the model also predicts too much precipitation. Over the South China Sea, precipitation in the model is less than the observed. The increased precipitation over Mongolia and the Gobi Desert could be linked to the exaggerated amount of low clouds in the region. (See Figure 3.) This is because most of the rainmaking clouds are low clouds. Basically they can't simulate any random specific area correctly, but it's close if they average all of their errors together.

35. So essentially, we have paid a trillion dollars for this equation and published (apparently) 100,000+ papers on it, and Willis has reduced all the bumph down to an equation with an R^2 fit of 1.00. Mosher says there is nothing new here, but in the previous post when Willis used this equation as the blackbox equation, he protested that this isn't the equation used by IPCC nobility – a disingenuous critique, given what he apparently knew already. Roy Spencer made the same criticism, but I would argue to these two gentlemen that all the rest of us got a hell of a good education out of his effort, because those in the know weren't prepared to present this revelation to the great unwashed. The consensus synod has less to sneer and much to fear from the work of this remarkable man.

36. MJB says: June 3, 2013 at 12:31 pm

“…of funding. I don't have the reference in front of me, but I recall at least one GCM being criticized for including an interactive “solver” type application integrated into the parameter setting process to handle just such gaming.”

WHAT? It is frowned upon when they train the parameterization automatically? It is expected that they do it manually? What for? To uphold a pretense of scientific activity? The cult is getting ridiculouser by the day.

37. … They function indistinguishably from a simple one-line equation. …

In electronics, Thevenin's* theorem states that any linear black box circuit, no matter how complicated, can be replaced with a voltage source and an impedance. Since a linear circuit, no matter how complicated, is modeled by a set of linear equations, we can extrapolate that any set of linear equations can be replaced by one linear equation if all we want is the overall system response. Your results make it a pretty good bet that the climate models are predominantly linear. Given that we're talking about thermodynamics, … yep, I think that might be a problem.

*http://en.wikipedia.org/wiki/Th%C3%A9venin%27s_theorem

Thevenin is a great shortcut for circuit analysis, nothing more.

38. Steven Mosher says: June 3, 2013 at 12:04 pm

I pointed this out to you back in 2008. Nothing surprising about this. You can fit the super complicated Line by Line radiative transfer models with a simple function delta forcing = 5.35 ln(CO2a/CO2b)

You really should ripen those sour grapes before you try to sell 'em… they are (again) making it appear that you are attempting obfuscation.

MiCro says: June 3, 2013 at 1:35 pm

“Coming from a simulation background I've been saying for years that all they did was pick a CS they liked, and then adjusted the other knobs until they liked the results.”

Yes indeed. That's the way it's looked for quite some time. (won't mention harryreadme)

Willis Eschenbach says: June 3, 2013 at 1:14 pm

Matthew R Marler says: June 3, 2013 at 12:23 pm

“But since at equilibrium all the annual temperature changes are the same, ∆T1 = ∆T0 = ∆T, and the same is true for the forcing. At equilibrium, all of the temperature changes and forcing changes are 0.
The first is the definition of equilibrium, and the second is one of the necessary conditions for an equilibrium to be possible.”

“Apologies for the lack of clarity, Matt, I should have said “steady state” rather than “equilibrium”, as the forcing and temperature are both continually rising (see figure B1).”

Fine work, nevertheless. Many thanks.

39. The fact that a GCM can match temperature and heat data tells us nothing about the validity of that GCM's estimate of Equilibrium Climate Sensitivity. Only the climate models with the correct Equilibrium Climate Sensitivity would be able to hind-cast past temperatures. However, multiple models with different ECS can all hind-cast past temperatures. Which either means that ECS doesn't determine temperature, or the models are faulty.

40. Matthew R Marler says: June 3, 2013 at 12:23 pm

“As you wrote, you have modeled the models, but you have not modeled the climate.”

since it is quite clear that the models haven't modeled climate, anything that models the models is also not going to model climate.

41. The models are pretty simple really. Each cell in the atmosphere has a set of inputs and outputs on each side, plus top and bottom. There's a set of rules that does the math based on each input, and propagates the results to each output. There are cells whose sides are dirt, and ones whose sides are water. There are 14-15 cells for each area, from a little below the surface up to space. And a grid of cells to cover the earth; the more cells, the more accurate the results are, but the longer the simulations take to run. Climatologists have been insisting that the reason their results are off is they can't make the grids small enough. So bigger computers are required. When it's initialized, without stepping “time”, inputs are propagated to outputs until the outputs become stable. This accounts for the output of one cell changing the input of another. They do this until each cell is initialized to the state it's defined to be in; then they provide a forcing, and let the clock step forward one tick; then each cell runs its calculations until the output stabilizes. Then they step the clock, and repeat. This is the same process used by linear simulators like SPICE; in fact you could replicate the equations of each cell with SPICE, it would just be really slow, and more difficult to twist knobs. But it's just a bunch of equations to solve. This link is to a very good document on the basics of GCMs.

42. commieBob says: June 3, 2013 at 1:53 pm

“Thevenin's theorem…”

Yep. I've never been able to look at any statement about climate models without visions of Thevenin–Norton equivalents and Kirchhoff equations zapping through what's left of my brain.

43. Philip Bradley says: June 3, 2013 at 1:33 pm

If 2 computer systems are functionally equivalent, that is they produce the same outputs from the same inputs, then they are logically equivalent, that is the operations of one can be transformed into the operations of the other. Which means 99.99% of the code in the climate models doesn't actually do anything significant to the outputs. And buried within them is the logical equivalent of Willis' equation. The climate modellers must surely know this.

I doubt it. The climate models, like Topsy, have “jes' growed”. In addition, it's a recurring problem with iterative models, which is that even if you get the “right” answer, you don't know how you got there … but my “black box” analysis shows how. So I don't think the modelers knew that.

44. Nice work, Willis and collaborators.
As with all clear math, beautiful and compelling. Another answer to Dr. Spencer is that his comment applies to figure three, but not to an important part of figure four, and smacks of sour grapes, especially since his own recent simple model did not work out so well. A surprising observation is that so many different GCM models (and an ensemble!) gave ‘exactly’ the same trend ratio answer to the relationship between forcing input and temperature output, even though from figures two and three they obviously do not have the same lambda, showing the models themselves do differ in important sensitivity respects (sensitivity itself being primarily driven by positive water vapor feedback and clouds, since the ‘feedback neutral’ sensitivity from Stefan–Boltzmann is always between 1 (theoretical black body) and the model-grid-specific, real earth ‘grey body’ 1.2). A possible reason is that each model is ‘tuned’ (parameterized with things like aerosols and within-grid-cell cloud and convection microprocesses) to hindcast past temperature as accurately as possible. Even though those tunings differ by model, they all produce the same temperature result, so by intent the same trend ratio as evidenced in Figures 2 and 3. This is another way to show that they are therefore unlikely to accurately predict the future temperature or true sensitivity, as you have already observed. It will be most interesting to learn what you and your collaborators do next with these most interesting results. One suggestion might be to use the one line equation and its corollaries to ‘filter out’ the known forcings (like CO2) temperature consequences to estimate the ‘natural’ variability of temperature over the periods where data of different levels of certainty exists – satellite era greatest, 1880 or so least. Said differently, your new tools might go a long way on the attribution problem. Can a 60-year natural cycle be extracted? Can the pause be explained as a function of whatever happened from 1945-1965? Lots of potentially interesting stuff, without spending more billions to spin up the supercomputers for months at a pass.

45. Willis, “So … my new finding is that the climate sensitivity of the models, both individual models and on average, is equal to the ratio of the trends of the forcing and the resulting temperatures.”

I think calculus gives a reason for this. Idealized, trend_T ≅ dT/dt, trend_F ≅ dF/dt and, with many caveats as discussed in your previous thread, dT/dF = (dT/dt)/(dF/dt).

46. Willis – I am surprised that you are surprised by what you found. It is an inevitable result of their process. See IPCC report AR4 Ch.9 Executive Summary: “Estimates of the climate sensitivity are now better constrained by observations.” “Constrained by observation” means that they constrained their models in order to get climate sensitivity to match observed temperature. Forcings are the primary inputs to the models, so across the models you must necessarily get the relationship that you noticed.

47. Rud Istvan says: June 3, 2013 at 2:09 pm

… It will be most interesting to learn what you and your collaborators do next with these most interesting results. One suggestion might be to use the one line equation and its corollaries to ‘filter out’ the known forcings (like CO2) temperature consequences to estimate the ‘natural’ variability of temperature over the periods where data of different levels of certainty

Thanks for your kind words, Rud. I wish I had collaborators; it's just me and my computer.
Regarding what's next, I want to apply the equations on a gridcell-by-gridcell basis using the CERES data. As you point out, the interesting thing about the one-line equation is that we can use it to see both when and where the climate is NOT responding as expected to the forcing.

48. Rud Istvan says: June 3, 2013 at 2:09 pm

“A surprising observation is that so many different GCM models (and an ensemble!) gave ‘exactly’ the same trend ratio answer to the relationship between forcing input and temperature output”

This is because they're all almost identical; at most they adjust the equations in the cells, mostly they just set the knobs differently.

Mod, I think my last post got eaten, can you check for it? This post will make more sense after reading the last one. [Reply: Prior comment found in Spam folder. Rescued & posted. — mod.]

49. Colorado Wellington says: June 3, 2013 at 1:21 pm

“One line equations may work to emulate the crude GISS models, but there is talk around town that the NCAR/UCAR CESM takes computational and algorithmic advantage of the vibrations of the massive crystal dome under the City of Boulder. There is no way to simplify and match that.”

I'm keepin' an eye on you, dude. It's not clear yet whether you are a man of good humor or if you simply moved to Boulder to get higher. Or something.

50. PS. Willis – I don't want that last comment of mine to be taken as critical of your analysis. What you have done, very succinctly, is to show that they really did do what they said they did, but you have done it in a way that shows people very clearly how that renders the models useless for most purposes, and why the IPCC say they only do “projections”, not predictions. The crying shame is that people have deliberately been led to believe that the models make predictions which can sensibly be used for policy purposes.

51. What Willis is presenting is a difference equation that replicates the global average of the climate models – the very same average that climate modellers themselves put forward as a prediction of future temperatures. Far from being trivial, difference equations are used routinely as a shorthand method to model dynamic systems such as weather and climate that are derived from differential equations. When two methods calculate the same answer, and one takes seconds and costs pennies, and the other takes years and costs hundreds of millions, the one that takes seconds is significantly more valuable than the method that takes years. Every time it runs it saves millions of dollars. Computer Science invests millions each year in trying to find faster numerical methods to solve problems. For example:

Difference Equations and Chaos in Mathematica, Dr. Dobb's Journal, November 1997, pp. 84-90:

A difference equation (or map) is of the form x_n+1 = f(x_n, x_n-1, …) which, together with some specified values or initial conditions, defines a sequence {x_n}. Despite the seemingly simple form, difference equations have a variety of applications and can display a range of dynamics. Since maps describe iterative processes, they come up frequently in computer science. Also, many of the approximations in numerical analysis (such as numerical solutions of differential equations) typically approximate continuous dynamical systems using discrete systems of difference equations. Modeling a map using a computer is equivalent to studying the process of functional composition or functional iteration.

52. Steve McIntyre is plainly a friend of Willis.
Also, Phil Jones obviously has none, at least within climate science.

53. DirkH says: June 3, 2013 at 12:26 pm

“Steven Mosher says: June 3, 2013 at 12:04 pm ‘nothing surprising about this. You can fit the super complicated Line by Line radiative transfer models with a simple function delta forcing = 5.35 ln(CO2a/CO2b)’ Yet you still think it is not a pseudoscience?”

Understand what Willis has done. He's done what we do all the time in modelling. You take a complex system that outputs thousands of variables. You pick a high-level general metric (like global temperature). You fit the inputs to that output. You now have a model of the model, or an emulation of the model. What this emulation can't do is tell you about regional climate, or SST by itself, or arctic amplification. Getting this kind of fit is a good test of the model, which is why as an old modeler I suggested it years ago. This is nothing new. There are other ways to do this that are more sophisticated (and give you spatial fields); it's one of the ways you can find bugs in the models. I've posted on that as well.

54. ferd berple says: June 3, 2013 at 2:05 pm

Re: Matthew R Marler

… since it is quite clear that the models haven't modeled climate, anything that models the models is also not going to model climate.

55. Luther Wu says: June 3, 2013 at 2:17 pm

“I'm keepin' an eye on you, dude. It's not clear yet whether you are a man of good humor or if you simply moved to Boulder to get higher. Or something.”

Still trying to sort it out myself. “Or something” is the safest bet.

56. Mike Jonas writes: “What you have done, very succinctly, is to show that they really did do what they said they did, but you have done it in a way that shows people very clearly how that renders the models useless for most purposes, and why the IPCC say they only do “projections” not predictions. The crying shame is that people have deliberately been led to believe that the models make predictions which can sensibly be used for policy purposes.”

This is spot on. The models accurately reproduce past temperature change because they fine-tune natural/aerosol “forcings”. They have little predictive value because they cannot know in advance how such future natural forcing will evolve. That is why their “projections” fan out with massive error bars to 2100, in order to cover all eventualities.

57. Philip Bradley says: June 3, 2013 at 1:33 pm

“If 2 computer systems are functionally equivalent, that is they produce the same outputs from the same inputs, then they are logically equivalent, that is the operations of one can be transformed into the operations of the other. Which means 99.99% of the code in the climate models doesn't actually do anything significant to the outputs. And buried within them is the logical equivalent of Willis' equation. The climate modellers must surely know this.”

This long GISSE paper will show you that models output a great deal more than just a time series of global average temperature. That's what the code is doing. Buried within them is energy conservation, which is the basis of the simple relations, as Roy Spencer says. Modellers do surely know that – they put a lot of effort into ensuring that mass and energy are conserved. But there are innumerable force balance relations too.

58. err … you pointed out exactly what to me in 2008? Once again, your cryptic posting style betrays you, and the citations are of little help. A précis would be useful …

1. read the thread.
2. I suggested that Lucia use Lumpy to hindcast
3.
I pointed out to you how well one could hindcast models with two-parameter Lumpy. More background here, as the links have disappeared from the CA thread.

“nothing surprising about this. You can fit the super complicated Line by Line radiative transfer models with a simple function delta forcing = 5.35 ln(CO2a/CO2b)”

“If that is what you pointed out, actually it is quite surprising that the climate models can be represented by a simple equation. There are models, and there are models. The climate models are massively complex, and more importantly, iterative models designed to model a chaotic system. Those kinds of models should definitely not be able to have their outputs replicated by a simple one-line equation.”

It's not at all surprising, which is why I suggested to Lucia that she do this exercise back in 2008. It's pretty well known. It's a standard technique called emulation: you emulate the model. This is STEP ONE in any sort of sensitivity analysis where the parameter space is too large to exercise in a full factorial manner.

“In addition, when you find the simple model that represents the complex LBL model, you don't sneer at it as you have done with my simple model. You USE it, right? Once you find the simple model ‘delta forcing = 5.35 ln(CO2a/CO2b)’, you often, even usually don't need to use the complex line-by-line model in your further calculations. And that is exactly what I have done here, USED the simple model to find further insights into the relationships.”

WHO SNEERED? Why would I sneer at an experiment I proposed back in 2008? It's cool. It's actually a good check on the models. But it's not surprising. It tells you the models are working.

“Finally, none of your objections touch the really interesting result, which is that the climate sensitivity given by the climate models is simply the trend ratio …”

Huh? Refer to Nick's calculus.

“‘Now, I don't have the ocean forcing data that was used by the models.’ there is no ocean forcing data. Forcings are all radiative components. The ocean is forced. The atmosphere is forced. The land is forced. They respond to this forcing. Sorry for the shorthand.”

No problem.

59. “Understand what Willis has done. He's done what we do all the time in modelling. You take a complex system that outputs thousands of variables. You pick a high-level general metric (like global temperature). You fit the inputs to that output. You now have a model of the model, or an emulation of the model. What this emulation can't do is tell you about regional climate, or SST by itself, or arctic amplification.”

If it was really that simple, then instead of picking global temperature as your high-level metric, you could pick regional climate for one particular location, or Arctic amplification. Then fit the input to that output, and you have a simple one-line model that will tell you what the climate in Albuquerque will be in 2100, or whatever. And then you would be able to tell people that you can project regional climates, too. Albeit, not all with the same model settings. I'm not sure if this is a feature of a special property of the model equations, like global energy conservation, or if it's just simple curve-fitting. If the latter, then it ought to work equally well for any 1D function of the output. And if so, then it would appear the big models can't tell you anything about regional climate or SST or Arctic amplification either.

60.
Willis Eschenbach: “Apologies for the lack of clarity, Matt, I should have said “steady state” rather than “equilibrium”, as the forcing and temperature are both continually rising (see figure B1).”

At “steady-state” the forcings and the temperatures are constant. At steady-state, the inflow and outflow of heat in any voxel, parcel, compartment (etc.) of the climate system exactly balance, and the temperature remains constant. That's the definition of “steady state.” A condition for the steady state to be possible is that overall input, what is called “forcing” in this context, be constant. If you relax further and go to “stationary distribution”, then the changes have constant means, but they are not equal at all times. Figures 2 and 3 display interesting relationships between your parameter estimate lambda and the forcing, across models, but that's all.

61. So, you have described your ability to reconstruct with a simple two-parameter formula the total conservation of energy in the system. Interesting, but pretty useless. A model should do exactly this. I would be surprised if models didn't conserve the Earth's energy balance, which is a direct effect of radiative energy coming in (forcings) and going out, at the equilibrium temperature. Or did you think that the Earth is a sponge absorbing all energy and not releasing it? The problem of climate change is NOT the amount of energy, but the equilibrium temperature, and the distribution of it, and the effect on the hydrosphere and biosphere. I suppose that your “one line” model is not sufficiently evolved to quantify sea level change and humidity distribution, don't you?

62. Colorado Wellington says: June 3, 2013 at 2:37 pm

Luther Wu says: June 3, 2013 at 2:17 pm “I'm keepin' an eye on you, dude. It's not clear yet whether you are a man of good humor or if you simply moved to Boulder to get higher. Or something.” Still trying to sort it out myself. “Or something” is the safest bet.

A (much) earlier post of yours proved that you had a lick of sense. One person told me yesterday that HAARP is behind the OKC tornadoes, and they knew that because of the abrupt turns some number of storms have made to avoid prime targets such as Tinker Air Force Base (or my house). Another person told me just this morning that God was cleaning up OKC because there are homosexuals here. It's a wild world, I tell ya.

63. Thanks Willis. As Gary says at 1:47pm: “…all the rest of us got a hell of a good education out of [your] effort…”. 50 years ago I was in “remedial class” for maths, and I can follow this. [Maybe I'd better add that nothing I have designed or built since then has resulted in catastrophic failure.] Next question is, which of these one-liners to print on T-shirts? Equation 1 might be ok for those of us who are horizontally challenged …

64. ferd berple: “since it is quite clear that the models haven't modeled climate, anything that models the models is also not going to model climate.”

I have no quarrel with that. The obvious implication is that the parameter lambda is not related to anything in the climate; it's just something that allows Willis' model of forcing and model output to reproduce model output with high accuracy. I think it is remarkable that for all of their complexity, the models can be modeled by a really simple bivariate linear autoregressive model.

65.
Matthew R Marler says: June 3, 2013 at 3:04 pm

Willis Eschenbach: “Apologies for the lack of clarity, Matt, I should have said “steady state” rather than “equilibrium”, as the forcing and temperature are both continually rising (see figure B1).”

“At “steady-state” the forcings and the temperatures are constant.”

Thanks, Matt. You are talking about a different steady state. In Figure B1, both the forcing and the temperature are increasing at a steady rate. That's the situation shown in that Figure, regardless of what you call it.

66. Nullius in Verba says: June 3, 2013 at 2:59 pm

“And if so, then it would appear the big models can't tell you anything about regional climate or SST or Arctic amplification either.”

Run a climate model twice; it should give you two different results, unless it is a trivial (unrealistic) model. Run it many times and the results give you a boundary. If the model is accurate, then future climate lies somewhere within that boundary, and the edges of the boundary give you natural variability. However, no model can (accurately) tell you where within the boundary the future climate lies. The current climate science practice of averaging the runs and calling this the future is mathematical nonsense. Which is why the models have gone off the rails.

67. Roy Spencer says: June 3, 2013 at 12:33 pm

Oh, Roy. I think you miss the point. So as a non-scientist, I will make it to you. Science spends a lot of time trying to reduce difficult concepts and physical theories and observations down to one simple equation. I think that Stephen Hawking and field unification theories are the quintessential example of this striving. While this is the (admirable) goal of all science, after Einstein's E=mc2 I think most scientists were so overawed by that equation that they started striving for that beauty and simplicity in all other areas of science (such as climate science), where such beauty and simplicity is simply not possible (at least, not possible without sacrificing much of the truth in the process). As a result, some scientists have reduced the world down to something that it really is not (Mosh appears to make that mistake constantly, as he shows in his dialogue with Willis here). The models may correlate well with one equation, but as the old saw about models and assumptions says: garbage in, garbage out. It is also called “missing the forest for the trees.” Also, read “o sweet spontaneous” by e.e. cummings for a bit more of what I mean.

68. I suspect finding linear behavior is actually evidence that we are near an attractor. These kinds of simple formulas fall apart as the system gets away from the attractor and the system becomes more chaotic.

69. Richard M says: June 3, 2013 at 3:23 pm

“I suspect finding linear behavior is actually evidence that we are near an attractor. These kinds of simple formulas fall apart as the system gets away from the attractor and the system becomes more chaotic.”

Since the system is already chaotic, “becoming more chaotic” could be viewed as increasing amplitudes of various forces/feedbacks, in which case the system would still return to trend, wouldn't it? What sort of attractor do you envision?

70. This long GISSE paper will show you that models output a great deal more than just a time series of global average temperature. That's what the code is doing. Buried within them is energy conservation, which is the basis of the simple relations, as Roy Spencer says. Modellers do surely know that – they put a lot of effort into ensuring that mass and energy are conserved.
But there are innumerable force balance relations too.

I should have been more precise and said: Which means 99.99% of the code in the climate models doesn't actually do anything significant to the surface temperature outputs. The models may well be getting better at modelling air column turbulence, etc., but these improvements have no significant effect on the surface temperature predictions. Therefore these model improvements are irrelevant to the metric everyone cares about. And 99.99% of the code in the models could be removed without affecting the surface temperature predictions. But hugely complicated ‘sophisticated’ climate models impress the mug punter. Who would be decidedly unimpressed by a surface temperature prediction from a one-line computer program. Even if you told him/her that the prediction from the one-line program was identical to that from the ‘sophisticated’ model.

71. [snip - more Slayers junk science from the banned DOUG COTTON who thinks his opinion is SO IMPORTANT he has to keep making up fake names to get it across -Anthony]

72. Roy Spencer says: June 3, 2013 at 12:33 pm

“I'm happy that Willis is understanding some of the math in simple one-line climate models, but as Steve Mosher has alluded to, there is really nothing new here.”

Willis Eschenbach already answered this. Further, when the Met Office asks for the next new supercomputer, do British MPs and the public really know that the main result could equally be computed on the back of an envelope?

73. Steven Mosher says: June 3, 2013 at 2:49 pm

“err … you pointed out exactly what to me in 2008? Once again, your cryptic posting style betrays you, and the citations are of little help. A précis would be useful …”

1. read the thread.
2. I suggested that Lucia use Lumpy to hindcast
3. I pointed out to you how well one could hindcast models with two parameter lumpy

Thanks, Steven. So your point is that you noted that the models could be fit with Lumpy? My congratulations, but you are missing the point. You are correct that the math has been there all along; heck, it's what Kiehl used in 2007. However, neither Kiehl, nor you, nor Nick Stokes, nor anyone as far as I know, has noticed that the various climate sensitivities displayed so proudly by the models are nothing more than the trend ratio of the output and input datasets. Kiehl got the closest, but he didn't find the key either; he thought it was total forcing. That is the finding I'm discussing in this post, and it is the finding you haven't touched.

74. Nick Stokes says: June 3, 2013 at 2:12 pm

“So … my new finding is that the climate sensitivity of the models, both individual models and on average, is equal to the ratio of the trends of the forcing and the resulting temperatures.”

I think calculus gives a reason for this. Idealized, trend_T ≅ dT/dt, trend_F ≅ dF/dt and, with many caveats as discussed in your previous thread, dT/dF = (dT/dt)/(dF/dt).

Thanks, Nick. I thought that at first, but actually, the trend of T is often radically different from dT/dt, at least the trend I'm using, which is the ordinary least squares trend. Nor is the trend ratio (the ratio of least squares trends) dT/dt divided by dF/dt, as you state. Instead, it is

$\displaystyle \text{Trend Ratio} = \frac{\sum t \, T(t)}{\sum t \, F(t)}$

where t is the time of the observation and F(t) and T(t) are the observations at time t. So I fear that the calculus you used doesn't help. However, I'm sure someone with more math-fu than I have will give the answer.
[UPDATE]: I should add that the above equation is only true when both datasets are expressed as anomalies about their respective averages. This, of course, doesn't change the trend of the datasets.

75. Willis – wish I could understand the science. Never mind, the CAGW architects are considering moving to Plan B:

3 June: SMH: Time to switch to ‘Plan B’ on climate change: study

Climate policy makers must come up with a new global target to cap temperature gains because the current goal is no longer feasible, according to a German study. Limiting the increase in temperature to 2 degrees Celsius since industrialisation is unrealistic because emissions continue to rise and a new global climate deal won't take effect until 2020, the German Institute for International and Security Affairs said. “Since a target that is obviously unattainable cannot fulfill either a positive symbolic function or a productive governance function, the primary target of international climate policy will have to be modified,” said Oliver Geden, author of the report, which will be released today as talks begin in Bonn…

76. Roy Spencer says: June 3, 2013 at 12:33 pm

“I'm happy that Willis is understanding some of the math in simple one-line climate models, but as Steve Mosher has alluded to, there is really nothing new here. Of course the global average behavior of a complex 3-D climate model can be accurately approximated with a 0-D model, mainly because global average temperature change is just a function of energy in (forcings) versus energy out (feedbacks). But that really doesn't help us know whether either one has anything to do with reality.”

Roy, maybe it's obvious to you that these spherical cows are round, but it's not obvious to the public. The public has been fed a steady diet of “these are sooper-dooper hi-tech dilithium-crystal-powered supercomputer models that use enough computer power to do all the calculations of the Manhattan project in 3 milliseconds”, and what Willis seems to have discovered is that the results are indistinguishable from something that can run on a Commodore 64. So there are two conclusions that one could draw: 1) these vast, sophisticated models are producing results that are trivially different from simple formulae, or 2) these models are just simple formulae. Neither conclusion is particularly reassuring.

77. Willis, “nor Nick Stokes, nor anyone as far as I know, has noticed that the climate sensitivities reported by the models is nothing more than the trend ratio of the output and input datasets.”

As I noted above, the idea is just dT/dF = (dT/dt)/(dF/dt). If you computed the sensitivity as the ratio of F and T increments measured by trend*time over the same time interval, which is one conventional way, then the relation would be exact, as a matter of algebra. I tried to see what the spreadsheet did, but it asked me if I wanted to update the link to external files, and then gave a whole lot of #REF errors. You mentioned Lumpy. Lucia applied Lumpy to surface temperature, measured and modelled. It works quite well for both, so it is hardly a failing of models that they follow this simple formula. If they are modelling temperature well, then they have to.

78. The fact that you derive an r² of 1.00 should have told you something, Willis, something really important. As I understand it, climate models calculate temperature changes from forcing changes for individual cells, n° by n°, using the same algorithm everywhere. The results are combined by the model into a composite for the entire globe.
The composite differential temperature value is unknown until all the cells are processed, but it should be no surprise that the composite follows the same general forcing equation as the cells. You've just discovered a way to back-calculate a composite lambda. As someone put it on an earlier thread, you may have just constructed a model of a model, re-inventing the…travois. Listen to Roy. That said, the models are garbage to start with, so finding any useful surprises from studying them was like trying to make a silk purse out of a sow's ear. The 1.00 r² is a measure of the uniformity of the GCM algorithms, not the validity of your math, flawless though it is. But if this is your lowest content post, you're still way ahead of most. Keep it coming.

79. Willis, “Appendix B: Physical Meaning” contains:

Figure 4. One-line equation applied to a square-wave pulse of forcing. In this example, the sensitivity “lambda” is set to unity (output amplitude equals the input amplitude), and the time constant “tau” is set at five years.

I've checked with two browsers; there is no image there to load. The designation “Figure 4″ was used before. In “Appendix D: The Mathematical Derivation of the Relationship between Climate Sensitivity and the Trend Ratio” there is a “Figure B1″. Standard procedure would designate that as Figure D1. Offhand it looks like Appendix B is missing a graph that should be labeled Figure B1, while that graph in Appendix D should be labeled Figure D1. In your paper, I'm especially noticing Figure 2, “Blue dots show equilibrium climate sensitivity (ECS)…” with the Levitus ocean heat content data added. My monkey brain wants to see the pattern of an obvious log curve. It's only five points; there'd have to be a lot more to say anything definitive.

[UPDATE] Thanks, fixed, graph inserted, described as “B2″.

80. Phitio says: June 3, 2013 at 3:07 pm

“So, you have described your ability to reconstruct with a simple two-parameter formula the total conservation of energy in the system. Interesting, but pretty useless. A model should do exactly this.”

This is missing the point. Energy conservation is a must, but that does not necessitate THIS model response. Models could do an infinite number of different things: they could, for example, produce more clouds to counteract increased forcing, they could produce more clouds during certain ocean cycles, faster transportation of heat to the poles, respond in multidecadal cycles to long past forcing changes, etc., with eventually lower or very low temperature responses to greenhouse gases. It is amazing to see that models assume climate responds to forcings in about the most trivial way imaginable. And as we all know, temperature trends are too high, regional forecasts are science fiction, and humidity predictions are wrong.

81. Phitio says: “I suppose that your “one line” model is not sufficiently evolved to quantify sea level change and humidity distribution, don't you?”

Good question, and since the models typically get these more wrong than temperature, I am willing to bet that you find a similar relationship. It's just a question of changing this “one line equation” around slightly to model the input:output space slightly for a different output variable. In other words, I am willing to bet that for every output variable you find a simple linear equation. My hunch on this comes from reading the Harry Read Me and just knowing what we have seen in the past in terms of mistakes.
Of course, this says nothing about correctness because perhaps the climate is all about a one line equation, but then again if you can represent climate with a simple one line equation just as well as with a complex GCM, then why bother with expensive super-computers in the first place? In that case, the super-computers bought by the tax-payers are just expensive toys when a high-end PC could do the job. 82. Enjoyed your finding a lot, but can’t say I am surprised by it. Having written a popular computer game long ago I found myself writing complicated algorithms to produce realistic effects – only to discover one late night (over beer) that all the complexity was for naught – 99% of the result could be reproduced by a very simple one line equation. What I learned is that it is easy to get caught up in the complexity of modeling without realizing how tiny an impact most of it has. It almost seems obvious (now) that computer modelers for climate went down the same path… One word of warning about your using your simple model to predict the outcomes of the more complex ones – beware taking your model too seriously, it may produce dramatically different results at some inflection points. All you have done so far is show it models the published results of the models – not that it models all of the behaviors. I learned long ago – NEVER take a model too seriously unless you can repeatedly compare it to real test results for fine tuning. Anyway congrats! 83. “Figure 4. Large red and blue dots are as in Figure 3.” They’re not actually. One red dot is well off to one side. What happened? “One of the strangest findings to come out of this spreadsheet was that when the climate models are compared each to their own results, the climate sensitivity is a simple linear function of the ratio of the standard deviations of the forcing and the response. This was true of both the individual models, and the average of the 19 models studied by Forster. The relationship is extremely simple. The climate sensitivity lambda is 1.06 times the ratio of the trends. This is true for all of the models without adding in the ocean heat content data, and also all of the models including the ocean heat content data.” Standard dev or trend? Are you stating a second result here, implying it is the same thing, or accidentally stating something other than what you intended? Perhaps you could clarify. 85. Richard M says: June 3, 2013 at 3:23 pm These kinds of simple formulas fall apart as the system gets away from the attractor and the system becomes more chaotic. A single difference equation is all it takes to describe a chaotic system. Willis’s equation 1 could well be chaotic, which would make it doubly interesting with possibly huge implications for climate science. Perhaps a look at the Lyapunov exponent and dimension would prove interesting? It would sure throw a monkey wrench into climate models if Willis’s equation showed a positive exponent. In any case, it seems reasonable that the various global temperature series should have a similar Lyapunov exponent and dimension to Willis’s equation if the climate models are actually modelling climate. (A minimal Lyapunov sketch appears just below.) 86. Nick Stokes says: June 3, 2013 at 4:21 pm “nor Nick Stokes, nor anyone as far as I know, has noticed that the climate sensitivities reported by the models are nothing more than the trend ratio of the output and input datasets.” As I noted above, the idea is just dT/dF=(dT/dt)/(dF/dt) And as I noted above, dT/dF is not the trend ratio that is equal to lambda.
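Richard M’s point about chaos is easy to try numerically. The sketch below does not use Willis’s equation 1; it uses the textbook logistic map as a stand-in single difference equation and estimates the largest Lyapunov exponent by averaging log|f′(x)| along an orbit. The parameter values are the usual textbook ones, chosen only for illustration.

```python
import math

# Minimal sketch: estimate the largest Lyapunov exponent of a single
# difference equation. The logistic map x -> r*x*(1-x) is used here as a
# stand-in (it is not Willis's equation 1). A positive exponent is the
# signature of chaos that Richard M suggests testing for.
def lyapunov_logistic(r, x0=0.4, n_transient=1000, n_iter=100_000):
    x = x0
    for _ in range(n_transient):            # discard transient behaviour
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):                 # average log|df/dx| along the orbit
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n_iter

print(lyapunov_logistic(3.2))   # negative: a stable period-2 cycle, no chaos
print(lyapunov_logistic(4.0))   # about +0.693 (= ln 2): chaotic
```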
87. jorgekafkazar says: June 3, 2013 at 4:25 pm The fact that you derive an r² of 1.00 should have told you something, Willis, something really important. As I understand it, climate models calculate temperature changes from forcing changes for individual cells, n° by n°, using the same algorithm everywhere. The results are combined by the model into a composite for the entire globe. The composite differential temperature value is unknown until all the cells are processed, but it should be no surprise that the composite follows the same general forcing equation as the cells. You’ve just discovered a way to back-calculate a composite lambda. As someone put it on an earlier thread, you may have just constructed a model of a model, re-inventing the…travois. Listen to Roy. Thanks for the thought, Jorge, but neither Roy nor anyone else has noticed what I noticed, which is that the climate sensitivity displayed by any of the models is nothing more than the ratio of the input and output trends. Not only that, but this relationship is common to all of the models as well as to the average of the models. So if you (or anyone else) thinks I “re-invented” that idea, please point to someone else who has shown that to be true of the climate models, either experimentally or theoretically … I have shown both. 88. Willis, when frying your synapses, use a bear batter with Cajun spices and fry in peanut oil. Yum!!!!!!! 89. Correction: that should be a beer batter, not bear batter. Bear grease is just toooo overpowering. 90. kadaka (KD Knoebel) says: June 3, 2013 at 4:26 pm … In your paper, I’m especially noticing Figure 2, “Blue dots show equilibrium climate sensitivity (ECS)…” with the Levitus ocean heat content data added. My monkey brain wants to see the pattern of an obvious log curve. It’s only five points, there’d have to be a lot more to say anything definitive. KD, the reason for the difference is that the ocean data we have shows increasing ocean heat content, and it’s only since 1950. As a result, when we add it to any forcing dataset, it decreases the trend. How much? Well, that depends on the original trend. If the original trend is small (CM2.1), the change is larger, and if the trend is large to start with (CCSM3) the change is smaller. 91. Dr. Roy. Sorry, but you have inputs, feedback which can either add or subtract from the inputs, storage, and outputs. No forcings. 92. Greg Goodman says: June 3, 2013 at 4:57 pm “Figure 4. Large red and blue dots are as in Figure 3.” They’re not actually. One red dot is well off to one side. What happened? Thanks, fixed. I’d added some data without re-identifying the correct points. 93. Craig Moore said on June 3, 2013 at 5:07 pm: Correction: that should be a beer batter, not bear batter. Bear grease is just toooo overpowering. Bear grease? You didn’t save the bones and “choice bits” to cook down for stock? Dang, that was a waste. Bone marrow is a great source of nutrition. 94. Don says: June 3, 2013 at 4:57 pm I had the same thought. The climate modellers are doing what peddlers of predictions have been doing since long before the Oracle at Delphi. Wrap your predictions in language mysterious and unfathomable to the ordinary person. 95. Makes one wonder what came first, the assumptions, the result, or the model? I suspect the model came last to fit the results to the assumptions. 96. Correlation of one-point-ohh / Same forces on the Input / Makes the output just so / It all matches with turn-in and turn-off / Don’t be surprised if it all comes naught 97.
GlynnMhor says: June 3, 2013 at 12:27 pm ‘Matthew writes: “… you have modeled the models, but you have not modeled the climate.” That’s exactly the point. The models do not model the climate either, and are in effect just a representation of the forcing assumptions input to them.’ And to all those who continue to claim that the computer models of climate are physical theories of climate, contain physical theories of climate, or adhere to the best physical theory of climate, I ask one simple question: Where is there to be found an effect of the supposed physical theory in model runs? There is none. There is no physical theory doing some work in model runs. As Willis has shown, the relationship between input and output is no more complicated than the equation for a line on a two-dimensional graph. 98. Willis, “So I fear that the calculus you used doesn’t help.” Well, trend is generally the best estimate available of derivative. Here’s how it works in terms of trend coefficients. You’ve said: $\text{Trend ratio} = \frac{\sum_t t\,T_t}{\sum_t t\,F_t}$ and you noted that these are centred – so t=0 is the centre point. If you expand about that as a Taylor series about t=0: $T(t)=T_0+T_1 t+\tfrac{T_2}{2}t^2+\tfrac{T_3}{6}t^3+\dots$ (suffix = order of derivative) and same with F, then you find that $\sum_t t\,T_t = T_1\sum_t t^2 + \tfrac{T_3}{6}\sum_t t^4 +\dots$ The even-derivative terms have zero sums by symmetry and drop out. Unless something wild is happening, the third-derivative term will be small relative to the first. Same for F, so in the ratio the sums cancel: $\text{Trend ratio} = \frac{\sum_t t\,T_t}{\sum_t t\,F_t} \approx \frac{T_1}{F_1} = \frac{dT/dt}{dF/dt} = \frac{dT}{dF}$ 99. Perhaps I should elucidate. Input: Solar. Perhaps others? Feedbacks: CO2 (positive?), H2O (positive?), etc., because we don’t know? Output: Radiation to space. So retained energy = Input + feedback – Output. Here’s a problem. We have a good idea of input but little idea of output or feedback. I wouldn’t mind betting that we have nothing better than emission once in 24 hours for any given point. 100. “Now, out in the equilibrium area on the right side of Figure B1, ∆T/∆F is the actual trend ratio. So we have shown that at equilibrium” (equation 8). You have not “shown” anything here except that this is what a linear model is. The extra condition you have imposed is the IPCC “business as usual” scenario. What this shows is a steady rate of increase in T that will result from “business as usual” in a linear model. “When the forcings are run against real datasets, however, it appears that the greater variability of the actual temperature datasets averages out the small effect of tau on the results, and on average we end up with the situation shown in Figure 4 above, where lambda is experimentally determined to be equal to the trend ratio.” So what you have shown here is that experimentally determined results back up the linear model! Willis, you’re a warmist. ;) In fact what this shows is that if you insert a spurious -0.5C “correction” in SST in 1945, reduce the pre-1900 cooling by 2/3; carefully balance out your various guesses about volcanic dust, aerosols, black soot, CFCs and the rest you can engineer a linear model that roughly follows climate up to 2000 AD. In short, assume a linear model and tweak the data to fit the model. And that is what climate science had done by the end of last century. What is not seen in your plots from the whole datasets is the way this all falls apart after y2k when there were no volcanoes. It is that period that gives the lie to the carefully constructed myth.
That plus a detailed look at the way climate really responds to a sudden change in forcing: You may have lost interest in your last thread but I have refined your stacking idea by overlaying the eruption year and keeping calendar months aligned. I have also split out tropical, ex-tropical and separated each hemisphere. This shows really well that your “governor” goes beyond governor in the tropics and also conserves degree.days as you suspected. It also shows no net cooling even at temperate latitudes, though they do lose degree.days. Now unless anyone can find a convincing counter argument that pretty much kills off the whole concept of a linear response to radiative “forcings” and with it goes the very concept of “climate sensitivity”. We no longer need to argue about what value of CS is statistically robust or whatever, because it does not exist. ENOUGH ALREADY! 101. Sorry. That should be twice for any given point. It’s why I suspect the satellite data as much as the land data. 102. OldWeirdHarold says: 4:19 pm Commodore 64! Boy I’m sure glad I saved mine – I can be a climate scientist too! 103. But Willis, I thought the Lambada trend died out in the 1990s. 104. Nick Stokes says: June 3, 2013 at 5:33 pm “So I fear that the calculus you used doesn’t help.” Well, trend is generally the best estimate available of derivative. Here’s how it works in terms of trend coefficients. Nick, I suggest you try it with a real dataset. Take the HadCRUT dataset, see what you get … I know the equation I gave above is correct. How? I tested it against real data. 105. Greg Goodman says: June 3, 2013 at 5:40 pm … What is not seen in your plots from the whole datasets is the way this all falls apart after y2k when there were no volcanoes. It is that period that gives the lie to the carefully constructed myth. That plus a detailed look at the way climate really responds to a sudden change in forcing: You may have lost interest in your last thread but I have refined your stacking idea by overlaying the eruption year and keeping calendar months aligned. I have also split out tropical, ex-tropical and separated each hemisphere. This shows really well that your “governor” goes beyond governor in the tropics and also conserves degree.days as you suspected. It also shows no net cooling even at temperate latitudes, though they do lose degree.days. Now unless anyone can find a convincing counter argument that pretty much kills off the whole concept of a linear response to radiative “forcings” and with it goes the very concept of “climate sensitivity”. Very nicely done, my friend, very nice indeed. 106. Willis Eschenbach says: June 3, 2013 at 5:03 pm So if you (or anyone else) thinks I “re-invented” that idea What seems obvious after the fact is never quite so obvious before. How many high school physics students looking at E=mc^2 see the obvious, and wonder what the big deal was all about – while secretly wondering what happened to the 1/2? Your black box model is potentially a huge step forward in examining the math underlying the climate models. Something that has been largely overlooked and excused due to the costs involved. An interesting result might be a graph of the solution space. Maybe it is not as well behaved as climate science would like to believe. 107. Thanks, I thought you’d like it ;) 108. Willis, “Take the HadCRUT dataset, see what you get … I know the equation I gave above is correct. How? I tested it against real data.” The correctness isn’t an issue; I’m just showing the simple calculus which makes it happen.
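A quick numerical check of that calculus rule, using a made-up smooth series (invented coefficients, nothing to do with any climate dataset):

```python
import numpy as np

# Quick check (synthetic smooth series, invented coefficients): the OLS
# trend over a centred window approximates the derivative at the centre,
# so the ratio of two such trends approximates dT/dF there.
t = np.linspace(-10, 10, 201)             # centred time window
F = 0.3 * t + 0.01 * t**2                 # smooth "forcing"
T = 0.5 * F + 0.02 * np.sin(t / 3.0)      # smooth "response"

trend_T = np.polyfit(t, T, 1)[0]          # OLS slopes over the window
trend_F = np.polyfit(t, F, 1)[0]
dT_dt = np.gradient(T, t)[100]            # derivatives at the centre, t = 0
dF_dt = np.gradient(F, t)[100]

print(trend_T / trend_F)                  # ~0.51
print(dT_dt / dF_dt)                      # ~0.52: close, as the Taylor argument says
```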
Real data will be noisier, because it is, and because your model points represent various kinds of averages across model runs. 109. Greg Goodman says: June 3, 2013 at 5:40 pm We no longer need to argue about what value of CS is statistically robust or whatever, because it does not exist. If the models are correct, that can be the only conclusion that is valid. If two correct models with different estimates of CS both hindcast temperatures, that can only mean that CS has no effect on temperature. In other words, CS = 0. 110. Willis Eschenbach: In Figure B1, both the forcing and the temperature are increasing at a steady rate. That’s the situation shown in that Figure, regardless of what you call it. Oh. Both forcing and temperature are increasing linearly with time. If that’s the restriction you are imposing, then dT/dF = (dT/dt)/(dF/dt) is a constant. As everyone keeps reminding everyone else, that may be an interesting fact about all of the models, but it sure does not have anything to do with the climate. Since CO2 has increased approximately linearly, and the forcing is proportional to the log of the concentration, in the models, it isn’t too far off to write dF/dt is constant (?). In that case, you have shown that, despite their complexity, all the models essentially feed the change in forcing through linearly to get the change in temperature: dT/dt = (dT/dF)(dF/dt); if any 2 of those ratios are assumed constant, then the third is constant. The models assume dT/dF is constant (don’t they?) and you assume dT/dt is constant, with both assuming dF/dt is constant. 111. Willis, I suggest you call your one-line climate model equation the Eschenbach Principle. It will make it a lot easier when telling Warmistas they’ve been snookered. Besides, it has a special, resonating “ring” to it. I appreciate your work, BTW. “the various climate sensitivities displayed so proudly by the models are nothing more than the trend ratio of the output and input datasets.” That’s… not surprising. In fact, it’s entirely expected – that temperatures will be some function of forcings and sensitivity. Trend T = F(trend forcings * climate sensitivity) – F( ) being a one or two box lagged function. The constraint is the certainties on the forcings; and that is entirely outside the models – it’s a set of separate measurements. If the forcings are estimated low, the computed sensitivity will be estimated high, if the forcings are estimated high, the computed sensitivity will be estimated low. Quite frankly, I would consider the linear trend relationship to be an indication that the models agree on internal dynamics – and that given similar forcings they will produce similar outputs. I would consider your results to be a support of these models, not a criticism. 113. @ Willis : Great job Willis. Thanks for the fine detail. @ Robert of Texas says on June 3, 2013 at 4:44 pm : “… – 99% of the result could be reproduced by a very simple one line equation. What I learned is that it is easy to get caught up in the complexity of modeling without realizing how tiny an impact most of it has.” Ah, complexity. From a fellow programmer of complex relations, what you have said is so very true; I have had that made apparent so many times… So tending to the raw simple physics as much as possible is best, for scale itself matters little when looking at the entire climate system over long time periods; it all ends up coming back to the tried and true basic physics equations.
This is where iterating models can literally create the expectations no matter what they may be since tune-able assumptions are involved. As Roy Spencer seemed to say above, with albedo estimates floating somewhere between 29% and 31% (implying a temperature range of about ±3°C), I’ve seen them all used. It all simply depends on the amount of inbound solar energy getting in, and that is one of the parameters we really cannot accurately measure globally. Bummer. 114. Yes, Grandkids – I was there watching them tear it all down in real time. They were giants in those days. 115. Nick Stokes says: June 3, 2013 at 6:23 pm “Take the HadCRUT dataset, see what you get … I know the equation I gave above is correct. How? I tested it against real data.” The correctness isn’t an issue; … “Racehorse” Nick Stokes at his finest, he’s never been caught admitting he was wrong … actually, Nick, correctness is THE issue, testing your claim against the data is the way to determine it, and your claim is simply not correct. Except in the special steady-state circumstance I outlined in my explanation, the least squares trend of a dataset is not dT/dt, nor is the trend ratio dT/dF. I know because, as you might imagine, that’s one of the variables I tested when looking for the significant variable (which turned out to be the trend ratio), and guess what? The correlation of dT/dF with lambda (and thus with the trend ratio) is quite poor, with an r^2 of only 0.65 compared with 1.00 for the trend ratio, and r^2 for a number of other variables above 0.8 … I checked it because I thought it might actually correlate to either tau or lambda, but there’s lots of things that correlate better with lambda, and no single variable that I’ve found correlates with tau … although I’m still looking. PS—The “Racehorse” is for Racehorse Haynes, a flamboyant trial lawyer who never admitted anything. Here’s a sample: Haynes loves discussing his cases to teach young lawyers about trial practice. In 1978, he told attendees at an ABA meeting in New York City that attorneys too often limit their strategic defense options in court. When evidence inevitably surfaces that contradicts the defense’s position, lawyers need to have a backup plan. “Say you sue me because you say my dog bit you,” he told the audience. “Well, now this is my defense: My dog doesn’t bite. And second, in the alternative, my dog was tied up that night. And third, I don’t believe you really got bit.” His final defense, he said, would be: “I don’t have a dog.” 116. Greg Goodman, you combine NH and SH extra-tropics SSTs in your graph, and by doing so lose any seasonal signal that might exist. I notice that a pronounced seasonal divergence (increasing summer anomalies, decreasing winter anomalies) has developed in the N Atlantic ex-tropical SST anomalies from around 1998. You see a similar but less clear seasonal divergence in Arctic sea ice and even to a limited extent in the UAH tropo temps. I am pretty sure this is cloud related, but doing your analysis for each hemisphere ex-tropics SSTs separately will tell you if there is a seasonal effect from volcanoes. 117. Willis, “The correlation of dT/dF with lambda (and thus with the trend ratio) is quite poor, with an r^2 of only 0.65 compared with 1.00 for the trend ratio” Details, please. First, what data are you using? On one hand you have model ensemble averages – in the case of Otto and Forster at least, large ensembles. On the other side, HADCRUT4? Just one? You’ve correlated dT/dF with lambda?
But you said dT/dF is not the trend ratio. How did you work it out? But you’ve said: “your claim is simply not correct”. My claim is simply that trends are approximations to derivatives. Strictly, the central derivative. I don’t think that is controversial. Sometimes thinking of trend as a derivative makes more sense, sometimes less. It depends on the noise, among other things. With your model example, you have aimed for a situation where regarding trends as derivatives works quite well. You have used ensembles to reduce noise. And then the simple calculus rule follows – dT/dF=(dT/dt)/(dF/dt). Is that what you claim is not correct? It’s what determines your result. In other situations, such as where you have single instances of noisy data, the calculus rule won’t work so well. 118. [snip - more Slayers junk science from the banned DOUG COTTON who thinks his opinion is SO IMPORTANT he has to keep making up fake names to get it across -Anthony] 119. That Kiehl graph centers on the IPCC’s AR4 predicted range for climate sensitivity: k = +1.5 to +4.5 K/doubling of CO2. That is not science. It is not even “Curve Fitting”. It is “Guesswork” at best. 120. Perhaps this is as good a time as any to pose a question about the range of IPCC-approved climate models & how the ensemble of models relates to Willis’ results. I recall seeing spaghetti graphs of projected T vs t for the numerous models included in the IPCC composite. I recall something like 39 different models being included, and I believe the projections were used in the AR4 report (circa 2004). Most of the models projected Ts well above the observed T in the intervening years, but 2 or 3 of the projections were reasonably close to the actually observed global Ts. Can anyone explain what accounts for the difference between the small number of reasonable projections and the vast majority of failed projections? I get the idea that all of the models are basically similar in approach and assumptions. So why are some better than others? Do they embody different assumptions in their method of calculation, or different input values? Are these Monte Carlo calculations which would be expected to produce different values merely by chance? And do the differences between more correct and less correct models permit us to conclude anything about climate sensitivity? With respect to Willis’s results, he seems to have used composite model values in his calculation. Would using the results for individual models – in particular the more accurate models – produce a different result? 121. fred burple: “If the models are correct, that can be the only conclusion that is valid.” No, fred, I did not say CS = 0, I said CS does not exist. The very concept of CS is a feature of a linear model. What the volcano plots show is not a linear model with a vanishingly small CS, it is a totally non-linear negative feedback response that fully corrects all aspects of the change in radiative forcing due to the eruptions in the tropics and fully restores temperature in temperate latitudes. The “If the models are correct” condition does not even come into it; they are wrong, and fundamentally so; it’s not just a question of tweaking the numbers, the behaviour is totally wrong. 122. Matthew R Marler says: June 3, 2013 at 6:37 pm Willis Eschenbach: In Figure B1, both the forcing and the temperature are increasing at a steady rate. That’s the situation shown in that Figure, regardless of what you call it. Oh. Both forcing and temperature are increasing linearly with time.
If that’s the restriction you are imposing, then dT/dF = (dT/dt)/(dF/dt) is a constant. As everyone keeps reminding everyone else, that may be an interesting fact about all of the models, but it sure does not have anything to do with the climate. Matt, that’s not an “interesting fact about all the models”. Figure B1 is just a theoretical situation I showed to clarify the math, nothing to do with the models directly other than it uses the one-line equation. And none of this has anything to do with the climate. 123. KR says: June 3, 2013 at 6:40 pm “the various climate sensitivities displayed so proudly by the models are nothing more than the trend ratio of the output and input datasets.” That’s… not surprising. In fact, it’s entirely expected – that temperatures will be some function of forcings and sensitivity. Everyone’s suddenly a genius now, after the fact? As I said before, if it’s so darn “entirely expected”, how come no one noticed it? Show me someone somewhere who even claimed that the ECS of the climate models individually and en masse is equal to the trend ratio of the input and output datasets, much less measured it experimentally and explained it mathematically. I certainly don’t recall you demonstrating it both experimentally and mathematically, for example … but then you never mentioned it at all, as far as I know. Nor has anyone else, to my knowledge. Kiehl attempted to answer the puzzle, and came close, but failed. So who are you thinking of when you claim this is so blindingly obvious? Who has pointed this out before me? It is neither expected nor is it intuitively obvious. 124. Nick Stokes says: June 3, 2013 at 7:19 pm “The correlation of dT/dF with lambda (and thus with the trend ratio) is quite poor, with an r^2 of only 0.65 compared with 1.00 for the trend ratio” Details, please. First, what data are you using? On one hand you have model ensemble averages – in the case of Otto and Forster at least, large ensembles. On the other side, HADCRUT4? Just one? Stuff your “details, please”. If you think I’m wrong, demonstrate it. Nothing else will do at this point, I’ve had it with your endless carping and caviling about meaningless points, and your claim that “correctness is not an issue” is just “Racehorse” nonsense. Your original claim was wrong. I would advise you to “admit it and move on”, but I know that’s not in your lexicon. In any case, download the spreadsheet and do the calculations yourself, or go get your own data and do it. Because unless and until you have the stones to measure your ideas against the real world, and let us know what the outcome is, I’m not going to play your game, Nick. You’ve worn out your welcome with me. I’ve provided theory, data, and spreadsheet. It’s your turn. PS—The data I used is the same 29 different combinations of forcings and responses I used in Figure 4 above. As I said clearly, I did the analysis as part of looking for what went into Figure 4 … so what would you imagine I would use? This is just more unseemly wriggling on your part. (A toy version of the trend-ratio check is sketched just below.) 125. Willis writes of the equation (and by implication of the GCMs) “So what does all of that mean in the real world? The equation merely reflects that when you apply heat to something big, it takes a while for it to come up to temperature.” I think it also means that the GCMs don’t (and probably can’t) model atmospheric/ocean process changes. I think it’s fairly clear that in 1998 or thereabouts something in the climate changed such that we’ve moved into a period of minimal warming. A tipping point if you like.
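A toy version of that trend-ratio check, with an invented quasi-linear forcing series run through the lagged one-line form discussed in this thread (the parameter values are made up, and this is a sketch of the idea, not Willis’s spreadsheet):

```python
import numpy as np

# Minimal sketch (synthetic data, invented parameters): drive the lagged
# one-line form
#   T(n) = lam * F(n) * (1 - exp(-1/tau)) + T(n-1) * exp(-1/tau)
# with a quasi-linear forcing and check that the OLS trend ratio of
# output to input recovers lambda, as Willis reports for the models.
lam, tau = 0.5, 5.0                       # assumed sensitivity and time constant
years = np.arange(1900, 2001)
rng = np.random.default_rng(42)
forcing = 0.03 * (years - years[0]) + rng.normal(0, 0.05, years.size)

a = np.exp(-1.0 / tau)
temp = np.zeros_like(forcing)
for n in range(1, forcing.size):          # step the one-line model annually
    temp[n] = lam * forcing[n] * (1 - a) + temp[n - 1] * a

trend_T = np.polyfit(years, temp, 1)[0]
trend_F = np.polyfit(years, forcing, 1)[0]
print(lam, trend_T / trend_F)             # trend ratio lands close to lambda
```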
126. It is a nice result and shows that all of the current modelling related to the global temperature vs CO2 predictions can be boiled down to a simple equation and does not require a supercomputer. Bad news for those grant holders, but I am sure they will find a way to justify further financing – that is what they are good at after all! The point of the 3D weather models (yes, originally they were *weather* models) was (originally) to be able to simulate the distribution in space and the evolution in time of weather variables given an observational starting state. It was well understood that this can work out to a few days, but not much further than a couple of weeks. Somewhere down the line somebody forgot that even with a system such as 5 snooker balls hitting into each other it becomes physically impossible (i.e. Heisenberg Uncertainty Principle kicks in) to specify the starting conditions to a degree fine enough to predict what will happen next. Okay, with a Climate Model it is not as bad, because the model system is much more stable. I.e. it is a heavily damped system containing mainly negative feedbacks. But still, nobody should expect to be able to calculate what the climate will be like 50 years from now. This is the main thing I don’t understand: how anybody can think that it is possible to put a few grid cells together and run them forward in relatively massive time steps for a period of 50 years+ and expect to get a meaningful result out the other end. Yes, you can do it, but no, it will not be related to the real climate at all. The real climate is *not* a bunch of static grid cells exchanging energy and moisture and no matter how many grid cells you use as a model, you will always get the wrong answer. The answer you will get is: If the climate system can be modeled this way, then what will happen? But it cannot be modeled that way. “G-d did not create the universe out of static grid cells exchanging heat and moisture content”. 127. Willis, It has not been a fun time for modelers, especially when that stuff comes back to haunt our established government policies driven by such! Reminds me of something… 128. Is Willis claiming that different climate models estimate different climate sensitivities merely because the forcing scenarios are different? 129. Nick Stokes says: June 3, 2013 at 2:46 pm “Buried within them is energy conservation, which is the basis of the simple relations, as Roy Spencer says. Modellers do surely know that – they put a lot of effort into ensuring that mass and energy are conserved. But there are innumerable force balance relations too.” We are interested in that particular physical theory that is climate theory. Is the set of statements that represent the relationships between forcings and feedbacks buried deep within the model? What work does it do? What are statements that create the theoretical context that defines “climate sensitivity?” Where are they buried? What work do they do? Why haven’t these statements been shown to the public? 130. Willis Eschenbach says: June 3, 2013 at 1:09 pm Spot on. Smashing response to one of my heroes who happens to be a very good climate scientist. Keep on with the good work. 131. Willis, “Your original claim was wrong. I would advise you to “admit it and move on”, but I know that’s not in your lexicon.” You haven’t even said what claim was wrong. I simply pointed out a calculus rule which explains your result. You are yourself not good at admitting error.
In your last thread, we got to a stage where your spreadsheet turned out to have forcings where the model temperature should have been, and the latter wasn’t there at all. And as Ken Gregory showed, the graph you drew showing volcano responses was quite wrong. Explained? Corrected? No, no response at all. You disappeared. 132. The climate identity function, at a price and quantity only government funded bureaucrats could love. 133. Thank you Willis, I take away two things from this post. 1] If the climate modelers did not know what you have discovered, fine; if they did know, they really are disgraceful. 2] Could you use this equation, changing the forcings to make a fit of the real world temperature graph? 134. Absolutely brilliant work, Willis. And an absolutely brilliant deconstruction of cryptic sour grapes comments and racehorse obfuscations. 135. Physics_of_Climate says: If what you say is true, then Venus would have the same surface temperature even if the Sun were not there! This is pure nonsense. The lapse rate is a gradient, not a temperature level, and something has to force a temperature level somewhere along the lapse rate curve. If there were no greenhouse effect (including clouds), the surface temperature would be set by surface insolation and surface emissivity, and the atmosphere’s presence would not change that surface level. What the greenhouse gas does is raise the altitude where the absorbed solar energy balances the outgoing radiation to space. Then the lapse rate times this altitude is added to the temperature at the balance level. 136. Thanks, Willis. A work of genius! Poor models, they cost so much and show so little for it. 137. Dear Luther Wu, It’s after 11:00 PM in Oklahoma, now. Perhaps, you have gone to bed. In case you’re up, just wanted to tell you I am SO GLAD THAT YOU ARE OKAY. “Dear God, please take care of Luther Wu,” I prayed many times this past weekend. I was so glad to see your posts. Forget the nicotine relapse. That is now behind you. Forget what lies behind, and press on. (No WONDER you wanted to light up! — Holy Cow, that was terrifying!) Yes, Grandkids – I was there watching them tear it all down in real time. They were giants in those days. And so are you, great heart. I am so glad that you are in the world! (and, especially, the WUWT world) Take care, P.S. Thank you so much, dear Mr. Eschenbach, for once again providing your excellent research along with your very patient explanations. For crying out loud, I’m a non-science major and I could follow you better than some of the above posters (some were blinded by pride and in their eagerness to best you made donkeys of themselves, some were just plain lazy) did! Even if I DID have your intellectual abilities, I could NEVER post results as you do so generously — I would absolutely tear into those jerks and only end up demonstrating my own low tolerance for FOOLS. You are to be highly commended. WAY TO GO, MAN! Just you and your computer… . If I may say so, no. I think Einstein and Galileo and George W. Carver (and a whole crowd of others) were peering over your shoulder as you worked away, hour after hour. And you thought you were all alone. No one who serves the truth works alone. Since this thread is, I think, dwindling down, I’m going to go ahead and write this next here. I am a Christian. I am ashamed of my fellow believer above, a famous scientist known for his Christian faith, in his selfish, prideful, ungracious remarks to you.
Please, do not conflate us followers and our frequent failings (I’m one of the worst) with our Lord and Savior. He is all loving, all wise, and perfect. “Christians are not perfect — just forgiven.” Thanks for humoring me on this last paragraph. Ever since I read the above referred to scientist’s post, it has weighed on my mind. You shared in your “Not About Me” (and thank you so much for your refreshing candor and honesty — you are an amazingly resilient and caring individual) that you used to be a Buddhist. I don’t know where you are on your faith journey, now, but, thank you for listening to me and my concerns even if you don’t yet know Jesus personally. Yeah, I said, “yet,” LOL, — I’m praying for you (and all the WUWT — “uh, oh,” (or worse!) they are now thinking, or some of them are, heh, heh), Willis Eschenbach. Take care. 138. “… and all the WUWT bloggers and writers and moderators and, of course, our wonderful host…) 139. Evidence yet again points to a realization. Climate, as defined, does not exist on Earth, i.e. the Theory of Climate Failed by evidence. Models, i.e. computer code, built specifically to reproduce a nonexistent thing yield nonexistent results! 140. Master of Puppets — you are SO funny (and correct, too!). For the enjoyment of our current listening audience, I’ve copied below (edited) the bulk of your hilarious post from Sunday re: the hair-do man (Ben Franklin?): Given the definition of ‘Climate’ I posit that ‘Climate’ does not exist!, i.e. the Theory Of Climate Failed. [Gasp Heard 'Round The Political World] [Rumblings and Vomitings Within the Royal Society] [Australia Laughing] [China and Japan demand a RECOUNT ! NOW ! DAMMIT !] [Vietnam responding to China: 'Can't you read engrish ?'] [Greenland: Screw You Ha Ha. We signed a big Oil Company Drilling Contract ! Whoop De Do !] [Saudi Arabia opens the oil valves to flood the markets ... 'Damn the Yanks' says one of the chosen ones to the lessor of the world] [Germany: Waite Waite ... Our Nuclear Plants ... Sniff Sniff ... [Tear In Eye] … Well. Looks like Iron Fist came to fruition. Thanks to my [splendid ;)] cell phone-computer. :) [Hardy Har Har ... Monday already arrived !] LAUGH — OUT — LOUD! TimTheToolMan says: June 3, 2013 at 8:10 pm I think it’s fairly clear that in 1998 or thereabouts something in the climate changed such that we’ve moved into a period of minimal warming. A tipping point if you like. A topping point! 142. Theo Goodwin ” Is the set of statements that represent the relationships between forcings and feedbacks buried deep within the model? What work does it do? What are statements that create the theoretical context that defines “climate sensitivity?” Where are they buried?” No, these statements do not appear anywhere. Forcings of course are supplied. But feedbacks and sensitivity are our mental constructs to understand the results. The computer does not need them. It just balances forces and fluxes, conserves mass and momentum etc. An electrical circuit is a collection of resistors, capacitors, transistors etc. There is no box in there labelled underneath “feedback”. But the circuit does what it does, and we use the notion of feedback to explain it. 143. Paul_K’s finding can be summarized by saying that there is no such thing as “the equilibrium climate sensitivity.” “An electrical circuit is a collection of resistors, capacitors, transistors etc.
There is no box in there labelled underneath “feedback”.” I don’t think Nick Stokes understands electronics any better than he understands climate. 145. Well, Anthony, could you build that circuit from the diagram? REPLY: Yes, because I know what’s in the black boxes. The real question is, could you, Racehorse? – Anthony 146. “… I don’t think Nick Stokes understands … .” Bwah, ha, ha, ha, haaaaaaaaaaaaa! #[:)] He sure doesn’t. 147. Mosher, “All forcings are radiative components” So why is water vapor not a forcing? 148. All very well but you haven’t answered my question, “How many angels can stand on the point of a needle?” Those who consider themselves to be skeptics should consider whether it is possible to be skeptical about something that does not exist. Realist might be a better description. 149. All I know is that ALL THE MODELS ARE WRONG. So whatever linear sensitivity they are computing, it is not how the earth responds. 150. I’m always amazed when someone “discovers” the left side of the equation equals the right side of the equation. This is where Willis shines but it is hard to watch sometimes. You have to admire his enthusiasm, eh. 151. jorgekafkazar says: June 3, 2013 at 4:25 pm The fact that you derive an r² of 1.00 should have told you something, Willis, something really important. As I understand it, climate models calculate temperature changes from forcing changes for individual cells, n° by n°, using the same algorithm everywhere. Forcings are not measured, temperature is. So forcings are simply derived from measured temperature changes? 152. dp says: June 3, 2013 at 10:41 pm I’m always amazed when someone “discovers” the left side of the equation equals the right side of the equation. This is where Willis shines but it is hard to watch sometimes. You have to admire his enthusiasm, eh. What the heck does this mean? Who are you talking about that is discovering the right side equals the left? What are you trying to say? Communication fail, dp, sad to relate … whatever you’re trying to say, it’s not getting across. 153. Willis at 6:54 on 6/3/2013 refers to Richard “Racehorse” Haynes & his over-the-top everything-plus-the-kitchen-sink defenses. Many years ago I had a brief but revealing encounter with Haynes. Fully clothed and family friendly, let me hasten to add. I was a grad student in chemistry/biochemistry/pharmacology and the campus law school housed the National College for Criminal Defense Lawyers. The NCCDL held a summer training program for criminal defense lawyers which was heavily populated by very earnest public defenders along with a smattering of private attorneys with actual paying clients. In order to present a realistic program in white-powder criminal defense, the NCCDL recruited some of us grad students to impersonate police forensic chemists in mock trials. I did very well at the impersonation, and the grateful criminal defense lawyers invited me to the end-of-year banquet. The featured speaker was Racehorse. That evening I stepped into the elevator to the top-floor restaurant, and to my surprise encountered Haynes; his face was unmistakable to anyone familiar with the Houston newspapers in the 1970s. He was perhaps 5’7″, small and wiry looking. I attempted to introduce myself, but he resolutely looked straight ahead and avoided eye contact. Lots of non-Texans imagine that Texans are all larger than life. Certainly wasn’t true of Haynes, although he was wearing some nice boots. Haynes was more Kit Carson than Buffalo Bill.
And yet once he stepped in front of a jury he apparently found a different personality from the one he inhabited in the ordinary world. The public defenders in his audience that evening were on the whole true believers. Even the ones who had been doing it for 30 years. I’m not revealing any secret to observe that most of the wreckage which washes onto the shore of the PD is guilty of something, though not necessarily the particular crime with which they are charged. Yet the PDs uniformly regard themselves as the last line of defense of civilization. And here one may see a similarity between the PDs and the mainstream climate science types. Both are on a mission greater than themselves. An occasional tweaking of facts in the interests of a grander vision of justice is surely good. 154. Barry Elledge says: June 3, 2013 at 7:29 pm With respect to Willis’s results, he seems to have used composite model values in his calculation. Would using the results for individual models – in particular the more accurate models – produce a different result? Thanks, Barry. I used three individual forcing datasets (GISSE, CCSM3, CM2.1) and two datasets that were the average of 19 models. So no, the different models appear to be no different in this regard. That was one surprising thing to me, that my finding applied to all models regardless of which forcing dataset they used. So no, I doubt very much if the “more accurate models” (whatever that may be) would be any different. 155. “… a similarity between the PDs and the mainstream climate science types. … .” [Barry Elledge] Nicely put. I agree. I would say, though, that public defenders and climatologists do waaaay more than an “occasional tweaking of facts.” They regularly LIE. Climatologists regularly tell blatant untruths, but at least the majority of the climatologists are rationally (though corruptly) motivated by greed and/or power or personal “prestige” (within their own slimy circle). A large part of the public criminal defense bar, on the other hand, is motivated solely by a misguided zealotry; they lie, as you pointed out, for their “cause.” Sickening. The P.D.’s correspond not so much to the climatologist “scientists” as to those in the pro-AGW movement who are the “true believers,” who shrilly vent their rage at “the rich” or “the religious right” or what-EVER, yelling, “Save the planet!” and “No blood for oil” and such nonsense. Some, like racehorse, are sick. They lie simply for the sport of it. They love deceiving people. If they had to choose between earning a good salary at an honest occupation or barely making ends meet by defrauding, they would choose to lie for a living. 156. Willis, “trend” usually means slope of an OLS fit of a straight line; it is the same as using OLS to fit a constant to dT/dt. This is exactly what you get if you divide each term by the time increment in your eqn 7. That equation was the solution from imposing the supplementary condition of constant deltaT on the linear model, so the instantaneous dT/dF is also the longer term average once the transient response has settled (the condition you referred to as “equilibrium” in that context). Nick Stokes says: June 3, 2013 at 9:40 pm Theo Goodwin ” Is the set of statements that represent the relationships between forcings and feedbacks buried deep within the model? What work does it do? What are statements that create the theoretical context that defines “climate sensitivity?” Where are they buried?” No, these statements do not appear anywhere. Forcings of course are supplied.
But feedbacks and sensitivity are our mental constructs to understand the results. The computer does not need them. It just balances forces and fluxes, conserves mass and momentum etc. Nick, that would be true if ALL the inputs were known and measured and the only thing in the models was basic physics laws. In reality neither is true. There are quantities, like cloud amount, that are “parametrised” (aka guesstimated). What should be an output becomes an input and a fairly flexible and subjective one. From your comments I think you know enough about climate models to realise this, so don’t try to snow us all with the idea that this is all known physical relationships of the “resistors and capacitors” of climate and the feedbacks naturally pop out free of any influence from the modellers, their biases and expectations. That is not the case. Now, in view of what I posted here: the whole concept of a linear response to radiative forcing seems pretty much blown apart. Maybe we need to address that issue before spending the next 20 years discussing the statistical robustness of the CS in a model that has no physical relevance. 157. Nick Stokes says: June 3, 2013 at 9:40 pm An electrical circuit is a collection of resistors, capacitors, transistors etc. There is no box in there labelled underneath “feedback”. June 3, 2013 at 9:56 pm Well, Anthony, could you build that circuit from the diagram? Here is an analogue electronics feedback circuit applied to climate change. Electronic feedback circuits can be ‘modelled’ and built to great accuracy, due to the fact that the exact properties of every component are known, which unfortunately is not the case with components controlling climate change. If climate statisticians and model designers did appreciate that, they would save themselves a great deal of embarrassment. 158. I should add that the non linear response to a negative perturbation, which seems to be corrected by the tropics capturing a higher percentage of the (reduced) solar input, is not the same as the way it will handle a positive perturbation, which is dumping the excess surface heat to the troposphere. The latter is not the end of the line. Some will radiate to space, some will go to temperate zones through Walker circulation and also end up affecting the polar regions. Once we dump the erroneous assumption of a simple linear feedback we can get to look at that in more detail but FIRST we dump the erroneous assumption of a simple linear feedback. We will then need to look at what is really causing the peaks in parameters like the Pacific wind data that Stuecker et al 2013 found (without reporting the values of the peaks). As I pointed out, having extracted all the peaks from their graph, there is a lot of evidence there of lunar related periodicity. 159. Nick Stokes says An electrical circuit is a collection of resistors, capacitors, transistors etc. There is no box in there labelled underneath “feedback”. But the circuit does what it does, and we use the notion of feedback to explain it. You obviously never designed an electrical circuit. A circuit does what it does, because the designer wanted to implement a function. He has to explicitly calculate the feedback that he wants in the function of the circuit and put it into the “box” of his functional diagram. Hardware, software, it is all the same: if you want something to work a certain way you have to establish the functionality and then implement it. You continue to amaze me with the way you fling your BS. Racehorse indeed.
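For readers following the circuit argument, the algebra both sides are gesturing at is the standard closed-loop gain relation. A minimal sketch with invented numbers (no claim that these correspond to any climate quantity):

```python
# Minimal sketch (invented numbers): closed-loop gain of a feedback loop,
#   G = A / (1 - beta * A).
# Neither the open-loop gain A nor the feedback fraction beta exists as a
# discrete "box" in the circuit; both describe the loop as a whole, which
# is the sense in which feedback is an abstraction. Climate-sensitivity
# arguments lean on the same algebra.
def closed_loop_gain(a: float, beta: float) -> float:
    if beta * a >= 1.0:
        raise ValueError("beta*A >= 1: the loop runs away, no steady gain")
    return a / (1.0 - beta * a)

print(closed_loop_gain(1.0, 0.0))    # no feedback: gain 1.0
print(closed_loop_gain(1.0, 0.5))    # positive feedback: gain 2.0
print(closed_loop_gain(1.0, -1.0))   # negative feedback: gain 0.5
```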
160. @Nick Stokes @ Willis I think calculus gives a reason for this. Idealized, trend_T ≅ dT/dt and trend_F ≅ dF/dt, with many caveats as discussed in your previous thread. The net temperature change as measured at time t depends also on the sum of previous temperature responses: $\Delta T(t) = \sum_{k=1}^{N} \Delta T_{0,k}\left(1 - e^{-(t-t_k)/15}\right)$ with $\Delta T_{0,k} = \lambda\,\Delta F_k$, which then gives $\lambda = \frac{\Delta T(t)}{\sum_{k=1}^{N} \Delta F_k\left(1 - e^{-(t-t_k)/15}\right)}$ Since the models have tuned F so as to correctly reproduce past temperatures, I think it is not surprising that lambda is equal to the ratio of the trends of the forcing and the resulting temperature. 161. Greg Goodman, “the “resistors and capacitors” of climate and the feedbacks naturally pop out free of any influence from the modellers, their biases and expectations.” Greg, I didn’t say anything like that. I’m simply pointing out that a GCM doesn’t operate at the level of defining feedbacks and sensitivities as entities. They mainly define exchanges between gridcells. My analogy was with circuits which consist of elements interacting according to Ohm’s Law etc. Feedback concepts are used to describe the circuit operation, but they are not present in the actual circuit elements. Despite AW’s curious notion of a circuit diagram, real ones do not specify feedback. It’s not something you can solder. Do you think one can find feedbacks and sensitivities as entities in a GCM code? 162. Clive, “Since the models have tuned F so as to correctly reproduce past temperatures” I don’t believe they have, but I also don’t think that’s relevant. I think the ratio you’ve calculated should have an exponential smooth in the denominator – my derivation is here. co2fan says: June 4, 2013 at 12:44 am “You obviously never designed an electrical circuit.” I have in fact designed and built many electrical circuits. Electronic music was a youthful hobby. But I’m not talking about how they’re designed; I’m talking about what they are. Feedback is an abstraction, as it is in GCMs. 163. Nick Stokes says: “Since the models have tuned F so as to correctly reproduce past temperatures” I don’t believe they have, but I also don’t think that’s relevant. If that idea is still at the level of a belief you maybe need to look for some factual basis for forming an opinion. Let me help. Search the comments in my article on Judith Curry’s site for the word “tuned”. It is precisely the term John Kennedy of the Met Office Hadley Centre used to explain the process of how models were developed to reproduce past temperatures. ” I’m simply pointing out that a GCM doesn’t operate at the level of defining feedbacks and sensitivities as entities.” That is true in general and a valid point to make because several people here seem to think that is explicitly part of the models. Which brings us back to what I said previously: that would be true if ALL the inputs were known and measured and the only thing in the models was basic physics laws. In reality neither is true. There are quantities, like cloud amount, that are “parametrised” (aka guesstimated). What should be an output becomes an input and a fairly flexible and subjective one.
Now perhaps, rather than continuing to get bogged down in pointless discussion about the workings of the erroneous linearity assumption that has led us down a blind alley for 20 years, you would care to comment on what looks like a clear proof that the assumption of a linear response is totally and fundamentally wrong: Until that is addressed, any further discussion of linear models is futile. You seem competent and well informed. You also seem to be of an inclination to disprove such a conclusion. I’d be interested to see if you can find fault with it and explain as a linear reaction what the climate does following a major eruption. 164. This brings to mind the state of weather forecasting in the 1950s. Someone realized that the claims from the Met Office that their forecasting was 50% right were exactly equal to saying that they were 50% wrong and therefore totally useless. It was pointed out that better results were obtained by looking out the window and saying that tomorrow’s weather would be the same as today, which from memory had a chance of being between 75% and 90% right. “Rain today = rain tomorrow! Fine today = fine tomorrow!” was a very good predictor. Which can be written as Wi = Wo + E, where Wi is weather tomorrow, Wo is weather today, and E is a variable error factor. Seems to me that is what your equation (1) boils down to. And it appears you have shown that effectively that is what the climate models boil down to, but they have added factors C and T, which represent carbon dioxide in the atmosphere and temperature. They have put in a positive linkage so that as C increases so does T. T = kC, where k is some constant, though I suggest better results would come from using T = aYC + sin(θ)·kC, where a is a constant, Y is the year, and sin(θ) is a sine wave with a period of about 40 years. This should give the necessary results that as CO2 increases so does the temperature, as the year increases so does the temperature, but this is subjected to a periodic fluctuation so for 20 years the climate warms and then for 20 years the climate is near constant. Take that for what you wish to make of it! 165. Nick, 1. To quote from the Met Office: There has been a debate about why the decadal forecasts from 2012 are indicating a slower rate of warming in the next 5 years than the forecasts made in 2011. Such a change in the forecast is entirely possible from the natural variability in ocean ‘weather’ and the impact that has on global mean temperature. In other words data assimilation is being used to “guide” the models. If the stalling in global temperatures continues to 2030 (60 year cycle) so climate sensitivity will likewise continue to fall. 2. I agree that the net effect will be a smoothed exponential. However the formula works fine if one assumes a single yearly pulse in forcing. Then the sum is made annually from 1750 to 2012 using CO2 data from Mauna Loa interpolated backwards to 280ppm in 1750. This can then be compared to the result using the forcings published in Otto et al. kindly digitized for us by Willis. 166. Greg and Clive, Clive’s proposition was specific – the forcings have been tuned to match the response. I believe that is quite wrong – you both then talk about something quite different. Forcings are published. Those of GISS are here. They change infrequently, usually following some published research (apart from each year’s new data). “It is precisely the term John Kennedy of the Met
Office Hadley Centre used to explain the process of how models were developed to reproduce past temperatures.” Greg, I cannot see that there. He said “Your later explanation that the models have been tuned to fit the global temperature curve (reiterated in a comment by Greg Goodman on March 23, 2012 at 3:30 pm), is likewise incorrect.” Later on some specific issue, he said he wasn’t an expert and would ask. That’s all I could see. Of course people test their models against observation, and go back to check their assumptions if they are going astray. That’s how progress is made. But it isn’t tuning parameters. And Clive, I simply can’t see what you claim in what you have quoted. Obviously, forecasts change because there is another year of forcing data. And every model run starts from an observed state. For a decadal forecast, this would be a recent state. But that doesn’t mean they are tweaking model parameters. It’s a data-based initial condition, which you have to have. 167. Nick, If what you say is correct, then why are the models so good at predicting the past and yet so bad at predicting the future? 168. Clive, How do we know they are bad at predicting the future? Have you been there? 169. I have one problem with treating the temperature signal as basically an autoregressive single-pole digital filter: the autocorrelation function of temperature definitely does not conform to this model. The arguments about persistence in the climate system creating trends have looked at this in some depth, and the persistence in temperature appears to have either power-law dynamics or be represented by a multi-compartment model. There is no doubt that a simple linear model will reproduce the major features of a temperature record, but this is simply a description. It does not prove that the system is physically represented by this model because it has not been perturbed sufficiently to make the deviations clear. However, looking at the last figure in the post (eq 8 vs GISS) there is significant overshoot of the linear model at inflections. Although these would not affect the crude correlation between the signals by much, it is nevertheless a systematic error term. However, other models may give a better fit to the temperature record – if you have been following the controversy over the statistical model used by the Met Office in determining the likelihood of the temperature trends being natural fluctuations, in general higher-order ARMA models are used. In fact a first order model such as this does not produce long trends in response to random noise. 170. Greg, ” I’d be interested to see if you can find fault with it and explain as a linear reaction what the climate does following a major eruption.” I don’t share your enthusiasm for degree-days. I think they exaggerate fairly minor effects. I also don’t have much enthusiasm for stacking, even with your greater accuracy. Too much else is going on. I was sympathetic to Willis’ dropping El Chichon, because it was immediately followed by a big El Nino. But that is the hazard of this approach. So I’m agnostic. I think there’s more to be gained by looking at more variables as in the Pinatubo paper I linked. But that means even fewer volcanoes available. 171. Willis: “… nor anyone else has noticed what I noticed, which is that the climate sensitivity displayed by any of the models is nothing more than the ratio of the input and output trends. Not only that, but this relationship is common to all of the models as well as to the average of the models.”
But surely climate sensitivity is defined as the ratio of the input and output trends! It is the change in surface temperature that results from a unit change in forcing. So if forcing is increasing with a trend of 1, temperature will increase with a trend of 1 × sensitivity.

172. Nick Stokes says: June 4, 2013 at 3:27 am "How do we know they are bad at predicting the future? Have you been there?" One of Nick's more stupid statements: he obviously doesn't realise that today is the future of yesterday, and of last year, and of the year before that, etc. How long have the models been around?

173. Nick Stokes says… "How do we know they are bad at predicting the future? Have you been there?" We are there now…. http://suyts.wordpress.com/2013/06/01/a-repost-of-dr-john-christys-testimony/

174. "I was sympathetic to Willis' dropping El Chichon, because it was immediately followed by a big El Nino." That is called selection bias, nothing else. You can possibly dismiss a point if it is so much of an outlier that it is clear that there is an experimental error, or a data recording/transcription error, or similar. You do not remove data because you don't like where it lies. The cumulative integral, like all integrals, is a kind of low-pass filter. I used it precisely because it removes "fairly minor effects". If you wish to object to the technique, please show evidence of how it can exaggerate anything, rather than stating your level of personal "enthusiasm" for it. Stacking is a means of averaging out other effects, which is precisely why we must not arbitrarily remove El Chichon. The stacking is crude because we only have six large eruptions to work with, but it is better than looking at one or two and falsely concluding cooling because you did not notice that it was already happening beforehand. The fact that the stacking reveals an underlying circa 2.75-year periodicity is in itself remarkable and unexpected. But in such cases it is our expectations that should be brought into question first, not the data. What those graphs show is fundamentally important: it kicks the legs out from under the whole linear feedback / climate sensitivity paradigm. Now, there may be something in there that is questionable or invalid, and no one is best placed to see the defects in their own work. So I hope you will be able to come up with something more concrete than your "enthusiasm" to criticise it with.

175. A C Osborn says: June 4, 2013 at 4:24 am "he obviously doesn't realise that today is the future of yesterday and last year and the year before that etc." Alas, I do – I have too much future behind me. But I was responding to a charge that models predict the past well by tuning (something) but fail in the future. But where's that past, then? If that was happening, they would be doing well right now. My background is computational fluid dynamics, and one thing I learnt very strongly was: stay very close to the physics. Anything else is far too complicated. Getting the physics right is the only thing that will make the program work at all.

176. Nick Stokes, you are behaving like Ken Ham.

177. Nick Stokes says: June 4, 2013 at 2:44 am "Forcings are published. Those of GISS are here. They change infrequently, usually following some published research (apart from each year's new data)." The forcings are changed in response to model inaccuracies. The changes are used to bring the models back into line with observation.
If you use the model to calculate the forcings, then feed these forcings back into the model, it is statistical nonsense, a circular argument. It is the models that are making the forcings appear correct, not the underlying physics. GIGO.

178. Nick, you seem like a nice guy, and I appreciate your insights. I also agree with your statement to stay very close to the physics. So, looking now at the GISS forcing data page – http:// –
- It looks like stratospheric aerosols is the candidate for fine tuning. Some of the references to the data sources used are themselves the result of other modeling exercises. Volcanic eruptions which apparently decay fast do affect climate over longer periods due to the tau (15 year) relaxation time. Willis's argument for a climate rebound after volcanoes works only for low values of tau (~2.8 y).
- Likewise, as far as I can see, the increasing negative offset from tropospheric aerosols is the result of more modeling exercises rather than direct measurements.
- Finally, I don't understand why the "well mixed" greenhouse gas forcing takes a downturn after 1990. CO2 emissions per year have actually increased since then.

179. Nick Stokes says: June 4, 2013 at 1:06 am "'Since the models have tuned F so as to correctly reproduce past temperatures' I don't believe they have, but I also don't think that's relevant." They have, and it is why their past predictions have gone off the rails. It is why the model estimates of ECS are now falling. Something that would be impossible if the models were actually predicting the future. They aren't. They are predicting what the model builders believe the future will be. If they weren't, the model builders would think the models were in error and change them.

180. Nick Stokes says: June 4, 2013 at 3:27 am "How do we know they are bad at predicting the future? Have you been there?" Nick, that reminds me of an old dissident Soviet joke: The future is inevitable and certain; it is only the past that is unpredictable.

181. Clive Best: "Volcanic eruptions which apparently decay fast do affect climate over longer periods due to the tau (15 year) relaxation time. Willis's argument for a climate rebound after volcanoes works only for low values of tau (~2.8 y)." This is not "Willis' argument", it's the data's argument. In the face of the evidence (which maybe you missed, if you have not read the thread), the idea of a 15 year relaxation time needs to be reassessed. Where did you find 15 years? You state it like a fact. "Finally I don't understand why the 'well mixed' greenhouse gases takes a downturn after 1990. CO2 emissions per year have actually increased since then." Then maybe you have been misinformed about what causes changes in atmospheric CO2!

182. clivebest says: June 4, 2013 at 6:21 am "- Likewise as far as I can see – the increasing negative offset from tropospheric aerosols is the result of more modeling exercises rather than using direct measurements." That is because, without increased negative offsets, one cannot account for the current stall in temperatures in the face of increased human emissions of CO2 and high estimates of CS. So, rather than re-examine the high estimates of CS, which are mandatory if we are to believe CO2 is a danger, the only option is to assume that aerosols have a much bigger negative effect than was previously assumed. The problem is that none of the models are attempting to solve for CS. They are attempting to solve for temperature, given a value of CS.
The other parameters, such as aerosols, are used to train the hindcast, with no attempt to validate the models using hidden data or similar methods. It is a gigantic curve-fitting exercise. A pig wearing diamonds and a designer gown, all paid for by the taxpayers.

183. Colorado Wellington says: "…. that reminds me of an old dissident Soviet joke: The future is inevitable and certain; it is only the past that is unpredictable." … and that reminds me of a climate joke. Oh, hang on, I don't think it was intended to be a joke. Pretty much sums up the last 20 years of mainstream climatology.

184. PS ~Clive Best http://climategrog.wordpress.com/?attachment_id=223

185. Greg Goodman writes: "Where did you find 15 years? You state it like a fact." I got the 15 years by fitting an old GISS model response to a sudden doubling of CO2 – see: http://clivebest.com/blog/?p=3729 Then, taking tau = 15 years and using the digitized average CMIP5 forcings from Gregory et al., I get a temperature response very similar to the CMIP5 models for ECS = 2.5 C. See: http://clivebest.com/ The longer the stalling of temperatures continues, the lower ECS will fall. CO2 forcing alone suggests ECS ~ 2.0 C. I agree that CO2 must depend on SST according to Henry's law. Warm beer goes flat faster than cold beer. I also have an intuition that the current "natural" value for CO2 in the atmosphere of ~300 ppm is not a coincidence. Why is it not, say, 5000 ppm? I once made a simple model of the greenhouse effect and discovered that the peak for atmospheric OLR occurs at ~300 ppm, which just happens to be that found on Earth naturally. Can this really be a coincidence? It is almost as if convection and evaporation act to generate a lapse rate which maximizes radiative cooling of the atmosphere by CO2 to space. If this conjecture is true in general, then any surface warming due to a doubling of CO2 levels would be offset somewhat by a change in the average environmental lapse rate to restore the radiation losses in the main CO2 band. In this case the surface temperature would hardly change. See: http://clivebest.com/blog/?p=4475 and also: http://clivebest.com/blog/?p=4597

186. Willis Eschenbach: "Figure B1 is just a theoretical situation I showed to clarify the math, nothing to do with the models directly other than it uses the one-line equation." It did clarify the math. What it has to do with the models (directly or indirectly?) is that it is part of your model of the models, and your model fits the other models well.

187. clivebest: Your remarks assume the existence of the equilibrium climate sensitivity (ECS). However, it is easy to show that, as a scientific concept, ECS does not exist. By the definition of terms, ECS is the ratio of the change in the equilibrium temperature to the change in the logarithm of the CO2 concentration. As the equilibrium temperature is not an observable, when it is asserted that ECS has a particular numerical value, this assertion is insusceptible to being tested.

188. Willis Eschenbach: "Everyone's suddenly a genius now, after the fact? It is neither expected nor is it intuitively obvious." True, it is not expected. Points for you on that. However, it is intuitively obvious to everyone who has studied calculus, once you clarified what exactly your assumptions were. "Equilibrium" was incorrect; "steady state" was incorrect; but linearly increasing (in time) F and T was correct, and with dF/dt and dT/dt assumed constant, the rest was intuitively obvious.
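A minimal numerical sketch of why the assumed relaxation time tau matters so much in the exchange between #178, #181 and the comment that follows. This is no one's actual code from the thread; the sensitivity, pulse size and annual time step are illustrative values only. It steps a one-box response, dT/dt = (λF − T)/τ, through a single volcanic forcing pulse:

    lam, pulse = 0.8, -3.0              # K per W/m2; W/m2 (a one-year cooling pulse)

    for tau in (2.8, 15.0):
        T, out = 0.0, []
        F = pulse                        # forcing applied in year 0 only
        for year in range(30):
            T += (lam * F - T) / tau     # one annual step of the one-box response
            out.append(round(T, 2))
            F = 0.0                      # the pulse is over after the first year
        print(tau, out[:10])

With tau = 2.8 the response is deep but essentially gone within a decade; with tau = 15 it is shallow but still visible 30 years later. That difference in shape is what the stacked-eruption plots discussed above are, in principle, able to discriminate.

189.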
clivebest says: Greg Goodman writes "Where did you find 15 years? You state it like a fact." "I got the 15 years by fitting an old GISS model response to a sudden doubling of CO2 – see: http://clivebest.com/blog/?p=3729" So what you found, and blandly stated as though it were fact, was the time constant of "an old GISS model". Thanks for making that clear. How you go on from that to explain that climate is controlled by CO2 rather than by water and water vapour leaves me in amazement. "I agree that CO2 must depend on SST according to Henry's law." No, this is Henry's law. We see it in action post 2000, when the long-term trend in temperature is flat. Now that I've pointed out how CO2 changes with both temperature and air pressure in the real climate, perhaps you can come up with a novel explanation or criticism of the true climate reaction to major eruptions. Not many takers on that one yet, apart from Nick not being "enthusiastic" about that kind of plot because he "believes" it does something that it does not. I'd expected a vigorous response to something so fundamentally important.

190. Oops, forgot the too-many-links trip wire.

191. TerryOldberg writes: "Your remarks assume the existence of the equilibrium climate sensitivity (ECS). However, it is easy to show that, as a scientific concept, ECS does not exist." I kind of agree with you. Climate sensitivity only makes sense on the differential level. Climate sensitivity $\lambda$ is the temperature response to an increment in forcing: $\Delta{T} = \lambda\Delta{S}$. In the case of no "feedbacks", $\lambda \approx 0.3$ due to Stefan–Boltzmann. Confusingly, however, the term "climate sensitivity" is usually defined as the change in temperature after a doubling of CO2. This means that the assumed "cause" is built into the definition, and linear calculus approximations are no longer valid. Perhaps climate sensitivity to CO2 forcing behaves more like quark confinement in the nucleon: the more you kick it, the stronger the restoring force (negative feedback). That would mean negative feedbacks such as clouds start small but increase strongly with forcing. How else could the oceans have survived the last 4 billion years? Unfortunately, ECS has been promoted by the "team" as the "bugle call" to action for the world's political elite. Therefore we have to work with that in the short term.

192. Matthew R Marler says: June 4, 2013 at 10:19 am "Willis Eschenbach: 'Everyone's suddenly a genius now, after the fact? It is neither expected nor is it intuitively obvious.' True, it is not expected. Points for you on that. However, it is intuitively obvious to everyone who has studied calculus, once you clarified what exactly your assumptions were." Matt, you are about the fifth person to make this claim. If you think my results are so obvious, then assuredly you can point out several other people who have demonstrated, both experimentally and theoretically, that what the modelers call "climate sensitivity" is nothing more than the trend ratio of the input and output datasets. And if you can't demonstrate that, then why are you trying to bust me? Roy Spencer made the same claim, that my results were nothing new, and I made the same invitation to him, saying: "… show me anywhere that anyone has derived the relationship that the climate sensitivity of the models is merely the trend ratio of the input and output datasets." Roy did not come up with a damn thing, which saddened me greatly, as he is one of my heroes.
Then Mosher took up the same BS, and I made the same invitation to him, saying: "However, neither Kiehl, nor you, nor Nick Stokes, nor anyone as far as I know, has noticed that the various climate sensitivities displayed so proudly by the models are nothing more than the trend ratio of the output and input datasets. Kiehl got the closest, but he didn't find the key either; he thought it was total forcing." He has not replied to this point either. Then jorgekafkazar tried the same nonsense, and I replied saying: "Thanks for the thought, Jorge, but neither Roy nor anyone else has noticed what I noticed, which is that the climate sensitivity displayed by any of the models is nothing more than the ratio of the input and output trends. Not only that, but this relationship is common to all of the models as well as to the average of the models. So if you (or anyone else) thinks I 're-invented' that idea, please point to someone else who has shown that to be true of the climate models, either experimentally or theoretically … I have shown both." Then KR tried the same cr*p, and I replied: "I certainly don't recall you demonstrating it both experimentally and mathematically, for example … but then you never mentioned it at all, as far as I know. Nor has anyone else, to my knowledge. Kiehl attempted to answer the puzzle, and came close, but failed. So who are you thinking of when you claim this is so blindingly obvious? Who has pointed this out before me?" Now you want to start up with the same claim? I make you the same invitation I made to the others. If it's so damn obvious that the climate sensitivities displayed by the models are nothing but the trend ratio of input and output, please provide me with someone making that claim in the past (and preferably supporting the claim both experimentally and mathematically, as I have done). Kiehl tried, but I guess it wasn't so dang obvious to him, because he came up with the wrong answer … where were you? You could have pointed out the "obvious" to him, and his paper wouldn't have been incorrect …

193. Nick Stokes says: "How do we know they are bad at predicting the future? Have you been there?" Nick, you make it too easy. Verifying whether models can predict is called hindcasting. Why not just admit you're on the wrong track? Would it kill you to admit that Willis is right?

194. Clive Best: "The more you kick it the stronger the restoring force (negative feedback). That would mean negative feedbacks such as clouds start small but increase strongly with forcing. How else could the oceans have survived the last 4 billion years?" Yes, a strongly non-linear negative feedback is what is needed to explain the plots I posted. I pointed out to Willis some time ago that the tropical storm was a negative feedback with internal positive feedback making it strong and non-linear. In view of the cumulative integral plots, I think it is clear that it is even more powerful a control than a "governor" in that, at least in the tropics, it is restoring the degree-day sum as well. That makes it more like a PID controller, as "onlyme" pointed out recently. I think that description merits further development.

195. PID controllers are, under some criteria, optimal controllers, frequently used in industrial process control. Like many things we invent, it looks like Mother Nature got there first.

196. Greg Goodman says: June 4, 2013 at 11:25 am Clive Best: "The more you kick it the stronger the restoring force (negative feedback).
That would mean negative feedbacks such as clouds start small but increase strongly with forcing. How else could the oceans have survived the last 4 billion years?" Yes, a strongly non-linear negative feedback is what is needed to explain the plots I posted. I pointed out to Willis some time ago that the tropical storm was a negative feedback with internal positive feedback making it strong and non-linear. In view of the cumulative integral plots, I think it is clear that it is even more powerful a control than a "governor" in that, at least in the tropics, it is restoring the degree-day sum as well. I don't know if you saw this, but here's the evidence in the surface station record that it is regulated: Night time cooling. Basically, what I found is that there's no loss of cooling in the temperature record, even though CO2 has almost doubled.

197. Willis Eschenbach says: June 4, 2013 at 11:01 am "I make you the same invitation I made to the others. If it's so damn obvious that the climate sensitivities displayed by the models are nothing but the trend ratio of input and output, please provide me with someone making that claim in the past (and preferably supporting the claim both experimentally and mathematically as I have done)." I did, but you have it backwards: CS drives the models' output; they made the models respond to CO2 because they believe it's the "control knob". Not to make light of all of your work, but you "reverse engineered" this relationship. My statement: "First the climate sensitivity value for CO2 is made up to make the simulators' temperature output rise with CO2." I have read somewhere (and I can't find it now, it was years ago) that GCMs didn't create rising temps while CO2 went up, and they didn't know why. They then linked CO2 to water vapor, either directly to temp, or with a climate sensitivity factor. I'm trying to find this "proof". In the meanwhile, you can go to EdGCM.edu for your own GCM, or http://www.giss.nasa.gov/tools/modelE/ – there's a link about 3/4 down for the Model E1 source code. You can also probably get Model I & II at the same tools link. If you really want to understand what they're doing, the code is available for review or even to run.

198. MiCro says: June 4, 2013 at 12:12 pm "Willis Eschenbach says: June 4, 2013 at 11:01 am 'I make you the same invitation I made to the others. If it's so damn obvious that the climate sensitivities displayed by the models are nothing but the trend ratio of input and output, please provide me with someone making that claim in the past (and preferably supporting the claim both experimentally and mathematically as I have done).' I did, but you have it backwards, CS drives the models output, they made the models respond to CO2 because they believe it's the 'control knob'. Not to make light of all of your work, you 'reverse engineered' this relationship." Since on that page you only mention climate sensitivity once in passing, and you don't mention either the input datasets or the output datasets of the climate models … no, you didn't.

199. "Basically what I found is that there's no loss of cooling in the temperature record, even though CO2 has almost doubled." This implies that the overnight cooling rate (over land) has not changed in over 60 years. At night, solar radiation is zero and CO2 levels are constant. Only H2O can maintain a constant cooling rate. So the long-term change in the water vapour content of the upper atmosphere is crucial to understanding what is meant by "climate sensitivity".
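One plausible way to do the bookkeeping MiCro describes in #196 – a guess at the method, not his code, and the station series here is random placeholder data standing in for an NCDC-style daily record:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 3650                                 # ten years of placeholder daily data
    tmin = 10.0 + rng.standard_normal(n)
    tmax = tmin + 8.0 + rng.standard_normal(n)

    day_warming   = tmax[:-1] - tmin[:-1]    # morning minimum up to afternoon maximum
    night_cooling = tmax[:-1] - tmin[1:]     # afternoon maximum down to next morning's minimum

    print(day_warming.mean(), night_cooling.mean())
    # In a "regulated" record the two means track each other year after year.

200.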
clivebest says: June 4, 2013 at 12:39 pm "'Basically what I found is that there's no loss of cooling in the temperature record, even though CO2 has almost doubled.' This implies that the overnight cooling rate (over land) has not changed in over 60 years. At night solar radiation is zero and CO2 levels are constant. Only H2O can maintain a constant cooling rate. So long-term change in water vapour content of the upper atmosphere is crucial to understand what is meant by 'climate sensitivity'." I was a little sloppy with what I wrote: nightly cooling matches daytime warming, with some years showing slightly larger daily warming, and others slightly larger cooling; taken in its entirety, cooling is slightly larger than warming. But yes, if CO2 isn't regulating temperature, water vapor must be. Surface data makes this clear (at least to me).

201. Willis Eschenbach says: June 4, 2013 at 11:01 am "I make you the same invitation I made to the others. If it's so damn obvious that the climate sensitivities displayed by the models are nothing but the trend ratio of input and output, please provide me with someone making that claim in the past (and preferably supporting the claim both experimentally and mathematically as I have done)." There is a confusion that is common to those who are criticizing Willis' claim that his result is important. The confusion is between "the result" and "the fact that the result can be deduced from the model formalism." The confusion is the basis of critics' claim that Willis' result is obvious. Willis' "result" is not just that the climate sensitivities displayed by the models are nothing but the trend ratio of input and output, but includes the fact that this can be deduced mathematically from the formalism that is the model. The fact of deduction is part of Willis' result. By contrast, the claim that "the climate sensitivities displayed by the models are nothing but the trend ratio of input and output" is an ideal standard that models are evaluated against. The fact that the claim is a standard is what makes it seem obvious. Now we must put together two facts: the fact of the standard and the fact of the deduction. What we get is the fact that the standard can be deduced from the model formalism. Such a deduction shows that the standard is embedded in the model formalism and, thereby, that the model is a circular argument to that standard. What should happen in the real world is that the model formalism should yield an equation whose instantiations approximate the standard. That equation must contain the term "climate sensitivity" and the terms that are scientifically necessary to give a meaning to "climate sensitivity." Presumably, those additional terms would include a term for "water vapor forcing/feedback," a term for "cloud forcing/feedback," and so on for whatever ineliminable terms are found in climate theory. In Trenberth's case, there will be a term for "deep ocean sequestration." But climate science has offered us no such equation, and we are left asking what role climate theory has to play in climate computer models. Willis has answered our question. If the ideal standard can be deduced (this is the key word) from the model formalism, then the ideal standard is found in the model formalism. The model formalism and the models amount to one grand circular argument.

202.
Willis Eschenbach: "If you think my results are so obvious, then assuredly you can point out several other people who have demonstrated both experimentally and theoretically that what the modelers call 'climate sensitivity' is nothing more than the trend ratio of the input and output datasets. And if you can't demonstrate that, then why are you trying to bust me?" Bust you? Don't be absurd; thin-skinned and all that. I have at least three times written that you have discovered something interesting. It is "intuitively obvious" post hoc, like the relativity of motion, the chain rule of differentiation, or Newton's three laws of motion – but only to people who have studied; in this case, people who have studied calculus. Your result does depend on the counterfactual assumption that dF/dt and dT/dt are both constant, which you misattributed to equilibrium and then to steady state, before stating it as a bald assumption compatible with what you have found. In this post you are batting 1 for 2, so to speak.

203. Theo Goodwin says: June 4, 2013 at 1:09 pm "If the ideal standard can be deduced (this is the key word) from the model formalism then the ideal standard is found in the model formalism. The model formalism and the models amount to one grand circular argument." This is a modeling issue: is the modeler modeling the system in question, or how he/she thinks the system behaves? Only by comparing model results to actual results can you tell. In electronics, which is where my modeling experience comes from, you can drag a real thing into a lab and test it. You can even test things outside a lab if you can isolate its inputs. Climatologists can't do this, and have to rely on statistics to compare two non-deterministic systems: a model vs. Earth's climate. Earth is still poorly sampled spatially, and models still can't simulate accurate results, so they average parameters so they can have some kind of result that matches. I don't have an issue with this as a scientific endeavor; I do have an issue when it's used for policy.

204. Willis at 11:51 pm on 6/03 says: "I doubt if the 'more accurate models' (whatever that may be) would be any different." Willis, thanks for the response. I went in search of the spaghetti graphs I had remembered; I found an example at realclimate/2008/05/what-the-ipcc-models-really-say. Apparently I was using the wrong terminology (never happened to me before). The individual runs of a given model are referred to as "simulations", a term which seems to be interchangeable with "individual realization" of the model. A number of simulations are run, and the ensemble of simulations is averaged to produce the mean for the model. Interestingly, though most of the 55 shown simulations project T increasing over time, a few show flat or falling Ts, closer to what has been observed. Now I don't know how this variability among simulations is generated; perhaps they merely insert random variations of the forcings. My original question was whether there is some fundamental difference between the small number of more accurate simulations and the large number of inaccurate simulations. Are there any systematic differences between them? Does anyone out there know how the variation among simulations is generated? Nick Stokes? Anyone? And would these differences among simulations in any way affect the results which Willis has found?

205.
MiCro says: June 4, 2013 at 1:29 pm Anyone who can contribute substantially to the creation of a professional-grade model is going to be highly concerned by the number of terms in the model. The number of terms has a great impact on what must be done to solve the model, and to do so as efficiently as possible. My point is that professionals are highly aware of the number of terms that they must use. It is a matter of first importance to them. A model that reduces to three terms is a non-starter. By "reduces," I mean that it can be shown deductively that input and output are related through one term. No honest person would agree to create such a model.

206. Greg Goodman says: June 4, 2013 at 12:16 am "Nick Stokes says: June 3, 2013 at 9:40 pm 'No, these statements do not appear anywhere. Forcings of course are supplied. But feedbacks and sensitivity are our mental constructs to understand the results. The computer does not need them. It just balances forces and fluxes, conserves mass and momentum etc.' Nick, that would be true if ALL the inputs were known and measured and the only thing in the models was basic physics laws. In reality neither is true. There are quantities, like cloud amount, that are 'parametrised' (aka guesstimated). What should be an output becomes an input, and a fairly flexible and subjective one. From your comments I think you know enough about climate models to realise this, so don't try to snow us all with the idea that this is all known physical relationships of the 'resistors and capacitors' of climate, and that the feedbacks naturally pop out free of any influence from the modellers, their biases and expectations." Greg, good answer. Nick, I have to agree with Greg that your response might not be worth a reply. Something to remember.

207. Theo Goodwin says: June 4, 2013 at 2:09 pm "Anyone who can contribute substantially to the creation of a professional grade model is going to be highly concerned by the number of terms in the model. The number of terms has a great impact on what must be done to solve the model and to do so as efficiently as possible. My point is that professionals are highly aware of the number of terms that they must use. It is a matter of first importance to them." And GCMs have more than three terms, but we're also comparing the values for the entire surface of the Earth averaged to a single value; all of the effects of those terms are compressed to a single value. Here's an entry-level GCM model doc.

208. Willis Eschenbach – "It is neither expected nor is it intuitively obvious." I would disagree. Given Eqn. 1 and sufficient iterations under a constant ΔF (half a dozen or so with tau = 4 years; after that, additional changes due to ΔT(-n) approach zero), the last value goes to a constant summation of a decaying exponential, and ΔT1 becomes ΔT0:

T1 = T0 + λΔF(1-a) + ΔT0 * a

At that point the last term is just a constant, and the equation becomes:

ΔT1 = λΔF(1-a) + β

Dropping offsets and rearranging for the changing terms:

ΔT/ΔF = λ(1-a)

With constant ΔF, the asymptotic relationship of ΔT/ΔF to a changing λ is linear: the 1:1 correlation, as seen in the opening post. This is the case with _any_ exponential response to a change, one-box or N-box models – if the change continues at the same rate, the exponential decay factor(s) become a constant offset. QED.

209. Sir, I loved your article. (Saying that so I don't get flamed too badly.) The real issue is that the GCMs are stolen from the more generalized weather forecasting models.
These models have many known issues, not the least of which is that they are typically accurate to 12 hours and take almost 6 hours to run on most supercomputers. They produce a 3D localized output that is put together into a forecast. The farther out you look with them, the more inaccurate they get. At 120 hours they are ridiculously inaccurate, but at the 12 hour mark they are not bad. So "climatologists" are using those to predict out to 100 years. One of the known weaknesses is that they poorly predict temperature, which makes this even more ridiculous. But my point is that the billions spent on computers and models is well-spent money. Forecasters have saved many lives in the arena of tornado, hurricane and tsunami forecasting. Even if they do have a long way to go, it's important work. What is ridiculous, though, is stealing forecasting computer time for AGW-type work when it is blatantly obvious that the models are easily replicated with a simple equation for temperature work.

210. Barry, "Are there any systematic differences between them? Does anyone out there know how the variation among simulations is generated?" I'd expect some models are better than others. But I think you're judging them on the performance over the last 15 years or so. On this scale, factors like ENSO are very important. Many models can show an ENSO oscillation, but the phase of the cycle is indeterminate. It is unpredictable in the real world too. I think the models that look good on this short period are mostly those which by chance came up with a succession of La Ninas.

211. "Lambda is in degrees per W/m2 of forcing. To convert to degrees per doubling of CO2, multiply lambda by 3.7." And here my stupid question after reading this post: "Starting from the present level of 395 ppm Modtran predicts a global warming of 0.5 C from radiative forcing of 2 W/m2." As we are now at 395 ppm, whether we like it or not – should this not rather be used? Btw, very interesting to read the post on the weaknesses of the 3.7 W/m2 calculation at Claes Johnson's blog.

212. What it seems to me is that non-climatologists and non-experts in modelling are agreeing with you, while modellers and climatologists are putting up some keen criticism that is not answered at all. Obviously the bulk of posts here belongs to the first category, but scientific accountability is something different than a popularity rating.

213. Nick Stokes at 2:33 on 6/04 says: "I'd expect some models are better than others… I think the models that look good on this short period are mostly those which by chance came up with a succession of La Ninas." Nick, thanks for the response. I can well appreciate that an ensemble comprising enough blind squirrels will stumble upon the occasional nut. But can you explain how the variations among simulations are produced? Do they simply input different forcings, or is something else involved? Are there differences among models in the way the output is calculated (i.e., the same forcings inputted into different models produce different outputs)?

214. Barry Elledge says: June 4, 2013 at 1:33 pm "Interestingly, though most of the 55 shown simulations project T increasing over time, a few show flat or falling Ts, closer to what has been observed. Now I don't know how this variability among simulations is generated; perhaps they merely insert random variations of the forcings." All digital software is wholly deterministic (barring faults). The only ways to produce variability in outputs are to vary the inputs, or to insert quasi-random functions into the code.
The variability in model output is nothing more than the modellers' estimate (conscious or unconscious) of natural variability (or unmodelled variability, if you like). To pretend climate model output variability has any more significance than this is either ignorance or dishonesty.

215. MiCro says: June 4, 2013 at 2:23 pm "And GCM's have more than three terms, but we're also comparing the values for the entire surface of the Earth averaged to a single value; all of the effects of those terms are compressed to a single value." Compressed to a single value? Your metaphor lifts no weight, does no work. I am astonished that you think that you said something.

216. As Willis has pointed out, many people here are saying the result is an old one. Well, how about it? Come on and post the links or citations to where this result was made public. Or, if you can't find any, then have the courtesy to post to say that you were in error; that you have searched and searched, but it appears that this was not a result made public previously. The reason this is important is because we are now all hanging off the edge of our seats, waiting to see what transpires.

217. clivebest: Thanks for taking the time to respond. In AR4, IPCC Working Group 1 uses "climate sensitivity" and "equilibrium climate sensitivity" as synonyms. In each case, the quantity being referenced is the change in the equilibrium temperature per unit change in the logarithm to the base 2 of the CO2 concentration. The unit of measure is Celsius degrees per doubling of the CO2 concentration, but the concept applies to concentration increases that are not doublings. In an earlier message to you, I pointed out that the climate sensitivity does not exist as a scientific concept, due to the non-observability of the equilibrium temperature. The non-observability has another consequence that is not often appreciated. This is that when the IPCC provides a policy maker with the magnitude which it estimates for the climate sensitivity, it provides this policy maker with no information about the outcomes of his or her policy decisions; this conclusion follows from the definition of the "mutual information" as the measure of a relationship among observables. In view of the lack of mutual information between the increase in the logarithm of the CO2 concentration and the increase in the equilibrium temperature, the IPCC's estimate of the magnitude is useless for the purpose of making policy. However, the IPCC has led policy makers to believe it is useful for this purpose.

218. @Phitio: Apparently you believe "modellist[s] and climatologist[s]" have something to say worth listening to. Given that "modellist[s] and climatologist[s]" of AGW regularly traffic in lies and wild speculation, you might want to reconsider that view. Further, while the Nick Stokeses [a fine example of a "modellist and climatologist"] of the world post foolishness not worthy of dignifying with a response (except in the hope of educating some poor brainwashed Cult of Climatology member… not likely to succeed, but worth a go), the fine scientists above who are (albeit pridefully, blindly, and/or mistakenly in some cases) debating Eschenbach are by no stretch of a Fantasy Science imagination "modellist[s] and climatologist[s]." And Eschenbach (and others above) have soundly answered their concerns. You sound a little confused. Try following the above thread in its entirety. I have a feeling that will help you immensely.

219.
Barry, a climate model solves differential equations. It can tell you how things will change, providing you tell it the starting point. In principle, that means every bit of air, its temp and velocity, etc. Of course that's impossible. What is normally done is to take a known state in the past, define the state as best can be done (with lots of interpolation), and let it run for a wind-up period. After a while, initial perturbations settle, and you get realistic weather, but not weather you could have predicted. Of course, model differences have an effect as well, and there are different forcing scenarios etc. The thing is, they are climate models. They model the climate by creating weather, but do not claim to predict weather. They are good for longer-term averages.

220. Adam: "As Willis has pointed out, many people here are saying the result is an old one." Who has said it was old? All anyone has said is that it's simply derivable once Willis' assumptions are clearly expressed.

221. Nick Stokes: "They are good for longer term averages." That is the hope. It has not been demonstrated yet to be true.

222. Lol. Actually, climate models are designed to predict weather. That is where they came from and what they are used for. They don't do well beyond a short timespan, or for predicting things related to heat energy such as temperature. Making a model of the climate bigger only makes it less accurate, i.e. the entire globe for 100 years.

223. David, do you have an example of climate models being used for predicting weather? As in someone like the IPCC saying what some future weather will be. I think you'll find they talk in decadal averages at a minimum.

224. Nick, yes, the GFDL model is used in hurricane forecasting. It originally came into being in the late 60s for that purpose. Most of the big complicated models the IPCC uses/talks about are some modification or bounded version.

225. Nick Stokes says: June 4, 2013 at 4:35 pm "Do you have an example of climate models being used for predicting weather? As in someone like the IPCC saying what some future weather will be. I think you'll find they talk in decadal averages at a minimum." They should switch to yearly averages. At least then they would have a prayer of being correct in extreme years. As it is, they are always wrong.

226. Actually, I rushed the math in my previous comment a bit – let's look at it without dropping any constants. Equation 1:

T1 = T0 + λΔF(1-a) + ΔT0 * a

Over time, ΔT0 will go to a constant, as per exponential decay. If ΔF = 0 after some period (say, after a step change), ΔT0 will asymptotically approach zero as the lagged change expires. If ΔF remains a constant, ΔT0 will asymptotically approach a constant change per time step, as each successive change to ΔT(n) will be smaller. As ΔT0 goes to a constant ΔT:

T1 = T0 + λΔF(1-a) + ΔT * a
T1 – T0 = λΔF(1-a) + ΔT * a
ΔT = λΔF(1-a) + ΔT * a
ΔT(1-a) = λΔF(1-a)

Therefore ΔT/ΔF = λ: QED… and that's the form of the equation all the fuss is about. I hope that is sufficiently clear – the relationship Willis Eschenbach is focusing upon is inherent in his model, in all such lagged models for that matter, and in the regression to the mean found in exponential decay equations. After transients have settled out, and second derivatives have gone to zero, such models will asymptotically go to a linear relationship. This is unsurprising if you are familiar with such equations, and should be apparent from the calculus.
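The algebra in #226 is easy to check numerically. A minimal sketch (illustrative values for λ, τ and the forcing increment; not anyone's actual model code from the thread):

    import math

    lam, tau = 0.8, 4.0              # K per W/m2, years (illustrative)
    a = math.exp(-1.0 / tau)         # the decay factor in Eqn. 1

    dF = 0.04                        # constant forcing increment per year (W/m2)
    dT = 0.0
    for year in range(200):          # long enough for transients to settle
        dT = lam * dF * (1 - a) + dT * a   # Eqn. 1, written for the annual change

    print(dT / dF)                   # -> 0.79999..., the ratio converges to lam

227.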
Philip Bradley at 3:19 pm on 6/04 says: "All digital software is wholly deterministic (barring faults). The only way to produce variability in outputs is to vary the inputs, or insert quasi-randomness into the codes." Philip, I quite believe you. The problem is I don't know which it is: inserting variable forcings, or perhaps variable response functions, or inserting quasi-random variables of some other sort. Another possibility is that different types of models treat the inputs somewhat differently (even though all models appear to share the same basic assumptions). Do you know how the randomness is actually generated? If so, please enlighten me.

228. I too initially thought Willis' observation was trivially obvious: if you wait until the exponential has settled, what is left _has to be_ the linear response to forcing that you added the exponential response to in order to get the model. It's like saying 4-2=2. However, what is significant is that the models are settling to this value despite all the variable inputs and erratic volcanoes etc. What this points out is that despite the immense complexity of the models and the inputs, what we are seeing in the model output is the same as a linearly increasing CO2 "forcing" plus random noise that averages out. What Willis' observation shows is that despite all the variable inputs – volcanoes, aerosols, CFCs, black soot, NO, O3, etc. – the long-term net result produced by the models is that all this pales into insignificance and climate is dominated by a linearly rising CO2 "forcing". The exponentials never die out in model runs because there are always changes, but they _average_ out, leaving the same thing. This is the modellers' preconceived understanding that they have built into the models themselves and adjusted with the "parametrised" inputs: that climate is nothing but a constantly increasing CO2 forcing + noise. And that is where they are wrong, and that is why they have failed to work since 2000 AD. So Willis' observation that, if you effectively take out the exponential decays by imposing a condition of constant ΔF, you get back the λ that you started with, is trivial in that sense. What can be claimed as a "finding" is that this condition corresponds to what the model runs produce. And that is not trivial. The models do not necessarily have to produce a result that conforms to the constant-ΔF condition that Willis imposed, but they do. Their projections will, because that's all they have, but hindcasts have supposedly "real" inputs that are not random. So what 30 years of modelling has told us is that climate is NOT well represented by constantly increasing CO2 + noise. Now, negative results are traditionally under-reported in scientific literature; this is known to happen in all fields of science, but sometimes a negative result tells you as much or more than a positive one. And this is a very important NEGATIVE result. It has cost a lot of time, money and effort to get there, but I think we have a result finally. And one that the IPCC cannot refuse, because it comes from the models on which they have chosen to base their conclusions and their advice to "policy makers". So let's repeat it: climate is NOT well represented by constantly increasing CO2 + noise.

229. That last sentence should read: climate is NOT well represented by constantly increasing CO2 forcing + noise. That's the take-home message for policy makers.
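The "averages out" claim in #228 can be illustrated by driving the same one-box equation with a noisy ramp instead of a constant increment. Again a sketch, with arbitrary noise level and seed, not anyone's actual run:

    import math, random

    random.seed(0)
    lam, tau = 0.8, 4.0
    a = math.exp(-1.0 / tau)

    F = T = dT = 0.0
    Fs, Ts = [], []
    for year in range(300):
        dF = 0.04 + random.gauss(0.0, 0.1)   # linear ramp plus symmetric "noise"
        F += dF
        dT = lam * dF * (1 - a) + dT * a
        T += dT
        Fs.append(F)
        Ts.append(T)

    # Trend ratio over the second half of the run, after transients settle:
    print((Ts[-1] - Ts[150]) / (Fs[-1] - Fs[150]))   # -> roughly lam = 0.8

The individual yearly wiggles never die out, but their sum contributes nothing to the long-run trend ratio, which is the sense in which a run dominated by a steadily rising forcing recovers λ.

230.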
Nick Stokes at 4:11 pm on June 4 says: "What is normally done is to take a known state in the past, define the state as best can (with lots of interpolation) and let it run for a windup period. After a while, initial perturbations settle, and you get realistic weather, but not weather you could have predicted." Nick, I'm trying to understand how this is used in practice, e.g. to generate the 55 simulations which were used to produce the AR4 model projection. How are 55 different simulations produced? Are these merely different inputted values of the forcings? If so, how do they generate the range of values for the forcings? Or is something else being varied besides the forcings? To me these sound like pretty straightforward questions which ought to have straightforward answers. I'm not trying to be difficult here; I just want to understand what's going on behind that. If you can get me a straight answer I will be grateful.

231. The way the models are randomized is by inputting the current weather, which is constantly changing. They are designed to reproduce the same output with the same input, but the input is incredibly complex, hence the need for supercomputers. All the models share a lot of the same inputs, at least the ones that are easiest to measure, such as barometric pressure gradients and humidity. Others add in things like ocean temps from various layers, gravitational effects, and various temperatures within the different levels of the atmosphere. The complexity comes from the habit of creating grids or cubes of weather and having them all interact under certain rules with each other, creating an output that contains the various changes from interactions. This output can be further averaged and/or weighted for consumer use (what the folks do looking out beyond 120 hours). Also, the models are all in the "development phase", so they are prone to frequent programming adjustments; today's output with yesterday's input will not match, due to changes in the way the model behaves. Despite this fact they are still very useful for predicting the short-term weather. There is a reason why most of your weather forecasters (all with at least a bachelor's degree) don't buy into AGW.

232. http://wattsupwiththat.com/2013/05/26/new-el-nino-causal-pattern-discovered/ See my comments there for evidence of lunar influence that Stueker et al. published but failed to spot. I'm trying to write this up as a more coherent whole at the moment. Some climate models are apparently able to produce some "ENSO-like" variability, but they're still trying to make it part of the random noise paradigm. Once they link it to the 4.431 year lunar influence in the tropics, we may see the first glimmer of realistic variability. The 4.43 gets split into 3.7 and 5.4 year cycles, and that is the origin of the "variable" 3-to-5 year periodicity in El Nino and ENSO.

233. KR, I think your algebra is the same as Willis' in Appendix D. Willis is emphasising the ratio of trends, which isn't quite the same.

234. David Riser says: "Others add in things like ocean temps from various layers, gravitational effects …" What gravitational effects does that involve? Which models? Adding in the 23% variation of the lunar tidal attraction with its 8.85 year cycle, modulated by the 18.6 year variation in declination, may produce some interesting patterns ;) However, AFAIK tides are put in directly, because computer models don't work too well at predicting tides either. Could you give more detail about these "gravitational effects" in models?
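The "splitting" arithmetic in #232 (and in #250 below) is just the product-to-sum identity for a modulated carrier: cos(2πt/Tc)·cos(2πt/Tm) = ½[cos at (fc−fm) + cos at (fc+fm)]. A two-line check with the figures quoted in the thread; whether these are the physically right numbers is the commenter's claim, not established fact:

    Tc, Tm = 4.431, 28.0                     # carrier and envelope periods (years)
    fc, fm = 1.0 / Tc, 1.0 / Tm
    print(1.0 / (fc - fm), 1.0 / (fc + fm))  # side periods ~5.3 and ~3.8 years

With the ~28-year envelope quoted in #250, the side periods come out near 3.8 and 5.3 years; reproducing the 3.7/5.4 split quoted in #232 exactly would need an envelope closer to 24 years.

235.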
@Matthew R Marler: Looking through the threads I see that you are correct. Nobody said the result had already been… oh wait, that's not true. Looking through the threads, we see multiple people claiming that it is an old result, or words to that effect. Hence my comment. What's your problem here, bucko? You reply on behalf of other people to say that what I am asking them for is incorrect, because you claim we have not been discussing it – even though much of the thread is about it? Are you saying that Willis has done new and original work here, or are you saying he has not? If you are saying the latter, then show citations to somebody doing it before him. Either way, try to be a little less cryptic, because you are getting right on my tits.

236. Adam, take a breath. No point in getting annoyed about blog posts. Sturgeon's law: 90% of anything is crap. Sturgeon's second law: 99% of blog posts are crap. Chill out.

237. Barry, weather is chaotic, so producing variable output in a model isn't hard; controlling it is more of a problem. For the numerical weather forecasting aspect that David Riser is emphasising, they commonly produce an ensemble, deliberately varying the initial conditions. That's where they get the numbers when they tell you there is an x% chance of rain. ECMWF formalises this as their Ensemble Prediction System. Here is their 5.5 Mb user guide, which explains a lot about their system, including EPS. For climate simulation it is a bit different. They can be the same programs, like GFDL or UKMO, and they often use ensembles. For example, the GISS-ER result that Willis uses here is an ensemble of five. That of course reduces the variability. But because they aren't claiming to get the weather right on any particular day (or month), but rather to get the dynamics right for the long term, they are happy to go back further to get a start. For the future they use different scenarios for forcing, and programs like CMIP3 and CMIP5 will prescribe particular ones that the programs should follow. There's a table here in the AR4 which describes the various models at that time and their internal differences. And here is their discussion of the start-up processes.

238. Willis' observation demonstrates that the models themselves prove: climate is NOT well represented by constantly increasing CO2 forcing + noise. My plots on volcano response show that linear models and the implicit concept of climate sensitivity are irrelevant. Anyone who does not agree with that, please raise a hand (and provide a coherent reason for not agreeing).

239. Willis' epiphany explained: for all things that represent X there are things that represent Y. Willis has found by experiment that the ratio of trends (X) explains Y, which is explained, as Nick Stokes has described, in the higher analysis of what X and Y represent. It falls out of the analysis – and in fact that is how Willis stumbled onto it. It was always there, obviously, among myriad other equivalencies. I don't think this is a particularly big deal, as it hasn't a thing to do with climate. This is not the first example where Willis has discovered X=Y for all valid examples of X and Y. It is why I cringe when Willis dives deep into the math. There are limitations to being self-taught. Still, Willis is a brilliant man, more akin to Edison than Tesla, but brilliant. I'm envious of the skill set he brings to the table and his ability to present complex ideas and reductions to the lay audience. And he doesn't suffer valid and non-valid criticisms gracefully, as will be seen shortly.
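A toy version of the initial-condition ensembles described in #237, using the Lorenz-63 system as a stand-in for a weather model. The perturbation size, member count and run length are arbitrary; this is an illustration of the idea, not how any operational system is coded:

    import numpy as np

    def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
        # One forward-Euler step of the Lorenz-63 system, a toy chaotic "atmosphere".
        x, y, z = s
        return s + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

    rng = np.random.default_rng(0)
    base = np.array([1.0, 1.0, 1.0])                 # the analysed "initial state"
    members = [base + 1e-4 * rng.standard_normal(3)  # tiny perturbations of it
               for _ in range(20)]

    for _ in range(2500):                            # integrate all members forward
        members = [lorenz_step(s) for s in members]

    x_vals = [s[0] for s in members]
    print(np.mean(x_vals), np.std(x_vals))
    # The members start indistinguishable and end spread across the attractor:
    # individual trajectories are unpredictable, but the spread is informative.
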
240. Greg Goodman says: June 4, 2013 at 9:04 pm "This is the modellers' preconceived understanding that they have built into the models themselves and adjusted with the 'parameterised' inputs: that climate is nothing but a constantly increasing CO2 forcing + noise." I've no idea where you get all that from. It's nonsense. Willis in this post works on total forcing; there's no breakdown into components. And modellers have no such preconceived understanding; even if they did, it would be irrelevant. They are solving the Navier-Stokes equations. No-one claims that climate is increasing CO2 forcing plus noise. There are simple models (Willis's is one) which treat it as increasing (mostly) total forcing, with some decay function, plus noise. Forcing is important – it's right up in the first section of the AR4 SPM. But no-one says it is just CO2.

241. @Greg Goodman: yeah, you are right. Apologies to Matthew.

242. "I've no idea where you get all that from. …. Forcing is important – it's right up in the first section of the AR4 SPM. But no-one says it is just CO2." If you don't know where I get it from, I suggest you read the linked post again. No-one _says_ it is just CO2, but the models do. That is what Willis' observation means, as I explained in some detail. The fact that the models can be approximated in their global average output by a linear model means that the dominant features are linear. There's a whole lot more going on in there, much of which is probably not linear, and they produce a lot more than a global average temperature. However, they are predominantly linear. Furthermore, Willis' observation is not a trivial result for all linear models in all circumstances; it is specific to applying an additional condition to the linear equation, that of constant ΔF. Now, if the models all line up bang on a slope equal to lambda, that means not only that they are linear in their global average, but that they too are conforming to that additional condition. And we know where the constantly increasing "forcing" comes from: it's what we've been talking about for the last 20 years. This means that all the variation in forcing in the models is averaging out to give the same behaviour as the linear model under constant dF once the transients have settled, i.e. all the variations are equivalent to symmetrical random 'noise', and the dominant feature is the linearly increasing forcing. In fact the linearly increasing forcing is the calculated CO2 radiative forcing plus the hypothesised water vapour amplification. The latter is greater than the former and has no foundation in observational data. THAT is the preconceived understanding; and it is irrelevant. THAT is the model which has failed, thus providing us with the NEGATIVE result which will be useful from now on: climate is NOT well represented by constantly increasing CO2 forcing + noise.

243. Nick writes: "Forcing is important – it's right up in the first section of the AR4 SPM. But no-one says it is just CO2." So, supposing CO2 stays constant – does the climate change? Can the models explain the Little Ice Age?

244. Clive, yes, if CO2 stayed constant and other forcings changed, the models would show a climate response. I don't know of any LIA runs. You can only usefully run the models forward from a reasonably well-known starting point, with a lot of spatial detail; I doubt if they could find one.

245. Still no credible reply to the lack of cooling due to volcanism: if you take out volcano forcing from the models to better reflect this, they will go sky high from 1963 onwards.
I can understand why Nick is not "enthusiastic", but that does not erase what happens in the data.

246. @Nick Stokes: "Many models can show an ENSO oscillation, but the phase of the cycle is indeterminate." And I can fly a helicopter, but my ability to keep it in the air is indeterminate :)

247. Nick Stokes – What Willis has managed to prove is that after transient effects have died out, the relationship of changes in forcing to changes in temperature is λΔF = ΔT. Which is the very definition of λ as equilibrium climate sensitivity (ECS). Whether the equilibrium is at zero ΔF or a constant one, a constant forcing pattern leads to ECS. By definition. Somehow I find the (re)discovery of the definition of ECS to be something less than earthshaking…

248. Theo Goodwin says: June 4, 2013 at 3:20 pm "Compressed to a single value? Your metaphor lifts no weight, does no work. I am astonished that you think that you said something." I'm not sure I understand your critique, so let me say what I was trying to say in a different way. GCMs (depending on which one, how coarse the resolution of the run is, and the size of the time step) can calculate >5M surface temp samples per year. All of these are then averaged to a single annual value. What this hides is that one area can be 30 C high and one area can be 30 C low, and they average out to a reasonable value.

249. "but the phase of the cycle is indeterminate" … because they have not yet worked out that it's driven by the lunar perigee cycle. Then they'll get the phase and the period in sync with the wobbles the models are able to make.

250. http://climategrog.wordpress.com/?attachment_id=281 The "3-to-5" year ENSO cycle is the 8.85 / 2.0 peak being split by something longer, circa 28 years.

251. MiCro says: June 4, 2013 at 12:12 pm "I have read somewhere (and I can't find it now, it was years ago), that GCM's didn't create rising temps while CO2 went up, and they didn't know why. They then linked CO2 to water vapor either directly to temp, or with a Climate Sensitivity factor. I'm trying to find this 'proof'." I found a reference to what I was trying to remember: "For example, it was not until 2009 that satellite measurements showed definitively that Manabe's idea of simply holding relative humidity constant as the temperature increased did describe quite exactly how the global atmosphere behaved." I've extended my data-mining code for the NCDC data set to extract both relative humidity and surface pressure, and will write something up on measured trends; once I've finished this, I'll ask Anthony if he'll be so kind as to publish it here.

252. Greg Goodman says: June 5, 2013 at 6:44 am "The '3-to-5' year ENSO cycle is the 8.85 / 2.0 peak being split by something longer circa 28 years." Could be orbital (Moon, Jupiter, Saturn), or it could be the time constant for enough heat to get stored in one (or more) ocean's surface waters that then alters trade winds, surface pressure, or the bulge of warmer water that can then get a stronger tidal push/pull????

253. Adam: "Are you saying that Willis has done new and original work here, or are you saying he has not?" MatthewRMarler, to Willis Eschenbach: "I have at least 3 times written that you have discovered something interesting."

254. @Matthew R Marler: I asked: "Are you saying that Willis has done new and original work here, or are you saying he has not?" You answered: "I have at least 3 times written that you (Willis) have discovered something interesting." How is that answer relevant to the question of "new and original work"?
You use the word "interesting"; that is not an answer to the question about "new and original work". So I will ask you (and others) again. It really is a simple Yes, No, or Don't Know situation. Has Willis presented here new and original work? Please answer either Yes, No, or Don't Know. If the answer is No, then please provide the sources for where the result(s) has (have) been previously made available. I don't think this is an unreasonable request. Do you?

255. PS, my position is that I Don't Know. Which is why I am keen to find out what the experts think.

256. Janice Moore says: June 3, 2013 at 9:15 pm
"Dear Luther Wu, It's after 11:00PM in Oklahoma, now. Perhaps, you have gone to bed. In case you're up, just wanted to tell you I am SO GLAD THAT YOU ARE OKAY. 'Dear God, please take care of Luther Wu,' I prayed many times this past weekend."
It's too bad "god" chose to ignore prayers for those who died, especially the children.

257. Adam,
Don't know? A most excellent and underused position. We who don't know… well, at least we know for sure that we don't know. What about y'all poseurs? I say that because of all the posturing. Even a decent scientist may reject the implication that they spent a decade or two in a circular argument. Not a great one, though. So even if Willis is confirmed to have found a sophistry fallacy in the minds of modelers, they may need to dance around their nostalgia for a few months. Or decades.
The oddity here to me is the lack of (direct) rebuttal (of Willis' proposition). Only a "you are an outtie, we are innies" kind of argument, beneath my expectation of serious thinkers. "You are smart but not allowed in the club" is the tone I heard a few times. If you don't like what Willis said, the least you can do is explain yourself. Is this a cult?
So Adam, I was thinking about pressing for clarity myself. Now that you did it, I can just say "what Adam said."

258. Hansen's 1984 Climate Sensitivity paper: "This similarity suggests that, to first order, the climate effect due to several forcings including various tropospheric trace gases may be a simple function of the total forcing."

259. Nick Stokes: "A climate model solves differential equations. It can tell you how things will change, providing you tell it the starting point."
This is so badly wrong that it is not even funny, and you should know it. That you can write such obvious nonsense, well knowing that it is nonsense, certainly raises questions about your good faith.
No, the climate models do anything but solve differential equations. What the climate models do is to take huge chunks of atmosphere and ocean (about 100 km x 100 km) and try to conserve energy, mass and momentum. I say try because they don't succeed very well, for obvious reasons: too low resolution and poor interface understanding. Of course in real physics the conservation laws translate into Navier-Stokes equations for the system of fluids we are contemplating here. But it would be an insult to every physicist to even suggest that N numbers computed on N 100 km x 100 km cells might be anywhere near to a solution of Navier-Stokes!! They are not, can't be and will never be. This is btw the fundamental reason why the models get the spatial variability and biphasic processes (precipitation, clouds, snow and ice) hopelessly wrong.
This is also why they will never be able to produce the right oceanic currents or the right oceanic oscillations, which are the defining features of climate and are indeed solutions of differential equations that Mother Nature is solving at every second. So let us be very clear: climate models are just primitive heaps of big boxes where the interfaces are added by hand and each box attempts to obey conservation laws. They solve no differential equations, converge to no solutions and approximate no exact local law of physics.
The only thing they can do, and here Willis has a point, is to get completely trivial and tautological relations right. Indeed dT/dF = (dT/dt)/(dF/dt), and when one destroys the whole spatial variability by only taking global averages (which, btw, removes any physical relevance from the variables), then every model that at least half-assedly respects energy conservation simply MUST get this tautology right. If it didn't, then I think even Jones or Hansen would have noticed ;)

260. Tom,
For the purposes of my remark, it would be sufficient to say they solve recurrence relations. But I have spent a lot of my professional life in numerical PDE. The GCMs are orthodox PDE solvers. Of course they have resolution limitations - that's inherent in discretisation. And they need to do subgrid modelling, as all practical CFD does. And CFD works. Planes fly (even helicopters). But they certainly conserve energy, mass and momentum. If you don't conserve energy, it explodes. If you don't conserve mass, it collapses. In fact, if you don't conserve species, the planet runs dry or whatever. There is a minimum of physical reality which is needed just to keep a program running.
And they work. As David Riser says, some of them double as numerical weather forecasters or hurricane modellers. Now people complain about weather forecasts, but they are actually very good, and certainly reveal coming reality in ways nothing else can. Where I am we get eight days ahead of quantitative rainfall maps. It rarely fails.
Anyway, for those curious, here are the equations solved by CAM 3, a publicly available code. Here are the finite difference equations; the horizontal momentum equations are solved by a spectral method.

261. @Nick Stokes
"And they work. As David Riser says, some of them double as numerical weather forecasters or hurricane modellers. Now people complain about weather forecasts, but they are actually very good, and certainly reveal coming reality in ways nothing else can. Where I am we get eight days ahead of quantitative rainfall maps. It rarely fails."
This is true. For the short term, in small regions of space, the model works well enough. This is proven every day with the accurate weather forecasts. But how well do those models perform when the scale is global and the time is 50 years into the future? The answer, as we are seeing by comparing the predictions made in the 1990s with what we are experiencing today, is… drum roll… not very well at all. For example, we were told that it would be a lot warmer by now and that the climate would be continuing to warm. But it is not warmer now than then and the climate is not continuing to warm (presently). The UK just had its coldest spring since 1891 (http://wattsupwiththat.com/2013/06/02/coldest-spring-in-england-since-1891/), but the models told us that "snow would be a thing of the past".
So, the proof is in the pudding. The models baked the pudding. The pudding tasted really bad and now nobody wants to pay for another one.
262. Adam,
"For short term in small regions of space the model works well enough. This is proven every day with the accurate weather forecasts."
I am answering the absurd claim that the models do not solve differential equations. The accurate forecasts are proof that they do. Of course, accuracy on average over fifty years is another matter.
I think you need to look more carefully at what climate models have predicted. No model said that snow would be a thing of the past. Yes, we've had a few years cooler than expected, though one can be overly locally focussed. Where I am, we've just had a very warm autumn. It wasn't bad today either.

263. What's so misleading about this entire topic is the implication that all climate models do is calculate a single value for temperature. Willis would have us believe that all those dumb scientists made something so infinitely complex when they could have just listened to him and saved everyone a whole lot of time. Sorry Willis, single line equations don't do this:

264. Phil M. says: June 6, 2013 at 5:49 pm
Eh? You are inferring something that Willis doesn't imply at all.

265. Adam: "Has Willis presented here new and original work?"
Some of what Willis presented here was new and original.

266. Adam: "PS, my position is that I Don't Know"
On that we can agree. Start over from the top and read Willis' essay carefully, then read the comments carefully, read Willis' responses carefully, and then read the responses to his responses carefully. I think it should be clear what I thought was new and what wasn't new.

267. Hey greg,
Gravity holds the grids together :) lol. Additionally it provides a means to conserve energy when heat rises/falls etc. There are a lot more gravity effects than tidal.
One thing that occurred to me over the last few days, while being offshore experiencing some weather, is that what Willis's mathematical demonstration shows is that the long-term climate modelers cheated a bit when they developed the AGW forcings. Because just adding more CO2 did not in fact work, they started playing with water vapor based on some unknown mechanic as CO2 was added. The only way this would work is by creating a fairly simple linear equation based on CO2 concentration that increases water vapor, which in most of these models is a very direct representation of energy. Hence the steady, linear rise in temperature over time. Obviously this does not accurately model anything, since the mechanics are not understood and models are designed to mimic how things work; just adding random equations does not in fact a model make.

268. David Riser,
"Because just adding more CO2 did not in fact work they started playing with water vapor based on some unknown mechanic as CO2 was added. The only way this would work is by creating a fairly simple linear equation based on CO2 concentration that increases water vapor which in most of these models is a very direct representation of energy."
None of that is true. The water vapor feedback goes back to Arrhenius. In the models, water vapor increases because the ocean boundary condition keeps air saturated there, from where it is advected. No mystery. WV feedback applies to any rise in temperature - not specific to CO2. Models are designed to solve the flow equations, not just mimic how things work, and random equations are not added. But I agree about gravity.
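For readers who want to see what "advected, conserving mass" means in a discretised model, here is a deliberately tiny one-dimensional sketch in C#. It is not the scheme used by CAM3 or any real GCM (those use spectral and finite-volume methods plus subgrid models); every grid size and velocity here is made up. Its only point is that a flux-form discretisation conserves a tracer exactly, the discrete analogue of the conservation argument above.

    using System;

    class AdvectionDemo
    {
        // Minimal 1-D upwind advection of a tracer q on a periodic grid.
        static void Main()
        {
            int n = 100;
            double dx = 1.0, u = 1.0, dt = 0.5;     // CFL = u*dt/dx = 0.5
            double[] q = new double[n];
            for (int i = 40; i < 60; i++) q[i] = 1.0;   // initial tracer blob

            double mass0 = Sum(q);
            for (int step = 0; step < 200; step++)
            {
                double[] qNew = new double[n];
                for (int i = 0; i < n; i++)
                {
                    int im1 = (i + n - 1) % n;          // periodic upwind neighbour
                    // Flux form: what leaves one cell enters the next,
                    // so the total is conserved to round-off.
                    qNew[i] = q[i] - (u * dt / dx) * (q[i] - q[im1]);
                }
                q = qNew;
            }
            Console.WriteLine("mass before = {0}, after = {1}", mass0, Sum(q));
        }

        static double Sum(double[] a)
        {
            double s = 0.0;
            foreach (double v in a) s += v;
            return s;
        }
    }

Because each cell's loss is exactly its neighbour's gain, the two printed totals agree to round-off; a scheme that broke this property would, as the comment above puts it, collapse or run dry.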
269. Nick Stokes commented on David Riser,
"Because just adding more CO2 did not in fact work they started playing with water vapor based on some unknown mechanic as CO2 was added. The only way this would work is by creating a fairly simple linear equation based on CO2 concentration that increases water vapor which in most of these models is a very direct representation of energy."
"None of that is true. The water vapor feedback goes back to Arrhenius. In the models, water vapor increases because the ocean boundary condition keeps air saturated there, from where it is advected. No mystery. WV feedback applies to any rise in temperature - not specific to CO2. Models are designed to solve the flow equations, not just mimic how things work, and random equations are not added."
As I noted above, this is not correct. What they do is force Relative Humidity to remain constant as temperature increases; this is the "hidden" forcing, without which GCMs did not match measured temperature increases when CO2 rose.

270. OK, but holding relative humidity constant as temperature increases is exactly what I said they do. In order to hold relative humidity constant while raising temperature you have to add water vapor. This would take a simple linear equation, and it is sloppy, since relative humidity does not stay constant as temperature increases in nature, regardless of why it is done, particularly if it's tied to CO2 increase.

271. MiCro, David,
No, everyone seems to think they hold relative humidity constant, but they don't. They have an ocean boundary condition, which is based on the idea that air adjacent to water is saturated. That doesn't even mean fixing RH in bottom cells - there will be some model of diffusion through the air boundary layer, dependent on wind etc. But after that the water is just advected, conserving mass, and with mixing, condensation conditions etc. It would actually be impossible to hold RH constant and conserve mass.

272. @ David Riser
I wasn't disagreeing with you, David. I was explaining exactly how they do it, and why it might look innocuous.

273. @Nick,
There are at least a few papers on the topic that say it's not correct. And while the origin of the idea I think goes back to the 60s, it wasn't added to GCMs until the 70s-80s(?), and not "confirmed" until 2009. But I'll look at the Model E1 code tomorrow and see if I can follow what it actually does do. But if it makes CS larger than it would be, and we find that CS is too large compared to actual measurements, that makes a compelling case for it being wrong, doesn't it?

274. MiCro,
Here is how it is done in CAM3. For the ocean boundary, see the para leading to 4.440, which determines the boundary transfer coefficient that I referred to. The advective transport equation is here. Because the slow processes of mixing are lagged behind the dynamic core which does advection, there is also a section on mass fixers, 3.1.19; because water is condensable, there's a bit more to this catch-up stage in 3.3.6.

275. This is not a comment on whether the Willis equation accurately reflects climate physics; it is a comment on the mathematical properties of the equation. In short, the equation (which is a digital filter) has a pole and a zero that cancel out, and can be reduced to a simpler first-order equation.
Willis, I am afraid you are constructing entire mountain ranges out of a molehill. If I am understanding your "delta" notation correctly, delta F(1) is F(1)-F(0), or more generally, delta F(n) = F(n)-F(n-1). If that is correct then you are making a linear combination of current F, previous F, previous T, and previous-previous T.
So I would rewrite the equation as:
T(n) = Lambda*(1-a)*[F(n) - F(n-1)] + T(n-1) + a*[T(n-1) - T(n-2)]
We can collect some terms to get:
T(n) = Lambda*(1-a)*[F(n) - F(n-1)] + (1+a)*T(n-1) - a*T(n-2)
This is a standard second-order "biquad" digital filter as described here (the standard form allows for an F(n-2) term also). Its z-transform is:
H(z) = Lambda * [(1-a) - (1-a)*z^-1] / [1 - (1+a)*z^-1 + a*z^-2]
The reason Lambda is brought out to the front of the expression is because it is what I would call the "DC gain" term. If Lambda is 1, a unit step input will cause the output to rise (sort of) exponentially to reach 1. If Lambda is 2, a unit step input will produce an output that rises to 2. If the input is a ramp, the output will be a ramp with a slope of Lambda times the slope of the input. So it's really not remarkable. It's a property of your equation.
But wait - it gets better. The numerator of the z-transform has a root at z = 1. The denominator has roots at z = 1 and z = a, so they have a common factor that can be canceled out (i.e., 1 - z^-1). The equivalent z-transform is:
H(z) = Lambda*(1-a) / (1 - a*z^-1)
and the corresponding equation is:
T(n) = Lambda*(1-a)*F(n) + a*T(n-1)
which will perform identically to the original equation.

276. Well, I thought I might have problems formatting equations in plain text. Not sure what happened in the first sentence. In the two z-transforms, the numerator and denominator should be aligned with the line, and Lambda is multiplied by the ratio.

277. David Moon, you end by saying:
"and the corresponding equation is: T(n) = Lambda*(1-a)*F(n) + a*T(n-1), which will perform identically to the original equation."
Thanks, David. I tried that equation, and I got very different results from my original equation. I couldn't make them agree … perhaps if you posted a spreadsheet actually doing the calculations step by step, for your equation and for my original equation, it would become clear. Here's the data for the Forster forcing and model for you to use as examples; I'm interested to see how your method compares.
Year, Forster Forcing (W/m2), Forster Model Results (°C)
1850, 0.01, 0.036
1851, 0.03, 0.059
1852, 0.06, 0.079
1853, -0.03, -0.005
1854, -0.03, -0.015
1855, -0.02, -0.006
1856, -0.26, -0.249
1857, -0.46, -0.45
1858, -0.18, -0.169
1859, 0.05, 0.059
1860, 0.12, 0.131
1861, 0.06, 0.061
1862, -0.09, -0.091
1863, 0.05, 0.048
1864, 0.04, 0.039
1865, 0.03, 0.028
1866, 0.11, 0.107
1867, 0.17, 0.163
1868, 0.18, 0.177
1869, 0.19, 0.177
1870, 0.06, 0.047
1871, 0.19, 0.18
1872, 0.20, 0.19
1873, 0.20, 0.181
1874, 0.10, 0.082
1875, 0.11, 0.089
1876, 0.02, -0.004
1877, 0.04, 0.018
1878, 0.09, 0.063
1879, 0.08, 0.054
1880, 0.16, 0.133
1881, 0.31, 0.283
1882, 0.23, 0.195
1883, -0.89, -0.927
1884, -1.64, -1.681
1885, -0.80, -0.836
1886, -0.18, -0.218
1887, -0.09, -0.132
1888, -0.02, -0.061
1889, -0.16, -0.205
1890, -0.50, -0.545
1891, -0.27, -0.318
1892, 0.14, 0.084
1893, 0.29, 0.235
1894, 0.36, 0.301
1895, 0.35, 0.296
1896, -0.05, -0.107
1897, -0.06, -0.117
1898, 0.27, 0.206
1899, 0.22, 0.154
1900, 0.24, 0.172
1901, 0.38, 0.307
1902, -0.26, -0.334
1903, -1.13, -1.205
1904, -0.11, -0.18
1905, 0.27, 0.195
1906, 0.22, 0.142
1907, 0.20, 0.124
1908, 0.37, 0.29
1909, 0.43, 0.341
1910, 0.32, 0.232
1911, 0.23, 0.142
1912, -0.07, -0.159
1913, 0.00, -0.088
1914, 0.34, 0.25
1915, 0.48, 0.389
1916, 0.41, 0.313
1917, 0.34, 0.241
1918, 0.46, 0.362
1919, 0.47, 0.371
1920, 0.38, 0.275
1921, 0.36, 0.257
1922, 0.45, 0.341
1923, 0.44, 0.327
1924, 0.43, 0.314
1925, 0.54, 0.422
1926, 0.52, 0.405
1927, 0.39, 0.27
1928, 0.46, 0.343
1929, 0.37, 0.252
1930, 0.49, 0.365
1931, 0.50, 0.371
1932, 0.30, 0.175
1933, 0.38, 0.247
1934, 0.57, 0.438
1935, 0.58, 0.451
1936, 0.60, 0.462
1937, 0.57, 0.431
1938, 0.61, 0.473
1939, 0.57, 0.431
1940, 0.59, 0.452
1941, 0.60, 0.455
1942, 0.52, 0.378
1943, 0.57, 0.419
1944, 0.63, 0.481
1945, 0.69, 0.542
1946, 0.65, 0.495
1947, 0.68, 0.528
1948, 0.70, 0.544
1949, 0.59, 0.427
1950, 0.66, 0.496
1951, 0.68, 0.515
1952, 0.58, 0.413
1953, 0.58, 0.409
1954, 0.62, 0.449
1955, 0.64, 0.469
1956, 0.82, 0.643
1957, 0.73, 0.557
1958, 0.65, 0.477
1959, 0.75, 0.577
1960, 0.75, 0.575
1961, 0.52, 0.338
1962, 0.39, 0.208
1963, 0.59, 0.407
1964, -0.49, -0.68
1965, 0.15, -0.041
1966, 0.42, 0.232
1967, 0.67, 0.473
1968, 0.47, 0.276
1969, 0.34, 0.142
1970, 0.63, 0.434
1971, 0.79, 0.586
1972, 0.91, 0.712
1973, 0.85, 0.648
1974, 0.85, 0.646
1975, 0.64, 0.429
1976, 0.82, 0.61
1977, 0.94, 0.725
1978, 1.12, 0.91
1979, 1.23, 1.014
1980, 1.22, 0.998
1981, 1.25, 1.029
1982, 0.44, 0.218
1983, 0.26, 0.04
1984, 0.90, 0.675
1985, 1.27, 1.04
1986, 1.36, 1.129
1987, 1.39, 1.162
1988, 1.42, 1.191
1989, 1.51, 1.276
1990, 1.54, 1.306
1991, 0.43, 0.196
1992, -0.32, -0.561
1993, 1.03, 0.785
1994, 1.45, 1.202
1995, 1.65, 1.404
1996, 1.68, 1.432
1997, 1.83, 1.58
1998, 1.77, 1.515
1999, 1.89, 1.639
2000, 1.97, 1.715
2001, 2.07, 1.812
2002, 2.03, 1.771
2003, 2.05, 1.79

278. Was my interpretation of Delta F and Delta T correct? Do you disagree with my restatement of your equation? A step function or impulse is sufficient to establish equivalence - no need for a particular dataset. If my interpretation of "delta" is correct, then the z-transform is correct and a pole cancels a zero and makes it first-order. I will download your spreadsheet. Not sure how to make mine available - maybe through WUWT?
279. Phil M. says: June 6, 2013 at 5:49 am
"Sorry Willis, single line equations don't do this:"
On the contrary, here is a relatively well known one-line difference equation that shows otherwise:
z(n+1) = z(n)^2 + c
What Willis has shown is that the ensemble mean of the climate models can be closely modeled by a one-line difference equation, and the ensemble mean is what the climate modellers claim represents future climate. In effect, the climate modellers and the IPCC claim that the average of chaos is the future.
The power of Willis's equation is that it can be explored at low cost to discover properties about the models that the model builders may not themselves be aware of - to explore the mathematical assumptions that are at the heart of the climate models. For example, is Willis's formula chaotic? This could have huge implications for climate science and climate models.

280. Nick Stokes says: June 4, 2013 at 4:11 pm
"The thing is, they are climate models. They model the climate by creating weather, but do not claim to predict weather. They are good for longer term averages."
Nick, here is a question for you. Is the average of chaos not chaotic? Is there a mathematical proof establishing that the average is not chaotic? Otherwise, if the average of chaos (weather) is chaotic, then what reliance can there be in the "longer term averages"? It is the very nature of chaos that even the smallest lack of precision in the inputs will lead to divergence and large errors over time in the outputs. You cannot rely on the result to converge via the Law of Large Numbers, because a chaotic system lacks the constant mean and deviation required for convergence. I submit that you have made a fundamental mathematical error in assuming that the average of a chaotic system will demonstrate convergence over times less than infinity.

281. To help clarify my previous post, consider the orbit of earth's moon. If I was to ask you the average distance between the earth and moon, you could find this over the short term with reasonable accuracy, even though the orbital distance is constantly changing. This is weather forecasting.
However, over time this becomes harder to predict, because the moon's orbit is changing due to external forces such as the earth's tides, the sun and Jupiter. This is long-term weather forecasting. The distance at present is slowly getting larger, just like the weather is getting warmer as we move from spring to summer. We expect the average temps to go up, but we cannot say on what day precisely they will be higher or lower.
However, over really long periods of time it becomes impossible to predict the average distance to the moon, no matter how long the time period, because for all intents and purposes the moon's orbit is chaotic. It is increasing now, but we cannot say with any certainty whether this will continue indefinitely, or whether at some point the moon will start to move closer to the earth. This is climate forecasting. Even though we know the forcings that affect the earth's moon, we cannot accurately calculate its future orbit as we move further and further into the future. Taking the mean of our calculations is not going to make our predictions more accurate. It could even make them less accurate, because the true answer may lie closer to one of the boundaries than to the mean. It could even lie outside the boundaries, as we are now seeing with the current climate models.
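The point made in comment 279, that a single-line difference equation can behave chaotically, is easy to demonstrate. The C# sketch below (all numbers chosen purely for illustration) iterates z(n+1) = z(n)^2 + c with c = -2, a textbook chaotic case, from two starting points that differ by 10^-12; within a few dozen steps the two orbits are completely decorrelated.

    using System;

    class QuadraticMapDemo
    {
        // Sensitivity to initial conditions in z(n+1) = z(n)^2 + c.
        static void Main()
        {
            double c = -2.0;
            double z1 = 0.4, z2 = 0.4 + 1e-12;   // nearly identical starts
            for (int n = 1; n <= 50; n++)
            {
                z1 = z1 * z1 + c;
                z2 = z2 * z2 + c;
                if (n % 10 == 0)
                    Console.WriteLine("n={0,3}  z1={1,10:F6}  z2={2,10:F6}  |diff|={3:E2}",
                                      n, z1, z2, Math.Abs(z1 - z2));
            }
        }
    }

The tiny initial difference grows roughly exponentially until it is of order 1, which is exactly the divergence property the chaos discussion above is about.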
282. Greg Goodman says: June 4, 2013 at 10:34 am
".. the true climate reaction to volcanism."
Falling temps are a good predictor of volcanoes. Clearly falling temps cause volcanoes by shrinking the surface of the earth. Sort of like the expansion gaps in bridges and railways. As temps drop the gaps get bigger, making more room for magma to flow out. Eventually we get volcanoes. Well, this is climate science we are talking about, so why not? Doesn't seem to matter one bit to the climate scientists that CO2 lags temperature, so why should they worry if volcanoes lag temperature too?
Or, the alternate possibility is that there has been so much processing of the temperature records that annual temps have been smeared over multiple years, giving the impression that temps lead volcanoes. In other words, by trying to make temps "more accurate", climate science has made them less accurate, because they have allowed bias to creep into the adjustments.

283. Output of original eqn, Lambda = 1, alpha = 0.8, unit step input:
input  prev in  alpha  output       prev out     prev-prev out
1      0        0.8    0.2          0            0
1      1        0.8    0.36         0.2          0
1      1        0.8    0.488        0.36         0.2
1      1        0.8    0.5904       0.488        0.36
1      1        0.8    0.67232      0.5904       0.488
1      1        0.8    0.737856     0.67232      0.5904
1      1        0.8    0.7902848    0.737856     0.67232
1      1        0.8    0.83222784   0.7902848    0.737856
1      1        0.8    0.865782272  0.83222784   0.7902848
1      1        0.8    0.892625818  0.865782272  0.83222784
1      1        0.8    0.914100654  0.892625818  0.865782272
1      1        0.8    0.931280523  0.914100654  0.892625818
1      1        0.8    0.945024419  0.931280523  0.914100654
Output of simplified eqn:
input  alpha  output       prev out
1      0.8    0.2          0
1      0.8    0.36         0.2
1      0.8    0.488        0.36
1      0.8    0.5904       0.488
1      0.8    0.67232      0.5904
1      0.8    0.737856     0.67232
1      0.8    0.7902848    0.737856
1      0.8    0.83222784   0.7902848
1      0.8    0.865782272  0.83222784
1      0.8    0.892625818  0.865782272
1      0.8    0.914100654  0.892625818
1      0.8    0.931280523  0.914100654
1      0.8    0.945024419  0.931280523

284. Argh - more perils of posting plain text. The columns should be input / prev in / alpha / output / prev out / prev-prev out. In the second example, prev in and prev-prev out are not used in the equation and were N/A in the first row and blank in the remaining rows. All 0.8 should be in the same column (alpha). Probably easier to understand if pasted back into a spreadsheet.

285. @ Willis,
In previous post a typical "output" cell was "=(1-C4)*(A4-B4)+(1+C4)*E4-C4*F4". I signed up for the same file-sharing service you use. Now I just need to learn how to use it to post my spreadsheet. On June 7, 2013 at 11:39 pm you posted a forcing and a model output. I would need to know the Lambda and alpha used for that run in order to try to reproduce it.

286. david moon says: June 9, 2013 at 5:11 pm
"@ Willis, In previous post a typical "output" cell was "=(1-C4)*(A4-B4)+(1+C4)*E4-C4*F4". I signed up for the same file-sharing service you use. Now I just need to learn how to use it to post my spreadsheet. On June 7, 2013 at 11:39 pm you posted a forcing and a model output. I would need to know the Lambda and alpha used for that run in order to try to reproduce it."
I use Dropbox, which gives me a folder on my desktop. When I put something in there, it's copied to the Dropbox cloud. When I right-click on the item in the Dropbox folder (if it is in the "Public" folder in the Dropbox folder), I get an option to copy the URL.
Regarding the model outputs, those were the actual outputs of the actual models: GISS, Forster 19 Average Models, CM2.1. So we don't know the time constant and sensitivity used to create those outputs … but we can use the one-line equation to calculate them.
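David Moon's step-response tables are straightforward to reproduce in code. The C# sketch below assumes, as the tables do, Lambda = 1, alpha = 0.8, a unit step input and zero initial conditions, and runs the second-order recurrence and the reduced first-order recurrence side by side. Under those initial conditions the two outputs match term for term for any input, which is the pole-zero cancellation argument in numerical form; differing initial conditions are one possible source of the "very different results" reported above.

    using System;

    class FilterEquivalenceDemo
    {
        // Second-order form:  T(n) = L(1-a)[F(n)-F(n-1)] + (1+a)T(n-1) - a*T(n-2)
        // First-order form:   T(n) = L(1-a)F(n) + a*T(n-1)
        static void Main()
        {
            double L = 1.0, a = 0.8;
            int steps = 13;
            double[] F = new double[steps + 1];
            for (int n = 1; n <= steps; n++) F[n] = 1.0;   // unit step, F(0) = 0

            double t1 = 0.0, t2 = 0.0;   // T(n-1), T(n-2) for the 2nd-order form
            double s1 = 0.0;             // T(n-1) for the 1st-order form
            for (int n = 1; n <= steps; n++)
            {
                double tNew = L * (1 - a) * (F[n] - F[n - 1]) + (1 + a) * t1 - a * t2;
                double sNew = L * (1 - a) * F[n] + a * s1;
                Console.WriteLine("n={0,2}  2nd-order={1:F9}  1st-order={2:F9}", n, tNew, sNew);
                t2 = t1; t1 = tNew;
                s1 = sNew;
            }
        }
    }

Both columns reproduce the 0.2, 0.36, 0.488, … sequence of the tables above, rising geometrically toward the DC gain Lambda = 1.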
287. MiCro says: June 10, 2013 at 7:24 am
"Nick, here is a question for you. Is the average of chaos not chaotic? Is there a mathematical proof establishing that the average is not chaotic? Otherwise, if the average of chaos (weather) is chaotic, then what reliance can there be in the 'longer term averages'? It is the very nature of chaos that even the smallest lack of precision in the inputs will lead to divergence and large errors over time in the outputs. You cannot rely on the result to converge via the Law of Large Numbers, because a chaotic system lacks the constant mean and deviation required for convergence. I submit that you have made a fundamental mathematical error in assuming that the average of a chaotic system will demonstrate convergence over times less than infinity."
Here's my answer (YMMV). Weather is chaotic, climate isn't. You can see this in actual long-term weather averages. There are caveats, though; underlying trends show up. If you go look at the charts I made here: http://wattsupwiththat.com/2013/05/17/an-analysis-of-night-time-cooling-based-on-ncdc-station-record-data/ - the first set is the averaged data, but when you look at the daily diff chart for 1950-2010 you see chaotic data, plus you see the seasons change, and yet when you average it out over a single year it's almost zero.

288. Nick Stokes says: June 4, 2013 at 4:11 pm
"The thing is, they are climate models. They model the climate by creating weather, but do not claim to predict weather. They are good for longer term averages."
MiCro says: June 10, 2013 at 7:24 am
"Nick, here is a question for you. Is the average of chaos not chaotic? Is there a mathematical proof establishing that the average is not chaotic? Otherwise, if the average of chaos (weather) is chaotic, then what reliance can there be in the 'longer term averages'? … Here's my answer (YMMV). Weather is chaotic, climate isn't. You can see this in actual long-term weather averages."
Unfortunately, MiCro, your question has already been answered by the dean of fractals himself, Benoit Mandelbrot … you and Nick can read about it here, but the short answer is, Mandelbrot's analysis and mathematics clearly establish that climate is just as chaotic as weather. To make the analysis, Mandelbrot had to look at the longer-term climate records. In making his determination, he analyzed 12 varve series, 27 tree-ring series from the western U.S. (no bristlecones), 9 precipitation series, 1 earthquake frequency series, 11 river series and 3 Paleozoic sediment series … so I'd say that the question is settled. Climate is as chaotic as weather. However, your point of view is well established among the modelers, so at least you have lots of company in your misconception …

289. Willis - I put a simple "demo" spreadsheet in Dropbox which implements both "one line" equations. I will be interested in your reaction.
Just a comment about your spreadsheet - it looks like my version of the original equation is functionally equivalent to yours. I chose to copy the output to a new column shifted down by one row; you look back in the same column. This might be a problem in, for example, cell BD19, which refers to BD18 and BD17, which are not numbers. Open Office is VERY unhappy with this, Excel not so much. Maybe Excel assumes a value of zero - I don't know.

290. W - feel free to change alpha or lambda, or paste a different input ("forcing"). The output of both equations will be the same.

291. w - are you still looking at comments on this thread? I have some more findings I would like to discuss with you. If you want to take it out of comments I am at dmoon@sbcglobal.net.
Asymptotics of the generalized exponential integral, and error bounds in the uniform asymptotic smoothing of its Stokes' discontinuities. To appear in: Royal Society of London Proceedings: Philosophical Transactions

In Tricomi's Ideas and Contemporary Applied Mathematics, Atti dei Convegni Lincei, n. 147, Accademia Nazionale dei Lincei, 1998
"The theory of the incomplete gamma functions, as part of the theory of confluent hypergeometric functions, has received its first systematic exposition by Tricomi in the early 1950s. His own contributions, as well as further advances made thereafter, are surveyed here with particular emphasis on asymptotic expansions, zeros, inequalities, computational methods, and applications."

"We consider the asymptotic behavior of the incomplete gamma functions γ(-a, -z) and Γ(-a, -z) as a → ∞. Uniform expansions are needed to describe the transition area z ≈ a, in which case error functions are used as main approximants. We use integral representations of the incomplete gamma functions and derive a uniform expansion by applying techniques used for the existing uniform expansions for γ(a, z) and Γ(a, z). The result is compared with Olver's uniform expansion for the generalized exponential integral. A numerical verification of the expansion is given in a final section."
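For reference (these are the standard definitions, not taken from the abstracts above), the incomplete gamma functions discussed here are

$$\gamma(a,z) = \int_{0}^{z} t^{a-1} e^{-t}\,dt, \qquad \Gamma(a,z) = \int_{z}^{\infty} t^{a-1} e^{-t}\,dt, \qquad \gamma(a,z) + \Gamma(a,z) = \Gamma(a),$$

and the second abstract studies them at negated arguments, $\gamma(-a,-z)$ and $\Gamma(-a,-z)$, in the limit $a \to \infty$.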
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2138655","timestamp":"2014-04-20T14:45:17Z","content_type":null,"content_length":"15372","record_id":"<urn:uuid:10f996b5-17cc-474c-a21e-ee981f584e41>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
Abstract-several questions
February 9th 2010, 01:33 AM, #1

Well, I'm trying to solve some old exams in abstract algebra, and I need your help with the following:

Question 1: Let $U(\mathbb{Z}/15\mathbb{Z})$ be the multiplicative group of the invertible elements in $\mathbb{Z}/15\mathbb{Z}$. Write $U(\mathbb{Z}/15\mathbb{Z})$ as a product of cyclic groups.
My try: It's easy to show that x is in U iff gcd(x, 15) = 1. Hence U = {1, 2, 4, 7, 8, 11, 13, 14}. If we take the element 2 as a generator we get the cyclic subgroup {1, 2, 4, 8}; if we take <7> we get {1, 7, 4, 13}, etc. I can't figure out how to write U as a product of cyclic groups when the intersection of each two cyclic subgroups of U is "bigger" than {1}.

Question 2: Prove that $\mathbb{Z}/n\mathbb{Z}$ has only one maximal ideal (which isn't trivial) iff n is a power of a prime number.

Question 3: Find a 3-Sylow subgroup of $S_{8}$ and find a group that is isomorphic to it.
My try: We know $o(S_{8}) = 8!$. Hence a 3-Sylow subgroup H has order $o(H) = 3^{2} = 9$. If we can find an element in $S_{8}$ of that order, we're done… but is there any element of that order? How should I solve this one?

Thanks a lot!
Last edited by WannaBe; February 9th 2010 at 06:11 AM.

Reply:
Quote: Question 1 (as above).
If $\mathbb{Z}/m\mathbb{Z} \cong \mathbb{Z}/m_1\mathbb{Z} \oplus \mathbb{Z}/m_2\mathbb{Z} \oplus \cdots \oplus \mathbb{Z}/m_i\mathbb{Z}$, then $U(\mathbb{Z}/m\mathbb{Z}) \cong U(\mathbb{Z}/m_1\mathbb{Z}) \times U(\mathbb{Z}/m_2\mathbb{Z}) \times \cdots \times U(\mathbb{Z}/m_i\mathbb{Z})$. We see that if each $m_k$ is a prime number, then each $U(\mathbb{Z}/m_k\mathbb{Z})$ is a cyclic group. Thus, $U(\mathbb{Z}/15\mathbb{Z}) \cong U(\mathbb{Z}/3\mathbb{Z}) \times U(\mathbb{Z}/5\mathbb{Z})$.

Quote: Question 2 (as above).
If n is a power of a prime number, then $\mathbb{Z}/n\mathbb{Z} = \mathbb{Z}/p^i\mathbb{Z}$. The maximal ideal of $\mathbb{Z}/p^i\mathbb{Z}$ is $(p) = p\mathbb{Z}/p^i\mathbb{Z}$ (as an additive group it is isomorphic to $\mathbb{Z}/p^{i-1}\mathbb{Z}$) for $i \geq 2$, and it is unique for $n = p^i$. Conversely, if n is divisible by two distinct primes p and q, then $(p)$ and $(q)$ are two distinct maximal ideals, so uniqueness forces n to be a prime power.

Quote: Question 3 (as above).
H = {e, (1,2,3), (1,3,2), (4,5,6), (4,6,5), (1,2,3)(4,5,6), (1,2,3)(4,6,5), (1,3,2)(4,5,6), (1,3,2)(4,6,5)}. Other subgroups of order 9 in $S_8$ can be found similarly.
Last edited by aliceinwonderland; February 10th 2010 at 02:15 AM. Reason: Correction

Thanks a lot man!
Oh, you meant Question 3, LOL … I wasn't sure about it…
Yep, I saw your "modification"… But we know that there are 2 groups of order 9 up to isomorphism: $C_3 \times C_3$ and $C_9$… We don't have an element of order 9 in $S_8$, hence every subgroup of order 9 in $S_8$ is isomorphic to $C_3 \times C_3$… (I think so…)
Thanks a lot anyway!
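As a quick computational cross-check of Question 1, the sketch below (illustrative C#; the class and method names are made up) lists the units mod 15 and their multiplicative orders. The resulting order profile, one element of order 1, three of order 2, four of order 4 and none of order 8, is exactly that of $C_2 \times C_4 \cong U(\mathbb{Z}/3\mathbb{Z}) \times U(\mathbb{Z}/5\mathbb{Z})$.

    using System;
    using System.Collections.Generic;

    class UnitGroupDemo
    {
        static int Gcd(int a, int b) { return b == 0 ? a : Gcd(b, a % b); }

        // Multiplicative order of a unit x modulo m.
        static int Order(int x, int m)
        {
            int p = x % m, k = 1;
            while (p != 1) { p = (p * x) % m; k++; }
            return k;
        }

        static void Main()
        {
            int m = 15;
            var units = new List<int>();
            for (int x = 1; x < m; x++)
                if (Gcd(x, m) == 1) units.Add(x);   // x is a unit iff gcd(x, m) = 1

            Console.WriteLine("U(Z/{0}Z) = {{{1}}}, order {2}",
                              m, string.Join(",", units), units.Count);
            foreach (int x in units)
                Console.WriteLine("ord({0}) = {1}", x, Order(x, m));
        }
    }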
{"url":"http://mathhelpforum.com/advanced-algebra/127945-abstract-several-questions.html","timestamp":"2014-04-18T08:59:17Z","content_type":null,"content_length":"53621","record_id":"<urn:uuid:906d51b7-0f17-49f6-936e-450750328023>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
A simple Binary Search Tree written in C#

In Computer Science, a binary tree is a hierarchical structure of nodes, each node referencing at most two child nodes. Every binary tree has a root from which the first two child nodes originate. If a node has no children, then such nodes are usually termed leaves, and mark the extent of the tree structure. A particular kind of binary tree, called the binary search tree, is very useful for storing data for rapid access, storage, and deletion. Data in a binary search tree are stored in tree nodes, and must have associated with them an ordinal value or key; these keys are used to structure the tree such that the value of a left child node is less than that of the parent node, and the value of a right child node is greater than that of the parent node. Sometimes, the key and datum are one and the same. Typical key values include simple integers or strings; the actual data for the key will depend on the application. In this article, I describe a binary search tree that stores string/double pairs. That is, the key is the string value and the data associated with the key is a double value. Developers can search the tree using string values.

There are a number of basic operations one can apply to a binary search tree; the most obvious include insertion, searching, and deletion.

To insert a new node into a tree, the following method can be used. We first start at the root of the tree, and compare the ordinal value of the root to the ordinal value of the node to be inserted. If the ordinal values are identical, then we have a duplicate and we return to the caller indicating so. If the ordinal value is less than the root, then we follow the left branch of the root, else we follow the right branch. We now start the comparison again but at the branch we took, comparing the ordinal value of the child with the node to be inserted. Traversal of the tree continues in this manner until we reach a left or right node which is empty and we can go no further. At this point, we insert the new node into this empty location. Note that new nodes are always inserted as leaves into the tree; strictly speaking, nodes are thus appended rather than inserted.

Searching a binary search tree is almost identical to inserting a new node, except that we stop the traversal when we find the node we're looking for (during an insertion, this would indicate a duplicate node in the tree). If the node is not located, then we report this to the caller. Both insertion and searching are naturally recursive and are, arguably, easier to understand when considered in terms of their unit operation. A basic recursive search algorithm will look like:

    node search (node, key)
    {
        if node is null then return null;
        if node.key = key then return node;
        if key < node.key then return search (node.left, key);
        return search (node.right, key);
    }

In the source code provided with this article, insertion is implemented recursively, while searching uses an iterative approach.

Deletion is a little bit more complicated, but it boils down to three rules. The three rules refer to deleting nodes without any children, nodes with one child, and nodes with two children. If a node has no children, then the node is simply deleted. If the node has one child, then the node is deleted and the child node is brought forward to link to the parent. The complication occurs when a node has two children. However, even here, the rules are straightforward when stated.
To delete a node with two children, the next ordinal node (called the successor node) on the right branch is used to replace the deleted node. The successor node is then deleted. The successor node will always be the left-most node on the right branch (likewise, the predecessor node will be the right-most node on the left branch). A code sketch of this rule is given at the end of this section. The figure below illustrates the deletion rules.

Hash Tables

A common alternative to using a binary search tree is to use hash tables. Hash tables have better search and insertion performance metrics. In theory, the time it takes to insert or search for an item in a hash table is independent of the number of data items stored. In contrast, a binary search tree scales with log(N), where N is the number of data items (still far better than a linear search). The .NET libraries contain explicit support for hash tables.

Balanced Trees

The time taken to insert or search for a specific item in a tree will be affected by a tree's depth. Deep trees take longer to search, and the insertion order into a tree can affect a tree's shape. A random insertion order will generally produce a more bushy and hence shallower tree compared to an ordered insert. Bushy trees are often called balanced trees, and although not implemented here, balancing a tree is a highly desirable feature for a binary search tree implementation. Certain algorithms, such as the red-black tree, will auto-balance as the tree is constructed (see Red/Black tree animation). The figure below shows three trees generated by three identical data sets but inserted in a different order. The first is the most balanced and hence the most shallow of the three.

Implementing the search and insertion methods using a recursive approach has the potential to yield poor performance, particularly when the trees are unbalanced.
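The article's download is not reproduced here, so the following is only a sketch of the deletion rules just described, written against hypothetical TTreeNode fields (left, right, name, value) that may not match the real source. It shows the two-children case being reduced to the easy cases by swapping in the successor:

    // Illustrative sketch only - not the article's actual TBinarySTree source.
    TTreeNode FindMin(TTreeNode node)
    {
        // The successor of a node with two children is the left-most
        // node of its right subtree.
        while (node.left != null)
            node = node.left;
        return node;
    }

    TTreeNode DeleteNode(TTreeNode node, string key)
    {
        if (node == null)
            throw new Exception("key not found");   // the article's "simple exception"

        int cmp = string.Compare(key, node.name);
        if (cmp < 0)
            node.left = DeleteNode(node.left, key);
        else if (cmp > 0)
            node.right = DeleteNode(node.right, key);
        else if (node.left != null && node.right != null)
        {
            // Two children: copy in the successor's data, then delete the successor.
            TTreeNode succ = FindMin(node.right);
            node.name = succ.name;
            node.value = succ.value;
            node.right = DeleteNode(node.right, succ.name);
        }
        else
        {
            // Zero or one child: splice the child (possibly null) into place.
            node = (node.left != null) ? node.left : node.right;
        }
        return node;
    }

Because the successor is the left-most node of the right subtree, it can have at most a right child, so the recursive call that removes it always lands on one of the two simple cases.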
The implementations follow the expected scaling laws as the number of stored data items increase. The X axis indicates the number of data items stored, ranging from 1000 to 1,000,000 items (21 intervals in all). The Y axis indicates the average time required to retrieve one data item (averaged over 20,000 retrieval attempts). The Hashtable follows roughly O(1), that is the time taken to retrieve data is independent of the number of data items stored. In contrast, the binary search tree scales roughly O(log (N)). However, this is far better than a linear search which would scale as O(N); that is, doubling the number of stored items doubles the average time taken to retrieve a single data item. The graph on the left shows the data plotted on log axes. Times were computed using the QueryPerformanceCounter() method. The code for timing was derived from Tobi+C#=T#. Other Possibilities One should consider the implementation outlined here as the minimum practical implementation. The project where this implementation originated did not require any further sophistication. However, there are a number of areas where it could be significantly improved. In particular, two areas warrant further work: 1. The current implementation is specific to storing name/value pairs. Ideally, one would prefer a more generic implementation where a developer could employ their own object type. 2. The implementation may suffer performance degradation when subjected to large data sets if the trees become significantly unbalanced. Ideally one could implement the Red/Black variant to avoid this issue (see reference 1. at the end of the article for details). Other minor changes include using a property in place of the public method count, adding further utility methods, and changing the naming convention on the classes and methods to make them more consistent with .NET. 1. Fixed a small error in the third tree, Figure 3 (missing C node). 2. There is an older article on CodeProject which discusses Red-Black trees in C#, something I should have spotted earlier (Red-Black Trees in C#). There appears to be very little material on Binary Search Trees using .NET 1.1; the following, particularly the first link, provide material related to .NET 2.0. 1. Examination of Binary Search Trees using C# 2.0. 2. C5 is a .NET 2.0 library of generic collection classes for C#. 3. A collection of data structures including binary search trees.
{"url":"http://www.codeproject.com/Articles/18976/A-simple-Binary-Search-Tree-written-in-C?msg=3994083&PageFlow=FixedWidth","timestamp":"2014-04-17T12:54:22Z","content_type":null,"content_length":"136866","record_id":"<urn:uuid:3e315ca9-65ae-457f-b2da-64ba56e9f52f>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00123-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculus (Urgent Help)
Posted by Veronica on Friday, July 25, 2008 at 4:38pm.

Okay, I need major help! Can someone tell me if these statements are true or false ASAP please. Thank you.
1. If ƒ′(x) < 0 when x < c, then ƒ(x) is decreasing when x < c.
2. The function ƒ(x) = x^3 - 3x + 2 is increasing on the interval -1 < x < 1.
3. If ƒ′(c) < 0, then ƒ(x) is decreasing and the graph of ƒ(x) is concave down when x = c.
4. A local extreme point of a polynomial function ƒ(x) can only occur when ƒ′(x) = 0.
5. If ƒ′(x) > 0 when x < c and ƒ′(x) < 0 when x > c, then ƒ(x) has a maximum value when x = c.
6. If ƒ′(x) has a minimum value at x = c, then the graph of ƒ(x) has a point of inflection at x = c.
7. If ƒ′(c) > 0 and ƒ″(c) > 0, then ƒ(x) is increasing and the graph is concave up when x = c.
8. If ƒ′(c) = 0, then ƒ(x) must have a local extreme point at x = c.
9. The graph of ƒ(x) has an inflection point at x = c, so ƒ′(x) has a maximum or minimum value at x = c.
10. ƒ′(x) is increasing when x < c and decreasing when x > c, so the graph of ƒ(x) has an inflection point at x = c.
So yeah, those are the questions (statements); if you can tell me which ones are true/false, or correct me on what I said, that would be helpful. I put what I thought it was, so if it's wrong, please correct me! Thanks,

• Calculus (Urgent Help) - Damon, Friday, July 25, 2008 at 5:37pm
1. T, agree - as x increases, f(x) decreases.
2. F, agree - ƒ′(x) = 3x² - 3, so there is a max at x = -1 and a min at x = +1; the function is decreasing, not increasing, on -1 < x < 1.
3. False, disagree - it is decreasing, but who says it is concave or convex? It could be a straight line with negative slope.

• Calculus (Urgent Help) - Damon, Friday, July 25, 2008 at 5:41pm
4. "A local extreme point of a polynomial function ƒ(x) can only occur when ƒ′(x) = 0." True, AGREE.
5. "If ƒ′(x) > 0 when x < c and ƒ′(x) < 0 when x > c, then ƒ(x) has a maximum value when x = c." TRUE I think - DISAGREE; looks like /\.

• Calculus (Urgent Help) - Damon, Friday, July 25, 2008 at 5:47pm
6. "If ƒ′(x) has a minimum value at x = c, then the graph of ƒ(x) has a point of inflection at x = c." TRUE - DISAGREE; an extreme value of ƒ′ means ƒ″ changes sign, which means an inflection.

• Calculus (Urgent Help) - Damon, Friday, July 25, 2008 at 5:49pm
Agree with 7, 8, 9, 10.

• Calculus (Urgent Help) - Veronica, Friday, July 25, 2008 at 5:51pm
Oh okay, thanks so much! :) I really appreciate the help, especially since you corrected what I did wrong. Thanks!

• Calculus (Urgent Help) - S, Monday, May 10, 2010 at 4:17pm
I hope you realize by doing this you are helping students taking online calculus cheat on their test, as this is direct content from the test under Unit 6 Activity 9.

• Calculus (Urgent Help) - Philip, Thursday, October 20, 2011 at 1:30am
Not only that, but you are wrong on number 6. A minimum value of ƒ′(x) at x = c does NOT imply a relative minimum of ƒ′ (and hence a sign change of ƒ″), although the reverse is certainly true if the sign changes from - to +. ƒ″(x) could be undefined at x = c, which would still make c a critical value of ƒ′(x) since it is in the domain of ƒ′(x). An example would be ƒ′(x) = x^(1/2): (0,0) is clearly a minimum value of ƒ′(x), and in fact of ƒ(x), but it is not a relative minimum, so there would be no inflection point.
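A quick check of statement 2, supporting the verdict in the first reply above (standard calculus, not part of the original thread):

$$f(x) = x^{3} - 3x + 2, \qquad f'(x) = 3x^{2} - 3 = 3(x-1)(x+1).$$

Since $f'(x) < 0$ precisely on $-1 < x < 1$, the function is decreasing there (local maximum at $x = -1$, local minimum at $x = 1$), so statement 2 is false.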
Quantum noise in optical fibers. I. Stochastic equations

We analyze the quantum dynamics of radiation propagating in a single-mode optical fiber with dispersion, nonlinearity, and Raman coupling to thermal phonons. We start from a fundamental Hamiltonian that includes the principal known nonlinear effects and quantum-noise sources, including linear gain and loss. Both Markovian and frequency-dependent, non-Markovian reservoirs are treated. This treatment allows quantum Langevin equations, which have a classical form except for additional quantum-noise terms, to be calculated. In practical calculations, it is more useful to transform to Wigner or +P quasi-probability operator representations. These transformations result in stochastic equations that can be analyzed by use of perturbation theory or exact numerical techniques. The results have applications to fiber-optics communications, networking, and sensor technology. © 2001 Optical Society of America

OCIS Codes
(060.2400) Fiber optics and optical communications : Fiber properties
(060.4510) Fiber optics and optical communications : Optical communications
(190.4370) Nonlinear optics : Nonlinear optics, fibers
(190.5650) Nonlinear optics : Raman effect
(270.3430) Quantum optics : Laser theory
(270.5530) Quantum optics : Pulse propagation and temporal solitons

Citation: P. D. Drummond and J. F. Corney, "Quantum noise in optical fibers. I. Stochastic equations," J. Opt. Soc. Am. B 18, 139-152 (2001)
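As background (this is the standard classical propagation equation from fiber-optics textbooks, not the paper's own quantum Langevin equations), the deterministic skeleton of single-mode propagation with loss, group-velocity dispersion and Kerr nonlinearity is the nonlinear Schrödinger equation

$$\frac{\partial A}{\partial z} = -\frac{\alpha}{2}\,A - \frac{i\beta_2}{2}\,\frac{\partial^2 A}{\partial T^2} + i\gamma\,\lvert A\rvert^2 A,$$

where $A(z,T)$ is the slowly varying field envelope, $\alpha$ the loss coefficient, $\beta_2$ the group-velocity dispersion and $\gamma$ the nonlinear coefficient. The "classical form except for additional quantum-noise terms" described in the abstract refers to stochastic equations built on a deterministic core of this kind, with Raman and gain/loss noise sources added.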
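To make the last step of the abstract concrete: in the Wigner representation the field obeys an equation that looks like the classical nonlinear Schrödinger equation, with quantum effects entering through Gaussian noise. Below is a minimal Python sketch of that kind of simulation, a plain split-step integrator for a dimensionless NLSE with vacuum-scale noise added to the input. The grid, step sizes, and noise scale are illustrative assumptions, not values from the paper, and the Raman, gain, and loss terms are omitted.

```python
import numpy as np

# Split-step integration of the dimensionless NLSE
#   i dpsi/dz + (1/2) d^2psi/dt^2 + |psi|^2 psi = 0,
# a rough stand-in for the Wigner-representation propagation equation.
n = 256
t = np.linspace(-10.0, 10.0, n, endpoint=False)
dt = t[1] - t[0]
w = 2*np.pi*np.fft.fftfreq(n, d=dt)    # comoving-frame angular frequencies
dz = 1e-3                               # propagation step (illustrative)

psi = 1.0/np.cosh(t)                    # classical fundamental-soliton input
eta = 1e-3                              # illustrative vacuum-noise scale per mode
psi = psi + np.sqrt(eta/2)*(np.random.randn(n) + 1j*np.random.randn(n))

for _ in range(2000):                   # propagate to z = 2
    psi = np.fft.ifft(np.exp(-0.5j*w**2*dz)*np.fft.fft(psi))  # dispersion step
    psi = psi*np.exp(1j*np.abs(psi)**2*dz)                    # Kerr step
```

Averaging observables over many such noisy trajectories approximates the corresponding symmetrically ordered quantum expectation values.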
{"url":"http://www.opticsinfobase.org/josab/abstract.cfm?uri=josab-18-2-139","timestamp":"2014-04-20T00:20:32Z","content_type":null,"content_length":"130515","record_id":"<urn:uuid:b7f99144-fa0c-4a0b-853b-9f142b933a30>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
The Normal Distribution (University of the Sciences)

Repeated measurements in biology are rarely identical, because of random errors and natural variation. If enough measurements are repeated, they can be plotted on a histogram. This usually shows a normal distribution, with most of the repeats close to some central value. Many biological phenomena follow this pattern: e.g. people's heights, the number of peas in a pod, the breathing rate of insects, etc.

The central value of the normal distribution curve is the mean (also known as the arithmetic mean or average). But how reliable is this mean? If the data are all close together, then the mean is probably good, but if they are scattered widely, then the calculated mean may not be very reliable. The reliability of the mean is given by the 95% confidence interval (also known as the confidence limit). This is derived from the standard deviation, and is the range above and below the calculated mean within which the true mean lies with 95% confidence. Whenever you calculate a mean you should also calculate a confidence limit to indicate the quality of your data.

In Excel the mean is calculated using the formula =AVERAGE(range), and the 95% confidence interval is calculated using =CONFIDENCE(0.05, STDEV(range), COUNT(range)).

Consider two sets of data with the same mean. In group A the confidence limit is small compared to the mean, so the data are reliable and you can be confident that the real mean is close to your calculated mean. But in group B the confidence limit is large compared to the mean, so the data are unreliable, as the real mean could be quite far from your calculated mean.

The Equations

Mean: x̄ = (Σx)/n, where n is the number of measurements, x are the measurements to be averaged, and Σ means "sum of".

95% Confidence Interval: CI = 1.96 s / √n, where n is the number of measurements and s is the standard deviation.
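For readers outside Excel, here is a minimal Python equivalent of those two formulas. The data values are made up for illustration; note that Excel's CONFIDENCE uses the normal z-value of about 1.96 rather than Student's t.

```python
import numpy as np

data = np.array([4.2, 4.5, 3.9, 4.8, 4.1, 4.4])   # made-up repeated measurements

mean = data.mean()                # =AVERAGE(range)
s = data.std(ddof=1)              # =STDEV(range), the sample standard deviation
n = len(data)                     # =COUNT(range)
ci95 = 1.96 * s / np.sqrt(n)      # =CONFIDENCE(0.05, STDEV(range), COUNT(range))

print(f"mean = {mean:.2f} +/- {ci95:.2f} (95% CI)")
```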
{"url":"http://www.usciences.edu/biology/bs130/normal%20distribution.html","timestamp":"2014-04-20T00:57:38Z","content_type":null,"content_length":"4868","record_id":"<urn:uuid:21204f1c-60b8-4e80-86ea-66ee0cf2642f>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
Free Probability Theory: New Developments and Applications
Serban Belinschi (Saskatchewan) and Benoît Collins

A Wigner matrix is a self-adjoint (or symmetric) random matrix with i.i.d. entries above the diagonal. Their fluctuations were analyzed by Khorunzhy, Khoruzhenko, and Pastur. We consider the free analogue, i.e. freely independent non-commuting entries, and analyze their fluctuations. Instead of non-crossing annular diagrams we get non-crossing linear diagrams. This is joint work with Roland Speicher.

We develop a numerical approach for computing the additive, multiplicative, and compressive convolution operations from free probability theory. We utilize the regularity properties of free convolution to identify (pairs of) `admissible' measures whose convolution results in a so-called `invertible' measure, which is either a smoothly decaying measure supported on the entire real line (such as the Gaussian) or a square-root-decaying measure supported on a compact interval (such as the semicircle). This class of measures is important because these measures, along with their Cauchy transforms, can be accurately represented via a Fourier or Chebyshev series expansion, respectively. Thus knowledge of the functional inverse of their Cauchy transform suffices for numerically recovering the invertible measure via a non-standard yet well-behaved Vandermonde system of equations. We describe explicit algorithms for computing the inverse Cauchy transform alluded to and recovering the associated measure with spectral accuracy.

Given two complex vector spaces, V and W, a non-commutative function is, briefly, a mapping from a certain class of subsets of the matrix space over V to the matrix space over W, satisfying some compatibility conditions: it has to respect direct sums and simultaneous similarities. Non-commutative functions have very strong regularity properties and they admit a very nice differential calculus, closely related to some QD-bialgebras arising in free probability. Such objects were considered before by J. L. Taylor in his groundbreaking work on noncommutative spectral theory, and more recently and independently by D.-V. Voiculescu in free probability. Besides a brief introduction to the theory of non-commutative functions, the lecture will survey some applications of this theory in operator-valued non-commutative probability, such as non-commutative Lévy-Khinchine formulas, the Bercovici-Pata bijection, operator-valued Cauchy and R-transforms, and the operator-valued semicircle, arcsine, and Bernoulli laws. Most of the results presented are joint work with V. Vinnikov and Serban Belinschi.

The second-order statistics of large random matrices may be studied in a noncommutative probability space equipped with a bilinear function modelling the covariance of traces. As in the first-order case, second-order freeness may be defined so that fairly general classes of random matrices are asymptotically second-order free. However, the definition satisfied by real random matrices is different from that satisfied by their complex analogues. We present a topological approach to the matrix calculations, in which the real matrices are distinguished from their complex analogues by the appearance of twisted gluings and the resulting nonorientable surfaces. This motivates a different definition of second-order freeness in the real case, which is satisfied by a number of important matrix models, and in fact by any independent matrices which are orthogonally in general position.
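An aside on the numerical-convolution abstract above: free additive convolution can also be illustrated, far less accurately than the spectral method described there, by brute-force random-matrix sampling, since sums of large independent unitarily invariant matrices realize it asymptotically. A hypothetical Python sketch:

```python
import numpy as np

def gue(n, var=1.0):
    """Sample an n x n GUE matrix whose spectrum approaches a
    semicircle law of variance `var` as n grows."""
    a = np.random.randn(n, n) + 1j*np.random.randn(n, n)
    h = (a + a.conj().T)/2.0          # Hermitian, entry variance 1
    return np.sqrt(var/n)*h

n = 2000
eigs = np.linalg.eigvalsh(gue(n, 1.0) + gue(n, 2.0))
# Semicircle(var 1) boxplus Semicircle(var 2) = Semicircle(var 3),
# so the empirical spectrum should fill roughly [-2*sqrt(3), 2*sqrt(3)].
print(eigs.var(), eigs.min(), eigs.max())
```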
Expressing the local and global moments of random matrices is a common problem in fields such as random matrix theory, statistics, and finance. Because of the important role played by moments of random matrices, we obtain an expression for the local moments of a number of complex and real matrices. A formula for the moments of left-right orthogonal (unitary) random matrices is our main result, and it enables us to obtain the moments of inverted compound Wishart matrices. Since covariance matrices with correlated samplings are compound Wishart matrices, this explains the importance of compound Wishart matrices in statistics. This is joint work with Benoit Collins and Sho Matsumoto.

Over the course of the last two decades, there emerged a general paradigm for passing from the commutative situation to the free noncommutative one: we replace a vector space by the disjoint union of spaces of square matrices of all sizes over it. When applied to function theory on the vector space in question, this leads to noncommutative, or fully matricial, functions as studied by the speaker and D. S. Kaliuzhnyi-Verbovetskyi and by D.-V. Voiculescu; the origins of this theory actually go back to the pioneering work of J. L. Taylor on noncommutative functional calculi. In this talk, I will review some of the salient features of the theory of noncommutative functions, with a special emphasis on their amazing regularity properties: over a finite-dimensional vector space, a noncommutative function that is locally bounded on slices, separately in every matrix dimension, is actually entrywise analytic in every matrix dimension, and admits a noncommutative power series expansion that converges locally uniformly.

In a 1998 paper Biane started the investigation of noncommutative stochastic processes with free increments, and he showed that there are two kinds of free Lévy processes: those with stationary increment distributions (the first kind) and those with stationary transition probabilities (the second kind). In the literature, processes of the second kind are less studied than those of the first. In this talk we will explain briefly what a free Lévy process is and then report some new results on the asymptotic distributional behavior of the marginal laws of a free Lévy process of the second kind.
{"url":"http://cms.math.ca/Events/summer12/abs/fpt","timestamp":"2014-04-18T21:02:08Z","content_type":null,"content_length":"19041","record_id":"<urn:uuid:8c3b8482-cc6f-4c41-a37b-9a458af00e14>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - linear circuit problems

In solving linear circuit problems using complex currents, charges, etc., I have stumbled upon something I never really understood. You are basically solving 1st- and 2nd-order differential equations by algebraic means, right? Well, at what point do you apply the initial conditions on your current and charge, Q(0) and dQ/dt(0)? I might have missed an essential point, because it actually seems that you never apply boundary conditions.

Simon Bridge (Jan 7-13, 05:43 PM):
You could discover where the boundary conditions come in (and they don't have to be at t=0) by solving the same DEs the usual way :)
What you are doing is exploiting that you already know the solutions to the DEs with the appropriate boundary conditions, because you know the relationship between PD and dQ/dt.

Reply: uhh, what is PD? :(
Let me just clarify exactly the method I have learned, and you can point out at which step I use boundary conditions:
1) Say we have a series RLC circuit with I(0) = 0 and Q(0) = a, driven by a time-varying potential U(t) = ε cos(ωt).
2) We write the current as a complex quantity, I = I₀e^(−iωt), with the phase β absorbed into the complex amplitude I₀. Kirchhoff's voltage law then gives
εe^(−iωt) = R I₀e^(−iωt) − iωL I₀e^(−iωt) + I₀e^(−iωt)/(−iωC).
3) You can cancel the exponential factor and find the phase β entirely in terms of the physical quantities L, C, and R. The same goes for I₀, the amplitude of the current.
4) You can now put it all together and take the real part of the current. It will be a sinusoidal function with a determined amplitude and phase, which is one specific solution and not a general one.
WHERE did I apply the boundary conditions?

Simon Bridge (Jan 7-13, 06:46 PM):
PD = potential difference.

Quote: "WHERE did I apply the boundary conditions?"
... Write out the general solution (from the DE) for the kind of quantity you have to solve when you do linear circuit analysis, then compare with the result you get when you use that analysis, and you'll be able to see what the boundary conditions were. You did explicitly assume I(0) = 0 and Q(0) = a (Q(0) where?). When you draw the voltage source in the diagram, you specify a phase direction. When you draw the PD arrows on the diagram, you use that phase direction to decide if the potential is gained or dropped. Those contribute to the boundary conditions. Since the current will be sinusoidal, it actually doesn't matter what the initial phase is, so we have picked one that makes the math easy. You'll see it clearly when you do the calculus.
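For what it's worth, the steady-state phasor solution described in steps 2-4 is easy to compute numerically. A hypothetical sketch (component and drive values are made up; note that no initial conditions appear anywhere, because the phasor method captures only the long-time steady state, not the decaying transient that the initial conditions fix):

```python
import numpy as np

# Series RLC driven by eps*cos(w*t): steady-state current phasor,
# using the standard engineering e^{+j w t} convention.
R, L, C = 10.0, 1e-3, 1e-6           # illustrative component values
eps, w = 5.0, 2*np.pi*5e3            # illustrative drive amplitude, frequency

Z = R + 1j*(w*L - 1/(w*C))           # complex impedance of the series circuit
I0 = eps/abs(Z)                      # current amplitude
beta = -np.angle(Z)                  # current phase relative to the drive

# Steady state: I(t) = I0*cos(w*t + beta).  The general solution adds a
# homogeneous (transient) term whose constants come from I(0) and Q(0).
print(I0, np.degrees(beta))
```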
{"url":"http://www.physicsforums.com/printthread.php?t=662983","timestamp":"2014-04-18T15:40:47Z","content_type":null,"content_length":"7704","record_id":"<urn:uuid:433e4a49-2969-4574-afaf-60fcf9a72c8c>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
The Runic Bulwark

Quote (Troll): "Maybe you should learn the value of HP. Edit: Here's mathematical proof that Bulwark is slot efficient.
Self Gold Value: 400 health = 1056g; 10 health regeneration = 360g; 30 armor = 600g; 60 magic resist = 1200g. Total gold value = 3216g.
Aura Value per Ally Champion: 10 health regeneration = 360g; 10 armor = 200g; 30 magic resist = 600g. Total gold value = 1160g."

Your 'math' appears a bit fuzzy to me. For example, you say that 30 armor = 600g. No doubt you are using the item Cloth Armor, which costs 300 gold and provides 15 armor points, as the standard for your math: 300 / 15 = 20, so 30 armor x 20 gold per armor point = 600 gold.

However, your math doesn't take the Chain Vest into consideration. The Chain Vest costs 720 gold and provides 40 armor points: 720 / 40 = 18. For each point of armor or magic resist, these items (the Chain Vest and the Negatron Cloak) provide a 2g saving, and over the course of a game that 2g per point will certainly add up. So there are items in the game that provide armor and magic resist at a distinctly cheaper rate, which makes your math fuzzy at best. And since I'm concerned with my overall build, and not just slot efficiency, those items will not only provide savings, they'll also provide flexibility, as they build into other items with greater benefits.

Let's use more math, taking the Giant's Belt, the Chain Vest, and the Negatron Cloak as the standards for gold-to-HP/armor/MR values:
400 HP at 2.5g per point = 1000g
30 armor at 18g per point = 540g
60 magic resist at 18g per point = 1080g
Total: 2620 gold.
If 2 Rejuvenation Beads are added to provide the 10 HP regen per 5 seconds, that's an additional 360 gold, bringing the total to 2980 gold, whereas the Runic Bulwark costs 3200 gold to build.

This shows that, once the other valuable factors are considered, the Runic Bulwark is overpriced and a poor value overall.
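The same bookkeeping in a few lines of Python, purely as a sanity check (all item costs and stat figures are taken from the post itself and may not match any current game patch):

```python
# Per-point gold costs implied by the poster's reference items.
g_per_armor = 720/40    # Chain Vest: 18 g per armor point
g_per_mr    = 720/40    # Negatron Cloak, at the poster's quoted 18 g per point
g_per_hp    = 2.5       # Giant's Belt figure used in the post

stats_cost = 400*g_per_hp + 30*g_per_armor + 60*g_per_mr
total = stats_cost + 2*180   # two Rejuvenation Beads at 180 g each
print(stats_cost, total)     # 2620.0 2980.0 -> under the 3200 g Bulwark price
```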
{"url":"http://forums.na.leagueoflegends.com/board/showthread.php?s=&p=33205152","timestamp":"2014-04-21T13:53:47Z","content_type":null,"content_length":"49883","record_id":"<urn:uuid:cce48359-9153-403d-ba86-93fef40f50ff>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00271-ip-10-147-4-33.ec2.internal.warc.gz"}
Histograms are used to record the distribution of a piece of data over time. They're used when you have a type of data for which the following are true:

• There are distinct "events" for this type of data, such as "user performs a search and we return N results".
• Each event has a numeric value (the "N results" in our example).
• Comparisons of these numeric values are meaningful.

For example: HTTP status codes do not fit this, because comparisons between the numeric values are not meaningful. The fact that 404 happens to be less than 500 doesn't tell you anything. Contrast this with something like "search results returned": one value being less than another tells you something meaningful about the data.

Histograms can tell you things like: 75% of all searches returned 100 or fewer results, while 95% got 200 or fewer.

If the numeric value you're recording is the amount of time taken to do something, you probably want a timer instead of a histogram.

Examples of metrics you might want to track with a histogram:

• Search results returned ("99% of searches returned 300 or fewer results").
• Response body size ("75% of responses were 30kb or smaller").

TODO: More examples.

Create your histogram:

(use '[metrics.histograms :only (histogram)])
(def search-results-returned (histogram "search-results-returned"))

You can create an unbiased histogram by passing an extra boolean argument (though you probably don't want to):

(def search-results-returned-unbiased (histogram "search-results-returned-unbiased" false))

You can also use the defhistogram macro to create a histogram and bind it to a var in one concise, easy step:

(use '[metrics.histograms :only (defhistogram)])
(defhistogram search-results-returned)

All the def[metric] macros do some magic to the metric title to make it easier to define.

Once you've got a histogram, you can update it with the numeric values of events as they occur. Update the histogram when you have a new value to record with update!:

(use '[metrics.histograms :only (update!)])
(update! search-results-returned 10)

The data of a histogram metric can be retrieved in a number of different ways. The function you'll usually want to use to pull data from a histogram is percentiles:

(use '[metrics.histograms :only (percentiles)])
(percentiles search-results-returned)
=> { 0.75 180
     0.95 299
     0.99 300
     0.999 340
     1.0 1345 }

This returns a map of the percentiles you probably care about. The keys are the percentiles (doubles between 0 and 1 inclusive) and the values are the maximum value for that percentile. In this example:

• 75% of searches returned 180 or fewer results.
• 95% of searches returned 299 or fewer results.
• ... etc.
If you want a different set of percentiles, just pass them as a sequence:

(use '[metrics.histograms :only (percentiles)])
(percentiles search-results-returned [0.50 0.75])
=> { 0.50 100
     0.75 180 }

To get the number of data points recorded over the entire lifetime of this histogram:

(use '[metrics.histograms :only (number-recorded)])
(number-recorded search-results-returned)
=> 12882

To get the smallest data point recorded over the entire lifetime of this histogram:

(use '[metrics.histograms :only (smallest)])
(smallest search-results-returned)
=> 4

To get the largest data point recorded over the entire lifetime of this histogram:

(use '[metrics.histograms :only (largest)])
(largest search-results-returned)
=> 1345

To get the mean of the data points recorded over the entire lifetime of this histogram:

(use '[metrics.histograms :only (mean)])
(mean search-results-returned)
=> 233.12

To get the standard deviation of the data points recorded over the entire lifetime of this histogram:

(use '[metrics.histograms :only (std-dev)])
(std-dev search-results-returned)
=> 80.2

You can get the current sample points the histogram is using with sample, but you almost certainly don't care about this. If you use it, make sure you know what you're doing.

(use '[metrics.histograms :only (sample)])
(sample search-results-returned)
=> [12 2232 234 122]
{"url":"http://metrics-clojure.readthedocs.org/en/latest/metrics/histograms.html","timestamp":"2014-04-19T12:24:06Z","content_type":null,"content_length":"17529","record_id":"<urn:uuid:d072a664-961f-4437-b30a-d60ddb21823e>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Lafayette, CO Science Tutor

...Science is my very favorite subject to teach and share my excitement for. Reading: I love teaching reading! Whether your student is starting out, or just having trouble getting the knack of it, breaking it down into very small pieces is incredibly helpful and will get your student reading before you k...
21 Subjects: including geology, physics, English, algebra 2

...I have been a tutor with Wyzant Tutoring for over 4 years and have been a top-ranked tutor in mathematics and science. Please contact me about any questions you may have! I look forward to working with you to reach your GPA and test score goals!
16 Subjects: including physics, geology, physical science, astronomy

Get a tutor who has taught at the college level! My name is Peter, and I am a CU graduate, going to graduate school to earn my PhD in chemistry in the fall. I have TA'd organic, general, and introductory-level courses, and have taken classes in pedagogy to further improve my skills as a teacher.
6 Subjects: including organic chemistry, chemistry, algebra 1, precalculus

...I applied algebraic, trigonometric, exponential, and logarithmic functions in graphing and applications. I also solved linear and nonlinear equations and inequalities and systems of equations and inequalities, and applied sequences and series with facility. I am a summa cum laude Mathematics major and an Electrical Engineering major.
15 Subjects: including electrical engineering, physics, chemistry, calculus

...I have been teaching InDesign for a college online. I am comfortable with all of the basics of using InDesign, including text boxes, layers, and the text tools. I have a bachelor's degree in Graphic Design and I love working with Photoshop and Illustrator in conjunction with InDesign.
12 Subjects: including sociology, psychology, reading, Microsoft Word
{"url":"http://www.purplemath.com/Lafayette_CO_Science_tutors.php","timestamp":"2014-04-19T12:14:33Z","content_type":null,"content_length":"24014","record_id":"<urn:uuid:0ea635dd-96f0-4c07-ba78-91bdacb2ffd7>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
Parseval's Theorem

July 4th 2010, 04:47 PM

I am trying to prove Parseval's theorem and I have a problem with one step, so I will show only that step of the proof. The Fourier coefficient is given by

$a_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\cos nx \, dx$

Now, the step in question is $\int_{-\pi}^{\pi}a_n\cos nx \, dx$. How should I use the first relation, the Fourier coefficient, to evaluate this?
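One reading of the step, assuming the integrand was meant to include $f(x)$ (as it does in the usual proof of Parseval's theorem, where the Fourier series of $f$ is multiplied by $f$ and integrated term by term): the orthogonality relations do the work,

$$\int_{-\pi}^{\pi}\cos nx \, \cos mx \, dx = \pi \, \delta_{nm} \qquad (n, m \ge 1),$$

so that, using the definition of $a_n$,

$$\int_{-\pi}^{\pi} a_n f(x)\cos nx \, dx = a_n \cdot \pi a_n = \pi a_n^2.$$

(If the integrand really is just $a_n\cos nx$ with $n \ge 1$, the integral is zero, since cosine integrates to zero over a full period.)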
{"url":"http://mathhelpforum.com/calculus/150098-parseval-s-theorem.html","timestamp":"2014-04-20T10:22:11Z","content_type":null,"content_length":"29388","record_id":"<urn:uuid:ee963f0a-0004-403a-8e36-51555665388d>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00604-ip-10-147-4-33.ec2.internal.warc.gz"}