content
stringlengths
86
994k
meta
stringlengths
288
619
Mansfield, TX Prealgebra Tutor Find a Mansfield, TX Prealgebra Tutor ...As a high school and college instuctor, I have had dozens of students develop and deliver oral presentations to their classmates. I have guided many of these students through this process in private and in small groups. I had difficulties with presenting in front of others in years past, and understand the fears that many people have of speaking in front of others. 25 Subjects: including prealgebra, English, reading, writing ...I will require a copy of your syllabus and a tentative schedule for meeting soon after we agree on terms of my tutoring you. I want to help you pass your chemistry courses and will help you anyway I can. I have my PhD. in inorganic chemistry from the University of Texas at Arlington and my B.S. in chemistry from Iowa State University. 9 Subjects: including prealgebra, chemistry, geometry, algebra 1 ...I am comfortable not just with every day words, but I also know many technical and scientific expressions as well as literary terms. I grew up as a child in Paris, France and I spent 8 years in the french schooling system. Although I do not use french in my daily life here in the US, I am confident that I can coach a new learner in the basics of this wonderful tongue! 16 Subjects: including prealgebra, English, reading, French ...I taught 5th grade Math for 2 years and have now been teaching high school math for 3 years. I taught Algebra 2 for one of those 3 years and have taught Geometry all 3. I love working to help student gain a more clear understanding of math by connecting new topics to their prior knowledge. 5 Subjects: including prealgebra, geometry, algebra 1, elementary math ...I use simple, easy, and quick to remember methods, formulas and acronyms so that my students retain information and know when to use it. I present in a systematic format, using repetition of the information, exceptional practice, demonstration by the student, and lots of patience and encouragement. I provide examples that they can keep and slideshows they can view over and over 39 Subjects: including prealgebra, reading, writing, English Related Mansfield, TX Tutors Mansfield, TX Accounting Tutors Mansfield, TX ACT Tutors Mansfield, TX Algebra Tutors Mansfield, TX Algebra 2 Tutors Mansfield, TX Calculus Tutors Mansfield, TX Geometry Tutors Mansfield, TX Math Tutors Mansfield, TX Prealgebra Tutors Mansfield, TX Precalculus Tutors Mansfield, TX SAT Tutors Mansfield, TX SAT Math Tutors Mansfield, TX Science Tutors Mansfield, TX Statistics Tutors Mansfield, TX Trigonometry Tutors Nearby Cities With prealgebra Tutor Arlington, TX prealgebra Tutors Bedford, TX prealgebra Tutors Benbrook, TX prealgebra Tutors Burleson prealgebra Tutors Cedar Hill, TX prealgebra Tutors Dalworthington Gardens, TX prealgebra Tutors Desoto prealgebra Tutors Duncanville, TX prealgebra Tutors Euless prealgebra Tutors Forest Hill, TX prealgebra Tutors Glenn Heights, TX prealgebra Tutors Grand Prairie prealgebra Tutors Highland Park, TX prealgebra Tutors Midlothian, TX prealgebra Tutors Pantego, TX prealgebra Tutors
{"url":"http://www.purplemath.com/Mansfield_TX_Prealgebra_tutors.php","timestamp":"2014-04-20T19:31:19Z","content_type":null,"content_length":"24597","record_id":"<urn:uuid:91881cd8-9290-42af-b0e3-2f0604b6f08f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 39 - IEEE TRANSACTIONS ON INFORMATION THEORY , 2003 "... A new class of distances appropriate for measuring similarity relations between sequences, say one type of similarity per distance, is studied. We propose a new "normalized information distance", based on the noncomputable notion of Kolmogorov complexity, and show that it is in this class and it min ..." Cited by 192 (29 self) Add to MetaCart A new class of distances appropriate for measuring similarity relations between sequences, say one type of similarity per distance, is studied. We propose a new "normalized information distance", based on the noncomputable notion of Kolmogorov complexity, and show that it is in this class and it minorizes every computable distance in the class (that is, it is universal in that it discovers all computable similarities). We demonstrate that it is a metric and call it the similarity metric. This theory forms the foundation for a new practical tool. To evidence generality and robustness we give two distinctive applications in widely divergent areas using standard compression programs like gzip and GenCompress. First, we compare whole mitochondrial genomes and infer their evolutionary history. This results in a first completely automatic computed whole mitochondrial phylogeny tree. Secondly, we fully automatically compute the language tree of 52 different languages. - in Advances in Minimum Description Length: Theory and Applications. 2005 "... ..." - Trends in Cognitive Sciences , 2003 "... This article reviews research exploring the idea that simplicity does, indeed, drive a wide range of cognitive processes. We outline mathematical theory, computational results, and empirical data underpinning this viewpoint. Key words: simplicity, Kolmogorov complexity, codes, learning, induction, B ..." Cited by 48 (2 self) Add to MetaCart This article reviews research exploring the idea that simplicity does, indeed, drive a wide range of cognitive processes. We outline mathematical theory, computational results, and empirical data underpinning this viewpoint. Key words: simplicity, Kolmogorov complexity, codes, learning, induction, Bayesian inference 30-word summary:This article outlines the proposal that many aspects of cognition, from perception, to language acquisition, to high-level cognition involve finding patterns that provide the simplest explanation of available data. 3 The cognitive system finds patterns in the data that it receives. Perception involves finding patterns in the external world, from sensory input. Language acquisition involves finding patterns in linguistic input, to determine the structure of the language. High-level cognition involves finding patterns in information, to form categories, and to infer causal relations. Simplicity and the problem of induction A fundamental puzzle is what we term the problem of induction: infinitely many patterns are compatible with any finite set of data (see Box 1). So, for example, an infinity of curves pass through any finite set of points (Box 1a); an infinity of symbol sequences are compatible with any subsequence of symbols (Box 1b); infinitely many grammars are compatible with any finite set of observed sentences (Box 1c); and infinitely many perceptual organizations can fit any specific visual input (Box 1d). What principle allows the cognitive system to solve the problem of induction, and choose appropriately from these infinite sets of possibilities? Any such principle must meet two criteria: (i) it must solve the problem of induction successfully; (ii) it must explain empirical data in cognition. We argue that the best approach to (i)... - IEEE Trans. Inform. Theory "... approach to statistics and model selection. Let data be finite binary strings and models be finite sets of binary strings. Consider model classes consisting of models of given maximal (Kolmogorov) complexity. The “structure function ” of the given data expresses the relation between the complexity l ..." Cited by 32 (14 self) Add to MetaCart approach to statistics and model selection. Let data be finite binary strings and models be finite sets of binary strings. Consider model classes consisting of models of given maximal (Kolmogorov) complexity. The “structure function ” of the given data expresses the relation between the complexity level constraint on a model class and the least log-cardinality of a model in the class containing the data. We show that the structure function determines all stochastic properties of the data: for every constrained model class it determines the individual best-fitting model in the class irrespective of whether the “true ” model is in the model class considered or not. In this setting, this happens with certainty, rather than with high probability as is in the classical case. We precisely quantify the goodness-of-fit of an individual model with respect to individual data. We show that—within the obvious constraints—every graph is realized by the structure function of some data. We determine the (un)computability properties of the various functions contemplated and of the “algorithmic minimal sufficient statistic.” Index Terms— constrained minimum description length (ML) constrained maximum likelihood (MDL) constrained best-fit model selection computability lossy compression minimal sufficient statistic non-probabilistic statistics Kolmogorov complexity, Kolmogorov Structure function prediction sufficient statistic , 2008 "... Inferring the causal structure that links n observables is usually basedupon detecting statistical dependences and choosing simple graphs that make the joint measure Markovian. Here we argue why causal inference is also possible when only single observations are present. We develop a theory how to g ..." Cited by 11 (11 self) Add to MetaCart Inferring the causal structure that links n observables is usually basedupon detecting statistical dependences and choosing simple graphs that make the joint measure Markovian. Here we argue why causal inference is also possible when only single observations are present. We develop a theory how to generate causal graphs explaining similarities between single objects. To this end, we replace the notion of conditional stochastic independence in the causal Markov condition with the vanishing of conditional algorithmic mutual information anddescribe the corresponding causal inference rules. We explain why a consistent reformulation of causal inference in terms of algorithmic complexity implies a new inference principle that takes into account also the complexity of conditional probability densities, making it possible to select among Markov equivalent causal graphs. This insight provides a theoretical foundation of a heuristic principle proposed in earlier work. We also discuss how to replace Kolmogorov complexity with decidable complexity criteria. This can be seen as an algorithmic analog of replacing the empirically undecidable question of statistical independence with practical independence tests that are based on implicit or explicit assumptions on the underlying distribution. email: "... We compare the elementary theories of Shannon information and Kolmogorov complexity, the extent to which they have a common purpose, and where they are fundamentally different. We discuss and relate the basic notions of both theories: Shannon entropy, Kolmogorov complexity, Shannon mutual informati ..." Cited by 10 (2 self) Add to MetaCart We compare the elementary theories of Shannon information and Kolmogorov complexity, the extent to which they have a common purpose, and where they are fundamentally different. We discuss and relate the basic notions of both theories: Shannon entropy, Kolmogorov complexity, Shannon mutual information and Kolmogorov (`algorithmic') mutual information. We explain how universal coding may be viewed as a middle ground between the two theories. We consider Shannon's rate distortion theory, which quantifies useful (in a certain sense) information. We use the communication of information as our guiding motif, and we explain how it relates to sequential question-answer sessions. - In Proc. 43rd Symposium on Foundations of Computer Science , 2002 "... We vindicate, for the first time, the rightness of the original “structure function”, proposed by Kolmogorov in 1974, by showing that minimizing a two-part code consisting of a model subject to (Kolmogorov) complexity constraints, together with a data-to-model code, produces a model of best fit (for ..." Cited by 10 (0 self) Add to MetaCart We vindicate, for the first time, the rightness of the original “structure function”, proposed by Kolmogorov in 1974, by showing that minimizing a two-part code consisting of a model subject to (Kolmogorov) complexity constraints, together with a data-to-model code, produces a model of best fit (for which the data is maximally “typical”). The method thus separates all possible model information from the remaining accidental information. This result gives a foundation for MDL, and related methods, in model selection. Settlement of this long-standing question is the more remarkable since the minimal randomness deficiency function (measuring maximal “typicality”) itself cannot be monotonically approximated, but the shortest two-part code can. We furthermore show that both the structure function and the minimum randomness deficiency function can assume all shapes over their full domain (improving an independent unpublished result of Levin on the former function of the early 70s, and extending a partial result of V’yugin on the latter function of the late 80s and also recent results on prediction loss measured by “snooping curves”). We give an explicit realization of optimal two-part codes at all levels of model complexity. We determine the (un)computability properties of the various functions and “algorithmic sufficient statistic ” considered. In our setting the models are finite sets, but the analysis is valid, up to logarithmic additive terms, for the model class of computable probability density functions, or the model class of total recursive functions. 1 , 2006 "... ABSTRACT. By applying the minimality principle for model selection, one should seek the model that describes the data by a code of minimal length. Learning is viewed as data compression that exploits the regularities or qualitative properties found in the data, in order to build a model containing t ..." Cited by 8 (0 self) Add to MetaCart ABSTRACT. By applying the minimality principle for model selection, one should seek the model that describes the data by a code of minimal length. Learning is viewed as data compression that exploits the regularities or qualitative properties found in the data, in order to build a model containing the meaningful information. The theory of causal modeling can be interpreted by this approach. The regularities are the conditional independencies reducing a factorization and the v-structure regularities. In the absence of other regularities, a causal model is faithful and offers a minimal description of a probability distribution. The causal interpretation of a faithful Bayesian network is motivated by the canonical representation it offers and faithfulness. A causal model decomposes the distribution into independent atomic blocks and is able to explain all qualitative properties found in the data. The existence of faithful models depends on the additional regularities in the data. Local structure of the conditional probability distributions allow further compression of the model. Interfering regularities, however, generate conditional independencies that do not follow from the Markov condition. These regularities has to be incorporated into an augmented model for which the inference algorithms are adapted to take into account their influences. But for other regularities, like patterns in a string, causality does not offer a modeling framework that leads to a minimal description. 1 "... Andrei Nikolaevich Kolmogorov was the foremost contributor to the mathematical and philosophical foundations of probability in the twentieth century, and his thinking on the topic is still potent today. In this article we first review the three stages of Kolmogorov's work on the foundations of proba ..." Cited by 7 (2 self) Add to MetaCart Andrei Nikolaevich Kolmogorov was the foremost contributor to the mathematical and philosophical foundations of probability in the twentieth century, and his thinking on the topic is still potent today. In this article we first review the three stages of Kolmogorov's work on the foundations of probability: (1) his formulation of measure-theoretic probability, 1933, (2) his frequentist theory of probability, 1963, and (3) his algorithmic theory of randomness, 1965--1987. We also discuss another approach to the foundations of probability, based on martingales, that Kolmogorov did not
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=28553","timestamp":"2014-04-19T23:49:21Z","content_type":null,"content_length":"39422","record_id":"<urn:uuid:49856942-9607-42de-be20-e5021c1f51cd>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00134-ip-10-147-4-33.ec2.internal.warc.gz"}
Digital Signal Processing Using Chapter 5. The Discrete Fourier Gao Xinbo School of E.E., Xidian Univ. Review 1 The DTFT provides the frequency-domain (w) representation for absolutely summable sequences. The z-transform provides a generalized frequency- domain (z) representation for arbitrary sequences. Two features in common: Defined for infinite-length sequences Functions of continuous variable (w or z) From the numerical computation viewpoint, these two features are troublesome because one has to evaluate infinite sums at uncountably infinite frequencies. Review 2 To use Matlab, we have to truncate sequences and then evaluate the expression at finitely many points. The evaluation were obviously approximations to the exact calculations. In other words, the DTFT and the z-transform are not numerically computable transform. Introduction 1 Therefore we turn our attention to a numerically computable It is obtained by sampling the DTFT transform in the frequency domain (or the z-transform on the unit circle). We develop this transform by analyzing periodic sequences. From FT analysis we know that a periodic function can always be represented by a linear combination of harmonically related complex exponentials (which is form of sampling). This give us the Discrete Fourier Series representation. We extend the DFS to finite-duration sequences, which leads to a new transform, called the Discrete Fourier Transform. Introduction 2 The DFT avoids the two problems mentioned above and is a numerically computable transform that is suitable for computer implementation. The numerical computation of the DFT for long sequences is prohibitively time consuming. Therefore several algorithms have been developed to efficiently compute the DFT. These are collectively called fast Fourier transform (or FFT) algorithms. The Discrete Fourier Series Definition: Periodic sequence ~(n) ~(n kN), n, k x x N: the fundamental period of the sequences From FT analysis we know that the periodic functions can be synthesized as a linear combination of complex exponentials whose frequencies are multiples (or harmonics) of the fundamental frequency (2pi/N). From the frequency-domain periodicity of the DTFT, we conclude that there are a finite number of harmonics; the frequencies are {2pi/N*k,k=0,1,…,N-1}. The Discrete Fourier Series A periodic sequence can be expressed as N 1 ~ ( n) 1 ~ X (k )e , n 0,1, j 2N kn N k 0 { X ( K ), k 0,1, } are called the discrete Fourier series coefficients, which are given by N 1 ~ ~(n)e j 2N kn , k 0,1, X (k ) x n 0 The discrete Fourier series representation of periodic sequences The Discrete Fourier Series X(k) is itself a (complex-valued) periodic sequence with fundamental period equal to N. j 2N Let WN e N 1 X (k ) DFS[ ~ (n)] ~ (n)WN x x nk n 0 N 1 ~ (n) IDFS[ X (k )] 1 ~ ~ X (k )WN k 0 Example 5.1 Matlab Implementation Matrix-vector multiplication: Let x and X denote column vectors corresponding to the primary periods of sequences x(n) and X(k), respectively. Then X W x x 1 W * X N N 1 W 1 WN N 1) WN WN 0 k , n N 1 kn N ( N 1) ( N 1) 2 1 WN WN DFS Matrix dfs1 function(row vect.for dfs) Function [Xk] = dfs (xn, N) % Xk and xn are column(not row) vectors if nargin < 2 N = length(xn) n = 0: N-1; k = 0: N-1; WN = exp(-j*2*pi/N); kn = k’*n; WNkn = WN.^kn; Xk = WNkn * xn Ex5.2 DFS of square wave seq. L, k 0, N ,2 N , X (k ) j ( L 1) / N sin(kL / N ) sin(k / N ) 1. Envelope like the sinc function; 2. Zeros occur at N/L (reciprocal of duty cycle); 3. Relation of N to density of freq. Samples; Relation to the z-transform Nonzero, 0 n N 1 N 1 x(n) X ( z ) x(n) z n 0, elsewhere n 0 ~(n), 0 n N 1 ~ x N 1 X (k ) x(n) e j 2N k n 0, elsewhere n 0 X (k ) X ( z ) | j 2 k z e N The DFS X(k) represents N evenly spaced samples of the z- transform X(z) around the unit circle. Relation to the DTFT N 1 N 1 X (e ) x ( n )e jw jwn ~ (n)e jwn n 0 n 0 X (k ) X (e jw ) |w 2 k Let w1 , and wk k kw1 N N X (k ) X (e jwk ) X (e jkw1 ) The DFS is obtained by evenly sampling the DTFT at w1 intervals. The interval w1 is the sampling interval in the frequency domain. It is called frequency resolution because it tells us how close are the frequency samples. Sampling and construction in the X (k ) X ( z ) | j 2 k , k 0,1,2, DFS & z- z e N x ( m)e x(m)WNkm j 2N km m m ~ ( n) 1 ~ N 1 1 N 1 km IDFS x X (k )WN x(m)WN WN kn N k 0 m Nk 0 1 N 1 k ( n m ) x ( m ) WN x(m) (n m rN ) m N k 0 m r x(m) (n m rN ) x(n rN ) m r r When we sample X(z) on the unit circle, we obtain a periodic sequence in the time domain. This sequence is a linear combination of the original x(n) and its infinite replicas, each shifted by multiples of N or –N. If x(n)=0 for n<0 and n>=N, then there will be no overlap or aliasing in the time domain. x(n) ~(n) for 0 n N 1 ~(n) R (n) ~(n)1 0 n N 1 x ( n) x x 0 else RN(n) is called a rectangular window of length N. THEOREM1: Frequency Sampling If x(n) is time-limited (finite duration) to [0,N-1], then N samples of X(z) on the unit circle determine X(z) for all z. Reconstruction Formula Let x(n) be time-limited to [0,N-1]. Then from Theorem 1 we should be able to recover the z-transform X(z) using its samples X~(k). ~ ( n) z n 1 N 1 N 1 N 1 N 1 X ( z ) x ( n) z x N X (k )WN kn z n n 0 n 0 n 0 k 0 1 N 1 ~ N 1 kn n 1 N 1 ~ N 1 k 1 n X (k ) WN z X (k ) WN z N k 0 n 0 N k 0 n 0 1 N 1 ~ 1 WN kN z N X (k ) 1 W k z 1 WN-kN=1 N k 0 N N N 1 ~ 1 z X (k ) X ( z) 1 W k z 1 k 0 N The DTFT Interpolation Formula 1 e jwN N 1 X (k ) N 1 ~ 1 e jwN X (e ) 1 e j 2k / N e jw X (k ) N 1 e j 2k / N e jw k 0 k 0 sin wN jw N21 An interpolation polynomial ( w) 2 N sin 2 N 1 This is the DTFT interpolation X (e ) X (k )w 2Nk jw ~ formula to reconstruct X(ejw) from k 0 its samples X~(k) Since (0) 1 , we have that X(ej2pik/N)=X~(k), which means that the interpolation is exact at sampling points. The Discrete Fourier Transform The discrete Fourier series provided us a mechanism for numerically computing the discrete-time Fourier It also alert us to a potential problem of aliasing in the time domain. Mathematics dictates that the sampling of the discrete- time Fourier transform result in a periodic sequences But most of the signals in practice are not periodic. They are likely to be of finite duration. The Discrete Fourier Transform Theoretically, we can take care of this problem by defining a periodic signal whose primary shape is that of the finite duration signal and then using the DFS on this periodic signal. Practically, we define a new transform called the Discrete Fourier Transform (DFT), which is the primary period of the DFS. This DFT is the ultimate numerically computable Fourier transform for arbitrary finite duration The Discrete Fourier Transform First we define a finite-duration sequence x(n) that has N samples over 0<=n<=N as an N-point sequence ~ ( n) x x(n rN ) ~(n) x(n mod N ) x((n))N The compact relationships between x(n) and x~(n) are ~(n) x((n)) ( Periodic extension) x N x(n) ~(n) R (n) (Window operation) x N The function rem(n,N) can be used to implement our modulo-N operation. The Discrete Fourier Transform The Discrete Fourier Transform of an N-point sequence is given by ~ X (k ) 0 k N 1 ~ X (k ) DFT[ x(n)] X ( k ) RN ( n ) 0 else N 1 X (k ) x(n)WN , 0 k N 1 n 0 Note that the DFT X(k) is also an N-point sequence, that is, it is not defined outside of 0<=n<=N-1. DFT X(k) is the primary interval of X~(k). N 1 ~ ( n) R ( n) 1 x(n) IDFT[ X (k )] x N X (k )WN kn ,0 n N 1 k 0 Matlab Implementation X WN x 1 * x WN X 1 W 1 WN N 1) WN WN 0 k , n N 1 kn N ( N 1) 2 1 WN N 1) Examples 5.6 5.7 Comments Zero-padding is an operation in which more zeros are appended to the original sequence. The resulting longer DFT provides closely spaced samples of the discrete-times Fourier transform of the original sequence. The zero-padding gives us a high-density spectrum and provides a better displayed version for plotting. But it does not give us a high-resolution spectrum because no new information is added to the signal; only additional zeros are added in the To get high-resolution spectrum, one has to obtain more data from the experiment or observations. Properties of the DFT 1. Linearity: DFT[ax1(n)+bx2(n)]=aDFT[x1(n)]+bDFT[x2(n)] N3=max(N1,N2): N3-point DFT 2. Circular folding: x(0) k 0 x( N k ) 1 k N 1 X (0) k 0 DFT[ x((n))N ] X ((k ))N X ( N k ) 1 k N 1 Matlab: x=x(mod(-n,N)+1) Properties of the DFT 3. Conjugation: DFT [ x * ( n)] X * (( k )) N 4. Symmetry properties for real sequences: Let x(n) be a real-valued N-point sequence X (k ) X * (( k )) N Re[ X (k )] Re[ X ((k )) N ] : circular even sequence Im[X (k )] Im[X ((N k )) N ] : circular odd sequence | X (k ) || X ((k )) N | X (k ) X ((k )) N Circular symmetry Periodic conjugate symmetry About 50% savings in computation as well as in storage. X(0) is a real number: the DC frequency X(N/2)(N is even) is also real-valued: Nyquist component Circular-even and circular-odd components: xec (n) [ x(n) x((n)) N ] DFT[ xec (n)] Re[ X (k )] Re[ X ((k ))N ] xoc (n) [ x(n) x((n)) N ] DFT[ xoc (n)] Im[X (k )] Im[X ((k ))N ] The real-valued signals Function, p143 5. Circular shift of a sequence DFT [ x(( n m)) N RN (n)] WN X (k ) 6. Circular shift in the frequency domain DFT [WN ln x(n)] X (( k l )) N RN (k ) 7. Circular convolution** N 1 x1 (n) x2 (n) x1 (m) x2 ((n m))N , 0 n N 1 m 0 DFT [ x1 (n) x2 (n)] X 1 (k ) X 2 (k ) 8. Multiplication: DFT[ x1 (n) x2 (n)] X 1 (k ) X 2 (k ) 9. Parseval’s relation: N 1 N 1 E x | x ( n) | 2 | X ( k ) |2 n 0 N k 0 Energy spectrum | X ( k ) |2 Power spectrum X (k ) Linear convolution using the DFT In general, the circular convolution is an aliased version of the linear convolution. If we make both x1(n) and x2(n) N=N1+N2-1 point sequences by padding an appropriate number of zeros, then the circular convolution is identical to the linear convolution. N 1 x4 (n) x1 (n) x2 (n) x1 (k ) x2 ((n k ))N RN (n) m 0 N 1 x1 (k ) x2 (n k rN ) RN (n) m 0 r N1 1 x1 (k ) x2 (n k rN ) RN (n) x3 (n rN ) RN (n) r m 0 r Error Analysis When N=max(N1,N2) is chosen for circular convolution, then the first (M-1) samples are in error, where M=min(N1,N2). e(n) x4 x3 x3 (n rN ) RN (n) x3 (n) x3 (n rN ) RN (n) N max(N1 , N 2 ) r 0 e(n) [ x3 (n N ) x3 (n N )]RN (n) x3 (n N ) 0 n N 1 X3(n) is also causal Block Convolution Segment the infinite-length input sequence into smaller sections (or blocks), process each section using the DFT, and finally assemble the output sequence from the outputs of each section. This procedure is called a block convolution Let us assume that the sequence x(n) is sectioned into N-point sequence and that the impulse response of the filter is an M-point sequence, where M<N. We partition x(n) into sections, each overlapping with the previous one by exactly (M-1) samples, save at last (N-M+1) output samples, and finally concatenate these outputs into sequence. To correct for the first (M-1) samples in the first output block, we set the first (M-1) samples in the first input blocks to zero. Matlab Implementation Function [y]=ovrlpsav(x,h,N) Lenx = length(x); M=length(h); M1=M-1; L=N-M1; H = [h zeros(1,N-M)]; X = [zeros(1,M1), x, zeros(1,N-1); K = floor((Lenx+M1-1)/L); For k=0:K, xk = x(k*L+1:k*L+N); Y(k+1,: ) = circonvt(xk,h,N); end The Fast Fourier Transform Although the DFT is computable transform, the straightforward implementation is very inefficient, especially when the sequence length N is large. In 1965, Cooley and Tukey showed the a procedure to substantially reduce the amount of computations involved in the DFT. This led to the explosion of applications of the DFT. All these efficient algorithms are collectively known as fast Fourier transform (FFT) algorithms. The FFT Using the Matrix-vector multiplication to implement DFT: X=WNx (WN: N*N, x: 1*N, X: 1*N) takes N×N multiplications and (N-1)×N additions of complex number. Number of complex mult. CN=O(N2) A complex multiplication requires 4 real multiplications and 2 real additions. Goal of an Efficient computation The total number of computations should be linear rather than quadratic with respect to N. Most of the computations can be eliminated using the symmetry and periodicity properties k ( n N ) (k N )n W kn N W N W WN N / 2 WN kn kn If N=2^10, CN=will reduce to 1/100 times. Decimation-in-time: DIT-FFT, decimation-in-frequency: DIF-FFT 4-point DFT→FFT example X (k ) x(n)W4nk , 0 k 3; W4 e j 2 / 4 j n 0 Efficient W40 W44 1; W41 W49 j Approach W42 W46 1; W43 j X(0) = x(0)+x(2) + x(1)+x(3) = g1 + g2 X(1) = x(0)-x(2) – j(x(1)-x(3)) = h1 - jh2 X(2) = x(0)+x(2) - x(1)+x(3) = g1 - g2 X(3) = x(0)-x(2) + j(x(1)-x(3)) = h1 + jh2 It requires only 2 complex multiplications. Signal flowgraph A 4-point DFT→FFT example X (0) X (1) g1 g 2 1 1 g1 g 2 X (2) X (3) h jh *W2 1 j . * h h *W2 1 1 x(0) x(1) . *W2 * x(2) x(3) *W2 1 j W2 0*0 W2 0*1 1 1 where W2 W2 W2 1*0 1*1 1 1 so W2 * A or A *W2 no multiplic ation needed Divide-and-combine approach To reduce the DFT computation’s quadratic dependence on N, one must choose a composite number N=LM since L2+M2<<N2 for large N. Now divide the sequence into M smaller sequences of length L, take M smaller L-point DFTs, and combine these into a large DFT using L smaller M-point DFTs. This is the essence of the divide-and-combine approach. n Ml m, 0 l L 1, 0 m M 1 k p Lq , 0 p L 1, 0 q M 1 Divide-and-combine approach M 1 L 1 X ( p, q ) x(l , m)WN Ml m )( p Lq ) m 0 l 0 M 1 mp L 1 WN x(l , m)WNMlp WNLmq m 0 l 0 M 1 mq mp L 1 lp Twiddle factor WN x(l , m)WL WM m 0 l L int DFT M po int DFT Three-step procedure: P155 Divide-and-combine approach The total number of complex multiplications for this approach can now be given by This procedure can be further repeat if M or L are composite numbers. When N=Rv, then such algorithms are called radix-R FFT algorithms. A 8-point DFT→FFT example 1. two 4-point DFT for m=1,2 1 pm 3 mq X ( p, q) W8 W4 x(l , m) W2 l 0 m 0 W4 00 W4 01 W4 02 W4 03 x(0,0) x(0,1) 3 W410 W411 W412 W413 x(1,0) x(1,1) F ( p, m) * l 0 W4 20 W4 21 W4 22 W4 23 x(2,0) x(2,1) 30 31 32 33 x(3,0) x(3,1) W4 W4 W4 W4 2. G ( p, m) W8 pm F ( p, m) is a 4 2 matrix dot mult. of 8 cplx multns 3. 2 poi nt DFTX with ( p, q ) pm F ( p, m) W2 mq m 0 G (0,0) G (0,1) G (1,0) G (1,1) W 00 W 01 * 2 G (2,0) G (2,1) W 10 W 11 G (3,0) G (3,1) Number of multiplications A 4-point DFT is divided into two 2-point DFTs, with one intermedium matrix mult. number of multiplications= 4×4cplx→ 2 ×1+ 1 ×4 cplx 16 →6 A 8-point DFT is divided into two 4-point DFTs, with one intermedium matrix mult. 8×8→2 ×6 + 2×4 64 →20 For 16-point DFT: 16×16→2 ×20 + 2×8 256 →56 In general the reduction of mult. If N=M*L N-pt DFT divided into M times L-pt DFT + Intermediate matrix transform + L times M-pt DFT Radix-2 FFT Algorithms Let N=2v; then we choose M=2 and L=N/2 and divide x(n) into two N/2-point sequence. This procedure can be repeated again and again. At each stage the sequences are decimated and the smaller DFTs combined. This decimation ands after v stages when we have N one-point sequences, which are also one-point DFTs. The resulting procedure is called the decimation-in-time FFT (DIF-FFT) algorithm, for which the total number of complex multiplications is: CN=Nv= N*log2N; using additional symmetries: CN=Nv= N/2*log2N Signal flowgraph in Figure 5.19 function y=mditfft(x) % 本程序对输入序列 x 实现时域抽取快速傅立叶变换DIT-FFT基2算法 m=nextpow2(x); n=2^m; % 求x的长度对应的2的最低幂次m if length(x)<n x=[x,zeros(1,n-length(x))]; % 若x的长度不为2的幂次,补零到2的整数幂 nxd=bin2dec(fliplr(dec2bin([1:n]-1,m)))+1; % 求1:2^m数列的倒序 y=x(nxd); % 将x倒序排列得y的初始值 for mm=1:m % 将DFT作m次基2分解,从左到右,对每次分解作DFT运算 le=2^mm;u=1; % 旋转因子u初始化为w^0=1 w=exp(-i*2*pi/le); % 设定本次分解的基本DFT因子w=exp(-i*2*pi/le) for j=1:le/2 % 本次跨越间隔内的各次蝶形运算 for k=j:le:n % 本次蝶形运算的跨越间隔为le=2^mm kp=k+le/2; % 确定蝶形运算的对应单元下标 t=y(kp)*u; % 蝶形运算的乘积项 y(kp)=y(k)-t; % 蝶形运算 y(k)=y(k)+t; % 蝶形运算 u=u*w; % 修改旋转因子,多乘一个基本DFT因子w Decimation-in-frequency FFT In an alternate approach we choose L=2, M=N/2 and follow the steps in (5.49). We can get the decimation-frequency FFT (DIF-FFT) Its signal flowgraph is a transposed structure of the DIT-FFT structure. Its computational complexity is also equal to CN=Nv= Matlab Implementation Function: X = fft(x,N) If length(x)<N, x is padded with zeros. If the argument N is omitted, N=length(x) If x is matrix, fft computes the N-point DFT of each column of x It is written in machine languag and not use the Matlab command. Therefore, it executes very fast. It is written as a mixed-radix algorithm. N=2v; N=prime number, it is reduced to the raw DFT. Fast Convolutions Use the circular convolution to implement the linear convolution and the FFT to implement the circular The resulting algorithm is called a fast convolution algorithm. If we choose N=2v and implement the radix-2 FFT, then the algorithm is called a high-speed convolution. If x1(n) is N1-point, x2(n) is N2-point, then log 2 ( N1 N 2 1) N 2 Compare the linear convolution and the high-speed conv. High-speed Block Convolution Overlap-and-save method We can now replace the DFT by the radix-2 FFT algorithm to obtain a high-speed overlap- and-save algorithm. Ex5.21 MATLAB verification of DFT time versus data length N; Ex5.22 Convolution-time comparison of using FFT versus DFT; Fast block-convolution using FFT; Textbook: pp.116~172 Chinese reference book: pp.35~37, pp.68~82 ,pp.97~109 1: p5.1b,d; 选ch.1.3a 2: p5.3; p5.6; 选p5.9 3: p5.15;5.18a,d; 4: p5.23;p5.29;
{"url":"http://www.docstoc.com/docs/36552941/Digital-Signal-Processing-Using","timestamp":"2014-04-24T07:21:10Z","content_type":null,"content_length":"84293","record_id":"<urn:uuid:ddafc613-f930-4e3a-9f42-09e7d51e7e40>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
Haciendas De Tena, PR Algebra 1 Tutor Find a Haciendas De Tena, PR Algebra 1 Tutor ...I have received a bachelor's degree from Arizona State University in Mathematics and I am currently in the PhD program for Mathematics. I have 2 years of experience as a tutor, specifically in the areas of Algebra, Geometry, Trigonometry, Calculus. I have received a bachelors degree from Arizona State University in Mathematics and I am currently in the PhD program for Mathematics. 15 Subjects: including algebra 1, calculus, algebra 2, geometry ...Reviews from previous students: "An inspiration. He touches the lives of everyone he teaches and gives students a new confidence in themselves. A smart, encouraging, patient and kind man that I will forever remember and respect." "Mr. 15 Subjects: including algebra 1, calculus, statistics, geometry ...Thanks so much.I have been in United States for the last 21 years; however, I am a native Chinese speaker. I can read, write and speak fluently in Chinese. For my years in the United States, I have always been helping people with their translations between English and Chinese. 8 Subjects: including algebra 1, calculus, geometry, Chinese ...I treat every student with the utmost respect and stand firm to any parental rules concerning the well-being of their child. I promise to provide the highest degree of professionalism and integrity to best of my abilities. I take my personal responsibility for learning and accountability very seriously. 12 Subjects: including algebra 1, English, reading, writing ...I teach Math full-time at a local Community College, but enjoy the one-on-one of tutoring. I've also taught High School mathematics. My students tell me I'm really good at explaining things in a way that makes sense, and that I'm extremely patient. 11 Subjects: including algebra 1, calculus, algebra 2, GED Related Haciendas De Tena, PR Tutors Haciendas De Tena, PR Accounting Tutors Haciendas De Tena, PR ACT Tutors Haciendas De Tena, PR Algebra Tutors Haciendas De Tena, PR Algebra 2 Tutors Haciendas De Tena, PR Calculus Tutors Haciendas De Tena, PR Geometry Tutors Haciendas De Tena, PR Math Tutors Haciendas De Tena, PR Prealgebra Tutors Haciendas De Tena, PR Precalculus Tutors Haciendas De Tena, PR SAT Tutors Haciendas De Tena, PR SAT Math Tutors Haciendas De Tena, PR Science Tutors Haciendas De Tena, PR Statistics Tutors Haciendas De Tena, PR Trigonometry Tutors Nearby Cities With algebra 1 Tutor Chandler Heights algebra 1 Tutors Chandler, AZ algebra 1 Tutors Circle City, AZ algebra 1 Tutors Eleven Mile Corner, AZ algebra 1 Tutors Eleven Mile, AZ algebra 1 Tutors Haciendas Constancia, PR algebra 1 Tutors Haciendas De Borinquen Ii, PR algebra 1 Tutors Haciendas Del Monte, PR algebra 1 Tutors Haciendas El Zorzal, PR algebra 1 Tutors Mobile, AZ algebra 1 Tutors Rock Springs, AZ algebra 1 Tutors Saddlebrooke, AZ algebra 1 Tutors Sun Lakes, AZ algebra 1 Tutors Superstition Mountain, AZ algebra 1 Tutors Toltec, AZ algebra 1 Tutors
{"url":"http://www.purplemath.com/Haciendas_De_Tena_PR_algebra_1_tutors.php","timestamp":"2014-04-17T13:35:19Z","content_type":null,"content_length":"24806","record_id":"<urn:uuid:6878de05-fd01-4162-9671-038697b36293>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00251-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary of Analysis Next: Evaluation of Implementation Up: Conclusion Previous: Conclusion Chapter 7 provides an analysis of some of the algorithms and the performance of the implementation. We can draw a number of conclusions from this. Firstly, operations may be performed using either the dyadic or signed binary representations. Algorithms performing the same operation using different representations have different behaviours. In general the signed binary algorithms are more complex and have greater lookahead requirements, but on the other hand perform better than the corresponding dyadic digit operations which generally suffer if the size of the dyadic digits involved swell. Secondly, computing certain expressions exactly (eg. the iterated logistic map) necessarily involves examining many more digits of input and performing a significantly greater number of manipulations than would normally be performed with floating point arithmetic. Thirdly, the order in which operations are performed can greatly affect the lookahead required. Rearranging the same expression can significantly reduce the computation time of complex expressions. Lastly, the present implementation is slow when compared to the floating point arithmetic packages commonly used, even when those operations are performed to a high precision in a package such as Maple. This is in part be due to the fact that the algorithms themselves are more complex than the floating point operations one might otherwise use (c.f. Chapter 7, especially section 7.1.5). However the fact that the implementation uses a functional language and most floating point arithmetic is either written using an imperative language, assembler, or embedded in hardware makes it unreasonable to make a direct comparison. Next: Evaluation of Implementation Up: Conclusion Previous: Conclusion Martin Escardo
{"url":"http://www.dcs.ed.ac.uk/home/mhe/plume/node133.html","timestamp":"2014-04-18T18:10:51Z","content_type":null,"content_length":"4836","record_id":"<urn:uuid:4bfdb1b7-1262-4a5e-94c2-22fca17f51d7>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
This Article Bibliographic References Add to: Counting All Possible Ancestral Configurations of Sample Sequences in Population Genetics July-September 2006 (vol. 3 no. 3) pp. 239-251 ASCII Text x Yun S. Song, Rune Lyngs?, Jotun Hein, "Counting All Possible Ancestral Configurations of Sample Sequences in Population Genetics," IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 3, no. 3, pp. 239-251, July-September, 2006. BibTex x @article{ 10.1109/TCBB.2006.31, author = {Yun S. Song and Rune Lyngs? and Jotun Hein}, title = {Counting All Possible Ancestral Configurations of Sample Sequences in Population Genetics}, journal ={IEEE/ACM Transactions on Computational Biology and Bioinformatics}, volume = {3}, number = {3}, issn = {1545-5963}, year = {2006}, pages = {239-251}, doi = {http://doi.ieeecomputersociety.org/10.1109/TCBB.2006.31}, publisher = {IEEE Computer Society}, address = {Los Alamitos, CA, USA}, RefWorks Procite/RefMan/Endnote x TY - JOUR JO - IEEE/ACM Transactions on Computational Biology and Bioinformatics TI - Counting All Possible Ancestral Configurations of Sample Sequences in Population Genetics IS - 3 SN - 1545-5963 EPD - 239-251 A1 - Yun S. Song, A1 - Rune Lyngs?, A1 - Jotun Hein, PY - 2006 KW - Ancestral configurations KW - coalescent KW - recombination KW - contingency table KW - enumeration. VL - 3 JA - IEEE/ACM Transactions on Computational Biology and Bioinformatics ER - Given a set D of input sequences, a genealogy for D can be constructed backward in time using such evolutionary events as mutation, coalescent, and recombination. An ancestral configuration (AC) can be regarded as the multiset of all sequences present at a particular point in time in a possible genealogy for D. The complexity of computing the likelihood of observing D depends heavily on the total number of distinct ACs of D and, therefore, it is of interest to estimate that number. For D consisting of binary sequences of finite length, we consider the problem of enumerating exactly all distinct ACs. We assume that the root sequence type is known and that the mutation process is governed by the infinite-sites model. When there is no recombination, we construct a general method of obtaining closed-form formulas for the total number of ACs. The enumeration problem becomes much more complicated when recombination is involved. In that case, we devise a method of enumeration based on counting contingency tables and construct a dynamic programming algorithm for the approach. Last, we describe a method of counting the number of ACs that can appear in genealogies with less than or equal to a given number R of recombinations. Of particular interest is the case in which R is close to the minimum number of recombinations for D. [1] S.N. Ethier and R.C. Griffiths, “The Infinitely-Many-Sites Model as a Measure Valued Diffusion,” Annals of Probability, vol. 15, pp. 515-545, 1987. [2] S.N. Ethier and R.C. Griffiths, “On the Two-Locus Sampling Distribution,” J. Math. Biology, vol. 29, pp. 131-159, 1990. [3] P. Fearnhead and P. Donnelly, “Estimating Recombination Rates from Population Genetic Data,” Genetics, vol. 159, pp. 1299-1318, 2001. [4] R.C. Griffiths, “Genealogical-Tree Probabilities in the Infinitely-Many-Site Mode,” J. Math. Biology, vol. 27, pp. 667-680, 1989. [5] R.C. Griffiths and P. Marjoram, “Ancestral Inference from Samples of DNA Sequences with Recombination,” J. Computational Biology, vol. 3, pp. 479-502, 1996. [6] R.C. Griffiths and S. Tavaré, “Ancestral Inference in Population Genetics,” Statistics in Science, vol. 9, pp. 307-319, 1994. [7] R.C. Griffiths and S. Tavaré, “Simulating Probability Distributions in the Coalescent,” Theoretical Population Biology, vol. 46, pp. 131-159, 1994. [8] D. Gusfield, “Efficient Algorithms for Inferring Evolutionary Trees,” Networks, vol. 21, pp. 19-28, 1991. [9] J.F.C. Kingman, “The Coalescent,” Stochastic Processing Applications, vol. 13, pp. 235-248, 1982. [10] J.F.C. Kingman, “On the Genealogy of Large Populations,” J. Applied Probability, vol. 19A, pp. 27-43, 1982. [11] M.K. Kuhner, J. Yamato, and J. Felsenstein, “Estimating Effective Population Size and Mutation Rate from Sequence Data Using Metropolis-Hastings Sampling,” Genetics, vol. 140, pp. 1421-1430, [12] M.K. Kuhner, J. Yamato, and J. Felsenstein, “Maximum Likelihood Estimation of Recombination Rates from Population Data,” Genetics, vol. 156, pp. 1393-1401, 2000. [13] J. De Loera, R. Hemmecke, J. Tauzer, and R. Yoshida, “Effective Lattice Point Counting in Rational Convex Polytopes,” J. Symbolic Computation, vol. 38, pp. 1273-1302, 2004. [14] R. Lyngsø, Y.S. Song, and J. Hein, “Minimum Recombination Histories by Branch and Bound,” Proc. 2005 Workshop Algorithms in Bioinformatics, pp. 239-250, 2005. [15] K.L. Simonsen and G.A. Churchill, “A Markov Chain Model of Coalescence with Recombination,” Theoretical Population Biology, vol. 52, pp. 43-59, 1997. [16] M. Stephens and P. Donnelly, “Inference in Molecular Population Genetics,” J. Royal Statistical Soc. Series B, vol. 62, pp. 605-655, 2000. [17] R.H. Ward, B.L. Frazier, K. Dew, and S. Pääbo, “Extensive Mitochondria Diversity within a Single Amerindian Tribe,” Proc. Nat'l Academy Science, vol. 88, pp. 8720-8724, 1991. Index Terms: Ancestral configurations, coalescent, recombination, contingency table, enumeration. Yun S. Song, Rune Lyngs?, Jotun Hein, "Counting All Possible Ancestral Configurations of Sample Sequences in Population Genetics," IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 3, no. 3, pp. 239-251, July-Sept. 2006, doi:10.1109/TCBB.2006.31 Usage of this product signifies your acceptance of the Terms of Use
{"url":"http://www.computer.org/csdl/trans/tb/2006/03/n0239-abs.html","timestamp":"2014-04-21T14:40:59Z","content_type":null,"content_length":"56064","record_id":"<urn:uuid:b26ff467-c645-40e4-98ec-92d2b5480bd5>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
GEOL3650: Energy: A Geological Perspective: Laboratory: Visualizing the Earth in Three Dimensions - Geologic Maps, Cross-sections & Block Diagrams Intro | Tabular Bodies | Definition | Map Symbols | Special Symbols | Measuring On a map, strike and dip are represented by a T-shaped symbol (Fig. 1). The long bar of the T is parallel to the strike and the short bar of the T indicates the dip direction. Since dip is always measured at right angles to strike, the strike and dip components are always perpendicular to each other. The dip angle is indicated by placing the angle (in degrees) next to the strike and dip symbol. In contrast, the strike orientation is not explicitly recorded by the strike and dip symbol. Fig. 1: Strike and dip symbol on a map. To determine the strike of a feature on a geologic map, use a straight edge to lightly draw in pencil a line parallel the map's north direction that intersects the strike line of the strike and dip symbol then xtend the strike line. The strike of the feature is the angle between these two lines and can be measured with a protractor. Intro | Tabular Bodies | Definition | Map Symbols | Special Symbols | Measuring [ Lab Listing | Geologic Maps ] Copyright @ 2010 J.D. Myers
{"url":"http://www.gg.uwyo.edu/content/laboratory/geomaps/strike_dip/symbols.asp?callNumber=14276","timestamp":"2014-04-18T15:39:25Z","content_type":null,"content_length":"4383","record_id":"<urn:uuid:d38d760c-2b50-42ce-9d2c-552d30fef88d>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
Antievolution.org - Antievolution.org Discussion Board -Topic::Evolutionary Computation Posts: 529 Joined: Sep. 2008 (Permalink) Posted: June 11 2009,14:05 I haven't run the program, but perusing the description of the algorithm, it seem to me that this section that describes how fitness is assigned is the part that determines the ultimate behavior of the model. They clearly have built in an asymmetry in the fitness of beneficial vs deleterious mutations, and their justifications of the asymmetry smell fishy to me, but IANAB. (bolding mine) To provide users of Mendel even more flexibility in specifying the fitness effect distribution, we have chosen to use a form of the Weibull function [12] that is a generalization of the more usual exponential function. Our function, expressed by eq. (3.1), maps a random number x, drawn from a set of uniformly distributed random numbers, to a fitness effect d(x) for a given random mutation. d(x) = (dsf) exp(?ax^gamma), 0 < x < 1. (3.1) Here (dsf) is the scale factor which is equal to the extreme value which d(x) assumes when x = 0. We allow this scale factor to have two separate values, one for deleterious mutations and the other for favorable ones. These scale factors are meaningful relative to the initial fitness value assumed for the population before we introduce new mutations. In Mendel we assume this initial fitness value to be 1.0. For deleterious mutations, since lethal mutations exist, we choose dsf del = ?1. For favorable mutations, we allow the user to specify the (positive) scale factor dsf fav. Normally, this would be a small value (e.g., 0.01 to 0.1), since it is only in very special situations that a single beneficial mutation would have a very large effect. The parameters a and gamma, both positive real numbers, determine the shape of the fitness effect distribution. We applythe same values of a and gamma to both favorable and deleterious mutations. The parameter a determines the minimum absolute values for d(x), realized when x = 1. We choose to make the minimum absolute value of d(x) the inverse of the haploid genome size G (measured in number of nucleotides) by choosing a = loge(G). For example, for the human genome, G = 3 × 109, which means that for the case of deleterious mutations, d(1) = ?1/G = ?3 × 10?10. For large genomes, this minimum value is essentially 0. For organisms with smaller genomes such as yeast, which has a value for G on the order of 107, the minimum absolute effect is larger. This is consistent with the expectation that each nucleotide in a smaller genome on average plays a greater relative role in the organism’s The second parameter gamma, can be viewed as ontrolling the fraction of mutations that have a large absolute fitness effect. Instead of specifying gamma directly, we select two quantities that are more intuitive and together define gamma. The first is theta, a threshold value that defines a “high-impact mutation”. The second is q, the fraction of mutations that exceed this threshold in their effect. For example, a user can first define a high-impact mutation as one that results in 10% or more change in fitness (theta = 0.1) relative to the scale factor and then specify that 0.001 of all mutations (q = 0.001) be in this category. Inside the code the value of is computed that satisfies these requirements. We reiterate that Mendel uses the same value for gamma, and thus the same values for theta and q, for both favorable and deleterious mutations. Figure 3.1 shows the effect of the parameter q on the shape of the distribution of fitness effect. Note that for each of the cases displayed the large majority of mutations are nearly neutral, that is, they have very small effects. Since a utation’s effect on fitness can be measured experimentally only if it is sufficiently large, our strategy for parameterizing the fitness effect distribution in terms of high-impact situtations provides a means for the Mendel user to relate the numerical model input more directly to available data regarding the actual measurable frequencies of mutations in a given biological context. Part of the justification for asymmetry is that some mutations are lethal, meaning that individual has zero probability of reproducing. OK, but the maximum fitness benefit of a beneficial mutation is "a very small number like 0.001", which is then subject to "heritability factor", typically 0.2, and other probabilities that severely limit its ability to propagate. To make matters worse, for some unjustified reason, the same distribution for beneficial and deleterious is used, after severely skewing the results with the above. Again, IANOB, but it seems to me that a single beneficial mutation can, in many situations like disease resistance, blonde hair, big boobs, etc, virtually guarantee mating success, just like a deleterious mutation can be reproductively lethal. I can see easily how the skewed treatment of beneficial vs deleterious mutations could virtually guarantee "genetic entropy", as evidenced by monotonically decreasing population fitness caused by accumulation of deleterious mutational load. ETA source. link is above Sanford, J., Baumgardner, J., Gibson, P., Brewer, W., & ReMine, W. (2007a). Mendel’s Accountant: A biologically realistic forward-time population genetics program. Scalable Computing: Practice and Experience 8(2), 147–165. The majority of the stupid is invincible and guaranteed for all time. The terror of their tyranny is alleviated by their lack of consistency. -A. Einstein (H/T, JAD) If evolution is true, you could not know that it's true because your brain is nothing but chemicals. ?Think about that. -K. Hovind
{"url":"http://www.antievolution.org/cgi-bin/ikonboard/ikonboard.cgi?s=50a399b6a4631568;act=ST;f=14;t=6034;st=30","timestamp":"2014-04-16T15:25:01Z","content_type":null,"content_length":"174275","record_id":"<urn:uuid:f2f90592-1094-47aa-a797-b2fd1abff2c1>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
Higher-Dimensional Algebra VI: Lie 2-Algebras John C. Baez and Alissa S. Crans The theory of Lie algebras can be categorified starting from a new notion of `2-vector space', which we define as an internal category in Vect. There is a 2-category 2Vect having these 2-vector spaces as objects, `linear functors' as morphisms and `linear natural transformations' as 2-morphisms. We define a `semistrict Lie 2-algebra' to be a 2-vector space L equipped with a skew-symmetric bilinear functor [ . , . ] : L x L -> L satisfying the Jacobi identity up to a completely antisymmetric trilinear natural transformation called the `Jacobiator', which in turn must satisfy a certain law of its own. This law is closely related to the Zamolodchikov tetrahedron equation, and indeed we prove that any semistrict Lie 2-algebra gives a solution of this equation, just as any Lie algebra gives a solution of the Yang--Baxter equation. We construct a 2-category of semistrict Lie 2-algebras and prove that it is 2-equivalent to the 2-category of 2-term L_\infty-algebras in the sense of Stasheff. We also study strict and skeletal Lie 2-algebras, obtaining the former from strict Lie 2-groups and using the latter to classify Lie 2-algebras in terms of 3rd cohomology classes in Lie algebra cohomology. This classification allows us to construct for any finite-dimensional Lie algebra g a canonical 1-parameter family of Lie 2-algebras g_h which reduces to g at h = 0. These are closely related to the 2-groups G_h constructed in a companion paper. Keywords: Lie 2-algebra, L_\infty-algebra, Lie algebra cohomology 2000 MSC: 17B37,17B81,17B856,55U15,81R50 Theory and Applications of Categories, Vol. 12, 2004, No. 15, pp 492-528. Revised 2010-02-25. Original version at TAC Home
{"url":"http://www.tac.mta.ca/tac/volumes/12/15/12-15abs.html","timestamp":"2014-04-20T20:55:39Z","content_type":null,"content_length":"3301","record_id":"<urn:uuid:b789fac5-a4ac-4e58-8696-e591c94c6af8>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00510-ip-10-147-4-33.ec2.internal.warc.gz"}
"Building a patio" problem, must write an equation September 11th 2009, 05:06 PM #1 Sep 2009 Duluth, California Here's a problem a friend and I have been trying to figure out. Maybe one of you will be able to help us? Here it goes: "Someone is planning to build a patio along the back wall of her house which is 32 feet long. The patio will be rectangular in shape and will fit against the full length of the back wall (so one side of the patio will be 32 feet long). Let's assume the patio tiles are each 1ft x 1ft. If this person has 256 tiles to work with, how far out from the wall will the patio extend? Pretend you are the patio builder and do these tasks: * Choose the variable you are going to use and state what it represents. * Write an equation that represents the problem. * Then solve the problem and equation. Also ... Benito is also going to build a patio but his patio does not have to fit exactly against a wall. In fact all that Benito has decided is that the patio should be rectangular in shape and should use all of the 144 tiles he has available. His tiles are the same size as the problem above. Find as many possibilities you can for the dimensions of Benito's patio." Any tips, suggestions, etc on how to solve this two are appreciated. Ok, let's assume the following variables! L= (The distance of the back wall = the length of the patio) = 32 feet (this is told to us) W = how far the patio extends from the wall (what we're trying to find) A = The total area of the patio tiles Clearly, if the patio tiles are 1ft by 1ft big and there's 256 of these, then the total area (A) will be 256 feet squared, correct? (1 patio tile = 1ft squared. Hence, 256 = 256 ft squared) Therefore, A = 256 feet squared. By using the Area formula for a rectangle (which happens to contain all of our variables) we can solve the problem: 256 = 32 * W Solving for W: 256/32 = W Therefore, W (what we're trying to find) = 8 feet Hello Sarah- Benito is also going to build a patio but his patio does not have to fit exactly against a wall. In fact all that Benito has decided is that the patio should be rectangular in shape and should use all of the 144 tiles he has available. His tiles are the same size as the problem above. Find as many possibilities you can for the dimensions of Benito's patio." Any tips, suggestions, etc on how to solve this two are appreciated. When you're building a rectangular patio, you can work out the number of tiles you'll need by multiplying the length of the patio by its breadth. For example, if the length is 10 feet and the breadth 6 feet, then the area of the patio is 10 x 6 = 60 square feet. So if each tile is 1 ft by 1 ft, that will use 60 tiles. So, if you have 144 of these tiles, you can make any shape rectangle you like provided its length multiplied by its breadth is 144. I'll start you off: 1 ft x 144 ft (Well, it's possible, but it's a very long thin patio!) 2 ft x 72 ft 3 ft x 48 ft ... and so on. Can you find all the other possible shapes now? (Bear in mind that each number you choose will have to divide exactly into 144. So it's no good trying to make a patio measuring 5 feet along one side, is it?) I think so.... wouldn't the only other possible combination be 4 * 36? If not, please correct me. Most certainly not, This is just a matter of being proficient with your times tables. You're looking for two factors of 144 that are both whole integers (that is, not fractions nor decimals). There are quite a number of combinations! 6 * 24 8 * 18 12 * 12 16 * 9 Can you think of anymore? September 11th 2009, 05:18 PM #2 Junior Member Sep 2009 September 11th 2009, 10:14 PM #3 September 12th 2009, 12:56 PM #4 Sep 2009 Duluth, California September 12th 2009, 05:03 PM #5 Junior Member Sep 2009
{"url":"http://mathhelpforum.com/algebra/101742-building-patio-problem-must-write-equation.html","timestamp":"2014-04-21T06:00:53Z","content_type":null,"content_length":"44506","record_id":"<urn:uuid:f6da8dae-8543-489f-89f1-58f3faa65531>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
HPL_dlaswp04N copy rows of U in A and replace them with columns of W. #include "hpl.h" void HPL_dlaswp04N( const int M0, const int M1, const int N, double * U, const int LDU, double * A, const int LDA, const double * W0, const double * W, const int LDW, const int * LINDXA, const int * LINDXAU ); HPL_dlaswp04N copies M0 rows of U into A and replaces those rows of U with columns of W. In addition M1 - M0 columns of W are copied into rows of U. M0 (local input) const int On entry, M0 specifies the number of rows of U that should be copied into A and replaced by columns of W. M0 must be at least zero. M1 (local input) const int On entry, M1 specifies the number of columns of W that should be copied into rows of U. M1 must be at least zero. N (local input) const int On entry, N specifies the length of the rows of U that should be copied into A. N must be at least zero. U (local input/output) double * On entry, U points to an array of dimension (LDU,N). This array contains the rows that are to be copied into A. LDU (local input) const int On entry, LDU specifies the leading dimension of the array U. LDU must be at least MAX(1,M1). A (local output) double * On entry, A points to an array of dimension (LDA,N). On exit, the rows of this array specified by LINDXA are replaced by rows of U indicated by LINDXAU. LDA (local input) const int On entry, LDA specifies the leading dimension of the array A. LDA must be at least MAX(1,M0). W0 (local input) const double * On entry, W0 is an array of size (M-1)*LDW+1, that contains the destination offset in U where the columns of W should be W (local input) const double * On entry, W is an array of size (LDW,M0+M1), that contains data to be copied into U. For i in [M0..M0+M1), the entries W(:,i) are copied into the row W0(i*LDW) of U. LDW (local input) const int On entry, LDW specifies the leading dimension of the array W. LDW must be at least MAX(1,N+1). LINDXA (local input) const int * On entry, LINDXA is an array of dimension M0 containing the local row indexes A into which rows of U are copied. LINDXAU (local input) const int * On entry, LINDXAU is an array of dimension M0 that contains the local row indexes of U that should be copied into A and replaced by the columns of W. See Also HPL_dlaswp00N, HPL_dlaswp10N, HPL_dlaswp01N, HPL_dlaswp01T, HPL_dlaswp02N, HPL_dlaswp03N, HPL_dlaswp03T, HPL_dlaswp04N, HPL_dlaswp04T, HPL_dlaswp05N, HPL_dlaswp05T, HPL_dlaswp06N, HPL_dlaswp06T.
{"url":"http://www.netlib.org/benchmark/hpl/HPL_dlaswp04N.html","timestamp":"2014-04-17T04:20:24Z","content_type":null,"content_length":"4465","record_id":"<urn:uuid:12dc404d-df22-47b0-a8b2-89c419e43c3d>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00155-ip-10-147-4-33.ec2.internal.warc.gz"}
SPORTS >>Cabot skirts by Lions SPORTS >>Cabot skirts by Lions IN SHORT: The Cabot Panthers had a tough time with Searcy, but got past the winless team to remain undefeated as they head into conference play in the brand new 7A-East Conference against rival By JOHNNY RAY LAKE Special to the Leader For the second week in a row, the Cabot Panthers overcame a second half deficit to come away with a victory on the road as Cabot defeated the Searcy Lions 23-16. Vince Aguilar’s second touchdown of the game with 4:06 left in the fourth quarter proved to be the deciding points in the game. The fourth quarter seemed to be dominated by the Panthers as they controlled both the clock and the line of scrimmage. With Aguilar gaining 61 of his 167 rushing yards in the final 12 minutes, Cabot held the ball for more than seven minutes, while limiting the Lions to just less than five minutes and only two possessions to end the game. An interception of Searcy quarterback Justin Rowden by senior Johnnie Stone with 2:55 left ended the Lions hopes of tying the score. Cabot broke a 16-16 tie on the first play of the fourth quarter on a 24-yard field goal by Alex Tripp. On the next possession, facing fourth and one from the Cabot 46, Rowden snuck up the middle for two yards and a first down. But on the very next play, Rowden’s pass under pressure was intercepted by Ethan Coffee, which led to the go-ahead score for the Panthers. Cabot got on the board first with an impressive ten play 65-yard march that ended with a 28-yard touchdown pass from Corey Wade to Josh Clem on fourth down and five. Searcy responded with their own ten play drive that led to a 32-yard field goal by Ryan Wilbourn. On the drive, Rowden ran three straight draw plays for 13, 14, and 16 yards. But a holding penalty on brought back a 15-yard touchdown run and Searcy settled for the field goal. After exchanging punts heading through the second quarter, Cabot took over at their own 34 with 9:35 left in the first half. 13 plays later, Aguilar busted out from the middle of the field down the right sideline for a 27-yard score with 3:48 to go until halftime. Matt Cramblett broke through to block the extra point, so the score was 13-3 in favor of the Panthers. Using the sidelines and their timeouts, the Lions went 65 yards in under four minutes to take the lead on a Rowden quarterback sneak with 12 seconds left in the first half. The key play on the 17- play drive was converting on fourth and one on the Cabot 19-yard line. Rather than take the field goal attempt, Adam Robertson ran up the middle for a two-yard gain. Cabot led Searcy 13-10 at the break. The Lions took the ball on their own 34 to start the third quarter. On fourth and two on the Panthers’ 41, Searcy failed to convert as Robinson this time was stuffed short of the first down marker. The ensuing Panthers drive ended in a fumble, and after taking over on their own 37, Searcy faced once again fourth and short in Panther territory. Pressure up the middle forced Rowden to scramble to his right, where he found a wide open Nick Evans on the 6 who then went in for the score. Cabot returned the favor and blocked the extra point. Cabot then forced the two fourth quarter interceptions to end the Lions’ final two drives. Aguilar carried the ball 31 times for 167-yards and two scores, with 96 of those coming in the second half as the Panthers offensive line began to wear down the Lions defense. Both teams combined to convert five of six fourth down plays, with Cabot going two for two, and Searcy converting three of four. Both teams scored six of their points on fourth down. When asked how important it was that his team responded quickly to the Searcy scores, Cabot head coach Mike Malham said ”Yes, that and the fact we didn’t turn the ball over in the fourth were the key factors in the game. “Searcy is always a tough place to go and win on the road, he added. The Panthers will travel to archrival Conway next week to open up play in the 7A-East Conference.
{"url":"http://www.arkansasleader.com/2006/09/sports-cabot-skirts-by-lions.html","timestamp":"2014-04-18T08:41:47Z","content_type":null,"content_length":"21411","record_id":"<urn:uuid:312720a4-26c3-42fe-8d3c-8ac987290704>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00301-ip-10-147-4-33.ec2.internal.warc.gz"}
Schur's lemma Schur's lemma Two related basic facts in representation theory bear the name of Schur: one general about categories of modules, and another specific to the case of complex numbers. If $G$ is a group, a $G$-module is any $k$-module with an action of $G$, where $k$ is a fixed commutative unital ground ring. 1. Given a group $G$ and a linear map $\phi : M\to N$ between two irreducible (= simple) $G$-modules (linear maps intertwining the actions) then $\phi$ is either the zero morphism or an isomorphism. It follows that the endomorphisms of simple irreducible $G$-module form a division ring. 2. Set $M=N$, and suppose further that the ground ring $k$ is an algebraically closed field; then $\phi$ is a multiple $\lambda I$ of the identity operator. In other words, the nontrivial automorphisms of simple modules, a priori possible by (1), are ruled out over algebraically closed fields. Part (1) is essentially category-theoretic and can be generalized in many ways, for example, by replacing $G$ by some $k$-algebra and taking the representations compatible with the action of $k$; more generally, given an abelian category, the endomorphism ring of a simple object is a division ring. Schur’s lemma is one of the basic facts of representation theory. For (2), if the endomorphism rings of all objects in an abelian category are finite-dimensional over an algebraically closed ground field $k$ (as is the case for group representations), then the endomorphism ring of a simple object is $k$ itself. Revised on October 17, 2009 20:12:57 by Toby Bartels
{"url":"http://ncatlab.org/nlab/show/Schur's+lemma","timestamp":"2014-04-20T19:00:42Z","content_type":null,"content_length":"17384","record_id":"<urn:uuid:9fc42c26-e0eb-4870-85bf-ecf6f569c8a1>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
Collapsing of Riemannian manifolds with a group action up vote 4 down vote favorite Let $M$ be a complete Riemannian manifold with bounded sectional curvature and $G$ a compact connected Lie group acts smoothly on $M$. Consider the fixed point set $F$, it is of course a submanifold of $M$ by the slice theorem. Let $\{F_i\}$ be the connected components of $F$. Then for each $i$, is there a sequence of Riemannian manifolds $\{M_j\},j\in\mathbb{N}$ with $M_0=M$ such that $\{M_j\}$ collapses to $F_i$ while keeping their sectional curvatures uniformly bounded? If in general such a sequence does not exist, how about the case $G=T$? Here $T$ is a finite-dimensional torus. riemannian-geometry mg.metric-geometry transformation-groups 1 Motivation? Could you point us to a good definition of collapsing'? Is it true that each Euclidean space collapses' to a point? [If not, take T = S^1 acting in standard linear fashion on the plane to get a counterexample. – Richard Montgomery Sep 19 '12 at 17:58 1 Collapsing in the Gromov-Hausdorff sense. – Acky Sep 19 '12 at 18:05 2 Do you require that all $M_j$ are diffeomorphic to $M$? If not, why not take $M_j=F_i\times $(small circle)? – Sergei Ivanov Sep 19 '12 at 18:16 3 "...Consider the fixed point set F, it is of course a submanifold of M by the slice theorem". Note that it is really simpler than that; in geodesic coordinates at a point p of F, the fixed point set is locally the linear subspace left fixed by the linearized action at p. – Dick Palais Sep 19 '12 at 18:22 1 I assume you give $F$ the induced metric from $M$. If the answer to your question is yes, then the curvature of $F$ is bounded from below, so in effect you hope that the fixed point set of any smooth compact group action has curvature bounded below. Why would that be true? – Igor Belegradek Sep 19 '12 at 18:25 show 5 more comments 1 Answer active oldest votes As it was noted in the comments you probably wanted to say that the action is isometric and $M_n$ is diffeomorphic to $M$ for all $n$. (Otherwise the question has no In this case answer is NO. Consider $\mathbb S^1$ action on $\mathbb S^3$ with fixed point set $\mathbb S^1$ and note that simply connected spaces can not GH-converge to $\ up vote 2 down vote mathbb S^1$. For the second part, it seems that you may only get a torus as the fixed set (?). In this case the answer is obviously YES. add comment Not the answer you're looking for? Browse other questions tagged riemannian-geometry mg.metric-geometry transformation-groups or ask your own question.
{"url":"http://mathoverflow.net/questions/107597/collapsing-of-riemannian-manifolds-with-a-group-action","timestamp":"2014-04-20T06:35:10Z","content_type":null,"content_length":"57943","record_id":"<urn:uuid:687553f9-a8ca-4f57-af25-33b9e87b7820>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
Sébastien Van Bellegem, CORE Monday, 13 February 2012, 12:00 - 13:30 Sébastien Van Bellegem, CORE Functional linear instrumental regression In an increasing number of empirical studies, the dimensionality mea- sured e.g. as the number of variables in a model, can be very large. Two instances of large dimensional models are the linear regression with a large number of covariates and the estimation of a regression function with many instrumental variables. We will recall these examples in the talk by means of some examples. We also recall why classical least square or IV estimators behaves poorly in such large dimensional regression problems. An appropriate setting to analyze high dimensional problems is provided by a functional linear model, in which the covariates are a vector in Rp for large p (p can tend to infinity). More generally we consider that covariates belongs to some Hilbert space. We also consider the case where covariates are endogenous and assume the existence of instrumental variables (that are functional as well). In this talk we show that estimating the regression function is a linear ill-posed inverse problem, with a known but data-dependent operator. Our main contribution is to analyse the rate of convergence of the Tikhonov regularized estimator, when we premultiply the problem by an instrument dependent operator. This extends the technology of Generalized Method of Moments to functional (GMM) to functional data. We then discuss the optimal choice of the premultiplication operator and propose an extension of the notion of “weak instrument” to this nonparametric framework. The performance of the resulting nonparametric estimator is also studied through simulations. Location: ULB R42.2.113 Contact: Claude Adan, This e-mail address is being protected from spam bots, you need JavaScript enabled to view it
{"url":"http://www.ecares.org/index.php?option=com_events&task=view_detail&agid=639&year=2012&month=02&day=13&Itemid=149&catids=39","timestamp":"2014-04-20T15:50:42Z","content_type":null,"content_length":"30344","record_id":"<urn:uuid:501f290e-63af-4a7d-8890-c518255180a6>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
Hayward, CA Calculus Tutor Find a Hayward, CA Calculus Tutor ...This is the mathematics that comes before pre-algebra. Those topics include those of 5th grade mathematics: addition, subtraction, multiplication, division, fractions, decimals, and various types of word problems. I have tutored students studying 5th grade mathematics. 9 Subjects: including calculus, physics, algebra 2, algebra 1 ...My students show clear improvement in understanding and technical Algebraic skills. I have 5+ recent Hon. Algebra II students in Pleasanton and Danville school district. 15 Subjects: including calculus, geometry, algebra 1, GRE ...Linear Algebra investigates the linear relationship between multiple variables. On one side it is for many students a first encounter with mathematical abstraction and on the other side it is a topic that occurs in many scientific applications like in numerical or economical models. At different universities I taught my own courses that built on linear algebra. 41 Subjects: including calculus, geometry, statistics, algebra 1 ...I take care to ensure understanding through quizzes, and by conversing with the student about the subject. I also am talented at breaking down difficult material and explaining it in a way easy to understand, tailored to the level the student is at. As I've always said, "If you can't explain... 24 Subjects: including calculus, chemistry, physics, geometry ...I prepare students for the following SAT tests: SAT RT Math (the “big” SAT) and SAT Subject Math Level 2. I began tutoring for these tests as an instructor for a couple of premier Test Prep companies. In the 10+ years since then, I honed my skills and knowledge by helping hundreds of students one-on-one. 14 Subjects: including calculus, statistics, geometry, algebra 2 Related Hayward, CA Tutors Hayward, CA Accounting Tutors Hayward, CA ACT Tutors Hayward, CA Algebra Tutors Hayward, CA Algebra 2 Tutors Hayward, CA Calculus Tutors Hayward, CA Geometry Tutors Hayward, CA Math Tutors Hayward, CA Prealgebra Tutors Hayward, CA Precalculus Tutors Hayward, CA SAT Tutors Hayward, CA SAT Math Tutors Hayward, CA Science Tutors Hayward, CA Statistics Tutors Hayward, CA Trigonometry Tutors
{"url":"http://www.purplemath.com/hayward_ca_calculus_tutors.php","timestamp":"2014-04-17T22:05:35Z","content_type":null,"content_length":"23974","record_id":"<urn:uuid:9f6b1800-8b34-4847-b2de-1e71562d4b39>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
Power Scaling for Spatial Modulation with Limited Feedback International Journal of Antennas and Propagation Volume 2013 (2013), Article ID 718482, 5 pages Research Article Power Scaling for Spatial Modulation with Limited Feedback National Key Laboratory of Science and Technology on Communications, University of Electronic Science and Technology of China, Chengdu 610054, China Received 8 February 2013; Revised 6 May 2013; Accepted 8 May 2013 Academic Editor: Feifei Gao Copyright © 2013 Yue Xiao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Spatial modulation (SM) is a recently developed multiple-input multiple-output (MIMO) technique which offers a new tradeoff between spatial diversity and spectrum efficiency, by introducing the indices of transmit antennas as a means of information modulation. Due to the special structure of SM-MIMO, in the receiver, maximum likelihood (ML) detector can be combined with low complexity. For further improving the system performance with limited feedback, in this paper, a novel power scaling spatial modulation (PS-SM) scheme is proposed. The main idea is based on the introduction of scaling factor (SF) for weighting the modulated symbols on each transmit antenna of SM, so as to enlarge the minimal Euclidean distance of modulated constellations and improve the system performance. Simulation results show that the proposed PS-SM outperforms the conventional adaptive spatial modulation (ASM) with the same feedback amount and similar computational complexity. 1. Introduction Spatial modulation (SM) [1] is a recently proposed multiple-input multiple-output (MIMO) transmission technique, in which the index of the transmit antenna is utilized for information modulation in the spatial domain. The main advantages of SM-MIMO can be described as follows. Firstly, only one transmit antenna is active on each time slot so that strict time synchronization is saved. Secondly, SM-MIMO can be used for any number of receive antennas so that it adapts to downlink communications on an unbalanced MIMO channels in which the number of transmit antennas is much larger than that of receive antennas. Finally, SM-MIMO can offer a new balance between spectrum efficiency and spatial diversity compared to the conventional MIMO techniques [2, 3]. For detecting the SM-MIMO signals at the receiver side, maximum likelihood (ML) algorithm was suggested in [4] to achieve the optimal performance, which also gives a performance bound of the SM-MIMO system. For controlling the computational complexity of ML detection while approaching the performance bound, in current research, a series of near-ML detectors has been proposed [5–7] to make a tradeoff between complexity and performance. Traditional SM-MIMO cannot exceed the ML performance bound. In this case, for further improving the system performance, in recent research, limited feedback was considered for link adapted SM-MIMO. However, these techniques have their disadvantages. For example, adaptive SM (ASM) was firstly proposed in [8] to select the modulation styles according to the channel information. In ASM, the transmission rate of each data frame is not fixed, which is not good for system design. And error propagation may happen in low-SNR domain due to the wrong demodulation for any single data symbol. Furthermore, a combination of adaptive spatial modulation and antenna section scheme is proposed in [9] to enhance the performance. However, it inherits the drawbacks of ASM, and the introduction of additional antennas will introduce extra implementation cost and channel estimation complexity. This paper considers another aspect as adaptive power scaling for SM-MIMO with limited feedback. A novel power scaling spatial modulation (PS-SM) scheme is proposed, based on the introduction of scaling factor (SF) for weighting the modulated symbols and enlarging the minimum Euclidean distance of the transmit signals. As a result the proposed method can effectively improve the system performance. We will show that the proposed PS-SM outperforms the conventional adaptive spatial modulation (ASM) with the same feedback. In this case, PS-SM can be an alternative scheme to ASM schemes by overcoming their disadvantages. Furthermore, the enlarging of the minimum distance could lead to an improvement of equivalent transmit power. In this paper, a power attenuation factor is introduced to keep the minimal transmit power for improving the energy efficiency. The rest of the paper is organized as follows. In Section 2, the basic structure of SM-MIMO and main idea of adaptive schemes are summarized. The proposed PS-SM scheme is described in Section 3. In Section 4, bit-error rate (BER) performance of PS-SM and conventional schemes are disclosed by simulation results. Finally, conclusions are given in Section 5. 2. Conventional SM and Adaptive Scheme Assume an SM-MIMO [1] with transmit and receive antennas. The information bits vector , with length , can be divided into two parts as and ( is the size of QAM constellation) bits, as . Then the bit information vector is mapped into transmit vector with length , in which there is only one nonzero element , where is the index of transmit antennas with and is the index of QAM constellation with . Let be the MIMO channel matrix. Then the receive vector , with length , can be written as where represents the th column of and is AWGN noise with variance of . For optimal ML detector [4], the estimated SM-MIMO symbol is given by where denotes the estimated transmit symbol vector, is the set of all possible transmit symbols, and represents the Frobenius norm of the vector. The performance bound of ML detection is decided by the minimal Euclidean distance , defined as [8] In general, the minimal Euclidean distance is determined by the transmit vector and channel matrix for conventional SM. In ASM [8, 9], the modulation styles for each antenna can be selected to offer extra freedom for generating a larger . However, ASM schemes suffer from a nonconstant data rate when different antennas employ different modulation styles. For instance, for an SM-MIMO with 2 transmit antennas, if the first and second antennas separately select BPSK and QPSK, the data rate may be variable between 2 and 3 bits/slot. Furthermore, in the receiver side, if the antenna index is estimated incorrectly, the number of the received bits will differ from that of original transmission, which will cause consecutive errors. 3. Power Scaling Spatial Modulation For avoiding the drawback of ASM schemes, in this paper, a novel power scaling spatial modulation scheme is proposed. The system block is shown in Figure 1. Firstly, the information bits are divided into two parts to modulate the digital constellations and the active transmit antenna index. Then at each time instant one active transmit antenna is selected with a scaling factor (SF) , for weighting the modulated symbols, where and is the total number of candidate SF groups. In this case, the transmit SM symbol can be expressed as . For normalizing the transmit power, let . According to (1) and (3), in PS-SM, the minimal Euclidean distance is a function of , given and further computed as with where denotes the real part of the input and are the inner products of the columns of the channel matrix and can be reused for reducing the calculation complexity. Comparing (4) to the processing of [8], it is shown that the proposed PS-SM scheme has similar computational complexity as ASM. From every possible set of SF candidates as , there is one minimal Euclidean distance . Then the optimal value of , which is defined as the maximal value of , is expressed as The only limitations for the selection of SF are given as and . For example, we target a PS-SM-MIMO transmission system with the scaling factors obeying uniform distribution, which is generated in Table 1, as . With limited feedback, the proposed PS-SM algorithm presets the same candidate SF sets at both transceiver sides. And the selected index of the optimal candidate will be delivered to the transmitter. Figure 2 gives the complementary cumulative distribution functions (CCDF) of the minimum Euclidean distance for a PS-SM-MIMO and conventional SM-MIMO. It is shown that with the introduction of scaling factor, the minimum Euclidean distance can be enlarged effectively. Moreover, with the increase in feedback amount, the increase in minimum Euclidean distance is more considerable. In this case, the system performance will be effectively improved. The following simulation results will show the benefit of PS-SM to system performance. In this case, the proposed PS-SM is an alternative scheme by overcoming the disadvantage of traditional ASM. 4. Simulation Results Computer simulation is performed for comparing the system performance of conventional SM, ASM, and PS-SM. We consider an SM-MIMO system with 2 and 4 transmit antennas and QPSK modulation under Rayleigh flat fading channel, so the transmission data rate is 3 and 4 bits per symbol, respectively. The number of receive antennas is 2. Assume that the channel information of Rayleigh flat fading channel is perfectly estimated at the receiver side. Figure 3 gives the system performance of SM-MIMO system. Both ASM, and PS-SM schemes are with 5 candidates for a fair comparison. Firstly, with limited feedback, both ASM and PS-SM outperform conventional SM. For example, at BER of 10^−3, there is a 1.7dB SNR gap between ASM and conventional SM. Furthermore, with the same amount of candidate adaptive sets, the proposed PS-SM outperforms ASM. More specifically, at BER of 10^−3, 0.9dB SNR gain can be achieved by the proposed PS-SM scheme. Figure 4 shows the comparison of system performance of PS-SM with different amounts of feedback as 1, 2, and 3 bits in which 2, 4, and 8 candidate SF sets are assumed, respectively. The system is with transceiver configuration and QPSK modulation. It is shown that the performance of PS-SM improves with the increase in the number of feedback bits which is related to the number of factor candidates. At a BER of 10^−3, there are 2.5dB, 3.8dB, and 4.4dB performance gains in PS-SM with 1-bit, 2-bit, and 3-bit feedback, respectively. In this case, a feedback of 2 bits gives a better tradeoff between feedback amount and system performance. Since PS-SM can effectively enlarge the minimum Euclidean distance, the equivalent transmit power is also enlarged. In this part, we consider a tradeoff between system performance and energy-efficient minimum power transmission (Figure 5). A power attenuation factor is introduced to normalize the enlarged minimum Euclidean distance to original SM. In this case, we keep the same system performance to traditional SM while minimizing the transmit power for improving the energy efficiency. A PS-SM with different amounts of feedback as 1, 2, and 3 bits in which 2, 4, and 8 candidate SF sets are assumed, respectively. The system is with transceiver configuration and QPSK modulation. A PS-SM with different amounts of feedbacks as 1, 2, and 3bits are assumed, with 2, 4, and 8 candidates SF sets respectively. Furthermore, energy saving of 0.4dB, 1.5dB, and 2.2dB is achieved with feedback amounts of 1, 2, and 3 bits, respectively. 5. Conclusions For overcoming the disadvantages of original ASM schemes, a novel power scaling spatial modulation scheme was proposed as an alternative adaptive SM-MIMO scheme with limited feedback. In PS-SM, multiple candidate scaling factor sets are introduced for weighting the modulated symbols on each transmit antenna, so as to enlarge the minimal Euclidean distance and improve the system performance. Simulation results showed that the proposed PS-SM outperforms ASM schemes with the same feedback amount on similar computational complexity. This work was supported in part by the National Science Foundation of China under Grant no. 61101101, the National High-Tech R&D Program of China (“863” Project under Grant no. 2011AA01A105), and the Foundation Project of National Key Laboratory of Science and Technology on Communications under Grant 9140C020404110C0201. 1. R. Mesleh, H. Haas, S. Sinanovic, C. W. Ahn, and S. Yun, “Spatial modulation,” IEEE Transactions on Vehicular Technology, vol. 57, no. 4, pp. 2228–2241, 2008. 2. J. Fu, C. Hou, W. Xiang, L. Yan, and Y. Hou, “Generalised spatial modulation with multiple active transmit antennas,” in Proceedings of the IEEE Globecom Workshops (GC '10), pp. 839–844, December 2010. View at Publisher · View at Google Scholar · View at Scopus 3. P. Yang, Y. Xiao, B. Zhou, and S. Li, “Initial performance evaluation of spatial modulation OFDM in LTE-based systems,” in Proceedings of the 6th International ICST Conference on Communications and Networking in China (CHINACOM '11), pp. 102–107, August 2011. View at Publisher · View at Google Scholar · View at Scopus 4. J. Jeganathan, A. Ghrayeb, and L. Szczecinski, “Spatial modulation: optimal detection and performance analysis,” IEEE Communications Letters, vol. 12, no. 8, pp. 545–547, 2008. View at Publisher · View at Google Scholar · View at Scopus 5. Q. Tang, Y. Xiao, P. Yang, Q. Yu, and S. Li, “A new low-complexity near-ML detection algorithm for spatial modulation,” IEEE Wireless Communications Letters, no. 2, pp. 1–4, 2012. 6. C. Xu, S. Sugiura, S. Ng, and L. Hanzo, “Spatial modulation and space-time shift keying: optimal performance at a reduced detection complexity,” IEEE Transactions on Communications, vol. 61, no. 1, pp. 206–216, 2013. 7. P. Yang, Y. Xiao, L. Li, Q. Tang, and S. Li, “An improved matched-filter based detection algorithm for space-time shift keying systems,” IEEE Signal Processing Letters, vol. 19, no. 5, pp. 271–274, 2012. View at Publisher · View at Google Scholar · View at Scopus 8. P. Yang, Y. Xiao, Y. Yu, and S. Li, “Adaptive spatial modulation for wireless mimo transmission systems,” IEEE Communications Letters, vol. 15, no. 6, pp. 602–604, 2011. View at Publisher · View at Google Scholar · View at Scopus 9. P. Yang, Y. Xiao, L. Li, Q. Tang, Y. Yu, and S. Li, “Link adaptation for spatial modulation with limited feedback,” IEEE Transactions on Vehicular Technology, vol. 61, no. 8, pp. 3808–3813, 2012.
{"url":"http://www.hindawi.com/journals/ijap/2013/718482/","timestamp":"2014-04-16T12:18:14Z","content_type":null,"content_length":"138960","record_id":"<urn:uuid:8d46d32c-d602-4ae0-b7cd-f77d3ed11b8d>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
Brentwood, CA Precalculus Tutor Find a Brentwood, CA Precalculus Tutor ...I have an elementary credential to teach all elementary subjects. I can help your student. I have taken the CBEST and passed the test successfully. 22 Subjects: including precalculus, chemistry, reading, geometry ...Discover how and why things do the things they do! Learn how to make exceptional presentations for many different occasions. Reading is the most important subject to be learned by all people. 16 Subjects: including precalculus, reading, physics, geometry ...I am excited about helping students of all levels and abilities understand math concepts and practices. With my guidance, students experience a boost in confidence and skill level, resulting in better grades and a better attitude towards math. Please do make sure that you are either within my 10 mile travel radius or are willing to work with me online. 10 Subjects: including precalculus, calculus, geometry, algebra 1 ...I give discounts for extended tutoring, multiple subjects, or small groups. I'm looking forward to helping you pass with flying colors!While tutoring and teaching math and science for over 9 years, I have tutored students in a range of math subjects, including Algebra 1. I have been tutoring students in a range of math subjects, including Algebra 2 for over 9 years. 43 Subjects: including precalculus, Spanish, geometry, chemistry ...I look forward to helping people to learn new subjects and overcoming their fears regarding learning and testing. My background includes a Bachelor of Science degree in Physics and a Masters in Business Administration, plus various courses in data processing, programming, technical training and ... 21 Subjects: including precalculus, physics, geometry, calculus Related Brentwood, CA Tutors Brentwood, CA Accounting Tutors Brentwood, CA ACT Tutors Brentwood, CA Algebra Tutors Brentwood, CA Algebra 2 Tutors Brentwood, CA Calculus Tutors Brentwood, CA Geometry Tutors Brentwood, CA Math Tutors Brentwood, CA Prealgebra Tutors Brentwood, CA Precalculus Tutors Brentwood, CA SAT Tutors Brentwood, CA SAT Math Tutors Brentwood, CA Science Tutors Brentwood, CA Statistics Tutors Brentwood, CA Trigonometry Tutors Nearby Cities With precalculus Tutor Antioch, CA precalculus Tutors Byron, CA precalculus Tutors Castro Valley precalculus Tutors Danville, CA precalculus Tutors Discovery Bay precalculus Tutors Dublin, CA precalculus Tutors Knightsen precalculus Tutors Lafayette, CA precalculus Tutors Manteca precalculus Tutors Oakley, CA precalculus Tutors Pittsburg, CA precalculus Tutors Pleasant Hill, CA precalculus Tutors San Ramon precalculus Tutors Tracy, CA precalculus Tutors Woodside, CA precalculus Tutors
{"url":"http://www.purplemath.com/Brentwood_CA_Precalculus_tutors.php","timestamp":"2014-04-20T21:01:20Z","content_type":null,"content_length":"24140","record_id":"<urn:uuid:57586e55-69a7-4739-972d-ff53844eb4e7>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
Buoyancy Force Question October 21st 2008, 12:43 AM #1 Mar 2008 Buoyancy Force Question I am looking to investigate the buoyancy force of a cuboid. The dimensions of the cuboid are 30cm x 20cm x 15 cm. When the cuboid is placed in the water (the face 30cm x 20cm faces the water), 4cm of the cuboid will be submerged. If I push the cuboid so that the top face is 15mm below the surface of the water, I will have pushed the cuboid (15 - 4) + 1.5 = 12.5 cm. Therefore, using some notes that I have I have worked out the buoyancy force to be: F = p*A*y*g where p = pressure of the water (998.2 kg/m^3, pressure of tap water at 20oC), A is the cross sectional area to face the water (0.06 m), y is the overall displacement (0.125 m) and g = 9.8m/s, acceleration due to gravity. If I work this out, I get: 73.37 Newtons. First things first, is this correct? Please note that this is NOT homework, just a problem that I am looking into. I have looked at some notes where I was placing a bottle filled with sand in water, and then pushing it to a new depth. We used F = p*A*y*g to work out the buoyancy force in that case, however I dont believe that we considered the case if the bottle was pushed so far into the water that it was fully submerged (as in this case). Therefore, I think that the equation is correct so far as I push the cuboid down to a depth of 11cm so that the top face of the cuboid will be level with the top of the water, however I am unsure if I can carry the above reasoning through to when the object is fully submerged in water. I then want to consider pushing the cuboid so that the top face is 15cm and then 30cm deep, if I know that I can apply the above principle at 15mm then I know that I can extend it to these two cases. What I am really confused with (the reason i dont think that this works past 11cm) is that Archimedes said the Buoyancy Force is equal to the mass of the water displaced. However, when fully submerged, surely there wont be any more water displaced if we push the cuboid deeper and deeper?!? Any help on this would be greatly appreciated! Cheers in advance, I am looking to investigate the buoyancy force of a cuboid. The dimensions of the cuboid are 30cm x 20cm x 15 cm. When the cuboid is placed in the water (the face 30cm x 20cm faces the water), 4cm of the cuboid will be submerged. If I push the cuboid so that the top face is 15mm below the surface of the water, I will have pushed the cuboid (15 - 4) + 1.5 = 12.5 cm. Therefore, using some notes that I have I have worked out the buoyancy force to be: F = p*A*y*g where p = pressure of the water (998.2 kg/m^3, pressure of tap water at 20oC), A is the cross sectional area to face the water (0.06 m), y is the overall displacement (0.125 m) and g = 9.8m/s, acceleration due to gravity. If I work this out, I get: 73.37 Newtons. What you have written as p is usualy $\rho$ and is the density of the water (as you should be able to see from the units) The boyancy force is equal (and opposite if we are being carefull about direction) to the weight of the displaced water. In this case the cuboid is completley submerged so the boyancy force is: $F=\rho A h = \rho Vg$ where $h$ is the dimension of the cuboid normal to a face of area $A$. What you had is only applicable if the cuboid is not completly submerged. What I am really confused with (the reason i dont think that this works past 11cm) is that Archimedes said the Buoyancy Force is equal to the mass of the water displaced. However, when fully submerged, surely there wont be any more water displaced if we push the cuboid deeper and deeper?!? That is not what Archimedes principle is, it is that the boyancy force is equal to the weight of the fluid displaced. October 31st 2008, 08:03 AM #2 Grand Panjandrum Nov 2005
{"url":"http://mathhelpforum.com/advanced-applied-math/54880-buoyancy-force-question.html","timestamp":"2014-04-17T21:51:07Z","content_type":null,"content_length":"36945","record_id":"<urn:uuid:94af0017-9a87-4f1e-a32f-e8389ff255f0>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
Interpolated estimation of markov source parameters from sparse data,” Pattern Recognit. Practice Results 1 - 10 of 285 - COMPUTATIONAL LINGUISTICS , 1996 "... The concept of maximum entropy can be traced back along multiple threads to Biblical times. Only recently, however, have computers become powerful enough to permit the widescale application of this concept to real world problems in statistical estimation and pattern recognition. In this paper we des ..." Cited by 1082 (5 self) Add to MetaCart The concept of maximum entropy can be traced back along multiple threads to Biblical times. Only recently, however, have computers become powerful enough to permit the widescale application of this concept to real world problems in statistical estimation and pattern recognition. In this paper we describe a method for statistical modeling based on maximum entropy. We present a maximum-likelihood approach for automatically constructing maximum entropy models and describe how to implement this approach efficiently, using as examples several problems in natural language processing. , 1998 "... We present an extensive empirical comparison of several smoothing techniques in the domain of language modeling, including those described by Jelinek and Mercer (1980), Katz (1987), and Church and Gale (1991). We investigate for the first time how factors such as training data size, corpus (e.g., Br ..." Cited by 850 (20 self) Add to MetaCart We present an extensive empirical comparison of several smoothing techniques in the domain of language modeling, including those described by Jelinek and Mercer (1980), Katz (1987), and Church and Gale (1991). We investigate for the first time how factors such as training data size, corpus (e.g., Brown versus Wall Street Journal), and n-gram order (bigram versus trigram) affect the relative performance of these methods, which we measure through the cross-entropy of test data. In addition, we introduce two novel smoothing techniques, one a variation of Jelinek-Mercer smoothing and one a very simple linear interpolation technique, both of which outperform existing methods. 1 - Computational Linguistics , 1992 "... We address the problem of predicting a word from previous words in a sample of text. In particular we discuss n-gram models based on calsses of words. We also discuss several statistical algoirthms for assigning words to classes based on the frequency of their co-occurrence with other words. We find ..." Cited by 698 (5 self) Add to MetaCart We address the problem of predicting a word from previous words in a sample of text. In particular we discuss n-gram models based on calsses of words. We also discuss several statistical algoirthms for assigning words to classes based on the frequency of their co-occurrence with other words. We find that we are able to extract classes that have the flavor of either syntactically based groupings or semantically based groupings, depending on the nature of the underlying statistics. - IEEE Transactions on Acoustics, Speech and Signal Processing , 1987 "... Abstract-The description of a novel type of rn-gram language model is given. The model offers, via a nonlinear recursive procedure, a com-putation and space efficient solution to the problem of estimating prob-abilities from sparse data. This solution compares favorably to other proposed methods. Wh ..." Cited by 663 (1 self) Add to MetaCart Abstract-The description of a novel type of rn-gram language model is given. The model offers, via a nonlinear recursive procedure, a com-putation and space efficient solution to the problem of estimating prob-abilities from sparse data. This solution compares favorably to other proposed methods. While the method has been developed for and suc-cessfully implemented in the IBM Real Time Speech Recognizers, its generality makes it applicable in other areas where the problem of es-timating probabilities from sparse data arises. Sparseness of data is an inherent property of any real text, and it is a problem that one always encounters while collecting fre-quency statistics on words and word sequences (m-grams) from a text of finite size. This means that even for a very large data col-lection, the maximum likelihood estimation method does not allow Turing’s estimate PT for a probability of a word (m-gram) which occurred in the sample r times is r* PT = where r We call a procedure of replacing a count r with a modified count r ’ “discounting ” and a ratio rt/r a discount coefficient dr. When r ’ = r*, we have Turing’s discounting. Let us denote the m-gram wl, *.., w, as wy and the number of times it occurred in the sample text as c(wT). Then the maxi-mum likelihood estimate is - COMPUTATIONAL LINGUISTICS , 1990 "... In this paper, we present a statistical approach to machine translation. We describe the application of our approach to translation from French to English and give preliminary results. ..." Cited by 585 (8 self) Add to MetaCart In this paper, we present a statistical approach to machine translation. We describe the application of our approach to translation from French to English and give preliminary results. - IN PROCEEDINGS OF THE THIRD CONFERENCE ON APPLIED NATURAL LANGUAGE PROCESSING , 1992 "... We present an implementation of a part-of-speech tagger based on a hidden Markov model. The methodology enables robust and accurate tagging with few resource requirements. Only a lexicon and some unlabeled training text are required. Accuracy exceeds 96%. We describe implementation strategies and op ..." Cited by 356 (5 self) Add to MetaCart We present an implementation of a part-of-speech tagger based on a hidden Markov model. The methodology enables robust and accurate tagging with few resource requirements. Only a lexicon and some unlabeled training text are required. Accuracy exceeds 96%. We describe implementation strategies and optimizations which result in high-speed operation. Three applications for tagging are described: phrase recognition; word sense disambiguation; and grammatical function assignment. - Readings in Speech Recognition , 1990 "... In the case of a trlgr~m language model, the proba-bility of the next word conditioned on the previous two words is estimated from a large corpus of text. The re-sulting static trigram language model (STLM) has fixed probabilities that are independent of the document being dictated. To improve the l ..." Cited by 337 (5 self) Add to MetaCart In the case of a trlgr~m language model, the proba-bility of the next word conditioned on the previous two words is estimated from a large corpus of text. The re-sulting static trigram language model (STLM) has fixed probabilities that are independent of the document being dictated. To improve the language mode] (LM), one can adapt the probabilities of the trigram language model to match the current document more closely. The partially dictated document provides significant clues about what words ~re more likely to be used next. Of many meth-ods that can be used to adapt the LM, we describe in this paper a simple model based on the trigram frequencies es-timated from the partially dictated document. We call this model ~ cache trigram language model (CTLM) since we are c~chlng the recent history of words. We have found that the CTLM red,aces the perplexity of a dictated doc-ument by 23%. The error rate of a 20,000-word isolated word recognizer decreases by about 5 % at the beginning of a document and by about 24 % after a few hundred words. , 1991 "... Parameter sharing plays an important role in statistical modeling since training data are usually limited. On the one hand, we would like to use models that are as detailed as possible. On the other hand, with models too detailed, we can no longer reliably estimate the parameters. Triphone generaliz ..." Cited by 275 (7 self) Add to MetaCart Parameter sharing plays an important role in statistical modeling since training data are usually limited. On the one hand, we would like to use models that are as detailed as possible. On the other hand, with models too detailed, we can no longer reliably estimate the parameters. Triphone generalization may force two models to be merged together when only parts of the model output distributions are similar, while the rest of the output distributions are different. This problem can be avoided if clustering is carried out at the distribution level. In this paper, a shared-distribution model is proposed to replace generalized triphone models for speaker-independent continuous speech recognition. Here, output distributions in the hidden Markov model are shared with each other if they exhibit acoustic similarity. In addition to detailed representation, it also gives us the freedom to use a large number of states for each phonetic model. Although an increase in the number of states will inc... - Computer, Speech and Language , 1996 "... An adaptive statistical languagemodel is described, which successfullyintegrates long distancelinguistic information with other knowledge sources. Most existing statistical language models exploit only the immediate history of a text. To extract information from further back in the document's histor ..." Cited by 242 (11 self) Add to MetaCart An adaptive statistical languagemodel is described, which successfullyintegrates long distancelinguistic information with other knowledge sources. Most existing statistical language models exploit only the immediate history of a text. To extract information from further back in the document's history, we propose and use trigger pairs as the basic information bearing elements. This allows the model to adapt its expectations to the topic of discourse. Next, statistical evidence from multiple sources must be combined. Traditionally, linear interpolation and its variants have been used, but these are shown here to be seriously deficient. Instead, we apply the principle of Maximum Entropy (ME). Each information source gives rise to a set of constraints, to be imposed on the combined estimate. The intersection of these constraints is the set of probability functions which are consistent with all the information sources. The function with the highest entropy within that set is the ME solution...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=136562","timestamp":"2014-04-18T08:44:38Z","content_type":null,"content_length":"37539","record_id":"<urn:uuid:0f73ed78-a6e9-4581-abfa-b0b4fb6be01e>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
Macungie Prealgebra Tutor Find a Macungie Prealgebra Tutor ...I tutored a friend when we were in middle school - her parents hired me to help her improve her skills. I am a certified special education teacher with five years of teaching experience, including students diagnosed with ADD/ADHD. I am also a TSS, providing emotional and behavioral therapy to children, which has also included multiple clients with ADD/ADHD. 31 Subjects: including prealgebra, English, reading, writing I am a tutor with over five years of experience teaching math, science, and humanities at the secondary level. I have worked with students from all backgrounds; I have also worked extensively with children with disabilities. I hold a BA in Anthropology from Florida Atlantic University and am currently a graduate student in the Anthropology Department. 55 Subjects: including prealgebra, reading, Spanish, writing I am currently employed as a secondary mathematics teacher. Over the past eight years I have taught high school courses including Algebra I, Algebra II, Algebra III, Geometry, Trigonometry, and Pre-calculus. I also have experience teaching undergraduate students at Florida State University and Immaculata University. 9 Subjects: including prealgebra, geometry, GRE, algebra 1 ...Students should not have to limit their dreams because of a lack of foundation in mathematics. Start 1 on 1 tutoring today!I have taught Algebra 1 for over 10 years. I have several worksheets developed to help students gain confidence. 22 Subjects: including prealgebra, geometry, algebra 1, GRE ...Allow an experienced PA certified math teacher to make your college admissions officers take notice!Algebra is often a stumbling block for the student that finds abstract math difficult to conceptualize. I work one-on-one with my students to focus on their weaknesses by utilizing their strengths... 12 Subjects: including prealgebra, calculus, geometry, algebra 1
{"url":"http://www.purplemath.com/Macungie_prealgebra_tutors.php","timestamp":"2014-04-21T11:01:32Z","content_type":null,"content_length":"23989","record_id":"<urn:uuid:126a6105-19a4-40df-be87-b9f67cbf3add>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
simple graph is a tree? May 7th 2009, 07:32 PM #1 May 2009 simple graph is a tree? Prove that a simple graph is a tree if and only if it is connected but the deletion of any of its edges produces a graph that is not connected. Can someone here please help me with this proof? Thanks in advance ... This really depends on the definition of a tree you were given. But for example if you were given the definition that a tree is a connected graph without any cycles, it follows straight from the definition that the graph is connected, and if you remove a path between a and b, then no path exists between a and b, because otherwise in the original situation there would exist two paths, and thus a cycle. The same argument works for other definitions like a tree is a connected graph with each vertice connected by only ONE path etc. May 9th 2009, 07:45 AM #2 Junior Member May 2009 Tokyo, Japan
{"url":"http://mathhelpforum.com/discrete-math/88093-simple-graph-tree.html","timestamp":"2014-04-18T23:02:53Z","content_type":null,"content_length":"31383","record_id":"<urn:uuid:ab17cda3-3f62-40fd-ba32-29f1017637cc>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
How to handle secondary keys in partitioning? Re: How to handle secondary keys in partitioning? Date: May 05, 2010 02:20AM OK in this case it depends on how many partitions and how many matching rows there are for 'SELECT * FROM tp WHERE sender_id=1 ORDER BY date'. Since it is partitioned by recipient_id pruning will not kick in. Lets say you have N partitions and M matching rows (sender_id = 1). This means that it has to do a index ordered read from each partition, and merge sort all N rows. And while there is still matching rows a new index ordered read would be done, until all matching rows are found. So if (M < N) i.e. more partitions than matching rows, it will need to do more index lookups (and the sum of all steps in the b-tree index will be higher). But if (M > N) i.e. more matching rows than partitions, the number of index reads will be the same (but the sum of the b-tree depth will be slightly higher, and there will be an additional sorting So there is an overhead of index search in partitioning, since the indexes are partitioned just like the data, but this overhead is higher per row if there are more partitions than matching rows and shrinks per row as the matching row number increases. And the cost for sorting is also added in case of partitioning. The type of performance degradation is hard to estimate without benchmarking, it depends on how many partitions and how many matching rows are found, as well as used engine and if a covering index can be used etc. What I would suggest is that you benchmark your load on both partitioned and non partitioned tables. And if possible, add the results to this topic! (Note that this is only index reads we discuss, the cost for index writes should be lower for partitioned tables, since the b-tree depth is lower per partition than it would be for a non partitioned If you usually query recipient_id and sender_id independently, I would suggest you to try to also subpartition your data, perhaps like: PARTITION BY LIST (recipient_id % 5) SUBPARTITION BY HASH (sender_id) PARTITIONS 5 (PARTITION pREC_0 VALUES IN (0), PARTITION pREC_1 VALUES IN (1), PARTITION pREC_2 VALUES IN (2), PARTITION pREC_3 VALUES IN (3), PARTITION pREC_4 VALUES IN (4)) That way you have partitioned on both recipient_id and sender_id equally, and have a total of 25 partitions. (you could also use RANGE instead of LIST, but I used LIST to get an easy way to equally distribute both columns.)
{"url":"http://forums.mysql.com/read.php?106,366049,366217","timestamp":"2014-04-21T07:20:52Z","content_type":null,"content_length":"22887","record_id":"<urn:uuid:b3c451be-27a0-4077-8178-c8be7307fee7>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
Vol. 6, No. 1, September 2011 Title: Integrals Involving I-Function Authors: U.K. Saha, L.K. Arora and B.K. Dutta Abstract: In this paper, we have presented certain integrals involving product of the I-function with exponential function, Gauss's hypergeometric function and Fox's H-function. The results derived here are basic in nature and may include a number of known and new results as particular cases. PP. 1-14 Title: Solution for Boundary Value Problem of non-integer order in L^2- space Author: Azhaar H. Sallo Abstract: In this paper, we shall prove the existence and uniqueness of a square - integrable solution in L^2-space, for the boundary value problem of non- integer order which has the form: ^cD^α[x] y(x) = f(x, y(x)), 1<α ≤2 y(a) = y[a], y(b) = y[b] Where ^cD^α is the Caputo fractional derivative and a, b are positive constants with b not equal a. The contraction mapping principle has been used in establishing our main results. PP. 15-24 Title: Combined Effect of MHD and Radiation on Unsteady Transient Free Convection Flow between Two Long Vertical Parallel Plates with Constant Temperature and Mass Diffusion Authors: U. S.Rajput and P. K. Sahu Abstract: The paper studies combined effects of MHD and radiation on unsteady transient free convection flow of a viscous, incompressible, electrically conducting and radiating fluid between two long vertical parallel plates with constant temperature and mass diffusion, under the assumption that the induced magnetic field is negligible. TheLaplacetransform method has been used to find the solutions for the velocity, temperature and concentration profiles. The velocity, temperature, concentration and skin-friction are studied for different parameters like Prandtl number, Schmidt number, magnetic parameter, buoyancy ratio parameter and time. PP. 25-39 Title: On the Solution of Fractional Kinetic Equation Authors: B.K. Dutta, L.K. Arora and J. Borah Abstract: In this paper, the solution of a class of fractional Kinetic equation involving generalized I-function has been discussed. Special cases involving the I-function, H-function, generalized M-series, generalized Mittag-Leffer functions are also discussed. Results obtained are related to recent investigations of possible astrophysical solutions of the solar neutrino problem. PP. 40-48 Title: A Parametric Study on Multi-Objective Integer Quadratic Programming Problems under Uncertainty Author: Osama E. Emam Abstract: This paper presents a parametric study on multi-objective integer quadratic programming problem under uncertainty. The proposed procedure presents a quadratic multi-objective integer programming problem with a stochastic parameters in the right hand sides, and all constraints occurs under certain probability. We consider all random variables are normally distributed. We shall be essentially concerned with three basic notions: the set of feasible parameters; the solvability set and the stability set of the first kind (SSK1). An algorithm to clarify the developed theory as well as an illustrative example is presented. PP. 49-60 Title: The n- Dimensional Generalized Weyl Fractional Calculus Containing to n-Dimensional $\bar{H}$-Transforms Authors: V. B. L. Chaurasia and RaviShanker Dubey Abstract: The object of this paper is to establish a relation between the n-dimensional $\bar{H}$-transform involving the Weyl type n-dimensional Saigo operator of fractional integration. PP. 61-72 Title: An Optimal Distributed Control for Age-Dependent Population Diffusion System Authors: Jun Fu, Renzhao Chen and Xuezhang Hou Abstract: The optimal distributed control problem for age-dependent population diffusion system governed by integral partial differential equations is investigated in this paper. As new results, the existence and uniqueness of the optimal distributed control are proposed and proved, a necessary and suffcient conditions for the control to be optimal are obtained, and the optimality system consisting of integro-partial differential equations and variational inequalities are constructed in which the optimal controls can be determined. The applications of penalty shifting method for infinite dimensional systems to approximate solutions of control problems for the population system are researched. An approximation program is structured, and the convergence of the approximating sequences in appropriate Hilbert spaces is derived. The results in this paper may significantly provide theoretical reference for the practical research of the control problem in population systems. PP. 73-85
{"url":"http://www.emis.de/journals/GMN/volumes/vol_6_no_1_september_2011.html","timestamp":"2014-04-20T11:03:28Z","content_type":null,"content_length":"23896","record_id":"<urn:uuid:f7e0a502-ae94-4b7d-ab95-5ddb351952eb>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
Particle in 3D Rectangular Rigid Box From ICT-Wiki 6.2 Particle in 3D Rectangular Rigid Box 6.2.1 Schrödinger Equation in Cartesian Co-ordinates Consider a particle of mass $m\,$ and energy $E\,$ constrained to move in a three-dimensional rectangular potential well having sides of length equal to $L_x,\, L_y,\, L_z$ parallel to the $x,\, y,\, z$ -axes respectively. Suppose there is no force acting on the particle in the box. The appropriate potential is $V(x,\,y,\,z) = 0$ for its position at a point $(x,\, y,\, z)$ given by $0< \, x \, < \, L_x$ $0< \, y \, < \, L_y$ $0< \, z \, < \, L_z$ and $V(x,\,y,\,z) = \infty$ outside the box Fig. Particle in 3-D rectangular box The total kinetic energy $T\,$ is given by $T=-\frac{\hbar^2}{2m}\left(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}\right)=-\frac{\hbar^2}{2m}\triangledown^2\,...........$ ---------------(1) $\triangledown^2\,$ is called as the Laplacian operator $-\frac{\hbar^2}{2m}\triangledown^2 \psi=E \psi$ For the motion of the particle inside the box the Schrödinger's time independent equation: $-\frac{\hbar^2}{2m}\left(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}\right)\psi(x,\,y,\,z)=E \psi(x,\,y,\,z)$ ---------------(2) It is a partial differential equation. It is assumed that the function $\psi\,$ can be written as a product of three functions, $X,\, Y$ and $Z\,$ each of which depends on only one of the coordinates $x,\, y,\, z$ respectively. The equation can be solved by the technique of separation of variables. $\psi=X(x).\,Y(y).\,Z(z)$ ---------------(3) The Hamiltonian is the sum of three independent one dimensional Hamiltonians in totally separate variables $H= H_x+H_y+H_z\,$.---------------(4) $[H_x+H_y+H_z]\psi(x,\,y,\,z)=E \psi(x,\,y,\,z)$ ---------------(5) Substituting for $\psi\,$ the Schrödinger's equation in the separated variables is $X^{''}YZ+XY^{''}Z+XY^{''}Z+\frac{2mE}{\hbar^2}XYZ=0$ ---------------(6) The terms $X^{''},\, Y^{''},\, Z^{''}$ are ordinary derivatives instead of the partial derivatives as each of the functions $X,\, Y,\, Z$, is a function of one variable only. Dividing the equation by Each term of this equation depends on a different variable $x,\, y$, or $z\,$ and the three variables $x,\, y$, and $z\,$ are independent. The term $k^2=\frac{2mE}{\hbar^2}$ is a constant for a particular value of $T\,$ kinetic energy. Velocity of the particle is a vector quantity. Velocity and corresponding kinetic energy can be resolved in three components along three co-ordinate axes $x, \, y,\, z$. $E \,= E_x+E_y+E_z$ ---------------(8) The only way for the equation to remain valid for all of $x,\, y$, and $z\,$ in the interval is for each term to be constant. Therefore with separation of variables, the Schrödinger's equation separates into three independent equations in $x,\, y,\, z$ respectively as follows $\frac{-\hbar^2}{2m}\left(\frac{\partial^2 X(x)}{\partial x^2}\right) \,=E_x X \,(x)$ $\frac{-\hbar^2}{2m}\left(\frac{\partial^2 Y(y)}{\partial y^2}\right) \,=E_y Y \,(y)$ $\frac{-\hbar^2}{2m}\left(\frac{\partial^2 Z(z)}{\partial z^2}\right) \,=E_z Z \,(z)$ ---------------(9) These differential equations have the solution: $X(x) \,= A_x \,sin \,(k_x x)+B_x \,cos\, (k_x x) \,\,\,\,E_x =\frac{\hbar^2 k_x^2}{2m}$ ---------------(10a) $Y(y) \,= A_y \,sin \,(k_y y)+B_y \,cos\, (k_y y) \,\,\,\,E_y =\frac{\hbar^2 k_y^2}{2m}$ ---------------(10b) $Z(z) \,= A_z \,sin \,(k_z z) +B_z \,cos\, (k_z z) \,\,\,\,E_z =\frac{\hbar^2 k_z^2}{2m}$ ---------------(10c) Applying the boundary conditions $\psi(x=0)=0\rArr\, B_x=0 \,\,\,\,\,\,\,\,\psi(x=L_x)=0\rArr \, k_x L_x=n_x \pi$ ---------------(10a) $\psi(y=0)=0\rArr \,B_y=0 \,\,\,\,\,\,\,\,\psi(y=L_y)=0\rArr \, k_y L_y=n_y \pi$ ---------------(10b) $\psi(z=0)=0\rArr \,B_z=0 \,\,\,\,\,\,\,\,\psi(z=L_z)=0\rArr \, k_z L_z=n_z \pi$ ---------------(11c) The resulting x-components of eigenfunctions are of the form $\Psi_n=\sqrt{\frac{2}{L}}\,sin\,\left(\frac{n\pi x}{L}\right) \,\,\,\,\mathbf{n}=1,\,2,\,3,\,4....$ The solution for $y,\, z$ components have the same form. The total normalized eigenfunctions for the motion of the particle in the box are given by $\psi(x,\,y,\,z)=\sqrt{\frac{8}{L_x L_y L_z}}sin(k_x x)\,sin\, (k_y y)\,sin(k_z z)$ $k_x=\frac{\pi n_x}{L_x} ,\, k_y=\frac{\pi n_y}{L_y} ,\, k_z=\frac{\pi n_z}{L_z}$ ---------------(12) $n_x= 1,\, 2,\, 3,\, 4\, ....$ $n_y= 1,\, 2,\, 3,\, 4 \,....$ $n_z= 1,\, 2,\, 3,\, 4\, ....$ ---------------(13) The eigenfunctions are zero outside the box. The eigenvalues of energy $E\,$ are given by $E=\frac{\hbar^2(k_x^2+k_y^2+k_z^2)}{2m}=\frac{\hbar^2\pi^2}{2m}\left(\frac{n_x^2}{L_x^2}+\frac{n_y^2}{L_y^2}+\frac{n_z^2}{L_z^2}\right)$ ---------------(14) These eigenvalues of the energy $E\,$ are called as the energy levels of the particle. They form quantized or discrete energy spectrum. $\ast$ The integers $n_x,\, n_y,\, n_z$ are called as the quantum numbers. They are required to specify each stationary state. If the sign of the quantum numbers is changed, then there is no change in the energy and in the wave function except for the minus sign. Hence all the stationary states are given by the positive integral values of $n_x,\, n_y,\, n_z$. $\ast$ The value of any of the quantum numbers $n_x,\, n_y,\, n_z$ cannot be zero. If any one is taken as zero then $\psi (x,\,y,\,z) = 0$. It implies that the particle does not exist in the box $\ast$ The lowest possible energy is obtained for $n_{x,} =1,\, n_{y,}=1,\, n_z =1$. It is called as the ground state energy and it depends on the values of $L_x,\, L_y,\, L_z$ The wave function corresponding to ground state energy is called as ground state wave function and denoted by $\Psi_{111}\,$ $\ast$ The excited state energy levels $E_{nx,\,ny,\,nz}$ are obtained by substituting different positive integer values for $n_x,\,n_y,\,n_z$ and the corresponding eigen functions $\Psi_{nx,\,ny, \,nz}$ are labeled by these quantum numbers
{"url":"http://ictwiki.iitk.ernet.in/wiki/index.php/Particle_in_3D_Rectangular_Rigid_Box","timestamp":"2014-04-17T15:48:05Z","content_type":null,"content_length":"25730","record_id":"<urn:uuid:d55a6610-196d-45f1-bb3a-d5b4b17b3a76>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
word problem April 21st 2011, 05:53 AM word problem cant figure this out at all. basically there are 5 cds that come in a cereal box randomly. what are the chances of getting all 5 cds if you buy 12 boxes? can someone point me in the right direction? April 21st 2011, 12:11 PM You need more information, i.e. the probability of getting a cd April 21st 2011, 02:27 PM This is an inclusion/exclusion question. I deleted one reply because the notation was too complicated. Lets say the CD's are $a,b,c,d,e$. Let X be the event that at least one disk x is present in a sample of 12. Now we want to show http://quicklatex.com/cache3/ql_96c0...d8f7d1f_l3.png That is http://quicklatex.com/cache3/ql_9162...289b3a6_l3.png Note that http://quicklatex.com/cache3/ql_807c...5c6672e_l3.png April 22nd 2011, 11:49 AM Archie Meade If the cereal box is guaranteed to have a CD in it and each of the CDs have equal probability of being placed in a cereal box, then you can find the probability of NOT finding all five CDS in the 12 boxes by calculating the probabilities that (1) one of the CDs is not in the 12 packets (2) two of the CDs are not in the 12 packets (3) three of the CDs are not in the 12 packets (4) four of the CDs are not in the 12 packets All 5 CDs cannot be missing of course. The probability that one CD is missing is 5C1(4/5)^(12) because one of the other 4 CDs are in each box for each missing one. The probability that two CDs are missing is 5C2(3/5)^(12) because one of the other 3 CDs is in each box for every missing pair. The probability that three CDs are missing is 5C3(2/5)^(12) Also, include four CDs missing. Now the question is... Does the probability of one being missing include the probability of 2, 3 or 4 also missing ? Subtract the probability of at least one CD missing from 1.
{"url":"http://mathhelpforum.com/statistics/178245-word-problem-print.html","timestamp":"2014-04-21T03:10:02Z","content_type":null,"content_length":"7327","record_id":"<urn:uuid:bad08434-def6-4d0d-89cd-b60698131619>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
Square roots by hand: Divide and Average Date: 8/31/96 at 8:10:55 From: Janet Rodgers Subject: Calculating square roots Dear Dr. Math, My 7th grade son has been trying to find out how to calculate square roots by hand. I remember learning to do this in the old days before calculators, but I can't find the technique in any of my books. Can you find out how to do this? Date: 9/1/96 at 0:29:0 From: Doctor Jerry Subject: Re: Calculating square roots There are several methods, but the most ancient (dating back to the Babylonians, ca. 1700 B.C.) and best is the divide-and-average method. Here's the method. Let A be the number whose square root is wanted. Determine in some way a first guess g of the square root of A. The remaining part of the method is to repeat the following step: Divide A by the current guess g and then average the quotient with g. The average is the new Repeat until you have calculated the square root to desired accuracy. Roughly, the number of decimal places doubles with each repetition. Suppose we want to calculate the square root of 2. Take the first guess to be 1.5 (or 1 or ...). I'm going to copy the results directly from my calculator screen. In a "production line" calculation the intermediate results need not be written down - just keep them in the 1. 2/1.5 = 1.33333333333 (1.5+1.33333333333)/2 = 1.41666666666 2. 2/1.41666666666 = 1.41176470589 (1.41666666666+1.41176470589)/2 = 1.41421568628 Since the square root of 2 is (to calculator accuracy) 1.41421356237, the algorithm is doing rather well. One more step gives the result to 11 significant figures! -Doctor Jerry, The Math Forum Check out our web site! http://mathforum.org/dr.math/ Date: 9/1/96 at 0:32:23 From: Doctor Ken Subject: Re: Calculating square roots There are a couple of other methods to calculate square roots that you can find in our archives, or look under "Square roots..." in our Dr. Math FAQ at -Doctor Ken, The Math Forum Check out our web site! http://mathforum.org/dr.math/
{"url":"http://mathforum.org/library/drmath/view/52623.html","timestamp":"2014-04-19T00:30:36Z","content_type":null,"content_length":"7152","record_id":"<urn:uuid:6872edb6-b8e6-4a46-983a-aa90955209dd>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
Selected Publications 1. Sigman, K. (1988). Regeneration in Tandem Queues with Multi-Server Stations. Journal of Applied Probability, 25, 391-403. 2. Sigman, K. (1988). Queues as Harris Recurrent Markov Chains. Queueing Systems (QUESTA),3, 179-198. 3. Sigman, K. (1989). Notes on the Stability of Closed Queueing Networks. Journal of Applied Probability, 26, 678-682 [Correction: (1990), 27]. 4. Chao, X., M. Pinedo and K. Sigman, (1989). On the Interchangeability and Stochastic Ordering of Exponential Queues in Tandem with Blocking, Probability in the Engineering and Informational Sciences (PEIS), 3, 223-236. 5. Sigman, K., (1990). One-Dependent Regenerative Processes and Queues in Continuous Time, Mathematics of Operations Research, 15, 175-189. 6. Sigman, K., (1990). The Stability of Open Queueing Networks, Stochastic Processes and their Applications, 35,11-25. 7. Sigman, K.(1991). A Note On A Sample-Path Rate Conservation Law and its Relationship with H= l G. Advances in Applied Probability, 23, 662-665. 8. Gelenbe, E., P. Glynn and K. Sigman, (1991). Queues with Negative Arrivals, Journal of Applied Probability, 28, 245-250. 9. Sigman, K. (1992). Light Traffic For Work-Load In Queues. Queueing Systems (QUESTA), 4, 429-442. 10. Browne, S. and K. Sigman (1992) . Work-Modulated Queues with Applictions to Storage Processes. Journal of Applied Probability, 3, 699-712. 11. Glynn, P. and K. Sigman, (1992). Uniform Cesaro Limit theorems for Synchronous Processes with Applications to Queues. Stochastic Processes and their Applications, 40, 29-44. 12. Sigman, K. and G. Yamazaki (1992). Fluid Models with Burst Arrivals: A Sample-Path Analysis. Probability in the Engineering and Informational Sciences (PEIS), 6, 17-27. 13. Sigman, K., and D. Simchi-Levi (1992). A Light Traffic Heuristic for an M/G/1 Queue with Inventory. Annals of Operations Research, 40, 371-380. 14. Yamazaki, G., K. Sigman and M. Miyazawa (1992). Moments in Infinite Channel Queues. Computers and Mathematics with Applications, 24, 1/2, 1-6. 15. Kiang, S. and K. Sigman (1992). Burst Fluid Models with General Flow and Process Rates.(Technical Report, Columbia University, Department of IEOR) 16. Bardhan, I., and K. Sigman (1993). Rate Conservation Law for Stationary Semimartingales. Probability in the Engineering and Informational Sciences (PEIS), 7,1-17. 17. Sigman, K. and R. Wolff (1993). A Review of Regenerative Processes. SIAM Review, 2, 269-288. 18. Sigman, K. and D. Yao. (1993) Finite Moments for Inventory Processes. Annals of Applied Probability, 3,765-778. 19. Bardhan, I. and K. Sigman (1994). Stationary regimes for inventory processes. Stochastic Processes and their Applications, 56 , 77-86. 20. Sigman, K., Thorisson, H. and R.W. Wolff (1994). A note on the existence of regeneration times. J. of Applied Probability, 31,1116-1122. 21. Asmussen, S. and K. Sigman (1996). Monotone Stochastic Recursions and their Duals. Probability in the Engineering and Informational Sciences (PEIS),10, 1-20. 22. Jain, G. and K. Sigman (1996) . A Polleczek-Khintchine Formulation for M/G/1 Queues with Disasters. Journal of Applied Probability,33, 1191-1200. 23. Jain, G. and K. Sigman (1996) . Generalizing the Polleczek-Khintchine Formula to Account for Arbitrary Work Removal. Probability in the Engineering and Informational Sciences (PEIS) 10, 519-531. 24. Sigman, K. (1996) . Queues Under Preemptive LIFO and Ladder Height Distributions for Risk Processes: A Duality. Stochastic Models, 12,4, 725-736. 25. P. Glasserman, K. Sigman and D. Yao (Editors) (1996). Stochastic Networks: Stability and Rare Events. Springer Lecture Notes in Statistics , 117, Springer-Verlag, New York. 26. A. Scheller-Wolf and K. Sigman (1997) . Delay Moments for FIFO GI/GI/s Queues. Queueing Systems (QUESTA)(to appear) 27. A. Scheller-Wolf and K. Sigman (1997) . New Bounds for Expected Delay in FIFO GI/GI/c Queues Queueing Systems (QUESTA), 25, 77-96. 28. Boucherie, R.J., Boxma, O. and K. Sigman (1997). A note on negative customers, GI/G/ workload, and risk processes Probability in the Engineering and Informational Sciences (PEIS), 10, 305-311. 29. P. Glynn and K. Sigman (1999). Independent Sampling of a Stochastic Process Stoch. Proc. Appls., 74, 151-164. 30. A. Scheller-Wolf and K. Sigman (1998) . Moments in Tandem Queues. Operations Research ,46, 378-380. 31. Asmussen, S., Kluppelberg, C, and K. Sigman (1999). Sampling at subexponential times, with queueing applications Stoch. Proc. Appls., 79, 265-286. 32. T. Huang and K. Sigman (1999). Steady-state asymptotics for tandem, split-match and other feedforward queues with heavy-tailed service Queueing Systems (QUESTA), 33, 233-260. 33. K. Sigman (1999). A Primer on heavy-tailed distributions Queueing Systems (QUESTA), 33, 261-275. 34. B. Bl aszczyszyn and K. Sigman (1999). Risk and Duality in Multidimensions Stoch. Proc. Appls., 83, 331-356. 35. K. Sigman (1999)(Guest Editor). Special Volume on Queues with Heavy-tailed Distributions Queueing Systems (QUESTA), 33. 36. R. Ryan and K. Sigman (2000). Continuous-time monotone stochastic recursions and duality Adv. Appl. Prob, 32,426-445. 37. R. Erikson and K. Sigman (2000). A simple stochastic model for close US presidential elections. 38. R. Erikson and K. Sigman (2000). Gore favored in the Electoral College. 39. R. Erikson and K. Sigman (2000). A dead heat and the Electoral College (Bush verus Gore). 40. M. Miyazawa, G. Nieuwenhuis, and K. Sigman (2000). Palm theory for random time changes, JAMSA, 14, 55-74. 41. Mor Harchol-Balter, K. Sigman and Adam Wierman (2002). Asymptotic Convergence of Scheduling Policies with respect to Slowdown. Performance Evaluation, 49, 241-256. International Symposium on Computer Modeling, Measurement and Evaluation. 42. L. Munasinghe and K. Sigman (2004). A hobo syndrome? Mobility, wages, and job turnover. Labour Economics, 11, 191-218. 43. M. Nakayama, P. Shahabuddin, and K. Sigman (2004). On finite exponential moments for branching processes and busy periods for queues. Journal of Applied Probability (Special Volume), 41A, 44. G. Iyengar and K. Sigman (2004). Exponential Penalty Function Control of Loss networks. Annals of Applied Probability, 14, 1698-1740. 45. K. Sigman (2004) Queueing theory. Encyclopedia of Actuarial Science (EoAS), 3, 1357-1361 46. J. Cosyn and K. Sigman (2004). Stochastic networks: admission and routing using penalty functions. Queueing Systems (QUESTA), 48, 237-262. 47. J. Cosyn and K. Sigman (2004). Admission control of the infinite server queue with applications to bandwidth control (submitted). 48. K. Sigman (2006). Stationary marked point processes. (Contributed invited chapter in Springer Handbook of Engineering Statistics (Part A), Springer-Verlag. 49. K. Sigman and U. Yechiali (2007). Stationary remaining service time conditional on queue length. Operations Research Letters, 35, 581-583. 50. Harchol Balter, Varun Gupta, K. Sigman and W. Whitt (2007). Analysis of join-the-shortest-queue routing for web server farms. Performance Eval., 64, 1062-1081. 51. K. Sigman and W. Whitt (2011). Heavy-traffic limits for nearly deterministic queues. Journal of Applied Probability, 48, 657-678. 52. K. Sigman and W. Whitt (2011) Heavy-traffic limits for nearly deterministic queues II: Stationary distributions. Queueing Systems, 69, 145-173. 53. K. Sigman (2011). Exact simulation of the stationary distribution of the FIFO M/G/c queue. Journal of Applied Probability, 48A,209-216. 54. J. Blanchet and K. Sigman (2011). On Exact Sampling of Stochastic Perpetuities. Journal of Applied Probability, 48A, 165-182. 55. V. Goyal and K. Sigman (2011) On simulating a class of Bernstein polynomials. ACM Transactions on Modeling and Computer Simulation (TOMACS), 22, Issue 2, (to appear).
{"url":"http://www.columbia.edu/~ks20/papers.html","timestamp":"2014-04-18T03:45:50Z","content_type":null,"content_length":"9623","record_id":"<urn:uuid:7f7af931-4ddd-4eaa-bf14-b1289dd57397>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding similarities in a multidimensional array up vote 0 down vote favorite Consider a sales department that sets a sales goal for each day. The total goal isn't important, but the overage or underage is. For example, if Monday of week 1 has a goal of 50 and we sell 60, that day gets a score of +10. On Tuesday, our goal is 48 and we sell 46 for a score of -2. At the end of the week, we score the week like this: In this example, both Monday (0,0) and Thursday and Friday (0,3 and 0,4) are "hot" If we look at the results from week 2, we see: For week 2, the end of the week is hot, and Tuesday is warm. Next, if we compare weeks one and two, we see that the end of the week tends to be better than the first part of the week. So, now let's add weeks 3 and 4: From this, we see that the end of the week is better theory holds true. But we also see that end of the month is better than the start. Of course, we would want to next compare this month with next month, or compare a group of months for quarterly or annual results. I'm not a math or stats guy, but I'm pretty sure there are algorithms designed for this type of problem. Since I don't have a math background (and don't remember any algebra from my earlier days), where would I look for help? Does this type of "hotspot" logic have a name? Are there formulas or algorithms that can slice and dice and compare multidimensional arrays? Any help, pointers or advice is appreciated! algorithm math statistics time-series add comment 7 Answers active oldest votes This data isn't really multidimensional, it's just a simple time series, and there are many ways to analyse it. I'd suggest you start with the Fourier Transform, it detects "rhythms" in a series, so this data would show a spike at 7 days, and also around thirty, and if you extended the data set to a few years it would show a one-year spike for seasons and holidays. up vote 2 That should keep you busy for a while, until you're ready to use real multidimensional data, say by adding in weather information, stock market data, results of recent sports events and down vote so on. Ouch, faster by a few seconds. – fortran Oct 1 '09 at 18:32 5 I had the whole thing hotkeyed. I knew someday someone would ask a question whose answer was "the Fourier Transform". That frees up [f10], only two more to go. – Beta Oct 1 '09 at xDDD I'd vote up you twice for that comment if I could :p – fortran Oct 1 '09 at 21:29 add comment The following might be relevant to you: Stochastic oscillators in technical analysis, which are used to determine whether a stock has been overbought or oversold. I'm oversimplifying here, but essentially you have two moving calculations: • 14-day stochastic: 100 * (today's closing price - low of last 14 days) / (high of last 14 days - low of last 14 days) • 3-day stochastic: same calculation, but relative to 3 days. up vote 2 The 14-day and 3-day stochastics will have a tendency to follow the same curve. Your stochastics will fall somewhere between 1.0 and 0.0; stochastics above 0.8 are considered overbought down vote or bearish, below 0.2 indicates oversold or bullish. More specifically, when your 3-day stochastic "crosses" the 14-day stochastic in one of those regions, you have predictor of momentum of the prices. Although some people consider technical analysis to be voodoo, empirical evidence indicates that it has some predictive power. For what its worth, a stochastic is a very easy and efficient way to visualize the momentum of prices over time. add comment It seems to me that an OLAP approach (like pivot tables in MS Excel) fit the problem perfectly. up vote 1 down vote 2 I think this is the most useful suggestion for someone who's not trying to hack this data in R or something and get complex forecasting or statistical significance results. Pivot tables in Excel are incredibly useful, not that hard to set up or use, and you're almost certain to already have the appropriate tool. – Harlan Oct 2 '09 at 19:52 add comment What you want to do is quite simple - you just have to calculate the autocorrelation of your data and look at the correlogram. From the correlogram you can see 'hidden' periods of your data and then you can use this information to analyze the periods. Here is the result - your numbers and their normalized autocorrelation. 10 1,000 -2 0,097 1 -0,121 7 0,084 6 0,098 -4 0,154 2 -0,082 -1 -0,550 4 -0,341 up vote 1 down 5 -0,027 vote -8 -0,165 -2 -0,212 -1 -0,555 2 -0,426 3 -0,279 2 0,195 3 0,000 4 -0,795 7 -1,000 I used Excel to get the values. But the sequence in column A and add the equation =CORREL($A$1:$A$20;$A1:$A20) to cell B1 and copy it then up to B19. If you the add a line diagram, you can nicely see the structure of the data. add comment You can already make reasonable guesses about the periods of patterns - you're looking at things like weekly and monthly. To look for weekly patterns, for example, just average all the mondays together and so on. Same goes for days of the month, for months of the year. up vote 0 Sure, you could use a complex algorithm to find out that there's a weekly pattern, but you already know to expect that. If you think there really may be patterns buried there that you'd down vote never suspect (there's a strange community of people who use a 5-day week and frequent your business), by all means, use a strong tool -- but if you know what kinds of things to look for, there's really no need. add comment Daniel has the right idea when he suggested correlation but I don't think autocorrelation is what you want. Instead I would suggest correlating each week with each other week. Peaks in your correlation--that is values close to 1--suggest that the values of the weeks resemble each other (I.e. are peiodic) for that particular shift. For example when you cross correlate the result would be the highest value is 3, which corresponds to shifting (right) the second array by 4 up vote 0 0 0 0 1 1 0 --> 0 0 1 1 0 0 down vote and thenn multiplying component wise 0 + 0 + 1 + 2 + 0 + 0 = 3 Note that when you correlate you can create your own "fake" week and cross-correlate all your real weeks, the idea being that you are looking for "shapes" of your weekly values that correspond to the shape of your fake week by looking for peaks in the correlation result. So if you are interested in finding weeks that are close near the end of the week you could use the "fake" week -1 -1 -1 -1 1 1 and if you get a high response in the first value of the correlation this means that the real week that you correlated with has roughly this shape. add comment This is probably beyond the scope of what you're looking for, but one technical approach that would give you the ability to do forecasting, look at things like statistical up vote 0 down significance, etc., would be ARIMA or similar Box-Jenkins models. add comment Not the answer you're looking for? Browse other questions tagged algorithm math statistics time-series or ask your own question.
{"url":"http://stackoverflow.com/questions/1505607/finding-similarities-in-a-multidimensional-array","timestamp":"2014-04-17T22:48:06Z","content_type":null,"content_length":"92149","record_id":"<urn:uuid:c7e8ea18-fdd1-4f6b-be39-0423f6976565>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
Symmetry Extensions and Their Physical Reasons in the Kinetic and Hydrodynamic Plasma Models SIGMA 4 (2008), 006, 7 pages arXiv:0801.2773 http://dx.doi.org/10.3842/SIGMA.2008.006 Contribution to the Proceedings of the Seventh International Conference Symmetry in Nonlinear Mathematical Physics Symmetry Extensions and Their Physical Reasons in the Kinetic and Hydrodynamic Plasma Models Volodymyr B. Taranov Institute for Nuclear Research, 47 Nauky Ave., 03028 Kyiv, Ukraine Received October 31, 2007, in final form January 14, 2008; Published online January 17, 2008 Characteristic examples of continuous symmetries in hydrodynamic plasma theory (partial differential equations) and in kinetic Vlasov-Maxwell models (integro-differential equations) are considered. Possible symmetry extensions conditional and extended symmetries are discussed. Physical reasons for these symmetry extensions are clarified. Key words: symmetry; plasma; hydrodynamic; kinetic. pdf (166 kb) ps (124 kb) tex (9 kb) 1. Ibragimov N.H., Kovalev V.F., Pustovalov V.V., Symmetries of integro-differential equations: a survey of methods illustrated by the Benney equation, Nonlinear Dynam. 28 (2002), 135-165, math-ph/ 2. Cicogna G., Ceccherini F., Pegoraro F., Applications of symmetry methods to the theory of plasma physics, SIGMA 2 (2006), 017, 17 pages, math-ph/0602008. 3. Taranov V.B., On the symmetry of one-dimensional high frequency motions of a collisionless plasma, Sov. J. Tech. Phys. 21 (1976), 720-726. 4. Gordeev A.V., Kingsep A.S., Rudakov L.I., Electron magnetohydrodynamics, Phys. Rep. 243 (1994), 215-465. 5. Meleshko S.V., Application of group analysis in gas kinetics, in Proc. Joint ISAMM/FRD Inter-Disciplinary Workshop "Symmetry Analysis and Mathematical Modelling", 1998, 45-60. 6. Horton W., Drift waves and transport, Rev. Modern Phys. 71 (1999), 735-778. 7. Taranov V.B., Drift and ion-acoustic waves in magnetized plasmas, symmetries and invariant solutions, Ukrainian J. Phys. 49 (2004), 870-874. 8. Taranov V.B., Symmetry extensions in kinetic and hydrodynamic plasma models, in Proceedinds of 13th International Congress on Plasma Physics (2006, Kyiv), 2006, A041p, 4 pages. 9. Cicogna G., Laino M., On the notion of conditional symmetry of differential equations, Rev. Math. Phys. 18 (2006), 1-18, math-ph/0603021.
{"url":"http://www.emis.de/journals/SIGMA/2008/006/","timestamp":"2014-04-17T04:16:17Z","content_type":null,"content_length":"6045","record_id":"<urn:uuid:7f1117ec-b1fd-4432-9636-8896c5f2da9f>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00111-ip-10-147-4-33.ec2.internal.warc.gz"}
use paradox in a sentence. How is that paradoxical? Google "paradox of thrift. The Haskell Paradox, too. So where's the paradox? In what sense is the birthday paradox a paradox? Your statement is a paradox! It's called the birthday paradox :) Your two statements are paradoxical. This title is a paradox. The paradox of choice - as a framework! How is misapplied probability theory a paradox? Recently Searched Words Popular Words This Week
{"url":"http://paradox.inasentence.org/use-paradox-in-a-sentence/","timestamp":"2014-04-17T12:29:25Z","content_type":null,"content_length":"14551","record_id":"<urn:uuid:0b80e9a5-cda6-4929-b352-5b5f3b7551a7>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
Prove x^n > x for all x > 1. November 20th 2009, 05:54 PM Prove x^n > x for all x > 1. Suppose n > 1 is a positive integer. Prove $x^n > x$for all x > 1. (hint: suppose we know f and g are differentiable on the interval (-c,c) and f(0) = g(0), if f'(x) > g'(x) for all $x \in(0,c)$, then f(x) > g(x) for all $x \in (0,c)$. November 20th 2009, 06:38 PM Do you know how to prove something by induction? If not I am unsure how to do this for general n. November 20th 2009, 06:46 PM November 20th 2009, 06:49 PM What you want to do is use the hint on a base case (n=2). Then you will make the assumption that the problem hypothesis holds for n=k. Use this assumption to prove that it is true for n=k+1. November 20th 2009, 06:52 PM November 20th 2009, 06:54 PM Because it is easy and the problem statement says n > 1. November 20th 2009, 06:59 PM November 20th 2009, 07:03 PM Note that if $f(x)=x^n\quad n>1$ and $g(x)=x$. That $f(1)=1=g(1)$ and $f'(x)=nx^{n-1}\ge n>1=g'(1)$ and the concluso follows. November 20th 2009, 07:05 PM Okay, I will show you the base case. Assume n=2. $f(x) = x^2, \; g(x) = x$ Now let's show that these are equal at $x=0$ $f(1)=1^2 = 1 = g(1)$ Now we differentiate the two $f'(x) = 2x$ $<br /> g'(x) = 1$ For $x>1$ $<br /> 2x > 2$ Therefore we see that $f'(x)=2x>1 =g'(x)$ for $x>1$. November 20th 2009, 07:07 PM November 20th 2009, 08:30 PM November 20th 2009, 09:26 PM November 20th 2009, 09:33 PM We know $g(x)=x$ because you are trying to prove $x^n > x$ and the hint was suggesting you $f(x) > g(x)$ Wouldn't you agree $nx^{n-1} \ge n$ for $x>1$ and $n>1$? But we know that $f'(x) = nx^{n-1}$ and $g'(x) = 1$. November 20th 2009, 09:35 PM November 20th 2009, 09:37 PM Yes, because the hint was to prove $f(x) > g(x)$ and we wanted to prove $x^n > x$ so it makes sense to assume $g(x)=x$.
{"url":"http://mathhelpforum.com/calculus/115829-prove-x-n-x-all-x-1-a-print.html","timestamp":"2014-04-20T01:43:14Z","content_type":null,"content_length":"24721","record_id":"<urn:uuid:d83e1704-144f-4572-ab0b-070bb0146d77>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
What Are the Assumptions of Classical Fault-Tolerance? Oftentimes when we encounter the threshold theorem for quantum computation, it is pointed out that there are two assumptions which are necessary for fault-tolerant quantum computation. These two assumptions are 1. A supply of refreshable pure or nearly pure ancillas into which the entropy generated by quantum errors can be dumped throughout the error correction procedure. 2. Parallel operations. For point 1, an important reference is Aharonov, Ben-Or, Impagliazzo and Nisan, quant-ph/9611028, and for point 2, an important reference is Aharonov and Ben-Or, quant-ph/9611029. These two assumptions are provably necessary. Usually when I see these two points presented, they are presented as if they are something unique to quantum computation. But is this really true? Well certainly not for the first case. In fact quant-ph/9611028 is first about computation in a noisy model of classical reversible computation and second about a noisy model of quantum computation! What about point 2? Well I can’t find a reference showing this, but it seems that the arguments presented in quant-ph/9611029 can be extended to reversible noisy classical computation. Of course many of you will, by now, be grumbling that I haven’t included the other assumptions usually stated for the threshold theorem for fault-tolerant quantum computation. Okay, so I’ll put it 3. A noisy model in relation to control of your system which is not too ridiculous. Well, okay, usually it is not stated this way, but that’s because I don’t have an easy way to encapsulate the noise models which lead to provable lack of ability to compute fault tolerantly. But certainly there are similar noise constaints for classical computation which are provably necessary for fault-tolerant classical computation. So what are the difference between the assumptions for fault-tolerant quantum and fault-tolerant classical computation? I strongly believe that the only differences here are difference arising simply going from a theory of probabilities in classical computation to a theory of amplitudes in quantum computation. Of course this short changes the sweet and tears which is necessary to build the theory of fault-tolerant quantum computation, and I don’t want to disparage this in any way. I just want to suggest that the more we learn about the theory of fault-tolerant quantum computation, the more we recognize its similarity to probablistic classical computation. I call this general view of the world the “principal of the ubiquitous factor of two in quantum computing.” The idea being mostly that quantum theory differs from probablistic theory only in the necessity to deal with phase as well as amplitude errors, or more generally, to deal with going from probabilities to amplitudes. This point of view is certainly not even heuristic, but it seems at least that this is the direction we are heading towards. Of course the above view is speculative, not rigorous, and open to grand debate over beers or the within the battleground of an academic seminar. But it certainly is one of the reasons why I’m optimistic about quantum computing, and why, when I talk to a researcher in higher dimensional black holes (for example) and they express pessimism, I find it hard to put myself in their shoes. To summarize that last sentence, I’m betting we have quantum computers before higher dimensional black holes are discovered 29 Responses to What Are the Assumptions of Classical Fault-Tolerance? 1. I think there is a long tradition in the cellular automaton community of noting the need for parallelism to achieve fault tolerance. It is certainly emphasized in the papers by Peter Gacs. Which is why we referred to Gacs in noting that these requirements apply to classical as well as quantum fault tolerance in Sec. II of http://arxiv.org/abs/quant-ph/0110143 Like or Dislike: 0 0 2. I meant to add that the reason we should emphasize these conditions for fault tolerance when giving talks about quantum fault tolerance (even though the same conditions must be satisfied classically) is that it is important, when considering the promise of any proposal for physically realizing quantum gates, to assess how well that realization will meet these conditions. That parallel operation and a place to dump entropy are both necessary is sometimes overlooked. Like or Dislike: 0 0 3. Either you’re secretly hosting a mirror of arxiv.org, or your URLs need fixing. Like or Dislike: 0 0 4. Oops. Doh. Fixed. Like or Dislike: 0 0 5. “I strongly believe that the only differences here are difference arising simply going from a theory of probabilities in classical computation to a theory of amplitudes in quantum computation.” If run-of-the-mill classical computers were reversible, then I might agree. What’s the largest scale classical reversible computer ever built? Like or Dislike: 0 0 6. Ah but going to irreversible classical is like going to quantum theory with measurements, so I will restate: “I strongly believe that the only differences here are differences arising simply going from a theory of probabilities in classical computation to a theory of amplitudes and probabilities in quantum computation.” Like or Dislike: 0 0 7. It appears to be correct that the requirements for fault tolerance in the quantum case are very similar (if not identical ) to those in the classical case. What is unclear is what is the correct interpretation of this similarity. Does this means that since classical computation exists, quantum computers (and the “new phase of matter” they seem to represent, according to Dave) must be feasible as well? Or is it that we should have a second look at “ridiculous” noise models of item 3. (I’d be happy to see many and detailed such examples even if not encapsulated.) A technical point: as far as I know, the (indeed, important) result about noisy reversible computation still leaves open the possibility that a noiseless classical computer plus a noisy reversible quantum computer can factor in a polynomial time. Finding any model of low-rate noise (as ridiculous as one wishes) which (provably) does not enable even log-depth computing seems difficult. Like or Dislike: 0 0 8. Gil, I thought that the modular exponentiation part of Shor’s algorithm wasn’t log-depth. Am I misinterpreting what you are saying? Also, of course, I’m a grumpy old man, so I will always maintain that the model of noiseless classical computers does not exist (nor does the model of noiseless quantum computers exist.) But that’s just what I get for hanging out with too many experimental physicists. Like or Dislike: 0 0 9. I meant that noiseless classical computers + log depth (noiseless) quantum computers are strong enough to enable factoring in polynomial time, (as far as I know, by a result of R. Cleve and J. Watrous, quant-ph/0006004.) Like or Dislike: 0 0 10. Ah right, thanks! Forgot about that. The idea is you compute the bitwise exponentiated terms classically in poly time and then do an interated multiplication a la Beame, Cook, and Hoover for the quantum part of the circuit. Like or Dislike: 0 0 11. Under the two assumptions that Dave mentioned the threshold theorem describes one important obstruction for fault tolerance: (1) – The noise rate is large. This obstruction deals with a single qubit/bit. We need low noise/signal ratio. Indeed there is essentially no difference between classical and quantum systems, and the feasibility of classical computing indicates that intolerably high noise rate is not a universal property of noisy computing devices. However, the following possible principal is also an obstruction (2) – The correlation between the noise acting on pairs of correlated elements in a noisy device is high. This obstruction deals with pairs of qubits. We need to require low signal/noise ratio also in terms of correlations. Again I see no difference between classical and quantum devices and am curious if (2) generally applies to noisy physical devices (classical or quantum) whose behavior is stochastic. Such a principal appears to allow fault-tolerant classical computing, but be harmful to quantum computing. (The assumptions of the threshold theorems exclude the possibility of (2)) Of course, one can ask how such a principal can come about. We do not want to seriously consider the fantastic scenarios of terrorist attacks Dave mentioned in his previous post, but rather stochastic models for noise which are based on local operations, namely the noise being created by a sort of stochastic (rather primitive) quantum computer running aside our physical device. The question is whether such models can create damaging patterns of noise. This looks interesting to me and starting with “ridiculous” models seems rather reasonable. Alternatively (or in parallel), one can try to test (2) empirically. Like or Dislike: 0 0 12. “I strongly believe that the only differences here are differences arising simply going from a theory of probabilities in classical computation to a theory of amplitudes and probabilities in quantum computation.†Dave, I’m surprised you’re discounting the importance of the quantum tensor product structure (especially after you made such nice use of it in your “Operator Quantum Error Correcting Subsystems for Self-Correcting Quantum Memories” paper). What about that old dictum “fight entanglement with entanglement”? It does not seem to apply to classical fault tolerance. Like or Dislike: 0 0 13. Daniel, The classical dictum is “fight correlation with correlation.” A classical noise process is simply getting correlated with a classical environment. If you don’t create correlated classical states you won’t be able to protect your classical information (encoding into a classical error correcting code is nothing more than the limit of creating perfect correlation.) Also I don’t think a tensor product structure alone makes the difference between the power of classical and quantum computers. The difference is when you combine complex amplitudes with the tensor product. Or at least in my myopic view of the world Like or Dislike: 0 0 14. Gil, I don’t understand what you mean that correlated errors seem to allow classical but not allow quantum computation. Certainly if I come up with ridiculous (to use my favorite word Like or Dislike: 0 0 15. Dave, As a Bell’s theorem entrepeneur I’m sure you’d agree there is a qualitative difference between classical correlation and quantum entanglement. (Well maybe your work on the communication cost of simulating Bell correlations would actually indicate just a quantitative difference.) And indeed, I meant to point out that there is more to quantum computing than complex amplitudes: the power comes from combining the latter with the tensor product structure. Like or Dislike: 0 0 16. Right, so we agree that I that there is more to the the power of quantum computing than complex amplitudes. But I would argue (the original direction of your posting) that there is also more to the difference between quantum and classical fault tolerance than amplitudes versus probabilities. And that more is (I suspect) at least in part the quantum tensor product structure (TPS).^* I do agree that entanglement is not the single ingredient one must add to amplitudes to understand the quantum/classical difference. This is why I emphasized quantum TPS rather than entanglement. Indeed, if you accept the “generalized entanglement” idea of Barnum et al. (quant-ph/0305023)^** then entanglement exists independently of a TPS. Namely, they have examples of generalized entanglement where no subsystem partition exists (such as a single spin-1 particle). I suspect such entanglement plays no role in the quantum/classical speedup or fault tolerance distinction. ^* And that TPS is not an absolute property of the Hilbert space, but rather a property that is relative to the experimentally accessible set of observables: Quantum Tensor Product Structures are Observable Induced ^** A pure state is generalized unentangled relative to a set of distinguished observables if its reduced state is pure, and generalized entangled otherwise. Like or Dislike: 0 0 17. Daniel: see we agree! I agree that there is more the the power of quantum computing than complex amplitudes. But there is more to the power of classical computing in classical computing, too! It is an interesting question, why do quantum computers seem to outperform classical computers. But we can also ask, for example, why do classical computers outperform the Clifford group gate set with computatinal basis preparation and measurement. Clearly Clifford group gates can generate entanglement, are quantum, but are also not universal for classical computation and can be simulated. Why are classical computers more powerful than this type of quantum computer (which is the easiest model to implement fault-tolerantly, interestingly)? I personally suspect that the reason we get into thinking about the power of quantum computation is that we are in the very early days of quantum algorithms. Certainly entanglement has something to do with it, in the sense that it is “necessary (in the Jozsa and Linden sense), but it is not sufficient. So trying to explain the power of quantum computers as coming just from a tensor product with a complex vector space, doesn’t seem to me to be the answer. And this is probably great! Because it means that there is much we need to explore. The answer, I suspect, won’t be a simple one, but more likely it will envolve some very sophisticated theory. Can you tell that I’m sitting in a computer science department, now? Like or Dislike: 0 0 18. We agree. The difference may be that I think calling generalized entanglement “entanglement” is a bit heavy handed. I am very enthusiastic about the idea behind “generalized entanglement,” and especially its usefulness in many settings (as, for example, in condensed matter systems where it leads to a definition of purity which corresponds at least in some systems to an order paramter distinguishing different phases of the system), but I personally think that calling it entanglement is very confusing. For example consider the example you site, a spin-1 particle. One takes a three dimensional irrep of su(2) and then looks at observables of the generators. These form a three dimensional sphere and the extremal points on the boundary are the spin-coherent states. Then a state like |m=0> is not on the boundary and is therefore not a “pure” state with respect to these observables. They then proceed to call this state a “generalized entangled” state, since entangled subsystem states are distinguished by being the states that are mixed when tracing over the full pure state. I do agree that this is an interesting (and in many cases useful) definition, but what does this have to do with anything being entangled with anything else? I would instead talk about states which are pure with respect to an observable algebra and states which are mixed with respect to an observable algebra. Then the interesting thing about “generalized entanglement” has nothing to do with entanglement but everything to do about pure states with respect to one algebra which are mixed with respect to a different algebra. But that’s just my opinion. And I just got my wisdom teeth out, so clearly I’m not thinking clearly. Like or Dislike: 0 0 19. Dave, “Gil, I don’t understand what you mean that correlated errors seem to allow classical but not allow quantum computation. Certainly…they too will destroy classical computation” Actually, there are some forms of highly correlated noise which allow classical computation and harm quantum computation but this was NOT the point I was making above. (Obstruction (2) from my previous comment implies no correlations for errors in the classical case when the bits themselves are not correlated.) “The only way I can see engineering worlds with classical but not quantum computation…” Let us engineer a world, then, Dave. The requirement is this: (obstruction (2) to quantum fault tolerance from my comment above.) (*) In any noisy system the noise “acting” on pairs of elements which are substantially correlated is also substantially correlated. (Well, the word “noisy” refer to what happens when we approximate a small subsystem of a large system by neglecting the effect of the “irrelevant” elements (and by having an approximate “initial conditions”). We suppose that we can have good approximations (this is what we mean by a small noise rate, and this assumption allows error correction and computation to start with,) but (*) is an assumption about the nature of such approximations.) 1. (*) is just a specification, if we want to send our world plans for production we need more details how (*) comes about. 2. Maybe our present world already satisfies (*) BOTH for classical correlations and for quantum correlations. Can you think of a counter example? We may be able to examine (*) from the behavior of pairs of qubits in small quantum computers. 3. (*) will allow classic (even randomized) computation (because in this case we do not need the bits to be correlated so (*) does not impose any condition on the noise being correlated.) But (*) appears to be damaging for quantum computation. 4. If (**) we have a noisy quantum computer with highly entangled qubits, and if (*) holds, then the nature of low-rate noise will be rather counter-intuitive. There will be a substantial probability of a large proportion of all qubits being hit. (Hmmm, you called this t-error, nice!) I do not know if such a pattern for the noise should be considered ridiculous. But if you do, note, that there are two “if”s here, and decide if you rather give up (**) or (*). Returning to the main theme of this post: I agree that the issues of fault-tolerance are very similar for the classical and quantum case (in-spite of the recent Bacon-Lidar pact), I do not understand why this should be a source of optimism (or pessimism). Like or Dislike: 0 0 20. Without taking any position on the relative ease of classical-versus-quantum error correction, the “difficult-to-correct” postulate of Gil Kalai: (*) In any noisy system the noise “acting†on pairs of elements which are substantially correlated is also substantially correlated. … is a very reasonable postulate (to an engineer). For example, suppose two qubits are dipole-coupled with a coupling C~1 (in natural units set by the clock interval of the computation). To an engineer, though, no qubit coupling C ever takes exactly its design value of unity. Rather, such couplings are known only within error intervals. And furthermore, such couplings are generically subject to broad-spectrum time-dependent noise, originating e.g. within lattice vibrations of the device. Doesn’t this generically introduce both systematic errors and stochastic errors into practical quantum computations, and aren’t these errors generically of precisely the form that Kalai’s post regards as hard to correct? The one area where I will venture an opinion is this: stochastic coupling noise is Choi-equivalent to a continuous weak measurement process, wherein the quantity being measured is the localized qubit-dependent energy in the quantum computation. In other words, for a quantum computer to operate successfully, it must necessarily be a device whose localized internal operations cannot be spied upon, not even by the physical noise mechanisms of the device. And this absolute requirement of “hidden operation” is one feature that makes quantum computation essentially different from classical computation. This is why we quantum system engineers are increasingly studying cryptography! Like or Dislike: 0 0 21. Gil, I see, I misread your point (2). You have a noise model there were correlated noise only occurs between correlated (qu)bits. Oops. In note 3 you say: 3. (*) will allow classic (even randomized) computation (because in this case we do not need the bits to be correlated so (*) does not impose any condition on the noise being correlated.) This confuses me. Why don’t we require classical correlated bits for classical computation? When I perform classical error correction (as our hard drives and transistors are doing for us) then aren’t the individual bits correlated? And what about during the classical computation? It seems to me that this will produce correlated bits along the computation. As to optimism versus pessimism, I am an optimistic for many reasons. I am not an optimist when it comes to the philosophical question of creating computers which are perfectly fault-tolerant. But for creating fault-tolerant quantum (and classical) computers which can solve interesting problems which will make our lives better: for this I am an optimist. Further, I am focused on the main problems in the upcoming experiments. And they have nothing to do with crazy models of noise. Which is of course not an argument to say that such noise doesn’t exist: it’s just that right now the discussions of these noise models are so far removed from the real problems of the lab, that I don’t see a reason to be pessimist. Further, perhaps my experience with decoherence-free subspaces also shapes my optimism. Give me correlated noise and I am even happier. And I guess, on a fundamental level, I side with Einstein: the universe may be mysterious, but it is never Like or Dislike: 0 0 22. John: the type of errors you are talking about are exactly the type of errors which fault-tolerant quantum computation is designed to deal with. Error which occur due to imprecision of couplings for the gates you are trying to correct are fine for fault-tolerant quantum computation, as long as they are not strong. The noise Gil seems to be arguing for (note: my understanding, not Gil’s menaing!) causes correlated errors when the (qu)bits are corelated, not coupled. Like or Dislike: 0 0 23. Gee, I can’t even figure out how to insert paragraph breaks in these replies, so how can I figure out quantum error correction? Is there an XHTML code that is functionally similar ? On hearing the words “solvable in principle”, a mathematician or physicist tends to focus mainly on the word “solvable”, while an engineer tends to hear “solvable in principle, but all-too-likely to be NP-hard to solve in practice”. The “Mike and Ike” book provides good examples. In the book, qubit gates are designed to be error-correcting via a beautiful and rigorous formalism. But in deriving the threshold theorem for the overall computation, there seems to be an implicit assumption that whenever the deterministic part of a qubit coupling is turned off, the stochastic part of the coupling is turned off too. The assumed noise turn-off is achievable in principle, but it is amazingly hard for engineers to achieve in practice. Even designs that physically shuttle qubits away from one another tend to create enduring long-range couplings, e.g., via thermal fluctuations in the conduction bands of the confining walls of a shared ion trap, or via optical fluctuations in a shared Fabry-Perot IMHO, mathematicians and physicists are right to ignore these couplings, for the sake of doing beautiful math and physics, and engineers are equally right to fear them, for the sake of building hardware that works. Note that this “noise turn off” problem is unique to quantum computers, in that classical computers have robust fan-out, such that any desired number of copies of any desired bit can be robustly Like or Dislike: 0 0 24. Dave, 1. To the best of my understanding (or just instincts) the issue of correlated bits does not enter classical computation. So my (*) does allow classical computation. (But I will be happy to think more about it off-line.) (If there was a need to manipulate correlated bits (*) would have been as damaging, but there is no such need from a computational point of view.) 2. Well, I did not really present (unfortunately) a noise model, just an idea or a requirement for a noise model, where the noise (as you correctly put it) causes correlated errors when the qubits are correlated. Another way to look at it (avoiding the optimism/pessimism issue which seems rather irrelevant) is this: The threshold theorem (which at present is the only game in town) has spectacular consequences but we try to look at the simplest non-trivial consequence. Namely, that we can have two qubits in an entangled state (say, a cat) such that the noise for them is (almost) independent – Inspite the arbitrary noise we allow on gates. (This is an ingredient of fault tolerance which is not similar to classical fault tolerance and it indeed looks a bit This simple consequence of the threshold theorem seems to capture much of its power. So perhaps this is what we should try to attack (theoretically) or examine (empirically). Like or Dislike: 0 0 25. Yeah, my line breaks are all fudged up. Need to fix that. . But in deriving the threshold theorem for the overall computation, there seems to be an implicit assumption that whenever the deterministic part of a qubit coupling is turned off, the stochastic part of the coupling is turned off too. Hm, I don’t think this is neglected. The stochastic part of the coupling, as you call it, is simply a quantum noise on a wire when the gate is turned off. The assumed noise turn-off is achievable in principle, but it is amazingly hard for engineers to achieve in practice. Even designs that physically shuttle qubits away from one another tend to create enduring long-range couplings, e.g., via thermal fluctuations in the conduction bands of the confining walls of a shared ion trap, or via optical fluctuations in a shared Fabry-Perot Like I said, the threshold theorem doesn’t depend on this, except in the strength of this problem, but like you said, it can be a major source of error. Certainly, it is a bigger problem in some implementations than others. For example, while the patch potentials believed to cause heating in ion traps are a source of error, they are curerntly not the source of error which is keeping ion traps above the fault-tolerant threshold. That error comes from the two-qubit (and to some extent one-qubit) gates. Note that this “noise turn off†problem is unique to quantum computers, in that classical computers have robust fan-out, such that any desired number of copies of any desired bit can be robustly cloned. I don’t believe this. I believe that we think there is no such problem because we are dealing with already physically error corrected systems. Certainly if you try to engineer a SINGLE spin, for example, to be a classical spin as a bit, it will have the same problems. We live in a world in which the ideas of error correction are sheilded from our eyes: our hard drives, our transistors, are all fault-tolerant (or at least fault-tolerant with a lifetime long enough for our purposes: that data on your hard drive will eventually die…time to backup!) due to the physics of the Like or Dislike: 0 0 We live in a world in which the ideas of error correction are shielded from our eyes: our hard drives, our transistors, are all fault-tolerant That is very nicely put, IMHO! In fact, when we check the literature, our eyes themselves are very strongly error-corrected sensors, within which everal retinal molecule is strongly coupled to a thermal reservoir of vibrating water molecules, with the functional consequence being that the signals from a single retinal molecule can be robustly amplified and fanned-out to the visual From this point of view, we humans evolved at “hot” temperatures precisely so we could reliably observe and interact with a classical world, within which both our ideas and our genes can reproduce Iand evolve). And yes, we humans do support embedded biological error correcting codes, which are found at every level of our metabolic function. In fact, from a combined physics/ engineering/biology perspective, it is a very striking fact that our own internal error-correcting processes are sufficient to create a classical world. It is fun to imagine an alternative “cold” ecology, in which life has evolved (somehow) in the context of strongly entangled, robustly error-corrected quantum processes. It is hard to see how concepts such as “individual”, “object”, “3D spatial localization”, and even “physical reality” could be well-defined in this quantum-entangled but still-Darwinian world. These cold quantum creatures would perceive hot classical creatures like us as volcanoes of ecology-destroying raw entropy. Hey, we’re the “bad guys” in this scenario! “Worst … plot … concept … ever”, as Comic Book Guy might say. Like or Dislike: 0 0 27. Three remarks: 1. It is quite interesting that the sticky points that John sees from the engineering point of view are quite similar to the places I would seek the theoretic weakness of the quantum fault tolerance hypothesis. The threshold theorem asserts that qubit coupling can be “purified” via fault-tolerance. John find it hard to implement in practice and I question if this purification is not an artifact of too strong mathematical assumptions concerning the noise. Another point of similarity is the advantage of the ability to clown and use majority that John mentioned. This allows error correction and classical computation to prevail in certain cases of high-rate noise or low-rate noise with massive (“crazy”) correlations that would fail quantum computing. 2. If we agree that the matter of noise is essentially a matter of appropriate approximations we can see if in cases of dependence data we can expect approximation up to independent (or almost independent) error. (I do not think we have in nature anything close to the massive (crazy?) forms of dependencies needed among qubits of a quantum computer, but we do have some examples we can Take, for example, the weather. Suppose you wish to forecast the weather in 20 locations around Seattle. The weather in these places is clearly dependent. You can approximate the weather at time T by the weather at time T-t, which is better and better as t gets small. You can also monitor and use sophisticated physics and mathematics to get some better forecasts. Do you think we can have a forecast so the error terms will be independent among cities? or close to be? 3. So this brings me to the nice error-correction examples that Dave claims we take too much for granted. Consider the following hypothetical dialogue between Dave and me: Dave: I can do something very nice. In the process there is an arbitrary matrix B and my method is based on my ability (using some physics you will probably not understand) to approximate it by a matrix C so that E = B-C is small. Gil: Sound good, cool. Dave: And as it turns out, E would be(approximately) rank one matrix. (This is important and necessary.) Gil: Sounds bad, forget it. Dave: But why? We see such approximations all the time: in transistors and hard drives and many other natural and artificial cases that are simply shielded from your eyes. Gil: Yes, but in ALL these cases you approximate a rank-one matrix to start with. I believe that you may be able to approximate a rank-one matrix up to a rank-one error. I do not believe that you will be able to approximate an arbitrary matrix up to a rank one matrix. Like or Dislike: 0 0 28. I will never look at rank one matrices the same Like or Dislike: 0 0 29. Well, stretching perhaps the blog’s hospitality, here is a (finite) sequence of additional short comments on this interesting problem: A. Dave’s analogy between classical and quantum fault tolerance. 1. The analogy between the classical and quantum case for error correction and fault tolerance is very useful. I also share the view that the underlying mathematics is very close. 2. Of course, the threshold theorem and various later approaches to fault tolerance give strong support to the hypothesis of fault-tolerant quantum computation (at present, this hypothesis is “the only play in town”) and to the feasibility of computationally superior quantum computers. 3. A useful dividing line to draw is between deterministic information and stochastic correlated information (both classic and quantum). 4. The prominent role (sometimes hidden) of artificial and natural forms of error-correction is, of course, something to have in mind, but I do not share the view that it gives us substantial additional reasons for optimism. (It does not give reasons for pessimism either.) 5. I am not aware of (and will be happy to learn about) cases (artificial or natural) where fault tolerance and error correction are used for stochastic correlated data, nor am I aware of natural forms of error correction which uses a method essentially different than “clown and use majority”. 6. I also am not aware of cases of a natural system with stochastic highly correlated elements which admits an approximation up to an “almost independent” error term. This is the kind of approximation required for fault tolerant quantum computation. B. Correlated errors and spontaneous synchronization of errors. 7. A remark experts can skip: (Again it is better just to think about classical systems.) When you have a system with many correlated bits, an error in one bit will also effect the other bits. On an intuitive level this looks like an obstacle to error correction but it is not. Independent errors effecting a substantial fraction of –even highly correlated bits, can be handled. (This is, I believe, what Dave refers to as fighting correlations with correlations.) Correlated errors which allows, with substantial probabilities, errors that exceed the capacity of the error-corrector are problematic. The crux of the matter (all the rank-one matrix stuff) is whether independent errors on highly correlated bits is a possible or even a physically meaningful notion. (So perhaps, laymen concerns are correct after all.) 8. The postulate of correlated errors. (*) In any noisy system, noise acting on pairs of elements which are substantially correlated is also substantially correlated. Correlated errors can reflect the correlation of the actual bits/qubits which is echoed by the noise. (Another possible explanation for (*) for quantum computers is that the ability of the physical device to create strong correlations needed for the computation may already cause it to be vulnerable to correlated noise regardless of the state of the computer.) 9. (We talk about pairs of bits/qubits, but let me remark, that while pairwise (almost ) independence appears necessary for fault tolerant quantum computatio(and is the first thing to challenge), for the threshold theorem, pairwise (almost) independence would not suffice. Similar conditions on triples/quadruples etc. are also needed.) 10. The error correlation hypothesis and T-errors. An appealing strengthening of (*) is (*’) In a noisy system, the probabilities for noise to hit two strongly correlated elements have a substantial positive correlation. Positive correlations for the event that the bits are hit for every pair of bits leads precisely to what Dave calls T-errors: There is a substantial probability for large portion of all bits to be hit. For example, if the probability for a bit flip is 1/1000, but for every pair the probability that they will both be flipped is > 1/3000, then there is a substantial probability that more than 25% of the bits will be hit. 11. T-errors as spontaneous synchronization of errors. Low rate T-errors amounts to the errors getting synchronized. Can we have a spontaneous synchronization of errors when the noise is expressed by a “local” coupled stochastic process? There is a substantial literature (from physics, mainly) suggesting that the answer is yes. For our purposes we want more: We want to show that for coupled processes, such as the process describing the noise of highly correlated physical system (especially, a quantum computer), substantial amount of synchronization of errors will always take place. 12. Can coupled noise create interesting/harmful noise patterns? Here is a related problem: Take a Zebra . Can the pattern of strips be the outcome of a locally defined stochastic process? This is a problem we used to discuss (me, Tali Tishby and others) in our student years, 30 years ago, and we returned to related questions occasionally since then. To make the connection concrete think about the direction from tail to head as time, and the cells in the horizontal lines as representing your qubits (white represent a defected qubit, say). Isn’t it a whole new way to think about a zebra? 13. Spontaneous synchronization and the concentration of measure phenomena. A reason that I find T-errors/spontaneous synchronization an appealing possibility is that this is how a “random” random noise looks like. Actually talking about a random form of noise is easier in the quantum context. If you prescribe the noise rate and consider the noise as a random (say unitary) operator (with this noise rate) you will see a perfect form of synchronization for the errors, and this property will be violated with extremely low probability (when there are many qubits). Random unitary operators with a given noise rate appears to be unapproachable by any process of a “local” nature, but their statistical properties may hold for such stochastic processes describing the noise. The fact that perfect error synchronization is the “generic” form of noise suggests that stochastic processes describing the noise will approach this “generic” behavior unless they have a good reason (such as time-independence) not to. 14. The postulate of error synchronization. (**) In any noisy system with many substantially correlated elements there will a strong effect of spontaneous error synchronization (T-errors). In particular, for quantum computers: (**) In a noisy quantum computer at a highly entangled state there will a strong effect of spontaneous error synchronization. Both (*) and (**) can be tested, in principle, for quantum computers with a small number of qubits (10-20). I agree with Dave that even this is well down the road, (but much before the superior complexity power of quantum computers will kick in). Maybe (*) and (**) can be tested on classical highly correlated systems (weather, the stock-market). 15. The idea that for the evolution of highly correlated systems, changes tends to be synchronized, and thus that we will witness rapid changes effecting large portion of the system (between long periods of relative calm) is appealing and may be related to various other thing (sharp threshold phenomena, or, closer to themes of the Pontiff – evolution, the evolution of scientific thoughts, etc. ) 16. Clowning and taking majority: suppose you have T-errors where a large proportion (99%) of bits are attacked in an unbiased way. (Namely, the probability to change a bit from 0 to 1 is the same as the probability from 1 to 0.) Noiseless bits can still be extracted. This extends to the quantum case but only with the same conclusion: noiseless bits can still prevail. But there is no quantum error-correction against such a noise. 17. But are (*) and (**) really true?? How would I know? These conjectures looks to me as expressing a rather concrete (although I would like to make them even more concrete) possible reason for why (superior) quantum computation may be infeasible (in principle!). The idea of (forced) spontaneous synchronization of errors definitely sounds a bit crazy. Maybe it is true though. C. Going below log-depth will not be easy. 18. Let me repeat the comment that even for noisy reversible computation, where the mathematical results support the physics intuition that computation is not possible, at present, we cannot exclude log depth computation, (which allows together with noiseless classical computation to factor in polynomial time). So even assuming the pessimistic notions of noise, proving a reduction to classical computation will not be an easy matter. D. Optimism, skepticism And there were the matters of skepticism and optimism and even the virtues of the universe; well, all these are extremely interesting issues but their direct relevance to the factual questions at hand is not that strong. One comment is that we do not seem to have good ways to base interesting scientific research on skeptical approaches. (Maybe social scientists are somewhat better at Like or Dislike: 0 0 This entry was posted in Computer Science, Quantum. Bookmark the permalink.
{"url":"http://dabacon.org/pontiff/?p=1203","timestamp":"2014-04-17T00:48:24Z","content_type":null,"content_length":"117076","record_id":"<urn:uuid:fda3b315-13b6-4bc5-9fe0-da9e65daae5d>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: multiply imputed values for outcome [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] Re: st: multiply imputed values for outcome From Maarten buis <maartenbuis@yahoo.co.uk> To statalist@hsphsun2.harvard.edu Subject Re: st: multiply imputed values for outcome Date Tue, 11 Jul 2006 08:33:33 +0100 (BST) --- "Leslie R Hinkson asked: > I have a data set that has multiply imputed values (5) for the > outcome variable. I have previously used HLM software to conduct my > analysis but I was told that with the new GLM features in Stata 9 > that it should be possible to do the same in Stata. Unfortunately, I > haven't found that way yet. Any thoughts? --- Austin Nichols answered: > If you used HLM, you may want -xtmixed- or -gllamm- (-ssc install > gllamm-) but I don't know about the "five plausible values". The trick with multiple imputation is that you multiple plausible values for each missing value, thus creating multiple "complete" datasets. Next you estimate your model just as if you had a real complete dataset on each "complete" dataset. In your case you would than have five sets of estimates. The final point estimates are the means over these five sets of estimates and the final standard error is made with two components: the mean variance (squared standard error) of each estimate and the variance between sets. The formulas can be found on: http://www.stat.psu.edu/~jls/mifaq.html#howto . J. B. Carlin, N. Li, P. Greenwood, & C. Coffey have writen tools for analyzing multiple imputed datasets, that implement these formulas (-findit mifit-), but I don't think they include -xtmixed- or -gllamm-. However, these formulas are pretty simple and can be implemented by hand if needs be. --- "Leslie R Hinkson also asked: > Also, is it possible to conduct standard linear regression analysis > with multiple plausible values for the dependent variable using > Stata 9? -mifit- can handle standard linear regression analysis. Maarten L. Buis Department of Social Research Methodology Vrije Universiteit Amsterdam Boelelaan 1081 1081 HV Amsterdam The Netherlands visiting adress: Buitenveldertselaan 3 (Metropolitan), room Z214 +31 20 5986715 The all-new Yahoo! Mail goes wherever you go - free your email address from your Internet provider. http://uk.docs.yahoo.com/nowyoucan.html * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2006-07/msg00266.html","timestamp":"2014-04-16T16:04:43Z","content_type":null,"content_length":"8491","record_id":"<urn:uuid:8b091d00-3bf6-4501-8b8f-949322edfc73>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
Can't integrate this differential equation May 4th 2010, 08:07 AM [SOLVED] Integration of D.E. Hi, having difficulty with this D.E. that relates to a tank of initially 400L of pure water, with 1L of salt water (concentration 4g/L) added per min, and 3L of the well-mixed tank solution removed per min. This is what I have so far; I'm trying to find an integrating factor though I keep ending up with a final differentiated x(t) function that doesn't make sense (based on t-200, resulting in the function only existing from t > 200 when it should be from t > 0) $P(t)=\frac{3}{2(200-t)}$ leading to but then this doesn't seem to make the LHS equal to the derivative of $Q(t)I(t)$? Without the negative sign it would work fine but I'm sure that's the integral of P(t)... a little help? Maybe I've done something stupid... May 4th 2010, 09:47 AM Hello, McChickenb! $\frac{dQ}{dt} +\frac{3}{400-2t}\,Q\;=\;4$ We have: . $\frac{dQ}{dt} + \frac{3}{2}\cdot\frac{1}{200-t}\,Q \;=\;4$ Integrating factor: . $I \;=\;e^{\frac{3}{2}\!\int \!\frac{dt}{200-t}} \;=\; e^{-\frac{3}{2}\ln(200-t)} \;=\;e^{\ln(200-t)^{-\frac{3}{2}}} \;=\;(200-t)^{-\frac{3}{2}}$ Multiply by $I\!:\;\;(200-t)^{-\frac{3}{2}}\,\frac{dQ}{dt} + \frac{3}{2}\!\cdot\!\frac{1}{200-t}\!\cdot\!(200-t)^{-\frac{3}{2}} \,Q \;=\;4(200-t)^{-\frac{3}{2}}$ . . . . . . . . . . . . . . . . . . $(200-t)^{-\frac{3}{2}}+ \frac{3}{2}\,(200-t)^{-\frac{5}{2}}\,Q \;=\;4(200-t)^{-\frac{3}{2}}$ We have: . $\frac{d}{dt}\bigg[(200-t)^{-\frac{3}{2}}\,Q\bigg] \;=\;4(200-t)^{-\frac{3}{2}}$ Integrate: . $(200-t)^{-\frac{3}{2}}\,Q \;=\;8(200-t)^{-\frac{1}{2}} + C$ Multiply by $(200-t)^{\frac{3}{2}}\!:\;\;\;Q \;=\;8(200-t) + C(200-t)^{\frac{3}{2}}$ May 4th 2010, 10:13 AM D'oh! That's perfect. I was confusing myself on the simple stuff. Thanks a lot!
{"url":"http://mathhelpforum.com/calculus/142988-cant-integrate-differential-equation-print.html","timestamp":"2014-04-21T02:23:41Z","content_type":null,"content_length":"8629","record_id":"<urn:uuid:ce1ab249-8e5d-4e62-8c07-1225c439be27>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
- - Please install Math Player to see the Math Symbols properly Click on a 'View Solution' below for other questions: f The time taken by the first four swimmers in a 50 m free style swimming championship are 41.68, 41.65, 41.67, and 41.64 seconds. What is the correct order to represent the positions? View Solution ccc The percentage change in the populations of the four cities Redfield, Rosedale, Jasper and Bisbee for the year 1990 - 2000 are 5.7, 5.5, 5.4 and 5.9. Which city has the highest View percentage change in the population? ccc Solution ccc What is the value of the digit 3 in the decimal number 6.36? View Solution ccc What is the difference between the values of the digits 2 and 3 in the number 323? ccc View Solution ccc How many complete grids will be there for the model representation of the decimal number 9.9326? View Solution ccc What can we infer from the two-grid model representation of 0.78 and 0.780? View Solution ccc Find the number of decimal numbers between 4 and 5 that have only tenths. View Solution ccc What are the values of each occurrence of the digit 6 in the number 61.16? View Solution ccc Which of the following numbers has the same value of the digit 2 as in the number 53.123? View Solution ccc The time taken by the first four swimmers in a 50 m free style swimming championship are 29.88, 29.85, 29.87, and 29.84 seconds. What is the correct order to represent the positions? View Solution ccc What are the values of each occurrence of the digit 3 in the number 30.53? View Solution ccc Which of the following numbers has the same value of the digit 2 as in the number 63.125? View Solution ccc What can we infer from the two-grid model representation of 0.46 and 0.460? View Solution ccc How many complete grids will be there for the model representation of the decimal number 2.8774? View Solution ccc Arrange the decimals 42.06, 42.48, 42.22, and 42.50 from the least to the greatest. View Solution ccc Identify the largest number among 4.4, 4.5, 4.6, 4.7 and 4.8. View Solution ccc Which of the following numbers are indicated by the number line? ccc View Solution ccc Which number line represents the decimal number 1.4? View Solution ccc What is the difference between the two numbers represented in the number line? View Solution ccc The time taken by the first four swimmers in a 50 m free style swimming championship are 21.69, 21.66, 21.68, and 21.65 seconds. What is the correct order to represent the positions? View Solution ccc Arrange the decimals 6.06, 6.48, 6.22, and 6.50 from the least to the greatest. ccc View Solution ccc What is the decimal number represented by the shaded area in the figure? ccc View Solution ccc What is the decimal number represented by the square? ccc View Solution ccc What is the decimal number represented by the ratio of yellow-colored triangles to the green-colored ones? View Solution ccc How many decimal numbers are there between 2 and 3? ccc View Solution ccc Find the number of decimal numbers between 5 and 6 that have only tenths. View Solution ccc Which number is greater between the two decimal numbers represented in the models? ccc View Solution ccc The percentage change in the populations of the four cities Milan, Rosedale, Jasper and Rushville for the year 1990 - 2000 are 2.6, 2.4, 2.3 and 2.8. Which city has the highest View percentage change in the population? ccc Solution ccc Which of the following is true? ccc View Solution ccc Which of the following decimals is the greatest? ccc View Solution ccc What is the value of the digit 2 in the decimal number 5.24? View Solution ccc What is the difference between the values of the digits 3 and 5 in the number 435? ccc View Solution ccc The temperature of New York City is 108.01^oF on a day during summer. What are the place values of digit 1 on either side of the decimal? ccc View Solution ccc The tilt of axis (degrees) of moon is given as 9.9970. What is the value of the digit 7 in the above measure? ccc View Solution ccc Which of the following numbers has the largest value for the digit 9? ccc View Solution ccc Fill in the blank with suitable symbol. View Solution 4.23 _____ 4.25 ccc ccc Fill in the blank with the symbol that correctly fits the sentence. 70.5 ______ 70.05 View Solution ccc Arrange the numbers 59, - 54, and 1.45 in descending order. ccc View Solution
{"url":"http://www.icoachmath.com/solvedexample/sampleworksheet.aspx?process=/__cstlqvxbefxdxbgdamxkhhdk&.html","timestamp":"2014-04-20T03:33:49Z","content_type":null,"content_length":"78260","record_id":"<urn:uuid:0430c7e5-918c-4dc0-af0b-27fe13b13c31>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
Higher-Index Roots: Basic Operations Higher-Index Roots: Basic Operations (page 6 of 7) Sections: Square roots, More simplification / Multiplication, Adding (and subtracting) square roots, Conjugates / Dividing by square roots, Rationalizing denominators, Higher-Index Roots, A special case of rationalizing / Radicals & exponents / Radicals & domains Operations with cube roots, fourth roots, and other higher-index roots work similarly to square roots. Simplifying Higher-Index Terms • Simplify Just as I can pull from a square (or second) root anything that I have two copies of, so also I can pull from a fourth root anything I've got four of: If you have a cube root, you can take out any factor that occurs in threes; in a fourth root, take out any factor that occurs in fours; in a fifth root, take out any factor that occurs in fives; etc. • Simplify the cube root: • Simplify the cube root: • Simplify: Copyright © Elizabeth Stapel 1999-2011 All Rights Reserved • Simplify: • Simplify: Multiplying Higher-Index Roots • Simplify: • Simplify: Adding Higher-Index Roots • Simplify: • Simplify: Dividing Higher-Index Roots • Simplify: • Simplify: I can't simplify this expression properly, because I can't simplify the radical in the denominator down to whole numbers: To rationalize a denominator containing a square root, I needed two copies of whatever factors were inside the radical. For a cube root, I'll need three copies. So that's what I'll multiply onto this fraction: • Simplify: Since 72 = 8 × 9 = 2 × 2 × 2 × 3 ×3, I won't have enough of any of the denominator's factors to get rid of the radical. To simplify a fourth root, I would need four copies of each factor. For this denominator's radical, I'll need two more 3s and one more 2: << Previous Top | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Return to Index Next >> Cite this article as: Stapel, Elizabeth. "Higher-Index Roots: Basic Operations." Purplemath. Available from http://www.purplemath.com/modules/radicals6.htm. Accessed
{"url":"http://www.purplemath.com/modules/radicals6.htm","timestamp":"2014-04-16T04:28:57Z","content_type":null,"content_length":"28147","record_id":"<urn:uuid:650693d3-5854-4e4f-8691-b71f4799344a>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00256-ip-10-147-4-33.ec2.internal.warc.gz"}
Superintendent’s Memo #251-11 Department of Education September 9, 2011 TO: Division Superintendents FROM: Patricia I. Wright, Superintendent of Public Instruction SUBJECT: Nominations for Standard Setting Committees for the Mathematics Standards of Learning Tests Based on the Revised 2009 Standards of Learning The Office of Assessment Development is accepting nominations for standard setting committees for the Mathematics Standards of Learning (SOL) tests. Standard setting is necessary because of the implementation of new mathematics tests based on the new 2009 Mathematics SOL in 2011-2012. The committee meetings will be held at the Wyndham Virginia Crossings Hotel and Conference Center, Glen Allen, Virginia, as follows: November 1-3, 2011 □ End-of-Course (EOC) Algebra I □ EOC Algebra II □ EOC Geometry January 31-February 1-2, 2012 □ Grade 3 Mathematics □ Grade 4 Mathematics □ Grade 5 Mathematics □ Grade 6 Mathematics □ Grade 7 Mathematics □ Grade 8 Mathematics The standard setting committees are responsible for recommending cut scores that reflect students’ achievement status. These recommendations will be submitted to the Board of Education, which will then decide the cut scores for fail basic (grades 3-8 only) pass/proficient and pass/advanced status. For Algebra II, a college ready cut score will be determined in lieu of a pass/advanced cut score. Approximately 20 teachers will be selected to serve on each of the nine committees. School divisions are encouraged to nominate a representative for each committee. Committee members will be chosen based on the following criteria: • instructional training and experience in the mathematics content area; • in-depth knowledge of the Mathematics Standards of Learning; • instructional experience with students who have disabilities and/or students with limited English proficiency; and • balanced regional representation. Committee members will be provided the following: • reimbursement for meals and travel expenses in accordance with state travel policy and guidelines; • lodging; and • attendance certificate for recertification points (pending local approval). School divisions will be reimbursed for substitute teacher pay at a rate of $75 per day. All nominees who wish to serve on the standard setting committees must complete the Web-based application using the Assessment Committee Application Processing System (ACAPS). The Web-based application process requires a professional reference and division-level approval. Procedures for submitting the Web-based application are available at https://p1pe.doe.virginia.gov/acaps/. Completed applications should be submitted to the Virginia Department of Education using ACAPS by September 30, 2011. If you have questions, please contact the student assessment staff by e-mail at Student_Assessment@doe.virginia.gov or by phone at (804) 225-2102.
{"url":"http://www.doe.virginia.gov/administrators/superintendents_memos/2011/251-11.shtml","timestamp":"2014-04-21T10:10:29Z","content_type":null,"content_length":"4589","record_id":"<urn:uuid:0f2314fb-f235-4ec0-9942-242571d6ada8>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00079-ip-10-147-4-33.ec2.internal.warc.gz"}
Is the canonical morphism $\mathbb A^n \to\mathbb A^{n-1}$ a projective morphism? up vote 1 down vote favorite Let $\mathbb A^n$ be the n-dimensional affine space over a field K (algebraically closed if that makes it easier), so $\mathbb A^n= \text{Spec }K[x_1,...,x_n]$, and $\mathbb A^{n-1}$ the (n-1) -dimensional analog. The inclusion $K[x_1,...,x_{n-1}]\to K[x_1,...,x_n]$ induces a map $\mathbb A^n \to\mathbb A^{n-1}$. Is this a projective map? For it to be projective (in particular proper), the image of a closed set should be a closed set. Look at the case $n = 2$. Then the map $\mathbb{A}^2 \rightarrow \mathbb{A}^1$ with coordinates 2 $x,y$ and $x$ resp. The map on rings you wrote corresponds to the projection $(x,y) \mapsto x$. Now, $\mathbb{A}^2$ has the closed set $xy = 1$, the image of which is all of $\mathbb{A}^1$ except the origin. Since this isn't a closed set, the map isn't proper. Similarly, you can rewrite the standard counterexample for larger $n$. (Btw, maybe you want to tag this "algebraic-geometry"). – LMN Nov 17 '12 at 21:45 that was very useful, thanks a lot – kwkwkw Nov 17 '12 at 22:02 1 maybe you meant to ask if this were an affine map? (it makes less sense to ask about projective maps when the spaces are all affine.) – roy smith Nov 18 '12 at 6:48 It makes sense, it's just the same as talking about finite maps. – Will Sawin Nov 18 '12 at 7:24 add comment closed as off topic by Will Sawin, Qing Liu, Dan Petersen, Kevin Ventullo, Andy Putman Nov 18 '12 at 0:17 Questions on MathOverflow are expected to relate to research level mathematics within the scope defined by the community. Consider editing the question or leaving comments for improvement if you believe the question can be reworded to fit within the scope. Read more about reopening questions here.If this question can be reworded to fit the rules in the help center, please edit the question. Browse other questions tagged tag-removed or ask your own question.
{"url":"http://mathoverflow.net/questions/112720/is-the-canonical-morphism-mathbb-an-to-mathbb-an-1-a-projective-morphism","timestamp":"2014-04-18T00:25:51Z","content_type":null,"content_length":"44958","record_id":"<urn:uuid:971e0216-33ee-436a-99e7-43fb95d72ac2>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00526-ip-10-147-4-33.ec2.internal.warc.gz"}
cylindrical equal-area projection 03-20-2009, 01:55 PM #2 Guild Artisan Join Date Aug 2007 Port Alberta, Regina(IRL: Eugene, OR) 03-20-2009, 09:56 AM #1 Guild Novice Join Date Apr 2008 I have a bit of a problem with the cylindrical equal-area projection. I can't figure out how i can project it correctly from longitude , latitude and altitude data to x, y and altitude. I need an equal-area projection, because i want to use this for a game map in which i want all the area's to have the same size (that would be fair for the players). But I also want to keep realism, so i need a correct projection of the map. Or can i better use an other projection method to achieve this (like for example the mollerweide projection)? i hope my question is clear. Last edited by haroim; 03-20-2009 at 11:31 AM. Here are some formulas: Each parallel has a length of 2*pi*cos(theta), this is the height of the map in y. Each meridian has a length of 2*R/cos(theta), this is the width of the map in x. The distance of each parallel to the equator is R*sin(phi)/cos(theta). Where R is the radius of the generating globe of the chosen area scale, theta is the standard parallel, phi is the latitude of interest. For a point on the globe (lat/lon), where lat is the latitude of the point and lon is the longitude of the point(in degrees) and the map is centered at latlon(0,0), west longitudes are negative values as are south latitudes, x = 2*pi*cos(theta) * (lon + 180)/360 y = R*sin(lat)/cos(phi) The ratio of width/height of the map is determined from, cos(theta) = acos((ratio/pi)^0.5) Some standard projections and their standard parallels: Lambert - pi/1 ratio, theta=0 Behrmann - ~2.356:1 ratio, theta=30 Gall-Peters - pi/2:1 ratio, theta=45 Hopefully this would be helpful. edit: Info from Arthur Robinson, Randall Sale, Joel Morrison, "Elements of Cartography" 4th ed., 1978, John Wiley and Sons: New York. The wikipedia describes this well under Gall-Peters projection. Even more generally, but in less detail under Lambert cylindrical equal-area projection. Last edited by su_liam; 03-20-2009 at 02:05 PM. Reason: Added wikipedia references, 'cause the wikipedia adds an air of authority Astrographer - My blog. -How to Fit a Map to a Globe -Regina, Jewel of the Spinward Main(uvmapping to apply icosahedral projection worldmaps to 3d globes) -Building a Ridge Heightmap in PS -Faking Morphological Dilate and Contract with PS -Editing Noise Into Terrain the Burpwallow Way -Wilbur is Waldronate's. I'm just a fan. thanks for the reaction! i am now looking into it. I have looked into it and i get out the following: If i want to make a projection i will need for each (x,y) an altitude. I can find the altitude in a (lon,lat) form. So to get the altitude of (x,y) i will need to know the (lon,lat) of (x,y). x = 2*pi*cos(theta) * (lon + 180)/360 lon = (x*360)/(2*pi*cos(theta))-180 y = R*sin(lat)/cos(phi) so (x,y)=(lon,lat)=( (x*360)/(2*pi*cos(theta))-180, invsin(cos(phi)*y/R)) ) so i should be able to calculate the altitude of (x,y) by taking the altitude of ( (x*360)/(2*pi*cos(theta))-180, invsin(cos(phi)*y/R)) ). is this correct? And what is phi? Is it the standard latitude, and what number should i enter? Last edited by haroim; 03-21-2009 at 04:02 AM. Reason: invsin(cos(phi*y/R)) must be invsin(cos(phi)*y/R)) It looks correct to me... I'll have to check your algebra when I have more time and my brain is working. Yes, phi is the standard latitude. It's a constant for a given map. For Lambert cylindrical equal-area it's 0, the equator. For Behrmann, it's 30degrees. For Gall-Peters it's 45 degrees. Astrographer - My blog. -How to Fit a Map to a Globe -Regina, Jewel of the Spinward Main(uvmapping to apply icosahedral projection worldmaps to 3d globes) -Building a Ridge Heightmap in PS -Faking Morphological Dilate and Contract with PS -Editing Noise Into Terrain the Burpwallow Way -Wilbur is Waldronate's. I'm just a fan. Thanks! So the phi is the same as the theta. Should i take the width of my map as R, or should i take the width, calculate it into a sphere, and calculate the radius of the sphere? Last edited by haroim; 03-21-2009 at 04:37 AM. 03-20-2009, 04:00 PM #3 Guild Novice Join Date Apr 2008 03-20-2009, 06:53 PM #4 Guild Novice Join Date Apr 2008 03-21-2009, 12:08 AM #5 Guild Artisan Join Date Aug 2007 Port Alberta, Regina(IRL: Eugene, OR) 03-21-2009, 03:00 AM #6 Guild Novice Join Date Apr 2008
{"url":"http://www.cartographersguild.com/regional-world-mapping/4825-cylindrical-equal-area-projection.html","timestamp":"2014-04-18T19:27:02Z","content_type":null,"content_length":"74747","record_id":"<urn:uuid:91c5c3a0-ea39-4e03-bac6-b078e5140f53>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
OpenMx - Advanced Structural Equation Modeling Tue, 02/25/2014 - 02:02 unconformable arrays what you say makes sense mike, but mxAlgebra(-2 * sum(MZw.data.weight * log(MZ.objective) ), name = "mzWeigh") generates the unconformable array error. Unfortunately the error doesn't tell one what the noncorforming array's sizes are... figuring this might just be a column of weights from the data entering a row of likelihoods, i tried mxAlgebra(-2 * sum(t(MZw.data.weight) * log(MZ.objective) ), name = "mzWeigh") Also fail... It shouldn't be necessary to place the weights into a matrix, should it? OK.. tried making a matrix and placing the weight data in there. A benefit is that the errors are more informative. But same error. Would be great if the back end kicked back something like I was doing "[300,1] %*% [300,1] when…" Will look more in the morning. PS: I assume the row objective is padded with NAs for all(is.na(rows)? Tue, 02/25/2014 - 19:47 trying a simple example: vector alters estimates in model? So, building a simple play model. After encountering some no-context errors :-( This is just estimating the covariance of X and Y, which is around .5 #Simulate Data rs = .5 nSubs = 1000 selVars <- c('X','Y') nVar = length(selVars) xy <- mvrnorm (nSubs, c(0,0), matrix(c(1,rs,rs,1),2,2)) testData <- data.frame(xy) names(testData) <- selVars m1 <- mxModel("vector_is_FALSE", mxMatrix(name = "expCov", type = "Symm", nrow = nVar, ncol = nVar, free = T, values = var(testData)), mxMatrix(name = "expMean", type = "Full", nrow = 1, ncol = nVar, free = T, values = 0), mxExpectationNormal(covariance = "expCov", means = "expMean", dimnames = selVars), mxFitFunctionML(vector = F), mxData(observed = testData, type = "raw") m1 <- mxRun(m1) # summary shows all parameters recovered fine. # Now: Switch to vector m1 <- mxModel(m1, mxFitFunctionML(vector = T), name = "vector") m1 <- mxRun(m1) umxSummary(m1, showEstimates = "std" ) # what we get # name Estimate Std.Error # 1 vector.expCov[1,1] 0.4705 9.0e+07 # 2 vector.expCov[1,2] 0.6148 1.4e+08 # 3 vector.expCov[2,2] 1.0314 2.1e+08 # what it should be... # XX 0.9945328 # XY 0.4818317 # YY 1.0102951 So when vector is on, something needs to change in the model to keep it driving toward good estimates? Wed, 02/26/2014 - 00:53 Concerning ... but a solution This simple example reliably causes my R to completely crash. Rolling back a couple of revisions does not fix the problem for me. Going back to r2655 still has the same crashing problem. For r2583, I no longer crash, but I replicate your finding of rather bad estimates only when vector=TRUE. I should also note I haven't observed this problem with other models running vector=TRUE. However, those other models do not optimize the fit function with vector=TRUE; they have that fit function as a component of another fit function. This got me thinking. I think in this case OpenMx is trying to optimize the likelihood (not even the log likelihood) of the last row of data. Essentially, for the model you specified the row likelihoods do not collapse to a single number for optimization. The developers should discuss if we should catch this, if it should case an error, a warning, or what. The following code works like a charm. #Simulate Data rs = .5 nSubs = 1000 selVars <- c('X','Y') nVar = length(selVars) xy <- mvrnorm (nSubs, c(0,0), matrix(c(1,rs,rs,1),2,2)) testData <- data.frame(xy) names(testData) <- selVars m1 <- mxModel("vector_is_FALSE", mxMatrix(name = "expCov", type = "Symm", nrow = nVar, ncol = nVar, free = T, values = var(testData)), mxMatrix(name = "expMean", type = "Full", nrow = 1, ncol = nVar, free = T, values = 0, ubound=1), mxExpectationNormal(covariance = "expCov", means = "expMean", dimnames = selVars), mxFitFunctionML(vector = FALSE), mxData(observed = testData, type = "raw") m1 <- mxRun(m1) # summary shows all parameters recovered fine. # Now: Switch to vector m1 <- mxModel(m1, mxFitFunctionML(vector = T), name = "vector") mc <- mxModel("vector_is_TRUE", m1, mxAlgebra(name="thefit", -2*sum(log(vector.fitfunction))), mc <- mxRun(mc) Wed, 02/26/2014 - 01:50 to fit or not to fit, that is the parameter Very helpful mike H ! If a side effect of vector = TRUE is that the likelihoods are not optimised on, then perhaps mxFitFunctionML(vector = T) should have a fit = TRUE (default) parameter, which causes the ML values to be fit, rather than just optimising one value in the vector? mxFitFunctionML(vector = TRUE, optimiseMinus2log_all = TRUE) # note, i will find ML values based on -2 * sum(log(vector)) or perhaps a new function, followed by a fit algebra. Perhaps not... the complexity of this is already in the top 1 % of IQ :-) mxLikelihoodsML(vector = T, name = "mylike") # note, you need to fit these... Fri, 02/28/2014 - 18:17 weighted data tutorial So, I made up a tutorial-type resumé of what I think I gathered here... Does that seem correct? Also, I noticed that these two give the same result: so the mc <- mxModel("vector_is_TRUE", mxMatrix(name = "expCov", type = "Symm", nrow = nVar, ncol = nVar, free = T, labels = c("XX","XY","YY"), values = var(testData)), mxMatrix(name = "expMean", type = "Full", nrow = 1, ncol = nVar, free = T, labels = c("meanX","meanY"), values = 0, ubound=1), mxExpectationNormal(covariance = "expCov", means = "expMean", dimnames = selVars), mxFitFunctionML(vector = TRUE), mxData(observed = testData, type = "raw") mxAlgebra(name = "minus2LL", -2 * sum(log(vector.fitfunction)) ), and this version with only one model containing two fit functions. mxMatrix(name = "expCov", type = "Symm", nrow = nVar, ncol = nVar, free = T, labels = c("XX","XY","YY"), values = var(testData)), mxMatrix(name = "expMean", type = "Full", nrow = 1, ncol = nVar, free = T, labels = c("meanX","meanY"), values = 0, ubound=1), mxExpectationNormal(covariance = "expCov", means = "expMean", dimnames = selVars), mxAlgebra(name = "minus2LL", -2 * sum(log(vector.fitfunction)) ), mxFitFunctionML(vector = TRUE), mxData(observed = testData, type = "raw") Does that make any sense? Fri, 02/28/2014 - 18:53 I like this. I'm surprised that the one with two fit functions in one model actually works, but pleased it does. Your github link is incorrect (there's a trailing w). It would be nice to have an example of selective sampling, and then recovery of population parameters by weighting. Wed, 02/26/2014 - 10:30 Note example script in models/passing Although it uses a mixture distribution, the principle for sample weights is very similar (just have a single component weighted mixture, essentially). It uses the technique of pulling variables out of the data frame to use as weights. I think this circumvents the issues with dimensionality, but I agree it is a bit clunky that the likelihoods are specified on an individual-wise basis (a la definition variable) but then you get the whole vector of them back. It might be better to have a weight constructor that would compute an algebra (individual-wise) and weight the likelihoods on the way back. Tue, 02/25/2014 - 09:41 Three things to try With this example there aren't a lot of arrays available to be non-conformable. It's either -2, MZw.data.weight, or MZ.objective. First, I'd make sure the objective (fit function) for the MZ model has vector=TRUE. Second, is MZw.data.weight a single column vector? The * operator is for elementwise multiplication, and it will not recycle either argument if they are different lengths like R. Use %*% for matrix multiplication. Third, you could put MZw.data.weight into its own mxAlgebra and have a look at it that way. Wed, 02/26/2014 - 08:04 True in this case not many sources of non-conformability But, it would be SUPER HELPFUL to print out the dimensions of the matrices found not to be conformable along with the error. This is something classic Mx used to do, and it's sorely missed in OpenMx. Pretty please? Mon, 04/04/2011 - 10:25 Weighted objectives This is possible in OpenMx; Mike was arguing for a shortcut. To weight objective functions in OpenMx, you'll have to specify two models. The first model is your standard model as though there were no weights, with the "vector=TRUE" option added to the FIML objective. This option makes the first model return not a single -2LL value for the model, but individual likelihoods for each row of the data. That model is placed in a second MxModel object, which will also contain the data (if you put the data here, you don't have to put it in the first model, but you still can). That second model will contain an algebra that multiplies the likelihoods from the first model by some weight. I'll assume that you want to weight the log likelihoods by a variable in your dataset. Your second model will then look something like this, where "data.weight" is the weight from your data and "firstModel" is the name you assigned to your first model. fullModel <- mxModel("ThisIsHowYouDoWeights", mxFIMLObjective("myCov", "myMeans", dims, vector=TRUE)), mxData(myData, "raw"), mxAlgebra(-2 * sum(data.weight %x% log(firstModel.objective), name="obj"), Edit: corrected the multiplication to a Kronecker product, as its a 1 x 1 definition variable matrix and a 1 x n likelihood vector. Fri, 02/21/2014 - 00:17 Does this work to implement weighted analysis? Following this thread and a previous one [1], I've been trying to implement differential weighting of subjects in a model by building another model along side which multiplies the vector of fits in the first by the row weight in the data and optimizing that sum. I thought it was working, but I am getting very different results from a colleague who is using MPlus's weight feature. Just deleting the low-weighted rows gives results more similar to his than to wMZ = data.frame(weight = w[twinData$ZYG == "MZFF"]) # get weights for MZs wDZ = data.frame(weight = w[twinData$ZYG == "DZFF"]) # and for DZs # An MZ model with vector = T mxModel(name = "MZ", mxData(mzData, type = "raw"), mxFIMLObjective("top.mzCov", "top.expMean", vector = bVector)) # model "MZw", which simply weights the MZ model mxData(wMZ, "raw"), mxAlgebra(-2 * sum(data.weight %x% log(MZ.objective) ), name = "mzWeightedCov"), # ... # In the container model, target an algebra which sums the weighted objectives # of the MZw and DZw, which in turn target weighted functions of the individual row # likelihoods of the underlying MZ and DZ ACE models mxAlgebra(MZw.objective + DZw.objective, name = "twin"), mxAlgebraObjective("twin") What (if anything) is wrong with this approach? (or my implementation) [1] http://openmx.psyc.virginia.edu/thread/445 Mon, 02/24/2014 - 16:01 Unsure about product mxAlgebra(-2 * sum(data.weight %x% log(MZ.objective) ), name = "mzWeightedCov"), Is this really doing what we want, if MZ.objective is a vector of all the likelihoods? I'd be inclined to go for: mxAlgebra(-2 * sum(mzDataWeight * log(MZ.objective) ), name = "mzWeightedCov"), where mzDataWeight is the name of an mxMatrix explicitly loaded with the weights. Mon, 04/04/2011 - 10:42 Umm, I'm not following the Umm, I'm not following the previous explanation. The definition variable "data.weight" is a 1 x 1 matrix, and "firstModel.objective" is a n x 1 matrix when vector=TRUE. Another way to specify this is to put the weights in a n x 1 MxMatrix, and then multiply the weights by the likelihood vector. I think there's a right-parenthesis missing the closing of the model "firstModel". Mon, 04/04/2011 - 11:50 My fault for writing code My fault for writing code without checking it. It should be a Kronecker product in this specification. You're right that a 1 x n matrix of weights may be included in lieu of the definition variable approach. I fixed the parenthesis issue as well. Mon, 04/04/2011 - 17:45 thank you that was very helpful! just a quick follow up question: I know Mplus uses a sandwich estimator to correct the standard errors in addition to using the weighted likelihood function. Is there a similar option in OpenMx? Can I trust the standard errors?
{"url":"http://openmx.psyc.virginia.edu/thread/861","timestamp":"2014-04-21T04:39:33Z","content_type":null,"content_length":"96658","record_id":"<urn:uuid:775abad1-ccd8-403f-82dc-78dd6027c013>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
Singular Value Decomposition of Compact Operators: A Tool for Computing Frequency Responses of PDEs Binh K. Lieu and Mihailo R. Jovanovic, 6th January 2012 (Chebfun example pde/SVDFrequencyResponse.m) In many physical systems there is a need to examine the effects of exogenous disturbances on the variables of interest. The analysis of dynamical systems with inputs has a long history in physics, circuit theory, controls, communications, and signal processing. Recently, input-output analysis has been effectively employed to uncover the mechanisms and associated spatio-temporal flow patterns that trigger the early stages of transition to turbulence in wall-bounded shear flows of Newtonian and viscoelastic fluids. Frequency response analysis represents an effective means for quantifying the system's performance in the presence of a stimulus, and it characterizes the steady-state response of a stable system to persistent harmonic forcing. For infinite dimensional systems, the principal singular value of the frequency response operator quantifies the largest amplification from the input forcing to the desired output at each frequency. Furthermore, the associated left and right principal singular functions identify the spatial distributions of the output (that exhibits this largest amplification) and the input (that has the strongest influence on the system's dynamics), respectively. We have employed Chebfun as a tool for computing frequency responses of linear time-invariant PDEs in which an independent spatial variable belongs to a compact interval. Our method recasts the frequency response operator as a two-point boundary value problem (TPBVP) and determines the singular value decomposition of the resulting representation in Chebfun. This approach has two advantages over currently available schemes: first, it avoids numerical instabilities encountered in systems with differential operators of high order and, second, it alleviates difficulty in implementing boundary conditions. We refer the user to Lieu & Jovanovic 2011 [3] for a detailed explanation of the method. We have developed the following easy-to-use Matlab function (m-file) [1] which takes the system's coefficients and boundary condition matrices as inputs and returns the desired number of left (or right) singular pairs as the output. The coefficients and boundary conditions of the adjoint systems are automatically implemented within the code. Thus, the burden of finding the adjoint operators and corresponding boundary conditions is removed from the user. The algorithm is based on transforming the TPBVP in differential form into an equivalent integral representation. The procedure for achieving this is described in Lieu & Jovanovic 2011 [3]; also see T. Driscoll 2010 [2]. Additional examples are provided at [1]. help svdfr [Sfun,Sval] = svdfr(A0,B0,C0,Wa0,Wb0,LRfuns,Nsigs) Given a two point boundary value representation of the frequency response { A0*phi = B0*d, T: { u = C0*phi, { 0 = Wa0*phi(a) + Wb0*phi(b), solve the eigenvalue problem T*Ts*Sfun = Sval*Sfun, or Ts*T*Sfun = Sval*Sfun, where Ts is the adjoint of the frequency response operator T { A0s*psi = B0s*f, Ts: { g = C0s*psi, { 0 = Wa0s*psi(a) + Wb0s*psi(b). LRfuns = 1 --> solve for left singular functions: T*Ts --> determine spatial profile of the output LRfuns = 0 --> solve for right singular functions: Ts*T --> determine spatial profile of the input Nsigs --> number of desired singular values (default: Nsigs = 1) Sval --> singular values of T arranged in descending order Sfun --> singular functions associated with Sval written by: Binh Lieu, 2011 B.K. Lieu & M.R. Jovanovic, "Computation of frequency responses for linear time-invariant PDEs on a compact interval", Journal of Computational Physics, submitted (2011); also arXiv:1112.0579v1 Example: one-dimensional diffusion equation We demonstrate our method on a simple one-dimensional diffusion equation subject to spatially and temporally distributed forcing d(y,t), homogenous Dirichlet boundary conditions, and zero initial u_t(y,t) = u_{yy}(y,t) + d(y,t), y \in [-1,1] u(-1,t) = u(+1,t) = 0, u(y,0) = 0. Application of the temporal Fourier transform yields the following two point boundary value representation of the frequency response operator T, ( D^{(2)} - i*w )*u(y) = -d(y) ( [1; 0] E_{-1} + [0; 1] E_{1} )*u(y) = [0; 0] w -- temporal frequency D^{(2)} -- second-order differential operator in y i -- imaginary unit E_{j} -- point evaluation functional at the boundary y = j. We note that svdfr.m only requires the system's coefficients and boundary conditions matrices to compute the singular value decomposition of the frequency response operator T. For completeness, we will next show how to obtain the two point boundary value representations of the adjoint operator Ts and the composition operator T*Ts. The two-point boundary value representation for the adjoint of the frequency response operator, Ts, is given by ( D^{(2)} + i*w )*v(y) = f(y) g(y) = -v(y) ( [1; 0] E_{-1} + [0; 1] E_{1} )*v(y) = [0; 0]. The representation of the operator T*Ts is obtained by combining the two point boundary value representations of T and Ts. This can be achieved by setting d = g ( D^{(2)} - i*w )*u(y) - v(y) = 0 ( D^{(2)} + i*w )*v(y) = f(y) ( [1; 0] E_{-1} + [0; 1] E_{1} )*u(y) = [0; 0] ( [1; 0] E_{-1} + [0; 1] E_{1} )*v(y) = [0; 0]. Note that svdfr.m utilizes the integral form of a two point boundary value representation of the operator T*Ts. This yields accurate results even for systems with high order differential operators and poorly-scaled coefficients. % The system parameters: w = 0; % set temporal frequency to the value of interest dom = domain(-1,1); % domain of your function fone = chebfun(1,dom); % one function fzero = chebfun(0,dom); % zero function y = chebfun('x',dom); % linear function The system operators can be constructed as follows: A0{1} = [-1i*w*fone, fzero, fone]; B0{1} = -fone; C0{1} = fone; The boundary condition matrices are given by: Wa0{1} = [1, 0]; % 1*u(-1) + 0*D^{(1)}*u(-1) = 0 Wb0{1} = [1, 0]; % 1*u(+1) + 0*D^{(1)}*u(+1) = 0 The singular values and the associated singular functions of the frequency response operator can be computed using the following code Nsigs = 25; % find the largest 25 singular values LRfuns = 1; % and associated left singular functions [Sfun, Sval] = svdfr(A0,B0,C0,Wa0,Wb0,LRfuns,Nsigs); Analytical expressions for the singular values and corresponding singular functions are given by: Sa = (4./(([1:Nsigs].*pi).^2)).'; % analytical singular values Sf1 = sin((1/sqrt(Sa(1))).*(y+1)); % analytical soln of 1st singular function Sf2 = sin((1/sqrt(Sa(2))).*(y+1)); % analytical soln of 2nd singular function The absolute error of the first 25 singular values norm(Sval - Sa) ans = The 25 largest singular values of the frequency response operator: svdfr versus analytical results. hold on xlab = xlabel('n', 'interpreter', 'tex'); set(xlab, 'FontName', 'cmmi10', 'FontSize', 20); h = get(gcf,'CurrentAxes'); set(h, 'FontName', 'cmr10', 'FontSize', 20, 'xscale', 'lin', 'yscale', 'lin'); The principal singular function of the frequency response operator: svdfr versus analytical results. hold off; plot(y,-Sfun(:,1),'bx-','LineWidth',1.25,'MarkerSize',10) hold on; plot(y,Sf1,'ro-','LineWidth',1.25,'MarkerSize',10); xlab = xlabel('y', 'interpreter', 'tex'); set(xlab, 'FontName', 'cmmi10', 'FontSize', 20); h = get(gcf,'CurrentAxes'); set(h, 'FontName', 'cmr10', 'FontSize', 20, 'xscale', 'lin', 'yscale', 'lin'); axis tight The singular function of the frequency response operator corresponding to the second largest singular value: svdfr versus analytical results. hold off; plot(y,Sfun(:,2),'bx-','LineWidth',1.25,'MarkerSize',10) hold on; plot(y,Sf2,'ro-','LineWidth',1.25,'MarkerSize',10); xlab = xlabel('y', 'interpreter', 'tex'); set(xlab, 'FontName', 'cmmi10', 'FontSize', 20); h = get(gcf,'CurrentAxes'); set(h, 'FontName', 'cmr10', 'FontSize', 20, 'xscale', 'lin', 'yscale', 'lin'); axis tight [1] http://www.ece.umn.edu/users/mihailo/software/chebfun-svd/ [2] T. A. Driscoll, Automatic spectral collocation for integral, integro-differential, and integrally reformulated differential equations, J. Comput. Phys. 229 (2010), 5980-5998. If you are using this software please cite: [3] B. K. Lieu and M. R. Jovanovic, "Computation of frequency responses for linear time-invariant PDEs on a compact interval", Journal of Computational Physics (2011), submitted; available at: [4] L. N. Trefethen and others, Chebfun Version 4.0, The Chebfun Development Team, 2011, http://www.maths.ox.ac.uk/chebfun/
{"url":"http://www.mathworks.com/matlabcentral/fileexchange/23972-chebfun/content/chebfun/examples/pde/html/SVDFrequencyResponse.html","timestamp":"2014-04-18T08:18:27Z","content_type":null,"content_length":"127879","record_id":"<urn:uuid:cfa619d3-6a41-4143-b915-a47acfda4b5b>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
ACCA F2 Cost classification part b View all ACCA F2 / FIA FMA lectures >> This ACCA F2 / FIA FMA lecture is based on OpenTuition course notes, view or download here>> 1. Good afternoon! I have a question regarding the variable and fixed costs per unit and I was hoping you could clear it out for me. I looked over a table in my text book, where it says that the variable cost per unit is constant when the volume of production increases, and the fixed cost per unit decreases while production increases. I am really confused about this. Shouldn’t the fixed costs be constant regardless of output and variable costs vary accordingly to the volume of production? Thank you! □ I think maybe you are reading it too fast The book is correct (and says the same as my lecture). Variable costs are fixed per unit – it is the total variable cost that changes with the number of units. Fixed costs are fixed in total, but the fixed cost per unit will fall with more units being produced. (If the fixed cost is $10000 and you produce 10000 units, then each unit costs $1. If you produce 20000 units, then the total cost stays at $10000 but the cost per unit is then $0.50 ) ☆ You are right-I didn’t think it through properly. I am clear now :D. Thank you again very much! 2. Hello everyone! Can someone please tell me what does: ” overtime is paid at a rate of time and a quarter” mean? For example: workers are paid $10 per hour and this week they have worked a total of 20 hours overtime: 12 hours on specific orders and 8 hours on general overtime. How do I compute the specific overtime, keeping in mind that ” overtime is paid at a rate of time and a quarter”? Thank you! □ It means that for every hour of overtime they are paid for 1.25 hours. So if they work 12 hours overtime they will be paid for 12 x 1.25 = 15 hours at $10 = $ 150 (Alternatively you can get the same answer by paying the overtime hours at 1.25 x the normal rate. So 1.25 x $10 = 12 hours at $12.50 an hour = $150 ) ☆ Thank you so much!!! It’s clear now:) 3. These lectures are so helpful! Thanks so much, they’re very clear and easy to understand! 4. most of the topics in f2 are missing in these lectures.plzzzzzzzzzzz included that topics □ Rubbish! Most of the topics in F2 are included in the lectures – and all the more important topics are certainly covered. Those that are not in the lectures are covered in the Course Notes for you to read yourself. There will not be more lectures. ☆ For this examination we will assume that total variable costs vary linearly with the level of production (or that the variable cost per unit remains constant). In practice this may not be the case, but we will not consider the effect of this until later examinations.I NEED HELP :MEANING.:) 6. an organization has the following activity total cost activity level variable cost is constant within the activity range but there is a step up increase 4000 in total fixed cost when an activty level of 22500 is reached .. what are total fixed cost at an activty level of 24000 units A 40000 b 44000 c 60000 d 64000 answer please . □ The Answer is D. activity level total cost High 24,000 136,000 low 20,000 120,000 difference 4,000 16,000 Extra fixed cost (4,000) Total variable cost 12,000 Variable cost per unit 12,000/4000 = 3 Fixed cost at level 24,000 (which is the highest level) = Total cost at that level – variable cost = 136,000 – (24,000 x 3) = 136,000 – 72,000 = 64,000 7. Question 5 says “Up to a given level o? activity in each period the purchase price per unit o? a raw material is constant”i cant see that constant part on that graph? □ The graphs are showing the total cost. If the price per unit is constant, then the total cost will increase linearly with more units! 8. the lecture is great. am enjoying it. 9. i now have a better understanding of cost behavior. Thank you Opentuition 10. Guys what the best book publusher for ACCA, have anyone tried Get Through Guide? 11. Can anyone explain Q7 from chapter 4 please? Did not understand the first bit of question – guarantees 80% of time based pay….? □ guaranteed 80% of time based pay means that although they are paid on the basis of what they produce, the minimum they can be paid is 80% of what it would be if they were simply paid per So…the minimum they can be paid is 80% x 8 hours x $20 = $128 You have to calculate what they would be paid using piecework. If it comes to more than $128 then they get paid the piecework amount. If it comes to less than $128 they they get the guaranteed $128. The full answer is at the back of the course notes. 12. Two budgets are given below: Output(units) 1,000 2,000 Budgeted cost 2,500 3.500 The total fixed costs estimated for the 2,000 units budget are 20% higher than the total fixed costs for the 1.000 units budget. What is the budgeted variable cost/unit of output? plzz someone help me with this question..!!!!! □ The answer is $0.625. Use simple algebra. ☆ I am getting $0.5…Can you please explain how it is $0.625? This is how I worked: Units Budgeted Cost 2000 3000 (as total fixed cost for 2000 units is 20% higher than that of 1000 units – hence 3500 – (20% of 2500) ) Using High low method then it comes $0.5 ☆ Your workings are treating the whole of the $2500 as though it was a fixed cost. If v is the variable cost per unit, and if f is the fixed cost per unit, then: for 1000 units: 1000v + f = 2500 for 2000 units: 2000v + 1.2f = 3500 There are several ways of solving these equations (all obviously giving the same answer!), but here is one way: Multiply the first equation throughout by 1.2 This gives: 1200v + 1.2F = 3000 The subtract this equation for the equation for 2000 units: (2000v – 1200v) + 0 = 3500 – 3000 so 800v = 500 v = 500/800 = 0.625 13. Can some one help me with question 1 please I understand the high low method but I don’t understand the trick thank u 14. Can some one help me question one I know how to do high low method but I didn’t understand the trick AND I HOPE THIS QUESTION WONT COME TO THE EXAM. Thank u You must be logged in to post a comment.
{"url":"http://opentuition.com/acca/f2/acca-f2-cost-classification-part-b/","timestamp":"2014-04-20T05:56:37Z","content_type":null,"content_length":"72800","record_id":"<urn:uuid:ff42f246-6bfc-4035-bb4c-373da102c9c4>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
Round Lake, IL Geometry Tutor Find a Round Lake, IL Geometry Tutor ...Of the 100 or so students I taught during that time, my students gained an average of 225 points. Some students achieved even greater gains than the average, of 300-400 points, driven by their willingness to complete homework and to engage with me and ask for help. I'd be happy to help you achieve this level of success in your SAT/PSAT test preparation. 25 Subjects: including geometry, chemistry, English, accounting ...Everything around us has some explanation involving these subjects! I would like to share my enthusiasm with others. My background includes physical science (two degrees in Engineering) as well as biological sciences (MD). I have teaching experience at the university level as a teaching assistant for biology as well as anatomy and physiology. 2 Subjects: including geometry, algebra 1 ...I believe solving more problems allows students to learn how to approach different questions. My main goal in teaching is to strengthen basic knowledge of each concepts and strengthen it by adding more laws and formula to their knowledge. I wish I can be helpful to the students to have a liking towards mathematics. 6 Subjects: including geometry, algebra 1, algebra 2, precalculus ...I help students by making sure that they get questions that they have "earned" through years of study in high school. I also can teach them things they don't know with just enough detail to allow them to apply the material on the exam, and I can re-teach them material they may have forgotten. I... 24 Subjects: including geometry, calculus, physics, GRE I have a teaching certificate in kindergarten through 9th grade and I have a middle school endorsement in English. I also have elementary school age children so I am very familiar with the homework elementary school children bring home. I especially enjoy helping my children with their math homework. 18 Subjects: including geometry, English, reading, algebra 1 Related Round Lake, IL Tutors Round Lake, IL Accounting Tutors Round Lake, IL ACT Tutors Round Lake, IL Algebra Tutors Round Lake, IL Algebra 2 Tutors Round Lake, IL Calculus Tutors Round Lake, IL Geometry Tutors Round Lake, IL Math Tutors Round Lake, IL Prealgebra Tutors Round Lake, IL Precalculus Tutors Round Lake, IL SAT Tutors Round Lake, IL SAT Math Tutors Round Lake, IL Science Tutors Round Lake, IL Statistics Tutors Round Lake, IL Trigonometry Tutors Nearby Cities With geometry Tutor Fox Lake, IL geometry Tutors Grayslake geometry Tutors Hainesville, IL geometry Tutors Ingleside, IL geometry Tutors Johnsburg, IL geometry Tutors Lake Villa geometry Tutors Lindenhurst, IL geometry Tutors Mundelein geometry Tutors Old Mill Creek, IL geometry Tutors Old Mill Crk, IL geometry Tutors Round Lake Beach, IL geometry Tutors Round Lake Heights, IL geometry Tutors Round Lake Park, IL geometry Tutors Volo, IL geometry Tutors Wadsworth, IL geometry Tutors
{"url":"http://www.purplemath.com/Round_Lake_IL_Geometry_tutors.php","timestamp":"2014-04-20T06:55:36Z","content_type":null,"content_length":"24356","record_id":"<urn:uuid:cf3bd128-eaaa-4b63-b9c3-fa039f1055c9>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
Permittivity and Permeability of Free Space Permittivity of a vacuum is a number arrived at beginning with a value for the speed of light in the vacuum and the permeability of the vacuum. NIST uses the term "electric constant" for what is commonly known as the permittivity of free space: Here's their official value: http://physics.nist.gov/cgi-bin/cuu/Value?eqep0 http://physics.nist.gov/cuu/Constant.../gif/eqep0.gif The permeability of a vacuum is called by NIST "the magnetic constant": and has a value of 4pi x 10^-7 N A^-2, which is clearly an experimentally established value. The speed of light in a vacuum, on the other hand, experimentally determined (at least originally). This constant is called "the speed of light in vacuum by NIST": As I understand the concept of permittivity, it represents something similar to a modulus of elasticity. It is a measure of how polarized a medium becomes when subjected to an electric field. Though most discussions I've encountered on the topic tend to give permittivity and permeability of free space very little discussion, I have always been inclined to believe these concepts are of essential relevance in correctly understanding Nature at a fundamental level. Does anybody else agree or disagree with that suggestion? What do these concepts mean to you? I am aware that the equations of EM can be written in units where these constants vanish. I guess if time were measured in meters, and permeability were set to 1, these constants could be dropped from the expressions. Nonetheless, there seem to be concepts which underpin Maxwell's equations that are implicitly, if not explicitly assumed. Does free space become polarized in the presence of an electric field? Put differently, one might ask if an electric field the polarization of free space.
{"url":"http://www.physicsforums.com/showthread.php?t=122121","timestamp":"2014-04-19T04:33:03Z","content_type":null,"content_length":"54247","record_id":"<urn:uuid:9694bb39-0b45-4a50-9a8e-109500779fc8>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00115-ip-10-147-4-33.ec2.internal.warc.gz"}
MEP question: does MEP year 6 prepare for AoPS? - K-8 Curriculum Board MEP question: does MEP year 6 prepare for AoPS? Started by serendipitous journey , Mar 06 2012 11:52 PM aops mep pre-algebra 15 replies to this topic Posted 06 March 2012 - 11:52 PM ... just wondering if we'd need a transition btw. MEP Year6 and AoPS ... Posted 06 March 2012 - 11:59 PM ... just wondering if we'd need a transition btw. MEP Year6 and AoPS ... Which AoPS? AoPS Introduction to Algebra would be a stretch for most 11 or 12yos, but pre-algebra, yes. Posted 07 March 2012 - 01:26 AM Which AoPS? AoPS Introduction to Algebra would be a stretch for most 11 or 12yos, but pre-algebra, yes. Thanks ... did you go from MEP to AoPS? I'd be happy with pre-algebra; but I think we'll get there when Button is 8 or 9. Good grief, I realize I'm going 'round in circles about this; am just not sure what to do with Button. He's in MEP Year 4 now, and is 6 1/2. Now that you mention the 11/12 yos, I remember that I was concerned about starting AoPS b/c of the reading/writing component, but I think he'll be ready for algebra in the next couple of years. At any rate: that's just what I needed to know! and anything you have to add is very much appreciated. Posted 07 March 2012 - 01:59 AM Thanks ... did you go from MEP to AoPS? No, I didn't know about MEP when DD the Elder went through K6 math and AoPS Pre-Algebra came out too later for her. DD the Younger is almost through Y2, and I've looked ahead, comparing content to Singapore. MEP Y6 will put the student in at least as good a position as Singapore (I've only seen the US edition, not Standards, so take that with a grain of salt). I'd be happy with pre-algebra; but I think we'll get there when Button is 8 or 9. Good grief, I realize I'm going 'round in circles about this; am just not sure what to do with Button. He's in MEP Year 4 now, and is 6 1/2. Now that you mention the 11/12 yos, I remember that I was concerned about starting AoPS b/c of the reading/writing component, but I think he'll be ready for algebra in the next couple of years. It's not just the readiness. AoPS Introduction to Algebra is *hard* and can't be done in 30 minutes a day. The Art of Problem Solving Vol. 1 while working through Life of Fred: Beginning Algebra . I'll then have her do AoPS Number Theory Probability & Counting before AoPs Introduction to Algebra , and proceeding from there. (pasted from a previous post) Things we've used in our great algebra put-off: LoF: Fractions Decimals & Percents Pre-Algebra 1 (and soon 2) Venn Perplexors A-D (we use them without the pre-drawn charts) Can You Count in Greek? (highly enjoyable) units, including codes and ciphersIt's Alive, It's Alive and Kicking (found a couple errors in solutions, including one major error) Logic Countdown, Logic Liftoff Orbiting with LogicThe CryptoclubBecoming a Problem Solving Genius Challenge MathBrain Maths (puzzles, from SingaporeMath.com, we found a few errors in the solutions for Book 1, many for Book 2) Mathematics 6 (Russian Math, selected sections and all starred problems; this text is a thing of beauty) CWP 5 and 6 Alien Math (working four operations in different number bases) Piece of Pi Patty Paper Geometry (I highly recommend this) Cryptoclub was a hoot. It takes awhile to get into the heavier math, but it was worth it are some more short, free codes & ciphers units from CIMT (the MEP folks). Posted 07 March 2012 - 02:36 AM I'm having her continue with The Art of Problem Solving Vol. 1 while working through Life of Fred: Beginning Algebra. I'll then have her do AoPS Number Theory and Probability & Counting before AoPs Introduction to Algebra, and proceeding from there. We're doing AoPS Vol.1 now. I was going to get ItA, because that was what I had previously planned, but forget that she finished Dolciani in 7th grade, then LOF Algebra. I'm wondering if AoPS Vol. 1 and 2 (along with all the LOFs and a Geometry course by a guy with a vich at the end of his name Posted 07 March 2012 - 02:43 AM We're doing AoPS Vol.1 now. I was going to get ItA, because that was what I had previously planned, but forget that she finished Dolciani in 7th grade, then LOF Algebra. I'm wondering if AoPS Vol. 1 and 2 (along with all the LOFs and a Geometry course by a guy with a vich at the end of his name Posted 07 March 2012 - 03:05 AM Posted 07 March 2012 - 07:08 AM Things we've used in our great algebra put-off: LoF: Fractions, Decimals & Percents and Pre-Algebra 1 (and soon 2) Venn Perplexors A-D (we use them without the pre-drawn charts) Can You Count in Greek? (highly enjoyable) Selected MEP units, including codes and ciphers It's Alive, and It's Alive and Kicking (found a couple errors in solutions, including one major error) Logic Countdown, Logic Liftoff, Orbiting with Logic The Cryptoclub Becoming a Problem Solving Genius Challenge Math Brain Maths (puzzles, from SingaporeMath.com, we found a few errors in the solutions for Book 1, many for Book 2) Mathematics 6 (Russian Math, selected sections and all starred problems; this text is a thing of beauty) CWP 5 and 6 Alien Math (working four operations in different number bases) Piece of Pi (meh) Patty Paper Geometry (I highly recommend this) Thanks for posting this! Posted 07 March 2012 - 07:55 AM THank you for posting that list! We've done some of them, but not all. I am thinking of spending another year on pre-pre-algebra stuff (MM6, CWP5-6) before heading into pre-algebra and these are great resources. Posted 07 March 2012 - 08:03 AM THank you for posting that list! We've done some of them, but not all. I am thinking of spending another year on pre-pre-algebra stuff (MM6, CWP5-6) before heading into pre-algebra and these are great resources. Yeah, I think we're going to take a math sabbatical next year. My daughter wants to try Math on the Menu (it's in the math section of TWTM). Posted 07 March 2012 - 10:13 AM Posted 07 March 2012 - 12:54 PM Thanks so much, Moira. I think the crypto stuff will be esp. engaging; the list is terrific to have. Posted 07 March 2012 - 07:02 PM Well, as I'm feeling optimistic, I'm not going to concern myself with it until later. I am hoping that between the Instructor's Manual and the text, I will be prepared. Posted 07 March 2012 - 07:25 PM Well, as I'm feeling optimistic, I'm not going to concern myself with it until later. I am hoping that between the Instructor's Manual and the text, I will be prepared. ... very dangerous. You go first. (and share with me Posted 07 March 2012 - 10:23 PM Edited by Honoria Glossop, 20 November 2012 - 10:42 PM. Posted 08 March 2012 - 07:06 PM Dear Moira and Shawna, (Apologies in advance for the derail, Ana!) I was liking the looks of the Solomonovich book, too, but am scared off by having no solutions....then I found this, by James Tanton: There's a second volume of geometry, too--the two volumes amount to nearly a thousand pages, and the author has a PhD in math from Princeton, so I'm pretty hopeful! I wish there were a lengthier sample, but it has definite possibilities, I think. Anyway, thought I'd toss the link out there for other soon-to-be geometers.... ETA: Meant to mention that Tanton has a lot of geometry (among other topics) videos on his youtube channel, so it's possible to get a bit more of a feel for his approach there than is possible from the brief samples at Lulu. I sent the videos to my oldest so she could see if she liked him (she's quite loyal to Salman Khan, so she's glaring at me I did google him looking for more samples of the geometry, but got sidetracked by his other work. I'm thinking about getting a few of the ebooks for my youngest because I really liked those. Thanks for mentioning him. Also tagged with one or more of these keywords: aops, mep, pre-algebra Education → K-8 Curriculum Board → Logic Stage & Middle Grade Challenges → • 8 replies • avilma Started by ATFm , 22 Feb 2014 farmer fred, aops • 330 views • 24 Feb 2014 Education → K-8 Curriculum Board → • 15 replies • musicianmom Started by Grantmom , 23 Jan 2014 math, aops • 533 views • 24 Jan 2014 Education → K-8 Curriculum Board → Logic Stage & Middle Grade Challenges → • 2 replies • daijobu Started by daijobu , 21 Dec 2013 mathcounts, aops • 347 views • 24 Dec 2013 Education → K-8 Curriculum Board → • 17 replies • kolamum Started by Entropymama , 06 Dec 2013 math, life of fred, mep • 593 views • 09 Dec 2013 Education → Accelerated Learner Board → • 18 replies • EndOfOrdinary Started by EndOfOrdinary , 25 Nov 2013 aops, math curriculum • 1260 views • 13 Dec 2013
{"url":"http://forums.welltrainedmind.com/topic/355037-mep-question-does-mep-year-6-prepare-for-aops/","timestamp":"2014-04-17T09:35:14Z","content_type":null,"content_length":"109194","record_id":"<urn:uuid:5b3e8e72-0059-4feb-8e7c-a44d7ef25df5>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00484-ip-10-147-4-33.ec2.internal.warc.gz"}
Explain a TTL NAND gate and its operation, Computer Engineering Give the circuit of a TTL NAND gate and explain its operation in brief. Operation of TTL NAND Gate: Fig.(d) Demonstrates a TTL NAND gate with a totem pole output. The totem pole output implies that transistor T[4] sits atop T[3] in order to give low output impedance. The low output impedance means a short time constant RC therefore the output can change rapidly from one state to the other. T[1] is a multiple type emitter transistor. Such transistor can be thought of like a combination of various transistors along with a common collector and base. Multiple emitter transistors along with about 60 emitters have been developed. In this figure, T[1] has 3 emitters thus there can be three inputs A, B, C. The transistor T[2] functions as a phase splitter since the emitter voltage is out of phase along with the collector voltage. The transistors T [3] and T[4] by the totem pole output, the capacitance CL shows the stray capacitance and so on. The diode D is added to make sure that T[4] is cut off while output is low. The voltage drop of diode D remains the base-emitter junction of T[4] reverse biased therefore only T[3] conducts while output is low. The operation can be described briefly by three conditions as specified below: Condition 1: At least one input is low (that is, 0). Transistor T[1] saturates. Thus, the base voltage of T[2] is almost zero. T[2] is cut off and forces T[3] to cut off. T[4] functions as an emitter follower and couples a high voltage to load. Output is high (that is Y=1). Condition 2: Each input is high. The emitter base junctions of T[1] are reverse biased. The collector base junction of T[1] is forward biased. Therefore, T[1] is in reverse active mode. The collector current of T[1] flows in reverse direction. Because this current is flowing in the base of T[2], the transistors T[2] and T[3] saturate and then output Y is low. Condition 3: The circuit is operating under II while one of the inputs becomes low. The consequent emitter base junction of T[1] starts conducting and T[1] base voltage drops to a low value. Thus, T [1] is in forward active mode. The high collector current of T[1] shifts the stored charge in T[2] and T[3] and hence, T[2] and T[3] go to cut-off and T[1] saturates and then output Y returns to Fig.(d) Logic Diagram of TTL NAND Gate with Totem Pole Output Posted Date: 5/4/2013 12:46:03 AM | Location : United States Your posts are moderated Write shorts notes on Remote Procedure Call (RPC) The facility whihc was created to help the programmers write client server software is termed as Remote Procedure Call. Within What is the protocol used by SAP Gateway process? The SAP Gateway method communicates with the clients based on the TCP/IP Protocol. What is a customer-to-business transaction? C2B (customer-to-business): The most significant activity into e-commerce isn’t selling. That is buying. Rather often which do General principles of pruning: The general principles are such that: 1. Given a node N that can be chosen by player one, thus if there is another node, X, along any path, Network address prefixed by 1110 is a? Network address prefixed through 1110 is a multicast address. Explain about the term false path? How it find out in circuit? What the effect of false path in circuit? By timing all the paths into the circuit the timing analyzer can find o Q. Disk operating system? The operating system (OS) is the first program that should be loaded into the memory of your PC before you can use it for any application. You can st Q. Illustrate working of T Flip-Flop? T (Toggle) flip-flop is attained from JK flip-flop by combining inputs J and K together. The implementation of T flip-flop is displayed in Sample Program In this program we only display: Line Offset Numbers -----------------Source Code----------------- The usability and user experience goalsprovidet heinteraction designer with high-level goals for the interactivep roduct. Thenextissueis how to design a product that satisfies thes
{"url":"http://www.expertsmind.com/questions/explain-a-ttl-nand-gate-and-its-operation-30161619.aspx","timestamp":"2014-04-20T21:23:25Z","content_type":null,"content_length":"32256","record_id":"<urn:uuid:65a78752-459e-4a0f-9d1e-2a2cbfb8f91d>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
Santa Clara, CA Prealgebra Tutor Find a Santa Clara, CA Prealgebra Tutor ...The rules of English Grammar can be confusing, but having a good foundation is vital to communicating well, whether in speaking or writing. I do a fair amount of writing and I still keep my copy of Gregg Reference Manual handy. I regard it as 'my definitive reference for the rules of grammar.' Let me help you navigate through these rules so you can become a better communicator! 27 Subjects: including prealgebra, reading, English, French ...I'm currently in school to gain my credentialing in teaching in order to teach Mathematics for grades 6-12, and have been tutoring for over 10 years. I do have a passion for Math because I have my Bachelors of Science in Mathematics and Masters of Science in Actuarial Science. I look forward to... 9 Subjects: including prealgebra, geometry, algebra 1, algebra 2 ...I presently have two piano students whom I teach privately. I'm an experienced substitute teacher in the Cupertino and Campbell School districts, where I teach K-8. I can teach nearly all 37 Subjects: including prealgebra, reading, English, physics I am a Mathematics and Statistics graduate from UC Berkeley. I have more than 5 years experience in private tutoring. I started my tutoring from De Anza College, where I tutored accounting in their tutorial center. 13 Subjects: including prealgebra, statistics, geometry, accounting I have a strong background in math and computers, including a MS in Computer Science and a MST in Math. I spent 20+ years working as a Software Engineer, and about 12 years teaching. Most of the teaching was secondary math up through second year at the university. 11 Subjects: including prealgebra, calculus, precalculus, trigonometry Related Santa Clara, CA Tutors Santa Clara, CA Accounting Tutors Santa Clara, CA ACT Tutors Santa Clara, CA Algebra Tutors Santa Clara, CA Algebra 2 Tutors Santa Clara, CA Calculus Tutors Santa Clara, CA Geometry Tutors Santa Clara, CA Math Tutors Santa Clara, CA Prealgebra Tutors Santa Clara, CA Precalculus Tutors Santa Clara, CA SAT Tutors Santa Clara, CA SAT Math Tutors Santa Clara, CA Science Tutors Santa Clara, CA Statistics Tutors Santa Clara, CA Trigonometry Tutors
{"url":"http://www.purplemath.com/Santa_Clara_CA_Prealgebra_tutors.php","timestamp":"2014-04-21T11:04:06Z","content_type":null,"content_length":"24204","record_id":"<urn:uuid:06aeb4af-bd6a-456d-8b8b-ceb910c03241>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
check for prime numbers using only IF-EL Unless the user input is limited to a very small range of values, using only if-else is not really feasible. The amount of coding required could be huge (unless you used some sort of macro to construct the code). #include <iostream> using namespace std; int main () int n; cout << "Enter an integer !"<<endl; if (n%2==0) cout << n<< " is not prime number" << endl; else if (n%2==1) cout<<n<<" is a prime number"<< endl; return 0; I think by using this will pretty much tell the user the number is prime or not i already tried it , it giving me that '9' is prime number its not going to work. you need a while loop and xsxs's will only test for evens and odds Well, if it can be done using if-else + a recursive function, it could also be done using if-else + goto or if-else + while or if-else + something. I actually think you can do it using template recursion too, the number of values you'd accept would be limited by how much recursion you did and compile time would increase as well, but it should be Topic archived. No new replies allowed.
{"url":"http://www.cplusplus.com/forum/beginner/98616/","timestamp":"2014-04-20T16:13:12Z","content_type":null,"content_length":"15451","record_id":"<urn:uuid:97b70385-28ef-44ed-b1cd-0ad071e0099c>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
Class 9 - Maths - CH1 - Number Systems - Ex 1.2 NCERT Chapter Solutions Is Pi rational or irrational? Q1: State whether the following statements are true or false. Justify your answers. (i) Every irrational number is a real number. ✓(TRUE) Explanation: Real numbers are the collection of rational and irrational numbers. (ii) Every point on the number line is of the form √m, where m is a natural number. ✗ (FALSE) Explanation: Since -ve numbers cannot be expressed as square roots. (iii) Every real number is an irrational number. ✗ (FALSE) Explanation: Real numbers contains both rational and irrational numbers. Q2. Are the square roots of all positive integers irrational? If not, give an example of the square root of a number that is a rational number. Answer: No, the square roots of all positive integers can be rational or irrational. e.g. √9 = 3 which is a rational number. Q3: Show how √5 can be represented on the number line. We can write √5 in Pythagoras form: = √(4 + 1) = √(22 + 1) 1. Take a line segment AB = 2 units (consider 1 unit = 2 cm) on x-axis. 2. Draw a perpendicular on B and construct a line BC = 1 unit length. 3. Join AC which will be √5 (Pythagoras theorem). Take A as center, and AC as radius draw an which cuts the x-axis at point E. 4. The line segment AC represents √5 units. Q4. Construct the ‘square root spiral’ (Class room activity.) Following video shows the square root spiral. Q5: Who discovered √2 or disclosed its secret? Answer: Hippacus of Croton. It is assumed Pythagoreans, followers of the famous Greek mathematician Pythagoras, were the first to discover the numbers which cannot be written in the form of a fraction. These numbers are called irrational numbers. Q6: Who showed that showed that "Corresponding to every real number, there is a point on the real number line, and corresponding to every point on the number line, there exists a unique real number"? Answer: Two German mathematicians, Deddkind and Cantor. Q6: Is π (pi) rational or irrational? Answer: π is an irrational number. 22/7 is just an approximation. Q7: How many rational numbers exist between any two rational numbers? Answer: Infinite Q8: Are irrational numbers finite? Answer: No. There are infinite set of irrational numbers. Q9: The product of any two irrational numbers is : (A) always an irrational number. (B) always a rational number. (C) always an integer. (D) sometimes rational, sometimes irrational number. Answer: (D) sometimes rational, sometimes irrational number e.g. √2 and √3 are two irrational numbers, √2×√3 = √6 an irrational number. √2 and √8 are two irrational numbers, √2×√8 = √(16) = 4 a rational number. ⟵Go to CH1-Number Systems - Ex 1.1 Goto CH1 - Number Systems Ex 1.3→ 2 comments: 1. great,,, i want to share about irrational number 2. i want the solution for this question of rationalisation sq root of (3+2*sq root of2)
{"url":"http://cbse-notes.blogspot.com/2012/06/class-9-maths-ch1-number-systems-ex-12.html","timestamp":"2014-04-19T12:00:54Z","content_type":null,"content_length":"123750","record_id":"<urn:uuid:fa008b9e-326b-493b-817c-d237aa283035>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
RE: st: using the first n observations in a dataset w/o evaluating the [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] RE: st: using the first n observations in a dataset w/o evaluating the whole thing? From "Rodini, Mark" <mrodini@compasslexecon.com> To <statalist@hsphsun2.harvard.edu> Subject RE: st: using the first n observations in a dataset w/o evaluating the whole thing? Date Thu, 3 Apr 2008 17:28:04 -0700 Thanks for the replies, and the long explanation. (Yes I did mean little _n: typo!) Anyway, I tried the suggestion: use in 1/99 using mydata and I did indeed find it took time. In fact, I tried to apply the idea to a 20MB dataset after having only set the memory to 10MB, and it completely froze up. It only worked on the 20MB dataset if I set memory to >20MB, and it was slow --as though it were reading the whole thing Oh well. -----Original Message----- From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of David Kantor Sent: Thursday, April 03, 2008 5:20 PM To: statalist@hsphsun2.harvard.edu Subject: Re: st: using the first n observations in a dataset w/o evaluating the whole thing? At 07:31 PM 4/3/2008, Mark Rodini wrote: >Suppose I have a large Stata dataset (e.g. 3,000,000 observations) and >only with to read in the first, say, 100 observations. >I have tried the code, which works: >use mydata if ( _N<100 ) >However, evidently, this code goes through ALL 3 million observations >evaluate the expression in parentheses, which can be very time >(and sort of defeats the purpose). Is there a way to only read the >first 100 observations without having to evaluate the entire dataset? >Perhaps some application of the "set obs 100"? But I have not been >Thank you. First, that is not officially valid syntax, though it is accepted. I find that it gets you 0 observations, though it does read through the whole file. You probably mean _n (little n), rather than _N. (I suppose _N is . during the loading process, so _N <100 is false). Officially correct syntax is, use if _n <100 using mydata or, better yet, use in 1/99 using mydata (This latter syntax is much more efficient.) But in any case, my experience has been that it always reads through the whole file. And you can tell it's dong that if you have 3000000 observations. The reason is that, in the file structure, there are some important elements that come after the data (values labels, I believe, for example), so there is a reason to have to read the whole file. At least that's how it's been as far as I know; I don't know if they've changed the file structure in that regard in version 10. I may have written to Stata Corp. about this some time in the past; if I had my way, there would either... be nothing after the end of the data segment, or be some way to jump directly to the part of the file that lies after the data. (The latter idea may or may not work, depending on file-system issues.) In either case, I would want it to not read the whole file if you asked for an initial subset. But as things stand now, we are stuck with this behavior. The only thing you can do is, if you plan to experiment on a small segment of the file (and want to load it many times), load a small segment and save it under a different name. Thus, you go through the lengthy process just once. use in 1/99 using mydata save mydata_shortversion use mydata_shortversion -- should load quickly. Hope this helps. * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2008-04/msg00191.html","timestamp":"2014-04-18T08:21:35Z","content_type":null,"content_length":"9258","record_id":"<urn:uuid:b57f8639-7ba4-40d8-846a-4c0b95fea939>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: adding observation of means of variables Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: adding observation of means of variables From Abhimanyu Arora <abhimanyu.arora1987@gmail.com> To statalist@hsphsun2.harvard.edu Subject Re: st: adding observation of means of variables Date Thu, 16 Feb 2012 14:09:40 +0100 Thanks a lot, Nick, very reassuring indeed. All points accepted. On Thu, Feb 16, 2012 at 1:56 PM, Nick Cox <n.j.cox@durham.ac.uk> wrote: > I don't see what is puzzling you here. > The referees can be shown a Stata log, or given a do-file that contains the Stata instructions to replicate the results in your paper. The means can be obtained directly by (e.g.) -summarize- and just displayed in the log or by the do-file. > Adding them to the dataset is only needed if the referees insist that results be sent to them as a dataset, and even then, means can be added as extra variables. Adding them as observations is a choice, and only that. If you are doing nothing further with the dataset, the main disadvantage I emphasised in my first reply will indeed not bite anyone. > Nick > n.j.cox@durham.ac.uk > Abhimanyu Arora > I see your point if it were the original dataset, true, but while I do > start with it in the -do- file, it had to be modified on the way. > But sure, if you have better suggestions, would love to hear them. > On Thu, Feb 16, 2012 at 1:34 PM, Nick Cox <n.j.cox@durham.ac.uk> wrote: >> OK, but there is no need to add the means to the dataset to do that. > Abhimanyu Arora >> Thanks very much for your conscientious advice. >> Basically, I had created tables for in my paper (which had averages in >> the last row). Now that the analysis is complete, I just would like to >> make sure the numbers are replicable from A-Z (in stata itself---I >> thought bringing them out as datasets would be ok for my purpose), in >> case the referees would like to see where they come from. >> On Thu, Feb 16, 2012 at 1:04 PM, Nick Cox <n.j.cox@durham.ac.uk> wrote: >>> Phil gives accurate advice, and as he said there are other ways to do it. >>> Here's another: >>> set obs `=_N + 1' >>> ds, has(type numeric) >>> qui foreach v in `r(varlist)' { >>> su `v', meanonly >>> replace `v' = r(mean) in L >>> } >>> That said, I think this is a bad idea for working with Stata. No, let me rephrase that: it's a very bad idea. A rule of thumb, blunt though it will seem, is that if you have to ask how to do this you don't yet understand Stata well enough to use it safely. >>> My advice is not to do this. >>> It's a spreadsheet practice that matches the way spreadsheets are set-up. It's not a good idea for working with statistical software like Stata, The problem is that once those extra observation(s) are added, you _must_ always exclude them from further analyses with the same dataset. Otherwise you just get nonsense results. Add to that the fact that if you -sort- your dataset, or some program or command -sort-s your data as a side-effect (now rare but not impossible), those observations with summaries will typically no longer be at the end of your dataset, so you need to invent extra machinery to keep track of where they are. >>> Better advice would depend on knowing quite why you want to this. Keeping means in variables, although there can be redundancy, can be a reasonable idea for some purposes. >>> Nick >>> n.j.cox@durham.ac.uk >>> Phil Clayton >>> Not as far as I know, but it's easy to program. Here's one solution: >>> preserve >>> collapse (mean) * >>> tempfile means >>> save `means' >>> restore >>> append using `means' >>> The above assumes that all variables are numeric. If they're not, you could replace: >>> collapse (mean) * >>> with: >>> ds, has(type numeric) >>> collapse (mean) `r(varlist)' >>> On 16/02/2012, at 10:33 PM, Abhimanyu Arora wrote: >>>> Is there a direct command that appends an observation to the dataset, >>>> giving the means of all the numeric variables? >>>> Perhaps I am using -findit- not that efficiently, but if I am not >>>> mistaken there was one... > * > * For searches and help try: > * http://www.stata.com/help.cgi?search > * http://www.stata.com/support/statalist/faq > * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2012-02/msg00783.html","timestamp":"2014-04-20T11:24:53Z","content_type":null,"content_length":"13884","record_id":"<urn:uuid:51708472-5379-4613-8b59-df420d5debd4>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00289-ip-10-147-4-33.ec2.internal.warc.gz"}
On Tue, Mar 6, 2012 at 16:23, Tim Prince On 03/06/2012 03:59 PM, Kharche, I am working on a 3D ADI solver for the heat equation. I have implemented it as serial. Would anybody be able to indicate the best and more straightforward way to parallelise it. Apologies if this is going to the wrong forum. If it's to be implemented in parallelizable fashion (not SSOR style where each line uses updates from the previous line), it should be feasible to divide the outer loop into an appropriate number of blocks, or decompose the physical domain and perform ADI on individual blocks, then update and repeat. True ADI has inherently high communication cost because a lot of data movement is needed to make the _fundamentally sequential_ tridiagonal solves local enough that latency doesn't kill you trying to keep those solves distributed. This also applies (albeit to a lesser degree) in serial due to way memory works. If you only do non-overlapping subdomain solves, you must use a Krylov method just to ensure convergence. You can add overlap, but the Krylov method is still necessary for any practical convergence rate. The method will also require an iteration count proportional to the number of subdomains across the global domain times the square root of the number of elements across a subdomain. The constants may not be small and this asymptotic result is independent of what the subdomain solver is. You need a coarse level to improve this scaling. Sanjay, as Matt and I recommended when you asked the same question on the PETSc list this morning, unless this is a homework assignment, you should solve your problem with multigrid instead of ADI. We pointed you to simple example code that scales well to many thousands of processes.
{"url":"http://www.open-mpi.org/community/lists/users/att-18712/attachment","timestamp":"2014-04-20T01:51:44Z","content_type":null,"content_length":"3616","record_id":"<urn:uuid:b643b396-4d4c-4ef3-8aef-6443404de523>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: 10 questions!! Need help !! Algebra 1 • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/504d6f87e4b0d222920bb369","timestamp":"2014-04-18T00:32:08Z","content_type":null,"content_length":"68993","record_id":"<urn:uuid:3dbaf4cc-a49a-46e3-8c23-45bea28dff50>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
bayes' theorem March 28th 2010, 09:02 AM #1 Junior Member Oct 2009 bayes' theorem The probability that a person has a deadly virus is 5 in 1000. A test will correctly diagnose this disease 95% of the time and incorrectly on 20% of occasions. Find the probability of this test giving correct diagnosis. <-- not sure how to set up this expression. Pr(correct diagnosis) = Pr(Correct diagnosis | Person has disease) x Pr(Person has disease) + Pr(Correct diagnosis | Person does not have disease) x Pr(Person does not have disease). March 28th 2010, 01:02 PM #2
{"url":"http://mathhelpforum.com/statistics/136081-bayes-theorem.html","timestamp":"2014-04-17T23:10:21Z","content_type":null,"content_length":"32999","record_id":"<urn:uuid:d91471c2-2dc4-40f9-8f4d-a4ac46c088cb>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
Space and Matrix Transformations - Building a 3D Engine This article is my second of many articles describing the mechanics and elements of a 3D Engine. This article describes how to navigate 3D space programmatically through 4x4 matrix transforms. I discuss basic transforms, model transforms, view transformer and projection transforms. The purpose of the series is intended to consolidate a number of topics written in C# allowing non-game programmers to incorporate the power of 3D drawings into their application. While the vast majority of 3D Engines are used for game development, this type of tool is very useful for visual feedback and graphic data input. This tool began with a focus on simulation modeling but the intent is to maintain a level of performance that will allow real-time graphics. Fortunately work has been really good; consequenty has work has pushed writing articles down the list priorities. This has took significantly longer than I had planned. I am not a computer programmer by profession and as such I would expect that ideas and algorithms expressed in this series to be rewritten and adapted to your application by “real” programmers. Topics To Be Discussed Basics: Drawing: • Drawing Points, Lines and Triangles • Textures and Texture Coordinates • Loading a Static Mesh Files (.obj) • SceneGraphs Standard 3D Objects and Containment: Sorting Tree and Their Application: • Spheres and Oriented Boxes • Octree Tree Sort • Capsules and Cylinders • BSP Tree Sort • Lozenges and Ellipsoids • Object Picking and Culling • Lights and Special Effects • Distance Methods • Intersection Methods I am currently rewriting one of my company’s software models and have proposed revamping our GUI interface providing users a drawing interface. This series is a product of a rewrite of my original proof of concept application. I am an Electrical Engineer with Lea+Elliott and as part of my responsibilies I develop and maintain our company's internal models. That being said, I am not a professional programmer so please keep this in mind as you look through my code. Prior to Using the Code Before you can use this code you need to download the Tao Framework to access the Tao namespaces. You also need to reference the AGE_Engine3D.dll, add the AGE_Engine3D to your project or copy the applicable .cs file and add them to your project. The Basics Both 3D APIs (DirectX and OpenGL) work with 4D vectors and 4x4 matrixes. I have seen different explanations but this is how I compose my matrix transforms. The basic 4x4 Matrix is a composite of a 3x3 matrixes and 3D vector. These matrix transformations are combined to orient a model into the correct position to be displayed on screen. Unlike normal multiplication, matrix multiplication is not commutative. With matrixes, A*B does not necessary equal B*A. That being said, the order that these transforms are applied is extremely important. This is discussed more in the subsequent sections. [Note: These samples are written implementing the Tao OpenGL interface, but could very easily be modified to service DirectX or XNA interface.] All transforms listed in this article are implemented in the AGE_Matrix44 class through static methods. The method's name is a good indication of its propose such as: public static AGE_Matrix_44 HProjGL(float left, float right, float top, float bottom, float near, float far) Creates a 4 x 4 Projection Matrix Simple Transforms Scale Matrix (Hs) The Scale Matrix is used to scale a model in one, two or three dimensions. It is composed of a 4x4 matrix with a 3D scaling vector on the diagonal. The scaling vector components represent a scaling in their respective dimension. It is in the form of: It is implemented in static AGE_Matrix44.HWorld method. public static AGE_Matrix_44 HWorld(ref AGE_Vector3Df Location, ref AGE_Vector3Df Scale) AGE_Matrix_44 result = AGE_Matrix_44.Identity(); result.col[0][0] = Scale.X; result.col[1][1] = Scale.Y; result.col[2][2] = Scale.Z; return result; Rotation Matrix (Hr) There are three rotation matrixes that can be used to rotate a model around the X-Axis, Y-Axis, and Z-Axis. There are three variations of a 4x4 matrix with various arrangements on the M matrix mentioned above. It is in the form of: It is implemented in static AGE_Matrix44.HRotation method. public static AGE_Matrix_44 HRotation(float theta_X, float theta_Y, float theta_Z) AGE_Matrix_44 result = AGE_Matrix_44.Identity(); //Rotate about the X-Axis if (theta_X != 0) AGE_Matrix_44 H_Rot_X = AGE_Matrix_44.Identity(); H_Rot_X.col[1][1] = H_Rot_X.col[2][2] = (float)System.Math.Cos(theta_X); H_Rot_X.col[1][2] = (float)System.Math.Sin(theta_X); H_Rot_X.col[2][1] = -H_Rot_X.col[1][2]; result = result * H_Rot_X; //Rotate about the Y-Axis if (theta_Y != 0) AGE_Matrix_44 H_Rot_Y = AGE_Matrix_44.Identity(); H_Rot_Y.col[0][0] = H_Rot_Y.col[2][2] = (float)System.Math.Cos(theta_Y); H_Rot_Y.col[2][0] = (float)System.Math.Sin(theta_Y); H_Rot_Y.col[0][2] = -H_Rot_Y.col[2][0]; result = result * H_Rot_Y; //Rotate about the Z-Axis if (theta_Z != 0) AGE_Matrix_44 H_Rot_Z = AGE_Matrix_44.Identity(); H_Rot_Z.col[0][0] = H_Rot_Z.col[1][1] = (float)System.Math.Cos(theta_Z); H_Rot_Z.col[0][1] = (float)System.Math.Sin(theta_Z); H_Rot_Z.col[1][0] = -H_Rot_Z.col[0][1]; result = result * H_Rot_Z; return result; Rotation Matrix via Quaternion (Hq) I often use quaternion for creating my rotation matrixes. It is simple and intuitive. Creating a quaternion for rotation requires a vector identifying the axis of rotation and the angle of rotation. I believe it is commonly used in ArcBall(add hyperlink to) and other orbiting camera schemes. I will discuss it in another article but if you want to look at the code it is included. public static AGE_Matrix_44 HRotation(ref AGE_Quaternion Rotation) Transpose Matrix (Ht) The Transpose Matrix is used to move a model from one position to another. It is composed of a 4x4 identity matrix with a 3D translation vector in the 4th column. The translation vector represents a change in location. It is in the form of: It is implemented in static AGE_Matrix44.HWorld method. public static AGE_Matrix_44 HWorld(ref AGE_Vector3Df Location, ref AGE_Vector3Df Scale) AGE_Matrix_44 result = AGE_Matrix_44.Identity(); result.col[3][0] = Location.X; result.col[3][1] = Location.Y; result.col[3][2] = Location.Z; return result; World Space (Hw) The World Space Transform is the first transform usually applied to a model. This transform is normally used to scale and orient the model relative to its world. The Model is defined in a model space coordinate system and needs to be translated to the world coordinate system. Let's assume you have a model of a person and it normalized such that the model dimensions are within the range [-1, 1] with an origin of <0,0,0>. You could populate a world with actors referencing this single resource by applying different World Transforms. The World Transform is composed of basic transforms typical applied in this order. The table (a.k.a. TheTable in the code below) shown in the animation above is a SpatialBaseContainer object. The follow code shows its creation and positioning in 3D space. Note that TheTable only references the table geometry, so if we could have many tables referencing the same geometry using different world transform populating. override public void InitializeSimulation() if (ResourceManager.LoadMeshResouse("Table.obj")) TheTable = new SpatialBaseContainer(); TheTable.Geometry = ResourceManager.GetMeshObject("Table.obj"); TheTable.NowRotate(0, 0, AGE_Engine3D.Math.AGE_Functions.PI / 2); TheTable.NowMove(new AGE_Vector3Df(48.1973f, 222.4320f, 0f)); The order transforms that are applied is important. Had I placed the table in the center of the room then rotated the table, it would be in a completely different spatial place. It would equate to a 322 feet error. Child Models and Local Transforms There are many cases where a model may have sub, child or leaf models. These are kin to a glass relative to its parent table. If the table is not already scaled and aligned with the parent node (a room) the table will have to be scaled, rotated and translated to its proper place via a local transform. This is accomplished by combining the basic transform in a specific order in the same matter as the world space transform. If a child model is allowed to change relative to its parent then each child model will have a separate the overall World Transform and its local transform. The Total World Transform would look something like this for various objects. By specifying that the TheGlass is a child to TheTable, I only have to locate TheGlass relative to the table. The transforms applied to TheTable will be carried forward to TheGlass. I can also create children to TheGlass and locate them relative to TheGlass. The Following code and image demonstrate this concept. override public void InitializeSimulation() if (ResourceManager.LoadMeshResouse("Glass.obj")) TheCups = new SpatialBaseContainer(); TheCups.Geometry = ResourceManager.GetMeshObject("Glass.obj"); TheCups.NowMove(new AGE_Vector3Df(0.0f, 0.0f, 3.083f)); //Child Cup #1 SpatialBaseContainer NewKid = new SpatialBaseContainer(); NewKid.Geometry = TheCups.Geometry; NewKid.NowScale(new AGE_Vector3Df(1.0f, 0.75f, 2)); NewKid.NowMove(new AGE_Vector3Df( 3f,0, 0)); //Child Cup #2 NewKid = new SpatialBaseContainer(); NewKid.Geometry = TheCups.Geometry; NewKid.NowScale(new AGE_Vector3Df(1.0f, 3.0f, .5f)); NewKid.NowMove(new AGE_Vector3Df(-3f,0, 0)); View Space (Hv) After a model is transformed to its position into World Space it will then be transformed in to View Space or Camera Space. The transform is characterized by Camera Location and the View direction. The Transform is in the form of: public static AGE_Matrix_44 HView(ref AGE_Vector3Df Eye, ref AGE_Vector3Df Target, ref AGE_Vector3Df UpVector) AGE_Matrix_44 result = AGE_Matrix_44.Zero(); AGE_Vector3Df VDirection = (Target - Eye); VDirection = VDirection.UnitVector(); AGE_Vector3Df RightDirection = AGE_Vector3Df.CrossProduct(ref VDirection, ref UpVector); RightDirection = RightDirection.UnitVector(); AGE_Vector3Df UpDirection = AGE_Vector3Df.CrossProduct(ref RightDirection, ref VDirection); UpDirection = UpDirection.UnitVector(); result.col[0][0] = RightDirection.X; result.col[1][0] = RightDirection.Y; result.col[2][0] = RightDirection.Z; result.col[0][1] = UpDirection.X; result.col[1][1] = UpDirection.Y; result.col[2][1] = UpDirection.Z; result.col[0][2] = -1 * VDirection.X; result.col[1][2] = -1 * VDirection.Y; result.col[2][2] = -1 * VDirection.Z; result.col[3][0] = -1* AGE_Vector3Df.DotProduct(ref RightDirection, ref Eye); result.col[3][1] = -1 * AGE_Vector3Df.DotProduct(ref UpDirection, ref Eye); result.col[3][2] = AGE_Vector3Df.DotProduct(ref VDirection, ref Eye); result.col[3][3] = 1; return result; Because the OpenGL uses a right-handed coordinate system the D vector is multiplied by -1 in the matrix Q construction, but for left-handed coordinate system this would not be necessary.[1] Provided in the zip file are simple camera classes. In the sample application I used the AGE_WalkThroughCamera class allowing me to create a camera that could move through the building and actively update the camera's focus. This is accomplished by updating the Target and Eye locations, then recalculating the View Matrix. This creates the animation seen in the video above. Projection Matrixes (Hproj and Hortho) Perspective Transform After a model is transformed to its position into View Space, it will then be transformed in to its final viewed position via projection transformation. There are two basic types of projection transforms that I am aware of: Orthographic, and Perspective. The Perspective projection mimics the way we perceive the real world. Objects that are closer appear larger and parallel lines converge at the horizon. Here is how the Perspective transform is constructed. It is implemented in static HProjGL method. public static AGE_Matrix_44 HProjGL(float left, float right, float top, float bottom, float near, float far) float invRDiff = 1 / (right - left + float.Epsilon); float invUDiff = 1 / (top - bottom + float.Epsilon); float invDDiff = 1 / (far - near + float.Epsilon); AGE_Matrix_44 result = AGE_Matrix_44 .Zero(); result.col[0][0] = 2 * near * invRDiff; result.col[1][1] = 2 *From Subject Received Size Categories Newton, Curtis RE: Accident on the DCC System at the Mexico City Airport 1:35 PM 8 KB near * invUDiff; result.col[2][0] = (right + left) * invRDiff; result.col[2][1] = (top + bottom) * invUDiff; result.col[2][2] = -1 * (far + near) * invDDiff; result.col[2][3] = -1; result.col[3][2] = -2 * (far * near) * invDDiff; return result; Orthographic Transform The Orthographic projection is the somewhat the opposite of the Perspective projection. An object’s depth in the view plane has no bearing of size of the object and parallel lines remain parallel. Here is how the Orthographic transform is constructed. It is implemented in static HOrtho method. public static AGE_Matrix_44 HOrtho(float left, float right, float top, float bottom, float near, float far) float invRDiff = 1 / (right - left); float invUDiff = 1 / (top - bottom); float invDDiff = 1 / (far - near); AGE_Matrix_44 result = AGE_Matrix_44.Zero(); result.col[0][0] = 2 * invRDiff; result.col[3][0] = -1 * (right + left) * invRDiff; result.col[1][1] = 2 * invUDiff; result.col[3][1] = -1 * (top + bottom) * invUDiff; result.col[2][2] = -2 * invDDiff; result.col[3][2] = -1 * (far + near) * invDDiff; result.col[3][3] = 1; return result; How to use the Code Once we have all of these different types of transforms how are they used? In each call to the rendering function the sample application [Implementing the OpenGL interface], the application resets the Projection Matrix (GL_PROJECTION) and the Model/View Matrix (GL_MODELVIEW). Since the Camera and Projection rarely change during each render call, I load both the Projection Matrix and View Matrix into the OpenGL’s Projection Matrix. I also load the identity matrix in the Model/View matrix which will be overloaded as each entity is rendered. static public void RenderScence() Object LockThis = new Object(); lock (LockThis) Gl.glClear(Gl.GL_COLOR_BUFFER_BIT | Gl.GL_DEPTH_BUFFER_BIT); //Verify the Current Camrea is Initalized if (!CurrentCamera.IsInitalized) CurrentCamera.ResizeWindow(ScreenWidth, ScreenHeight); //Get the Combined Projection and View Transform AGE_Matrix_44 HTotal = CurrentCamera.HTotal; //Load it in to OpenGl //If there are lights turn lighting on //Render each visable Spatial Object foreach (ISpatialNode obj in WorldObjectList) if (obj.IsVisable) obj.RenderOpenGL(); //Render HUD //Display the new screen Now that the setup is complete the application cycles through all of my render-able objects and calls their rendering function. Each object is responsible for setting the Model/View Matrix appropriate to the objects requirements. //In the SpatialBaseContainer Class public void RenderOpenGL() if (this.IsVisable) //Verify the Model Transform is Updated if (!this.IsHModelUpdated) //Verfiy any Parent Transfor is Included if (!this.IsHTotalUpdated) //Load the Model Transform in to OpenGL //Render Parent Geometry if (Geometry != null) //Render any children foreach (IRenderOpenGL ChildGeometry in this.Children) There is a lot more included in the .zip file not discussed here, but I will go into the other classes and concepts in other articles. Further Reading • 2009-09-02 - Second Article Released.
{"url":"http://www.codeproject.com/Articles/42086/Space-and-Matrix-Transformations-Building-a-3D-Eng?fid=1548853&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Quick&spc=Relaxed","timestamp":"2014-04-20T05:56:21Z","content_type":null,"content_length":"101035","record_id":"<urn:uuid:9bc6689f-f22a-4ea4-b5d9-e41861bdc08e>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Activation Energy Also, since k is a function of temperature, if [tex]k_{1}[/tex] and [tex]k_{2}[/tex] are rate constants measured at [tex]T_{1}[/tex] and [tex]T_{2}[/tex] we have from the Arrhenius Equation, \frac{k_{2}}{k_{1}} = e^{\frac{E_{a}}{R}(\frac{1}{T_{1}}-\frac{1}{T_{2}})} So actually this eliminates the need to know the frequency factor A. All you need to know is the rate constants measured at two different temperatures or even their ratio [tex]\frac{k_{2}}{k_{1}}[/ tex] and that is enough to get [tex]E_{a}[/tex], which turns out to be, E_{a} = R\frac{T_{1}T_{2}}{T_{2}-T_{1}}log_{e}\frac{k_{2}}{k_{1}}[/tex] Please make note of this correction in my previous post.
{"url":"http://www.physicsforums.com/showpost.php?p=246185&postcount=3","timestamp":"2014-04-16T22:14:35Z","content_type":null,"content_length":"7852","record_id":"<urn:uuid:0da6979f-e4d6-4bd5-90b5-a2ee4e150954>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
This Article Bibliographic References Add to: A Short Proof that Phylogenetic Tree Reconstruction by Maximum Likelihood Is Hard January-March 2006 (vol. 3 no. 1) pp. 92-94 ASCII Text x Sebastien Roch, "A Short Proof that Phylogenetic Tree Reconstruction by Maximum Likelihood Is Hard," IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 3, no. 1, pp. 92-94, January-March, 2006. BibTex x @article{ 10.1109/TCBB.2006.4, author = {Sebastien Roch}, title = {A Short Proof that Phylogenetic Tree Reconstruction by Maximum Likelihood Is Hard}, journal ={IEEE/ACM Transactions on Computational Biology and Bioinformatics}, volume = {3}, number = {1}, issn = {1545-5963}, year = {2006}, pages = {92-94}, doi = {http://doi.ieeecomputersociety.org/10.1109/TCBB.2006.4}, publisher = {IEEE Computer Society}, address = {Los Alamitos, CA, USA}, RefWorks Procite/RefMan/Endnote x TY - JOUR JO - IEEE/ACM Transactions on Computational Biology and Bioinformatics TI - A Short Proof that Phylogenetic Tree Reconstruction by Maximum Likelihood Is Hard IS - 1 SN - 1545-5963 EPD - 92-94 A1 - Sebastien Roch, PY - 2006 KW - Analysis of algorithms and problem complexity KW - probability and statistics KW - biology and genetics. VL - 3 JA - IEEE/ACM Transactions on Computational Biology and Bioinformatics ER - Maximum likelihood is one of the most widely used techniques to infer evolutionary histories. Although it is thought to be intractable, a proof of its hardness has been lacking. Here, we give a short proof that computing the maximum likelihood tree is NP-hard by exploiting a connection between likelihood and parsimony observed by Tuffley and Steel. [1] L. Addario-Berry, B. Chor, M.T. Hallett, J. Lagergren, A. Panconesi, and T. Wareham, “Ancestral Maximum Likelihood of Evolutionary Trees is Hard,” J. Bioinformatics and Computational Biology, vol. 2, no. 2, pp. 257-271, 2004. [2] R. Agarwala, V. Bafna, M. Farach, B. Narayanan, M. Paterson, and M. Thorup, “On the Approximability of Numerical Taxonomy (Fitting Distances by Tree Metrics),” SIAM J. Computing, vol. 28, pp. 1073-1085, 1999. [3] G. Ausiello, P. Crescenzi, G. Gambosi, V. Kann, A. Marchetti-Spaccamela, and M. Protasi, Complexity and Approximation. Berlin: Springer, 1999. [4] J. Cavender, “Taxonomy with Confidence,” Math. Biosciences, vol. 40, pp. 271-280, 1978. [5] B. Chor and T. Tuller, “Maximum Likelihood of Evolutionary Trees is Hard,” Proc. Ninth Int'l Conf. Computational Molecular Biology (RECOMB 2005), 2005. [6] A. Clementi and L. Trevisan, “Improved Non-Approximability Results for Minimum Vertex Cover with Density Constraints,” Theoretical Computer Science, vol. 225, nos. 1-2, pp. 113-128, 1999. [7] W. Day, D. Jonhson, and D. Sankoff, “The Computational Complexity of Inferring Rooted Phylogenies by Parsimony,” Math. Biosciences, vol. 81, pp. 33-42, 1986. [8] W. Day and D. Sankoff, “The Computational Complexity of Inferring Phylogenies by Compatibility,” Systematic Zoology, vol. 35, pp. 224-229, 1986. [9] A.W.F. Edwards and L.L. Cavalli-Sforza, “Reconstruction of Evolutionary Trees,” Phenetic and Phylogenetic Classification, V.H. Heywood and J. McNeill, eds. Systematics Assoc., London, vol. 6, pp. 67-76, 1964. [10] J.S. Farris, “A Probability Model for Inferring Evolutionary Trees,” Systematic Zoology, vol. 22, pp. 250-256, 1973. [11] J. Felsenstein, “Evolutionary Trees from DNA Sequences: A Maximum Likelihood Approach,” J. Molecular Evolution, vol. 17, pp. 368-376, 1981. [12] J. Felsenstein, Inferring Phylogenies. Sunderland: Sinauer Assoc., 2004. [13] L. Foulds and R. Graham, “The Steiner Problem in Phylogeny is NP-Complete,” Advances in Applied Math., vol. 3, pp. 43-49, 1982. [14] M.R. Garey and D.S. Johnson, Computers and Intractability. A Guide to the Theory of NP-Completeness, San Francisco: W.H. Freeman, 1976. [15] J. Neyman, “Molecular Studies of Evolution: A Source of Novel Statistical Problems,” Statistical Decision Theory and Related Topics, S.S. Gupta and J. Yackel, eds. New York: Academic Press, pp. 1-27, 1971. [16] C. Semple and M. Steel, Phylogenetics. Oxford Univ. Press, 2003. [17] C. Tuffley and M. Steel, “Links between Maximum Likelihood and Maximum Parsimony under a Simple Model of Site Substitution,” Bull. Math. Biology, vol. 59, no. 3, pp. 581-607, 1997. [18] H.T. Wareham, On the Computational Complexity of Inferring Evolutionary Trees, MSc thesis, Technical Report no. 9301, Dept. of Computer Science, Memorial Univ. of Newfoundland 1993. Index Terms: Analysis of algorithms and problem complexity, probability and statistics, biology and genetics. Sebastien Roch, "A Short Proof that Phylogenetic Tree Reconstruction by Maximum Likelihood Is Hard," IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 3, no. 1, pp. 92-94, Jan.-March 2006, doi:10.1109/TCBB.2006.4 Usage of this product signifies your acceptance of the Terms of Use
{"url":"http://www.computer.org/csdl/trans/tb/2006/01/n0092-abs.html","timestamp":"2014-04-19T13:26:21Z","content_type":null,"content_length":"51008","record_id":"<urn:uuid:cd08a150-de66-4515-8019-5913c8a8d5ee>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
Post a reply Hello everybody, I have a probability question. Could you please help me prove that "Conditional Independence does not imply Independence", and vice-versa "(Absolute) Independence does not imply Conditional Independence"? Every book and online resource seems to take it for granted. They only offer simple counter examples, but I haven't found a formal proof, e.g., by contradiction. If you can reference a book chapter or post the basics of the proof(s) it would help a lot :-) Thank you for your time!
{"url":"http://www.mathisfunforum.com/post.php?tid=18932&qid=251759","timestamp":"2014-04-21T12:33:43Z","content_type":null,"content_length":"16291","record_id":"<urn:uuid:045c132d-cff0-4a0a-8d15-ed0f60421447>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
Analyzing 4:1 TLTs For Optical Receivers Transmission-line transformers can provide reliable interfaces between different impedances for broadband applications in RF, wireless, and optical systems. Transmission-line transformers (TLTs) provide broadband impedance transformation in a wide variety of RF circuits. When using TLTs, analysis of the frequency response is an important consideration.^ 1,2 Such analyses usually assume that the load and source are impedance matched through the transformer, although this is not always the case. At times, matching between the primary and secondary impedances of a TLT is either impractical or undesirable. An example of this is the interface circuit between a photodiode and an amplifier used in a particular optical receiver architecture. Based on this example, Guanella and Ruthroff TLTs were analyzed, although TLTs are useful in many applications. A number of architectures are used for optical receivers.^3 This article concerns a special case (Fig. 1) with a broadband 75-Ω power amplifier. The design challenge is to find an interface circuit for optimal system sensitivity without degrading the system bandwidth and amplitude flatness. From the viewpoint of the RF circuitry, the photodetector can be modeled as a current source with a high source impedance. When the resistors used for the DC bias are taken into account, the photodetector and bias circuit can be modeled as a current source with a source impedance on the order of kiloohms. On the amplifier side, it can be replaced by a load impedance, R[L], which is equal to its input impedance, R[A]. This reduces Fig. 1 to the equivalent circuit shown in Fig. 2. The load and source impedances are chosen as 75 Ω and 2 kΩ, respectively, for the purposes of numerical analysis. TLTs are often selected as the interface in such cases when wide bandwidths are required. From a power-transfer consideration alone, a stepdown transformer that matches R[L] (75 Ω) to R[s] (2 kΩ) would deliver maximum system responsivity. However, such a high-ratio impedance transformation (26:1) is not only difficult to implement but results in an unacceptable reduction in bandwidth due to the photodiode's parasitic capacitance. As a compromise, a 4:1 TLT can provide enough gain and bandwidth, even though it will result in a condition of mismatched impedances. A TLT generally has a reactive component and exhibits frequency-dependent operation. Thus, the transformed impedance Z[in], seen from the diode side as defined in Fig. 2, can be expressed in general Assuming a lossless transformer, power dissipated in the real part of Z[in], denoted as P[in], is equal to that in R[L], which in turn is proportional to the system responsivity. A simple circuit calculation shows that: Thus, through Eq. 1, the issue of frequency dependence of the system responsivity is reduced to finding Z[in] as a function of frequency. In a TLT, parasitic effects are greatly suppressed because most stray capacitance is absorbed by the line inductance to form the characteristic impedance of the transmission line. This suppression of parasitic effects is the main reason that TLTs have higher frequency limits than conventional transformers. Although in some applications the ultimate high-frequency limit is set by the residual parasitic effects, this article will consider only the frequency response due to the intrinsic circuit parameters such as the length and characteristic impedance of the transmission line and load and source impedances. As part of this idealized analysis, the TLT is considered lossless and free of parasitic effects. Furthermore, to get an insight on how the frequency dependence arises, two conditions are listed below.^1 When these conditions are met, a TLT is considered ideal in that it has a frequency independent response regardless of the circuit parameters. 1. Only odd mode excitation is allowed, i.e. the currents in the conductors making up the transmission line are in opposite directions. High-permeability toroids are employed in many TLTs for this 2. The transmission line is short compared with the wavelength (say, less than l/8), which leads to a situation in which the voltages at two ends of a transmission line are the same and the currents at each end of an individual conductor are equal. Validity of the first condition usually sets a low-frequency limit on the ideal TLT, while validity of the second condition sets the high-frequency limit on the ideal TLT. For two types of TLTs, Guanella and Ruthroff, the analysis is first performed under these ideal conditions and then focused on the non-ideal situation at high frequencies. Figure 3 shows a Guanella transformer. Essentially, it consists of two pairs of transmission lines that are connected in series at the high-impedance side (the input in this case) and in parallel at the low-impedance side (the output). The assignments of the voltage and current on the transmission lines, V[i] and I[i], are only correct for the two ideal conditions mentioned earlier, and under these conditions the following relationships can be easily established: The series connection at the input leads to V[in] = 2V[i] and I[in] = I[i]. The parallel connection at the output leads to V[out] = V[i] and I[out] = 2I[i]. Therefore, Then R[in] = 4 R[out], confirming the 4:1 impedance transformation ratio and the frequency independency. When the length of the transmission line becomes relatively long, the second condition is no longer valid and modifications are needed. In Fig. 3, the input impedance Z[in] is simply the sum of two serial impedances transformed from the same load through two identical pairs of transmission lines respectively (true only when the first ideal condition holds). These two impedances are obviously identical and designated as Z[L]'. Because they are identical, an analysis on the impedance transformation through one pair of transmission lines is sufficient. In this configuration, the current through load Z[L] is the sum of currents from each pair and therefore is twice that from a single pair. Hence, if a single TLT is used for analysis, the equivalent load should be doubled from the real value of Z[L] to give the same current and voltage at the load. The pertinent equation for Z[L]' can be found in any textbook on microwave engineering: Z[L] = the load impedance (in Fig. 2), Z[c] = the characteristic impedance of the transmission line, and bl = the phase angle. From Eq. 2, the frequency dependence of Z[in] (=2Z[L]') is solely through the phase angle bl when Z[L] is purely resistive. Numerical results using Eq. 1 and 2 for several values of Z[c] and R[s]=2 kΩ, R[L]=75 Ω are shown in Fig. 4. The normalized power P(bl)/P(0) in logarithmic scale is used in the plot, where P(0) is the low frequency limit of P(bl) and given by: The plot shows that the curve for Z[c]=2R[L]=150 Ω is flat, indicating that it is not dependent on frequency. This can be easily concluded by examining Eq. 2. Also, the condition for perfect flatness is independent of the source impedance, R[s]. Page Title Figure 5 shows the circuit schematic of another popular 4:1 transmission line transformer, the Ruthroff transformer. Unlike the Guanella TLT, which requires two pairs of transmission lines, a 4:1 impedance transformation can be achieved with a single transmission line in the Ruthroff topology. For the case when a balanced-to-balanced (with a center ground tab) impedance transformation is required, however, two parallel pairs of transmission lines are still needed. The single and two-TLT Ruthroff configurations are shown in the inserts in Fig. 6 and 7, respectively. It is evident that an analysis of the single-TLT configuration is sufficient. Under ideal conditions, voltages and currents at input and output ends are: and therefore V[in]/I[in] = 4(V[out]/I[out]), indicating a 4:1 impedance transformation. When the transmission line is long, voltages V[1] and V[2] and currents I[1] and I[2] are related through: where V^+ and V^− are forward and backward traveling waves at port 2, respectively. Solving Eqs. 3 through 6 gives V[1] and I[1] expressed in terms of V[2], I[2], and the transmission-line parameters, Z[c] and bl: In addition, Using Eqs. 7 through 11, after some algebra, the input impedance Z[in] is: Figures 6 and 7 show plots of numerical results for single and two-TLT configurations, respectively, for R[s] = 2 kΩ, R[L] = 75 Ω, and Z[c] as a variable. As shown in the insert of Fig. 7, for the two-TLT configuration the actual values for R[s] and R[L] used in Eq. 12 are 1 kΩ and 37.5 Ω, respectively, to account for the fact that Eq. 12 is for the single-TLT configuration. A close comparison of two groups of curves in Figs. 6 and 7 reveals that the curve with a Z[c] in the single-TLT configuration is identical to that with 2Z[c] in the two-TLT configuration. In fact, this can be readily seen from Eq. 12, since R[L] used in the two-TLT configuration is simply one-half that in the single-TLT configuration. This feature is of significance in practical design because a TLT with high Z [c] is often difficult to realize in practice. The following can be concluded: 1. Even with a limited number of curves plotted in Figs. 4, 6, and 7, one can see that the frequency response curves are greatly affected by the choice of Z[c], Z[L], and Z[s]. For the 4:1 Guanella transformer there is a condition (Z[c] =2Z[L]) for which the response is completely frequency independent whereas the frequency response of the 4:1 Ruthroff transformer always starts to roll off at certain frequency point. 2. It has been assumed that transmission lines are lossless, aslthough in reality they exhibit loss. As shown in Figs. 4, 6, and 7, for both types of TLT when Z[c] is chosen properly the response curve increases with frequency up to a certain point (around βl=1.5). This feature can be utilized in practice to compensate the loss in the transmission line and can it also provide system gain with an upward slope. 3. The frequency response of the Ruthroff transformer monotonically decreases at high frequencies, while the Guanella's response is periodic with frequency. 4. For the load and source impedances considered in this article, the Ruthroff transformer with the two-TLT configuration requires the lowest Z[c] to have an upward response curve. The approach also supports differential operation, but usually requires two parallel toroidal transformers. The authors would like to thank Dr. Derald O. Cummings for introducing some basic concepts and techniques on the subject at the early stage of this project. 1. W. Alan Davis and Krishna K. Agarwal, Radio Frequency Circuit Design, Wiley, New York, 2001. 2. Jerry Sevick, Transmission Line Transformers, 4th Ed., Noble Publishing, Norcross, GA, 2001. 3. Stephen B. Alexander, Optical Communication Receiver Design, Bellingham, SPIE Optical Engineering Press, San Jose, CA, 1997.
{"url":"http://mwrf.com/components/analyzing-41-tlts-optical-receivers","timestamp":"2014-04-18T21:08:16Z","content_type":null,"content_length":"89527","record_id":"<urn:uuid:843e644f-6676-47be-8ee2-8857751fe793>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00193-ip-10-147-4-33.ec2.internal.warc.gz"}
Monochromatic cows September 1998 The following is a fallacious proof that "all cows in a field are the same colour". A quick trip out into the countryside will soon provide a counter-example to the Theorem - but where's the flaw in the argument? Theorem: All cows in a field are the same colour Proof by induction on the number of cows Induction hypothesis: n cows in a field are the same colour, for all n=1,2,3,4,... Initial Step Clearly one cow in a field is the same colour as itself, so the induction hypothesis is true for n=1. Induction Step Now suppose n is at least 1, we have n cows in a field F, and that the induction hypothesis has been proved for all fields containing at most n cows. By the induction hypothesis all the cows in F are the same colour. Firstly, remove any cow from F and put it to one side. Now take a cow from some other field and put it in field F. We again have n cows in field F, so by the induction hypothesis they are all the same colour. Finally, put back the first cow you thought of. We already know that it's the same colour as all the other cows in F, and so now we have n+1 cows in field F, and they're all the same colour. This completes the induction step! Submitted by Anonymous on July 26, 2013. Induction hypothesis is governed by principle ordering which has been neglected when we remove 1 cow from the set and replace with another random one outside the set.
{"url":"http://plus.maths.org/content/monochromatic-cows","timestamp":"2014-04-16T16:07:18Z","content_type":null,"content_length":"23913","record_id":"<urn:uuid:572e3246-70d2-4cd0-8023-3799e5e58c65>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00375-ip-10-147-4-33.ec2.internal.warc.gz"}
Day 1 I wrote my First Day plan about a month before school actually started, and surprisingly, it didn’t change much. (I was right on target when I said we would probably run out to time and not be able to even start the “My Math Stuff.”) My Algebra students tried to count “Squares on a Chessboard” and became slightly overwhelmed by the enormity of the task after initially being sure it was just 64! My Math 8 students investigated “Rice on a Chessboard” that will lead nicely into our first unit on Exponents and Scientific Notation. Day 2 We spent a good chunk of this day making magnets, math portfolios to store their assessments and tracking forms, and Stars for Standards, but also did a “Get to Know You” activity (modified from an idea by MathMamaWrites) requiring students to think about positive and negative in the coordinate plane. Topics were randomly chosen from their “Know Me Notecards” and groups placed their magnets Day 3 In Algebra, we went back to the chessboard squares and made some more progress. Some groups studied the relationship between the size of the squares and the number of squares. Other groups looked at the size of the board and the number of squares. Patterns were emerging! For the daily Math Greeting, students placed their magnets on the Venn Diagram regarding their thoughts on their relationship to math. This is an Algebra class. Some for the other periods were more interesting, but I neglected to take pictures :( The glare (and an old pen) made it hard to read. Top circle says “enjoy,” left one says “understand,” and right one is “work hard.” msSunFun: Musical Math Partners One of the “math games” we play in my classes that I would like to share for It is quite flexible, gets kids out of their seats, and gives students an opportunity to use mental math skills. Each student receives some sort of “card” depending on the topic of the day. Some sort of “path” is designed for students to travel around the room without too many “log jams.” Here is a diagram of how I make it work in my room: Students follow a path around a “row” of four pairs of desks, but at the end of the row they may choose to turn either left or right and continue around that “row.” (Even though there are two arrows in the diagram, students are single file when walking between the desks.) As the teacher plays their choice of music, students “travel” around the path – usually only 10-15 seconds. Once the music stops, students take a seat in the nearest desk and become “partners” with the person in the adjacent seat. (If you have more desks than students, you might want to identify some of the desks as “out of the game.” As it is, you will often have students that must continue to “travel” after the music stops in order to find a partner. If you have an odd number of students, the “leftover” person becomes partners with the teacher.) Students “do the math” (more details below) – usually trying to finish more quickly than the other person – followed by a short debriefing regarding how they solved the problem, and then they trade cards so that they have different experiences with each new partner. Possibilities. . . The possibilities are only limited by your imagination / ingenuity. :) Integer Operations Each person holds a card with an integer. (A deck of playing cards work well for this: black = positive and red = negative.) When the music stops, the teacher randomly flips an operation card and students perform the operation from “left card” to “right card.” (I would only recommend division if you are interested in also focusing on fractions and mixed numbers.) Fraction/Decimal Operations Similar to the Integer Operations, only cards have fractions (or decimals.) I am fairly certain I would not include multiplication if decimals were involved, unless they were single digits. :) For subtraction, if students are not yet familiar with negative numbers, students can subtract the lesser value from the greater one. Fraction/Decimal/Percent Comparisons Each student has a card containing a number. (You could do all fractions, all decimals, or a mix.) When the music stops, students race to decide which value is greater. Evaluating Algebraic Expressions Each student has a card containing two pieces of information: an algebraic expression (as simple or complex as you wish) and a numerical value (also as simple or complex as you wish – including negatives, decimals, or fractions – but remember the goal is for students to evaluate fairly quickly.) When the music stops, students must evaluate the expression of the left card by the number on the right card and then vice versa. Operations on Algebraic Expressions Each student has a card with an algebraic expressions. When the music stops, teacher randomly selects add or subtract and students perform the operation from left to right. If the expressions are binomials, multiplication could also be included. (I am not sure I want students multiplying trinomials using mental math.) Solving Linear Equations Two options using cards throw the previous two “games” if the expressions are fairly simple. Option 1: For the cards that have the expression and the number, students set the left expression equal to the number on the right card and solve, then switch to number on the left equal to expression on the right. Option 2: For the cards that all have algebraic expressions, students set the two expressions equal and solve. (This would work best if the expressions were fairly simple, but again, it is up to the teacher depending on the level of students and the current unit of study.) Measures of Center and Spread Each student has a card containing a fairly small data set. When the music stops, the two data sets are combined, and students are to find median, mode, and range. Depending on the size of the sets and the values, you could also have students find mean or the quartiles and interquartile range. Plotting Points Each student has a card with an integer and each desk pair contains a coordinate grid. When the music stops, students point to the location on the coordinate grid (of course, the left card is x coordinate, and the right card is y coordinate.) Pythagorean Theorem Each student has a card with a value representing a side of a right triangle. When the music stops, the teacher either chooses “two legs” or “hypotenuse and leg.” Students find the remaining side. (Since relatively small values should be chosen in order for students to compute mentally, there would be some duplication. If students with equal values became partners, the second option could provide some interesting discussion!) Distance on a Coordinate Plane Each student has a card containing the coordinates of a point. When the music stops, student locate the two points and find the distance between them. That’s about all for now. Feel free to add more ideas in the Comments. This is certainly an activity to use only after students have a fairly strong understanding of the concepts used in the tasks. Teachers need to be careful when choosing the values / expressions on the cards so that mental calculation is fairly accessible. Made4Math Monday: Formative Assessment Forms I definitely left some things “hanging” on my last SBG post, so I thought I would “kill two birds with one stone,” so to speak, and write about some of the ways I incorporate formative assessment into my daily routine. A lot of people have written about “Exit Tickets,” and I guess, in a way, that’s what this is. I LOVE Sarah’s recent post at Everybody is a Genius (I’m not sure about everybody, but she IS a genius!) about the laminated dry erase ones :) I think I might try that for my more “reflective” tasks to end the period, but I like the “permanence” of the system that I have set up. Here is my: Question of the Day After we have spent a bit of time on a concept (but before it is time for a Mini-Assessment (Quiz)) I use Question of the Day to assess where students are at on a concept and us it to guide my instruction for the next few days. I will put up a slide such as this: at the end of class and given them about five minutes to work on it. (Depends on the task.) Rather than just using quarter sheets of paper (or anything dry erase) I wanted to have a tool that I could use to provide feedback and also to have students keep as a record of their growth. I came up with the following simple “form”: I print out four copies and then use the “booklet” app on our copy machine so that it shrinks it down and prints all four onto an 8.5 x 11 sheet. The original copy has more “space” on the top section because when it shrinks down there is an extra “gap” at the bottom of the page. This provides opportunities for 8 QOD’s before the students need a new sheet. They will store them in the “pocket” of their Math Notebook once they receive them back the following day. I use colored markers to write their number at the top so it’s easier to sort and hand back to color or number groups. The specific standard (CCSS) is written on the form, and we update scores on their Tracking Sheets (see SBG post) fairly regularly. When the forms are “full” students store them in their Math Portfolio – a hanging file folder in a crate that contains their tracking sheet as well as other “larger” formative assessments. Since there is only one question (ok, there’s usually two to three parts though) they don’t take long to grade and record. I can also “sort” the forms to create “Just for Today” seats (also on Sticks and Seats post) to either differentiate instruction or provide support for students who have not yet mastered the concept. Writing About Math I have students write in math class quite often, but I haven’t always spent the time reading and commenting on their responses as I should. This year we are implementing the Common Core State Standards, and there are more than a few that employ the use of the verb “explain” or “describe.” I am looking forward to challenging students to meet standard on those in addition to the more skill-based ones. I decided to modify my Question of the Day form to use it for Writing About Math prompts as well. Sometimes the prompt will be tied to a particular standard and graded/recorded as such, but other times I am just looking into their thinking about a concept and wanting a way to provide feedback. Recording System Since I record multiple scores for each standard (and multiple standards on each assessment) I needed a way to organize that information so that I can see the progressions of scores in each area. I created a “grade book” form that allows for up to four scores (more if I sneak in a re-assessment score next to the original) for each standard with room for three standards on each sheet: Once I have my class lists I will enter them on a blank copy, make four copies of that, and again use the booklet feature on the copy machine. This time I transfer them to the 8.5 x 14 size, otherwise it shrinks down more than I want. When folded they create a nice size for up to twelve standards. I usually don’t have more than that in any one unit, so I create a new “grade sheet” for each unit. When I am entering grades into the online Gradebook, I just have to look at the most recent column and update scores that have changed from previous assessments. Final Note: QOD and WAM are not always relegated to “end of period tasks.” I also use them at the beginning of the period. After I collect them we discuss possible responses. I am excited to use “My Favorite No” either when we do them at the beginning of the period, or the following day as a re-cap / remediation activity! New Blogger Initiative: Mystery Number Puzzles For the third prompt of the New Blogger Initiative, I have decided to choose Option 1, if not purely for the sake of wanting to learn more about how to use LaTeX, especially within a blog post. My Algebra students will be diving into solving linear equations within the first week or so of the start of school. Since they have already had experience with 1-step and 2-step equations in sixth and seventh grade, I wish to quickly expand their repertoire into solving multi-step equations by first taking on those involving just a single “x” term. (Ok, seriously, I’m not going to use LaTeX for that! Would it just be $x$? Soooo frustrating not to know until it’s published!) (Aha! If I publish it as “Private” I can look at the preview of the actual online version – success!) Initially, the tasks will not involve equations at all, but those goofy “Mystery Number Puzzles” such as: I am thinking of a number. . . . When I multiply it by 2 and then add three, multiply the result by 4 and divide by 6, then subtract 5, my answer is 1. WHAT is my number?! Of course, if this task is just given verbally, it would be EXTREMELY challenging to solve – who can remember back that many steps?! My lesson plan involves posting pages all over the classroom (in those lovely page protectors of course.) Students working in pairs would all start at a different spot in the room and complete a Scavenger Hunt. (I know I read something about this idea somewhere, but I don’t have the link. Edit: Found the link to Math-In-Spire thanks to @msrubinteach.) The bottom of the page will have a “Mystery Number Puzzle.” Once the pair has “solved the puzzle” they search around the classroom for that “solution” on the top half of another page. There will also be a “symbol” on the page for them to record. On the bottom will be a new puzzle to solve, and so on, and so on. There will be three different “sets” of six puzzles, so after six problems they SHOULD be back where they started. (Unless, of course, they solved a problem incorrectly and moved onto another “track” – uh oh!) The “symbols” they record will also be in the proper order if the problems were solved correctly. A class discussion involving strategies used to solve the puzzles would ensue. After this opening activity, we will take a look at how to record this information mathematically. The first step would be to determine how to actually write down the original puzzle in mathematical form. After a brief partner brainstorming session the following “should” be agreed upon. (It will be interesting to see whether the use of parentheses will be remembered and/or emphasized.) I am thinking of a number. . . When I multiply it by 2: $2x$ and then add three: $2x+3$ multiply the result by 4: $4(2x+3)$ and divide by 6: (here’s the tricky LaTeX part) $\dfrac{4(2x+3)}{6}$ then subtract 5: $\dfrac{4(2x+3)}{6}-5$ my answer is 1: $\dfrac{4(2x+3)}{6}-5=1$ WHAT is my number?! It is important to note here that some students may alternatively use: $(4(2x+3))\div6-5=1$ instead of the fraction bar, and that is totally acceptable. Next phase: pairs return to their “starting page,” flip up the page, and write out the mathematical representation on the top half of the BACK of the page using a dry erase marker (or dry erase crayon.) After returning their page to its original position, students do a Gallery Walk around the room and mentally conjure up the proper representation before peeking on the back side to see what other students wrote. The last step involves representing the solution process mathematically. I am not “picky” when it comes to solving these types of linear equations. As long as there is only one term involving $x$ (wow, it just starts to roll right out of your fingers) the entire process can be done using inverse operations. In fact, my goal would be for students to be able to write the solution for THIS equation in the following way: $x=\dfrac{6(1+5)/4-3}{2}=3$. I certainly do not expect that right off the bat, but I am very comfortable with something along the lines of: I would hope that students would begin to complete at least a few steps at a time while still showing the computation appropriately, such as: As students begin to describe their steps/explain their thinking, I would introduce the term “inverse” to move along their mathematical vocabulary. Finally, students return to their “page” and write their mathematical solution process using as few “lines” as they feel comfortable using, as long as ALL of the steps in the process are shown. Gallery Walk THIS time involves making sure each pair DID show each operation in the process! (Ok, it’s not really over yet, but probably for that day. Given a multi-step equation, students will write – and solve – the Mystery Number Puzzle associated with it. Students will create their own puzzles and challenge others to solve them. THEN, what to do when the puzzle involves a step like: subtract your original number. . .? JUST when they’re feeling all proud and confident you set the bar a bit higher and off they go again!) Final note: found a link on (someone’s blog – I think it was a new blogger – yeah, that really narrows it down- I was just so awestruck when I clicked the link that I never went back) post to an AWESOME WebApp called WebEquation, at least for iOs. (Maybe Android?) You can literally just WRITE a mathematical expression/equation/whatever and it will turn it into LaTeX and “Math Script” While I totally appreciate Sam’s link to more info on LaTeX syntax, I seriously would just go to WebEquation and handwrite the expression to find the proper way to “type” it. Unfortunately, you can’t just copy and paste the LaTeX script – it comes out in HTML or something. However, you CAN press on the LaTeX until it opens a new window showing the “Math Type” version and then click to save that as an image: (Ok, technically I cheated. It would not actually paste the image of the equation, but I pasted it onto a Pages document and then took a picture to insert here. Not sure it’s worth the effort here. It does make a nice work around for Pages and Keynote though!) So. . . depending on the complexity of the expression, I might just skip the LaTeX altogether! msSunFun: Homework Hassles Somewhere along the clogged up Google Reader, I missed the topic for this week’s: It didn’t take more than opening up Flipboard to find out that the dreaded “Homework” is what’s on the menu today. I have taught Middle School Math for the past four years. Until this year I have always taught sixth along with “something else.” Beginning Wednesday I will have three classes of Algebra (8th graders) and three classes of Math 8. I must admit that I am not exactly looking forward to the “Homework Hassles.” I use Standards Based Grading in all of my classes, and one of the tenets that I feel strongly about is that a student’s grade is based purely on their understanding of math concepts and not on “participation,” “effort,” “behavior,” or “homework.” Therefore, even though I assign homework fairly regularly in my classes, it does not factor into their overall grade. Let me rephrase that: There are no “points” from homework involved in their grade, but I do feel there is a fairly strong correlation between a student’s homework completion rate and their overall grade. Unfortunately, (surprisingly enough,) not all middle school students necessarily have that same philosophy. From my (albeit brief) observation of student behavior, sixth graders are much more “regular” in regards to homework completion. I think that whatever habit they developed in Elementary School tends to stay with them as the begin their life as a middle schooler. At some point, for some of the students, “grades” start to get in the way. If an assignment for another class counts as a part of their “grade” then it surely takes precedence over their math homework that “doesn’t really count” for their grade. By the time they are eighth graders, the percentage of the student population who find “more important things to do with their time” has definitely increased. Instead of completing a set of practice problems that your teacher has assigned in order to help you learn and retain the math concepts, school has become a series of earning “points” (or not earning them) to reach a desired “grade.” Enough ranting, here is my “multi-pronged” approach for this year. Math@Home Logs For at least the first month of school, all students will be required to fill out a log, tracking the date, the assignment, time spent, whether it was completed, student initial, parent initial, and teacher initial. This will be kept in the front “pocket” of their math notebook. I will stamp (or not stamp) daily and collect the log on Fridays. A “bulk” email will be sent home to any students who were negligent in completing the assignments and/or having their parent/guardian sign off on their log. If a student has been successful at completing their homework for the first month of school they will be excused from completing the log unless their habits begin to change. No Homework Notice As I circulate and check off (stamp) homework, student who did not complete the assignment (didn’t start, didn’t finish, or didn’t show process) will be given the following reminder: and will then be required to fill out a GoogleDocs form detailing the reason why they didn’t complete the homework along with a plan for completion. I can print off the spreadsheet whenever necessary. As students do complete the assignment I can delete that row from the sheet. Morning Math / Learn @ Lunch / Afternoon Academy These are just my fancy ways of saying before school / during lunch / after school. Each Monday, students with assignments still missing will need to choose 15-60 minutes (depending on how much is missing) of time in which they will come to class and work on their missing work. (I have a cool grid on my board all ready to go for this. Students will each have a magnet with their name that can be placed in the location of their “first appointment.” No picture today – maybe later this week when I am at school.) Making Homework Accessible and Appropriate. Why have homework? What is my goal for my students when I assign it? These are important questions to consider when making decisions about a lesson each day. One goal I have for this year is to make sure I don’t assign problems “too early” in the concept development cycle. Just because a topic was introduced in class that day, doesn’t mean that students are really ready to “practice” on their own. Instead an assignment might focus on previous skills that will be helpful in solidifying the current concepts. This will take careful consideration, since I am not always sure how much progress will be made in class each day. For some units I design quite a bit of the homework assignments myself, trying to incorporate some “puzzle/problem” solving activities that require students to come up with strategies that will, in the long run, help develop their understanding of math concepts. (See the example in this “Tricky Tables” post.) In addition, I hope to make textbook homework less “rote” and more “reflective” at least SOME of the time. A few posts by David Coffey that I read recently share how to “flip homework” so that students are really analyzing a set of problems instead of just “cranking out answers.” I certainly don’t have the “magic bullet.” I forced myself to refrain from reading today’s posts before composing this one, but I am looking forward to finding some ideas that will be useful. :) Tech Talk: Keynote + Absolute Board = Awesome! Continuing my series on iPad Apps. . . When I first received my iPad a year ago last spring, I was DESPERATE to use it in the classroom. My school laptop was getting old and cranky, freezing up at the most inopportune moments. I have used Power Point in the classroom for quite awhile – not to “tell” students information, but to “ask” them questions. I searched through the App store quite extensively, looking for something cheap to do the job, but nothing looked too promising, so I bit the bullet, forked out $9.99 + tax and bought. . . My kids use an Apple laptop at home, so I was familiar with the program. (We actually bought a “five-user pack” of Keynote/Pages/Numbers way back when. . . I cringe when I think of how much we spent. Who knew what would happen to the App industry?!) The iPad version doesn’t quite have as many features as the Mac version, but an upgrade sometime during the past year brought the two closer together. I have been able to email my old PowerPoints to myself and open them in Keynote. Occasionally there are a few glitches – borders on a table don’t show up, or a cool font isn’t available – but for the most part it has worked out well. I have different slide backgrounds that I use for different activities in the class, and have added to my collection with the ones in Keynote. Working on the iPad is more enjoyable than on the laptop, and moving or modifying slides is a snap. You can “nest” slides under one another, so this summer I combined all of my slides for each day of a unit together, with a general lesson plan as the “top” slide. Clicking on the triangle by the slide will “collapse” all of the slides underneath. Drawback #1 One glaring absence in Keynote is the ability to use superscript (and subscript, but I don’t need that NEARLY as often.) It is really unfathomable to me that this is not available in the font modifications. HOWEVER, I have found a work around. Remember when I said I emailed PowerPoints and opened them within Keynote. The superscript STAYS so I just have a “fall back” slide that I go to when I need to use an exponent. I can change the font/color/size and the superscript will change along with it! (Strange, but true!) I use a similar “shortcut” with square roots, as the keyboard shortcut for the radical symbol is non-existant, but I’ve copied it over from PowerPoint. Since the highest level of math I teach is Algebra, it’s not as if I need a full-fledged Math Editor (although it would be nice!) I am playing around with Mathbot and TeXit and learning a bit of LaTex so that I can paste in some more complex equations later on this year. Drawback #2 I always though it would be nice to be able to “draw” on the slides! It is, after all, an iPad App! There is really no “freehand tool.” You can created shapes and lines and curves, but you can’t just “scribble something out” on a slide. At first I was really bummed about this, but I have come to realize that maybe I wouldn’t really want to, since I would need to use the slide again the following period. It would be kind of a pain to make multiple copies of each of the slides I wish to write on, so maybe not such a deal-breaker. (Although, it still would be an awesome feature just for CREATING the slides, but I doubt that Apple is listening.) So. . .from the multitude of options that I have downloaded, played with, and even used for awhile, the winner in the end is the FREE App. . It really IS free! The way I use this in conjunction with Keynote on my iPad is that I will take a screen shot of the PowerPoint slides I plan on using so that they are stored in the Camera Roll. When I pull up Absolute Board I can quickly pick the slide and it will fill the screen. (For awhile I did it ahead of time, but it doesn’t take any longer to grab the slide than it does to select it from the pages stored in Absolute Board.) I can zoom in and out, change pen size and color, and write down a solution process as a student shares it aloud. (I have found that it is a BIG time saver to have me record rather than a student “write” on the iPad. There are other opportunities in class for them to write.) The “marked up” slides are then saved in Absolute Board, and I can pull them up later. I am not sure what the limit is on the number of “pages” you can store. Every once in awhile I will “purge” the old drawings, but I can also save them to my photo album if I wish! These two together make a great combination for me! Made4Math Monday: Sticks and Seats One of the benefits of working part time for a NUMBER of years was that I could find the time to be a “parent volunteer” while my own kids were in Elementary School. I must say that it was a valuable experience as a teacher as well. Most of my teaching experience up until that point was at the high school level, so many of the classroom routines were new to me:) One of those ideas was “picking with Popsicle sticks.” The question is. . .how do you pick sticks when you have six different classes? Do you have six different cups of sticks? After a few years of “refinement” I have a method that works well for me. I picked up a package of big, brightly colored “craft sticks” at the dollar store. There are actually six different colors, but I generally only use four. I wrote the numbers 1-8 on the bottom of the sticks and I am good to go for a class of up to 32 students. Each student is assigned a color and a number (which they remember quite readily after the first few days.) I, on the other hand, have a “cheat sheet” that I post on the board and “borrow” during class so I can actually call on the student’s name instead of just the “color-number combination.” Here is my “Wizarding World of Harry Potter” butterbeer cup that I will being using this year. (I am finally retiring my University of Minnesota cup with the lettering all worn off, but I am still using it to hold my little six inch rulers.) Here is one of the (currently empty) “cheat sheets” that will be filled with names once our class lists are finalized. (Ha! I just noticed the Yellow column twice. I must have used my class that only had three colors last year and copied and pasted a new column. It’s supposed to say Green!) Soooo. . . what if you have more than 32 students in a class? I’m glad that you asked. There are a number of different ways that you can deal with this issue. Since there are more than four colors available, you can certainly use more colors and fewer sticks of each color if you wish. Last year I had a class of 34 students and I included two blue sticks (in addition to the red, orange, yellow and green ones) numbered 1-2. If you have far fewer than 32 students (less than 24, for instance) you can eliminate one color all together, and never “pick” that color during that class. I have an additional “rule” that I follow as well. If I pick a stick that does not “belong” to anyone in the class (not just because they are absent) then I am the one who must answer the question! (Occasionally I have a class with lots of “unclaimed sticks” and I end up “calling on myself” more often than I wish, so I will put a limit on how many times I can answer in a period.) I rarely use the sticks for “total cold calls” in class. Quite often students work on a problem, discuss it with a partner, and then I pick a stick to share their response. Other times I will have students work on five or six problems and as I begin to pick sticks, the first person chosen is allowed to decide WHICH of the five or six problems he or she wishes to share. (If no choice is made, I will choose!) By the time the last person is chosen, at least we have already discussed the other problems – even if the most challenging one was left for last. However, this is certainly not always the case. Some students definitely WANT to share their response to the toughest problem! I do not allow students to “pass,” even if they have not completed the problem. I will instead ask questions, questions, and more questions, to help them reach a solution. I will also call on raised hands after a problem has been shared if students wish to add more information, an alternative process, or an alternative solution. I generally leave the stick out of the cup for the rest of the period. I am not sure this is a “good thing.” Some students then “relax” knowing they won’t be called again. Others are disappointed that they won’t get “picked” another time. I have to chuckle sometimes at the responses I get when I start to pick a stick. (Some times I will have already grabbed it, just waiting for the time to call.) Since they know their color, some are happily anticipating that it might be them, while others are nervously hoping it WON’T be them! Often the people up front will see the number as the stick is drawn (apparently I’m not very adept at hiding it) and actually KNOW the particular student before I can even look it up! Occasionally a student professes “disbelief” because they (let’s be honest, it’s usually a “he!”) “called it” that he would be picked. Oh, sixth graders – I will miss them this year :( Why all this hassle just to assign a stick to a student? I have ulterior motives. :) The color-number combination is also often used to assign seats! On a daily basis, students enter the room and need to figure out where they are sitting and who they are sitting with for that particular day. (I alluded to this in my First Day post, but here is the full text version.) Some days the desks will be in groups of four and they might be sitting with their number groups or “half” of their color groups (evens and odds or highs and lows.) Other days they will be sitting in two’s where they are generally paired up with someone else in their color group. Since there are usually six or seven other people in their color group, this partner will also change from day to day. Part of my reason for posting the lists is that they act as a “cheat sheet” for the students as well. You may have noticed signs above the class listings that notify students the groups for the day. I have laminated (double-sided so I can just flip for a new option) all of the different seating choices available. For the “color pairs” there are seven different signs that all have each number paired with a different “partner.” (Now there’s a math problem for you!) I also have laminated signs for each number group that I set out on the groups of four, and for each color group that I place out to determine the “row” or “section” for that color. These are all stored in a basket right under the section of the board I use for group assignments. Just to make things even MORE confusing, I also have a few additional ways of picking the groups for the day. One is “Find Your Match” (pairs or trios) that I described in my post on Math Cards. The other is “Just for Today” groups (pairs, trios, or quads.) I often use this option after a formative assessment or during review activities where I purposely group students so that at least one person in the group has a strong understanding of the concepts. I sort their formative assessments and put them in groups, then jot down the names on a blank seating chart in a page protector using an overhead pen (I guess they still have their uses!) I assign new “color groups” every quarter. (I used to do it more frequently, but it can be time consuming, and with eight people in group, plus the other options, they get quite a variety throughout the quarter.) Initially the groups are usually alphabetical. By the second quarter I put some work into assigning groups. I often have the highest performing students with the same number (or two to three numbers) so that I can choose to use “Number Groups” when I want to differentiate a bit. (Those groups would “receive” a more challenging problem than some of the other groups.) I am also aware of when I end up with “high-low” pairings so that I either take advantage of built in “tutors” or at least I do not plan an activity in which partners might be “competing” against each other. A definite part of my lesson planning involves deciding how students will be grouped for the day. Finally, for the last quarter I usually allow some student input regarding who they would like to have in their group. I take requests of 2-3 people for each student and I can usually place them all in a group with at least ONE of the people they requested, often with two (or another in their number group.) Again, THIS is a challenge as well! Back to Sticks There were blue sticks I used in my class of 34 last year. When in “Number Groups,” they just created a group of five. When in color groups pairs they were my “Wild Cards.” They took the place of students who were absent for the day or sat together as a pair. (I also randomly drew sticks that they would switch with so each day there were different student’s sitting in the “Wild Card” seats.) For odds/evens or high/lows, if everyone was present, I would have them join a “convenient group” with the most extra space to form groups of five. I would also probably do this for a class of 25 or 26 instead of having four color groups with “lots of empty seats.” Last year I came up with a new way to use the sticks during group work. If the students were in Number Groups, I would walk around with one stick of each color in my hand. When stopping to check on a group or to answer a question, I would place the sticks behind my back, mix them up and pick one to determine who to call on to either ask or answer a question. If students were in Color Groups, a collection of sticks with different numbers could be used. (Although I also used an octahedron, but this sometimes resulted in MANY rolls if I was at an “even” group and the numbers kept coming up Whatever For!? Sticks: I am TERRIBLE at calling on “purely random” students, so the sticks help me to do that on a more regular basis. (We also have class discussions that don’t involve sticks – it depends on the particular activity.) Seats: I want every student in the class to be comfortable and familiar working with every other student in the class. We all have our strengths and weaknesses that we bring to a group and I want student to be aware of that fact, and looking for opportunities to share their strengths while acknowledging and working on their weaknesses. Not every student “enjoys” working with every other student in the class, but they know it will change the next day. Sometimes we will stay in the same groups for two consecutive days, but really no more than that. Some students swear to me that they have been with the same partner waaaay too often because they “happen” to end up partners in find your match (over and over. . .), they are IN the same color group so they are in pairs and quads with that person, AND I even put them together in a “Just for Today” group!! Oh, the injustices of being a middle school student! Minor Pitfalls Last year my sixth graders were my first class of the day. The buses arrive by 7:30 and class starts at 8:00, so more than a few of them would “hang out” in class for awhile. Then, I started noticing some patterns. Within a “column” of eight seats, the pairs can choose their spots on a first come – serve basis. Some students would quickly “claim” spots so that they could be just across the aisle from their “BFF” – who they were not actually in a group with (on purpose, from my point of view) but wished to sit near them anyways. I began to be quite careful about where each color groups was assigned, or where each number group was placed to try to avoid the “cliques.” I was not always successful. This year I have 8th grade Algebra students first thing in the morning. I am not sure they will “hang out” in class before school, but if so, I think I will wait until closer to 7:55 to make the group placements for the day! Whew! “Leadership Team” meeting this morning, working in my room all afternoon, and I still finished this post at a decent hour – West Coast Time! Still nine more days ’till students arrive – unless you count out Open House on Wednesday, but I think I’m ready for that :) Reflections: Lurking to Learning Today is my one month “blogversary!” I don’t have much to add to the conversation on Advisory for msSunFun, but I do have an item on my To Do page that I would like to tackle: My journey to becoming a blogger. I first “lurked” onto the “mathedublogger” scene just over SEVEN years ago! At the time I was teaching at an Alternative High School, and we had just been “chosen” to participate in a 1-1 program beginning during the 2005-6 school year. We (the teachers) had some training at the end of the school year and then took our laptops home over the summer. Wow! The Internet in your lap! During the hours/days/weeks spent searching for online tools and resources I ran across some “bloggers” talking about what they do in the classroom – especially in regards to technology. It was still a fairly new idea from my perspective. Every once in awhile I would wander back and take a look at what was happening. Often one site would link me to another, and so on, and so on. . . Meanwhile, I moved to one of the middle schools in the district and was no longer involved in the 1-1 program. However, our Computer Lab teacher shared GoogleReader with us, and my lurking seriously went up a notch or two. My tiny iPod touch became my window into the “mathedubloggosphere.” I don’t have the data to back it up, but I really do believe that the MathEd blog scene has grown exponentially – meaning that the growth was actually quite gradual to begin with, but then started to take off! I remember running across dy/dan (what a cool blog name, I thought) before he WAS dy/ dan! I “lurked” as some of my favorites, Continuous Everywhere but Differentiable Nowhere, f(t), and MathMamaWrites built their followings. ThinkThankThunk next slammed onto the scene along with his SBG cohort Point of Inflection. Even though most of the bloggers were teaching at the high school level (or higher) and I was now working with sixth graders, I could relate to a lot of what they were saying – especially in regards to SBG. I thought, hmmmmm, maybe I should do this. However, I am the LAST one in a large group of people that I don’t really know to actually speak up. . .And I wonder why my own kids are so shy. . . The next “jump” in my lurking occurred when I got my iPad and found Flipboard, that I shared in this post. It makes scrolling through blogposts pure pleasure! I would look at the blog rolls of the bloggers I was reading, check out new writers, subscribe to their posts, and so on, and so on. . . After returning from our vacation this summer, I read some posts about Twitter Math Camp. Seriously? These people just planned their own “retreat/workshop/conference!?” Wow! More new bloggers were added to my feed. . .and then one afternoon I was weeding out in the yard, listening to my iPod when I heard the lyrics, “This is your life, are you who you want to be. . .” and something just clicked. Yeah, I can do this! You know, it makes so much sense to do it NOW, when I am just going back to teaching full time after being part time for 16 years. Sure, I have PLENTY of time n my hands! Oh well, I downloaded the WordPress app (recommended by Sam) and signed up for Twitter to boot! After all, I had already picked out a name and everything. The rest, as they say, is history. . .but not really what I expected. It has been a roller coaster ride over the past month. I started out with ideas about what I wanted to “say,” but I find myself “saying” a lot of other things too. msSunFun came onto the scene shortly after I first started. “Ok, I’ll try that.” Then I noticed the Made4Math Monday. “Suuuure, that too.” Sam posted his New Blogger Initiative. “Hey, that’s me! I’ll sign up.” All of a sudden I have more ideas to share than there are hours in the day. How to keep up? Oh yeah, and school is starting up soon, too! My Start, Stop, Continue post includes blogging goals that I hope I can keep. I even made a To Do’s page to keep myself honest. Posting comments on other blogs was one of my first “baby steps” after I started my blog. I really appreciated it when the blogger would then reply back. Especially now that I added all of the NBI bloggers to my feed, I find myself overwhelmed with how much there is to read. I want to make “meaningful comments,” as opposed to “I really love that idea.” Maybe that’s more of a “Twitter” response to a post. Twitter has been hard to “jump into.” I once tweeted that I felt like the “new kid at school,” just listening in on other conversations – except they may have happened hours ago! It is very strange sometimes. I will “reply” and then realize that the person may have already moved waaaay beyond that part of the conversation. I don’t know how people can even BEGIN to follow as many people as they When I was lurking it was so much more of a passive experience. Now that I am “in it,” I have been learning sooooo much more from others. I am reading posts and tweets from Middle School Math teachers who are out there in force, (@jruelbach, @4mulafun, @fawnpnguyen, @mr_stadel, @Borschtwithanna, @mathbratt, the list goes on, and on, and on) and I didn’t really know about them before. I LOVE the posts and twitter conversations with/between those involved in Math Education (@delta_dc, @mathhombre, @ChrisHunter36, @trianglemancsd) that make me think more deeply about learning mathematics! I feel a kinship with other “newbies” like me (@danbowdoin – although he is on the fast track to blogger stardom, @G8rAli – who should really start a blog, @aekland – who has such thoughtful posts, @ray_emily – who has an abundance of enthusiasm, and Pai Intersect – who I haven’t seen on Twitter, but has great insights on his blog.) I vacillate between thinking that “nothing I have to say has any value when compared to all of the ideas that others have shared” and “oh, I really want to chime in,” or “I think I should share that. . .” I am surprised at how much I have learned about myself and my teaching from writing posts for the blog. @ray_emily tweeted earlier today: “I’m finding I have a new clarity / fresh eyes on a topic after blogging about it.” I entirely agree! I am especially looking forward to learning even more as I blog about my experiences in the classroom :) New Blogger Initiative: Integer Context Cards For Week 2 of the “New Math Blogger Initiative” (or is it an “initiation?” hmmmmm) I decided to learn how to embed a document using Scribd and show something that I am proud to share! If you read my Made4Math post from last week (Math Cards) you know I like “multi-purpose” tools. I haven’t been teaching Middle School too long, in the grand scheme of things, and when I first had the opportunity to introduce Integers to a class of very low sixth graders (all Level 1 on the state test) I knew that putting them into context would make all the difference. So. . . what are some contexts for integers? Well, there’s temperature, and altitude, and money. . .? I brainstormed long and hard and came up with quite a list. Without further ado, my very first Scribd document! Integer Context Cards (Hmmmm. Rats! The font changed when I uploaded the document. The original was in Herculanum, a very cool looking ALL CAPS font, so now it appears that I don’t know my capitalization rules – oh well.) There are a total of eighteen different contexts with six cards for each one. Some are “stretching it” a bit, but still reasonable. Note: for altitude, one of the cards is for Arlington, Washington where my school is located, so you might want to “personalize” that one. :) I made one copy of each page on a different color of copy paper to make them easier to sort and then had them laminated, cut, and paperclipped in sets of six. (I only made one set for the entire class, but you can certainly create duplicates, especially if you have a very large class.) Order, Order, Order Phase 1: After a brief exposure to a few of the cards, we started our first activity with the cards. My students were already assigned to groups of six. We actually went out into a space in the hall for this.) Each group received a different set of cards, passed them out amongst the group members, and silently “raced” to put themselves in order from least to greatest. Once a group was done, they ALL had to raise their hands. After I checked for accuracy, they turned in their set and grabbed a new one. (The first year I did this I checked the groups off on a master sheet, but the following years I just trusted their memory – “We already did that set.”) The goal was to accurately get through as many sets as they could in the time allotted. Often enough, groups were in too much of a hurry to read carefully enough to identify the “key words” that signified whether their value was positive or negative. Getting a “no” when their hands were raised definitely encouraged them to take their time a bit more. Phase 2: The next activity took it just a bit further. Groups (of three this time) had a sheet of number lines. Each time they received a new set of cards they had to “fairly accurately” plot and label all six values (along with zero if it wasn’t in the set) on a number line. Choosing a scale was challenging for some of the sets (especially with the first group of students.) (See the cool font? Oh, well.) Integer Operations in Context A few weeks later in the year, my seventh grade class was working on operations with integers. I printed the Integer Card sheets four to a page and created little packets for students to share. (See the image above.) Using a “Think Pair Share” type of model, I posed questions using the people, places, or things on the cards and students had to write a math sentence, model the problem on a number line, and find the value (first on their own paper, then with a partner on a mini-whiteboard.) A big key was writing the math sentence as opposed to just finding the value. I wanted them to make the connection so that when they saw a “naked numbers” problem they could try to connect it to the contexts we worked with in class. We initially focused on addition and subtraction situations: (**The white text is shown first. After the Think-Pair-Share on whiteboards I reveal the number sentence and diagram and move onto the next problem.**) Next we moved onto subtraction in “finding differences” contexts, as well as multiplication and division. I dropped the “number line” requirement, although we did end up sharing it on some of the more challenging multiplication/division situations. (Again, the white text for each problem is given first, TPS, then the yellow answer is revealed, discussed, and we move on to the next problem.) In subsequent years I used the Integer Operation activities with my sixth grade classes as we introduced this seventh grade concept in the Spring after the state test. During “Review Activities” time in class I had small groups create original “stories” along with their associated number sentences, from the cards, but I think I would like to make that a more integral part of the initial learning as well. Ahhhhh, context! Even if it is really “pseudo context,” in this situation students have something to grab onto that helps them make sense of the integer operations as opposed to a mercurial “set of rules to follow.” On the other hand, it is THROUGH these experiences that students begin to create their OWN rules about the patterns involved in integer operations, but only when the concepts “make sense” to THEM! I think we are finding a bit of EMU right there. :) Note: There are multiple slides for each operation, and if I got my act together I might copy and paste them all into one slide show and try to attach them, but that would take a higher level of understanding on Scribd than I currently possess, as the slides are all now on my iPad, and I am just sharing screenshots. Made4Math Monday: Monster Whiteboards Yesterday I finished getting my “monster” whiteboards ready and I brought them into my classroom today. I am far from the first to have created these learning tools. Frank Nochese sang their praises here, and Anna followed up with another post as well. I use the little mini-whiteboards quite regularly with pairs, and they will still have a place in my class, but the opportunity for groups of three or four is really exciting! I especially envision using them for problem solving, such as the chessboard problems for my First Day Activities. I am concerned that we will not always have enough time to complete the “solving” as well as the “sharing” in one class period, but my solution will be to at least take photos of all of the boards and project them on the screen the following day as groups present. Another intriguing use will be the Mistake Game, as described by Kelly O’Shea. (Go there! Read it!) Sooooooo looking forward to trying this out – especially with my Algebra students because I think they will thrive on it. However, in the long run I predict it will be incredibly valuable to my Math 8 students in freeing them from fear of failure. The classes on the whole are not full of students who have been successful in the past, and developing a classroom culture where mistakes are acceptable and even celebrated as ways to learn concepts more deeply (EMU!) will be a huge step for their learning. I bought mine at the local Home Depot for about $13.50 per sheet. I picked up two that they cut (for free!) into the six 24″ x 32″ pieces recommended by Frank. (I considered going 2′ x 2′, but I am very glad I went with the extra inches on the length.) My next task was to put duct tape (or duck tape) on the edges to keep them from degrading. This is where Anna gets all the credit! I had planned on “buying” some of my daughter’s stash until I “did the math!” Each board requires almost ten feet of tape (112 inches of perimeter, plus a bit extra on either end that I cut off while taping.) Since I had twelve boards, the 120 feet of tape would have decimated her supply. (As it was she wasn’t going to give me the “fancy” stuff anyways.) I decided to go with my school colors and picked up a blue roll and a yellow roll of 20 yds each. Needless to say, there’s not much left. (Maybe just enough to use for periodic “repairs.”) The tape cost $7 for the two rolls, for a grand total of $34 + tax, or about $3 a board. (The cuts were free, and didn’t take long at all, but the taping process took me over an hour!) Here they are! Notice as well, the PERFECT storage space that is holding the other ten! Now, for the writing instruments. I am always frustrated by how quickly the dry erase markers die. Last year I picked up some of the Crayola Dry Erase Crayons. Other than the periodic breakage, they seem to “last” much longer. (Hey, when they break, one crayon turns into two!) In addition, I have to watch out for pieces that accidentally “chip” off, are left on the floor, and then get ground into the carpet.😞 The crayons look pretty sharp (sharp = awesome, not sharp = pointed) on these boards, and the color variety is great, with the exception of the yellow crayon – whoever thought it would work well on a whiteboard was sadly mistaken. I guess they sell yellow markers as well – maybe it’s for those black dry erase surfaces. I “inherited” a number of boxes of the crayons (eight colors in a box plus an eraser “mitt” and sharpener) when I moved into this classroom – enough that each group could have their own box! (I will generally only use eight boards at one time.) The kids liked markers better, but maybe the different colors will “sell it.” One more note about the duct tape. In hindsight I don’t think yellow was a great choice. As I was erasing one of the boards today I realized the dry erase particles will accumulate along the edges of the tape, so lighter colors will start to look “grungy” after awhile. Oh, well. One “short” story about my whiteboard history. In the early nineties (yes. . .I am dating myself, but I have already done that on a previous post) I was teaching Calculus. I had one particular student who repeatedly asked if he could go up and work out problems on the corner of the board while the class was working on a set of problems. It was certainly fine with me. He commented on how his thoughts seemed to “flow” from his brain when he used a whiteboard. It was not long before he bought himself a 1′ x 2′ framed board to use at home, as well as another one that he brought to school (and stored in the classroom) to use at his desk. I have no idea where he found these boards, as I had never seen them in stores. At the end of the year he gave me the one he had brought to school and I still have it to this day. (It was one of the last years that I taught Calculus. Not many years later, I went on maternity leave and then came back to teaching just part time. My own kids scribbled and doodled on it while they were toddlers.) Jump forward about ten years, when I was volunteering in my son’s first grade class, I observed his teacher using a class set of mini-whiteboards with her students. Within about a year, I had a set for my classes. Now I am on to the next phase – MONSTER whiteboards! (Ok, it wasn’t so short. Have I mentioned that I ramble . . .?)
{"url":"http://findingemu.wordpress.com/","timestamp":"2014-04-20T05:43:39Z","content_type":null,"content_length":"110552","record_id":"<urn:uuid:ea3577d2-c5f5-41d7-90a8-4ec8aa3b57fe>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum Groups, Quantum Categories and Quantum Field Theory Results 1 - 10 of 50 - Jour. Math. Phys , 1995 "... For a copy with the hand-drawn figures please email ..." - Annals of Mathematics "... Abstract. In this paper we extend categorically the notion of a finite nilpotent group to fusion categories. To this end, we first analyze the trivial component of the universal grading of a fusion category C, and then introduce the upper central series ofC. For fusion categories with commutative Gr ..." Cited by 76 (17 self) Add to MetaCart Abstract. In this paper we extend categorically the notion of a finite nilpotent group to fusion categories. To this end, we first analyze the trivial component of the universal grading of a fusion category C, and then introduce the upper central series ofC. For fusion categories with commutative Grothendieck rings (e.g., braided fusion categories) we also introduce the lower central series. We study arithmetic and structural properties of nilpotent fusion categories, and apply our theory to modular categories and to semisimple Hopf algebras. In particular, we show that in the modular case the two central series are centralizers of each other in the sense of M. Müger. Dedicated to Leonid Vainerman on the occasion of his 60-th birthday 1. introduction The theory of fusion categories arises in many areas of mathematics such as representation theory, quantum groups, operator algebras and topology. The representation categories of semisimple (quasi-) Hopf algebras are important examples of fusion categories. Fusion categories have been studied extensively in the literature, , 1996 "... It has been discussed earlier that ( weak quasi-) quantum groups allow for conventional interpretation as internal symmetries in local quantum theory. From general arguments and explicit examples their consistency with (braid-) statistics and locality was established. This work addresses to the reco ..." Cited by 49 (8 self) Add to MetaCart It has been discussed earlier that ( weak quasi-) quantum groups allow for conventional interpretation as internal symmetries in local quantum theory. From general arguments and explicit examples their consistency with (braid-) statistics and locality was established. This work addresses to the reconstruction of quantum symmetries and algebras of field operators. For every algebra A of observables satisfying certain standard assumptions, an appropriate quantum symmetry is found. Field operators are obtained which act on a positive definite Hilbert space of states and transform covariantly under the quantum symmetry. As a substitute for Bose/Fermi (anti-) commutation relations, these fields are demonstrated to obey local braid relation. Contents 1 Introduction 1 2 The Notion of Quantum Symmetry 5 3 Algebraic Methods for Field Construction 9 3.1 Observables and superselection sectors in local quantum field theory . . . . 10 3.2 Localized endomorphisms and fusion structure . . . . . .... , 1999 "... The geometry of D-branes can be probed by open string scattering. If the background carries a non-vanishing B-field, the world-volume becomes noncommutative. Here we explore the quantization of world-volume geometries in a curved background with non-zero Neveu-Schwarz 3-form field strength H = dB. U ..." Cited by 44 (6 self) Add to MetaCart The geometry of D-branes can be probed by open string scattering. If the background carries a non-vanishing B-field, the world-volume becomes noncommutative. Here we explore the quantization of world-volume geometries in a curved background with non-zero Neveu-Schwarz 3-form field strength H = dB. Using exact and generally applicable methods from boundary conformal field theory, we study the example of open strings in the SU(2) Wess-Zumino-Witten model, and establish a relation with fuzzy spheres or certain (non-associative) deformations thereof. These findings could be of direct relevance for D-branes in the presence of Neveu-Schwarz 5-branes; more importantly, they provide insight into a completely new class of world-volume geometries. "... A 2-Hilbert space is a category with structures and properties analogous to those of a Hilbert space. More precisely, we define a 2-Hilbert space to be an abelian category enriched over Hilb with a ∗-structure, conjugate-linear on the hom-sets, satisfying 〈fg,h 〉 = 〈g,f ∗ h 〉 = 〈f,hg ∗ 〉. We also ..." Cited by 43 (13 self) Add to MetaCart A 2-Hilbert space is a category with structures and properties analogous to those of a Hilbert space. More precisely, we define a 2-Hilbert space to be an abelian category enriched over Hilb with a ∗-structure, conjugate-linear on the hom-sets, satisfying 〈fg,h 〉 = 〈g,f ∗ h 〉 = 〈f,hg ∗ 〉. We also define monoidal, braided monoidal, and symmetric monoidal versions of 2-Hilbert spaces, which we call 2-H*-algebras, braided 2-H*-algebras, and symmetric 2-H*-algebras, and we describe the relation between these and tangles in 2, 3, and 4 dimensions, respectively. We prove a generalized Doplicher-Roberts theorem stating that every symmetric 2-H*-algebra is equivalent to the category Rep(G) of continuous unitary finite-dimensional representations of some compact supergroupoid G. The equivalence is given by a categorified version of the Gelfand transform; we also construct a categorified version of the Fourier transform when G is a compact abelian group. Finally, we characterize Rep(G) by its universal properties when G is a compact classical group. For example, Rep(U(n)) is the free connected symmetric 2-H*-algebra on one even object of dimension n. 1 - Rev. Math. Phys , 1999 "... A two-sided coaction δ: M → G ⊗M⊗G of a Hopf algebra (G, ∆, ǫ, S) on an associative algebra M is an algebra map of the form δ = (λ ⊗ idM) ◦ ρ = (idM ⊗ ρ) ◦ λ, where (λ, ρ) is a commuting pair of left and right G-coactions on M, respectively. Denoting the associated commuting right and left actions ..." Cited by 25 (1 self) Add to MetaCart A two-sided coaction δ: M → G ⊗M⊗G of a Hopf algebra (G, ∆, ǫ, S) on an associative algebra M is an algebra map of the form δ = (λ ⊗ idM) ◦ ρ = (idM ⊗ ρ) ◦ λ, where (λ, ρ) is a commuting pair of left and right G-coactions on M, respectively. Denoting the associated commuting right and left actions of the dual Hopf algebra ˆ G on M by ⊳ and ⊲, respectively, we define the diagonal crossed product M ⊲ ⊳ ˆ G to be the algebra generated by M and ˆ G with relations given by ϕm = (ϕ (1) ⊲m ⊳ ˆ S −1 (ϕ (3)))ϕ (2), m ∈ M, ϕ ∈ ˆ G. We give a natural generalization of this construction to the case where G is a quasi–Hopf algebra in the sense of Drinfeld and, more generally, also in the sense of Mack and Schomerus (i.e., where the coproduct ∆ is non-unital). In these cases our diagonal crossed product will still be an associative algebra structure on M ⊗ ˆ G extending M ≡ M ⊗ ˆ1, even though the analogue of an ordinary crossed product M ⋊ ˆ G in general is not well defined as an associative algebra. Applications of our formalism include the field algebra constructions with quasi-quantum - Commun. Contemp. Math "... Abstract. We classify all unitary modular tensor categories (UMTCs) of rank ≤ 4. There are a total of 35 UMTCs of rank ≤ 4 up to ribbon tensor equivalence. Since the distinction between the modular S-matrix S and −S has both topological and physical significance, so in our convention there are a tot ..." Cited by 13 (7 self) Add to MetaCart Abstract. We classify all unitary modular tensor categories (UMTCs) of rank ≤ 4. There are a total of 35 UMTCs of rank ≤ 4 up to ribbon tensor equivalence. Since the distinction between the modular S-matrix S and −S has both topological and physical significance, so in our convention there are a total of 70 UMTCs of rank ≤ 4. In particular, there are two trivial UMTCs with S = (±1). Each such UMTC can be obtained from 10 non-trivial prime UMTCs by direct product, and some symmetry operations. Explicit data of the 10 non-trivial prime UMTCs are given in Section 5. Relevance of UMTCs to topological quantum computation and various conjectures are given in Section 6. 1. - In "... Abstract. After a brief review of recent rigorous results concerning the representation theory of rational chiral conformal field theories (RC-QFTs) we focus on pairs (A, F) of conformal field theories, where F has a finite group G of global symmetries and A is the fixpoint theory. The comparison of ..." Cited by 12 (3 self) Add to MetaCart Abstract. After a brief review of recent rigorous results concerning the representation theory of rational chiral conformal field theories (RC-QFTs) we focus on pairs (A, F) of conformal field theories, where F has a finite group G of global symmetries and A is the fixpoint theory. The comparison of the representation categories of A and F is strongly intertwined with various issues related to braided tensor categories. We - In preparation "... We show that the author’s notion of Galois extensions of braided tensor categories [22], see also [3], gives rise to braided crossed G-categories, recently introduced for the purposes of 3-manifold topology [31]. The Galois extensions C ⋊ S are studied in detail, and we determine for which g ∈ G non ..." Cited by 10 (4 self) Add to MetaCart We show that the author’s notion of Galois extensions of braided tensor categories [22], see also [3], gives rise to braided crossed G-categories, recently introduced for the purposes of 3-manifold topology [31]. The Galois extensions C ⋊ S are studied in detail, and we determine for which g ∈ G non-trivial objects of grade g exist in C ⋊ S. 1 - J. reine angew. Math "... Abstract. We give a full classification of all braided semisimple tensor categories whose Grothendieck semiring is the one of Rep ` O(∞) ´ (formally), Rep ` O(N) ´ , Rep ` Sp(N) ´ or of one of its associated fusion categories. If the braiding is not symmetric, they are completely determined by th ..." Cited by 9 (0 self) Add to MetaCart Abstract. We give a full classification of all braided semisimple tensor categories whose Grothendieck semiring is the one of Rep ` O(∞) ´ (formally), Rep ` O(N) ´ , Rep ` Sp(N) ´ or of one of its associated fusion categories. If the braiding is not symmetric, they are completely determined by the eigenvalues of a certain braiding morphism, and we determine precisely which values can occur in the various cases. If the category allows a symmetric braiding, it is essentially determined by the dimension of the object corresponding to the vector representation. 1.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1654348","timestamp":"2014-04-18T12:26:43Z","content_type":null,"content_length":"36708","record_id":"<urn:uuid:981cc65b-cc16-4019-b702-a8a4348d35e8>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the stuff we found when you searched for "neck problems" • neck up • Somebody Else's Problem • Monty Hall Problem • oil-water friend problem • Consumer producer problem • Boundary value problems • The Eugenics Problem • Houston, we've had a problem • problem solving • Hilbert's third problem • Elizabeth I's decision to deal with her financial problems • 99 problems but a bitch ain't one • suicide is a temporary solution to a permanent problem • Great Neck • whispers circles into my neck by soft fingers • education problem • the problem at hand • The inherent problem with liberal programs • Problems with the CyberCrime Treaty • The &mdash; problem • Other People's Problems • Integrability Problem in Economics • Black Hand Over Europe - The Croat Problem - III. What the Man-in-the-Street Thinks • Problems with functionalism as an international relations theory • Smart cow problem • The problem with nodes condoning time travel • Perl proggy for making guitar neck diagrams of scales • XOR Problem • Gettier problem • The Problem of the Prisoners • Suicide is a permanent solution to a temporary problem • Everything Daylogs • Problems with John Rawls' Veil of Ignorance • The four vortex problem • The problem with domestic robots today • Hamlet and His Problems • Me versus Mental Health Problems • neck down • NP Complete Problem • Monty Hall problem solution • Technology is the remedy for problems caused by technology • Computer Problem Report Form • One problem with being born really soon after Christmas • coupon collector's problem • The real problem of Israel • Two water jugs problem • The problems with tests in science • Models of American Racial Discrimination • Nintendo Wii power supply problem • Helping children with reading problems • Play the bird with the long neck • If a girl offers you her cheek, kiss her neck • No Shoes, No Shirt, No Problem • 2038 problem • the disappearing area problem • The cheating husband problem • I have this problem with saying "no" to people • problem domain • Pragmatism by William James: Lecture III: Some Metaphysical Problems ... • Foreign aid to Africa • Common knowledge problem • Houston, we have a problem • neck pain • My Problem Child • The mutual problem of Christians and feminists • The problem with Italian food • How to Solve an Academic Problem • The problem with people who think life is inexpressibly beautiful is that they so often try to express it anyway • Problems with the progressive movement in America • The year 292277026596 problem • A, B, and C problems • Bongard Problem • The Mercosur Problem • socialist calculation problem • It's hers elf, that's the problem. • The girls avoid band inflicted neck hickeys • Electrical problems of great magnitude • A Mathematical Problem • Monty Hall problem problem • Pancake Sorting Problem • horizon problem • answer: coupon collector's problem • high ball problems • problem (user) • The straggler problem • The problem of evil in Saint Augustine's Confessions • hailstone problem • P versus NP Problem • Bones of the head and neck • the candy canes problem • I've got severe gibberish problems • Malfatti's right triangle problem • knapsack problem • the fake coin problem • The four problems of surgery, how they were overcome, and when • Sleeping barber problem • The Life on Mars Problem • The Basel problem • The Five-Card Logic Problem • what are you, in love with your problems? • stiff neck • Drinking problem • philosophical problem • Pinky isn't the problem • the cause of, and solution to, all of life's problems • Hammer and nail problem • The problem with having parents who don't fully understand computers • lonely runner problem • A problem with solipsism • Problem of Good • The Final Problem • I realize that I've been missing the backs of necks • old chestnut: house number problems • word problem • Scrappy Mac's Punchout Problem • flatness problem • The Tiger Problem • The real problem with Microsoft • The Devil's contract problem • Ski rental problem • the problem of horror in theatre • Problems with flying today • Collatz Problem (node_forward) • Problem Sleuth • neck stall • The Problem of Pain • Life is not a management problem. • The Problem with Bush and Gore • Approximation Problem • Suicide is a permanent solution to a permanent problem • Physics Problem #3 • Violence and its effect on the individual artist as a social problem • Problems with peer-to-peer file-sharing programs • A problem with shared responsibility • Spinoza's solution to the mind/body problem • Being tickled in the neck by a girl you know a little bit • problem solving matrix • The problem with the Law of increasing possibility • The problem of life and death • Women and their weight problem • The Problem With Popplers • The 7-Eleven problem • Graph-Colouring Problem • circle / ellipse problem • Ostrich approach to problem solving • A problem ordering beer in San Diego • Problems with Porn • How to break your neck and freak people out • The three geometric problems of antiquity • answer: house number problems • The problems of the modern west • The IE Problem • Is there a problem, (insert title here)? • The Tiger Problem - Answer • kitchen remodeling problem • How to resolve carrier-level telco problems • Ski rental problem solution • Smale's Problems • A trying problem to a worthwhile solution • double neck guitar • intractable problem • The Everything Initials Problem • The problem with small towns • The real problem with computerized translations • blue-eyed suicide - a math problem • Dining Cryptographers Problem • System resource problems • The Problem Vector • On the problem of empathy • Butterflies are passive aggressive and put their problems on the shelf, but they're beautiful • Pandeism and the problem of good and evil • neck cheese • Violent solutions to technical problems If you Log in you could create a "neck problems" node. If you don't already have an account, you can register here.
{"url":"http://everything2.com/title/neck+problems","timestamp":"2014-04-19T13:23:06Z","content_type":null,"content_length":"29640","record_id":"<urn:uuid:6c43f3c6-b8a3-40e1-bdc7-c3c3874e50ca>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the coalescence factor? I am reading an article on experimental nuclear physics. The article is about deuteron and triton production in Pb + Pb collisions. In the article they mention the coalescence factor which is given [itex]B_{A}=A\frac{2s_{A}+1}{2^{A}}R^{N}_{np}\left(\frac{h^{3}}{m_{p}\gamma V}\right)^{A-1}[/itex] The coalescence factor has something to do with the formation of light clusters A(Z,N). So A is the mass number of the cluster, the R_np thing is the ratio of neutrons to protons participating in the collision, gamma is just the lorentz factor related to the velocity of the cluster and V is the volume of of particles at freeze-out (after hadronization, when there are no more strong interactions between the nucleons). I don't know what s_A is though. I have worked out the units for A = 2 to be: [itex]s_{A}\cdot \frac{ev^{2}s}{m}+\frac{ev^{2}s}{m}[/itex] But i don't know the units of s_A. I was hoping for this to turn out unit-less, since it could then be interpreted as a sort of probability of a cluster A(Z,N) to form. Now i'm not sure what it is. I apologize if this post was supposed to go in homework. Technically it is a sort of homework question, since i am supposed to present the article as an exam tomorrow. But i figured that there was a bigger chance that someone could help me with this on the HEP board.
{"url":"http://www.physicsforums.com/showthread.php?p=4150536","timestamp":"2014-04-17T03:50:12Z","content_type":null,"content_length":"31111","record_id":"<urn:uuid:d8aa2fd1-b39d-4a70-b205-5095bc83b2d4>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
from The American Heritage® Dictionary of the English Language, 4th Edition • n. The general form or a quantity indicative of the general form of a statistical frequency curve near the mean of the distribution. from Wiktionary, Creative Commons Attribution/Share-Alike License • n. A measure of "peakedness" of a probability distribution, defined as the fourth cumulant divided by the square of the variance of the probability distribution. Greek kurtōsis, bulging, curvature, from kurtos, convex; see sker-^2 in Indo-European roots. (American Heritage® Dictionary of the English Language, Fourth Edition) From Ancient Greek κύρτωσις ("bulging, convexity"), from κυρτός (kurtos, "bulging"). (Wiktionary) Log in or sign up to get involved in the conversation. It's quick and easy.
{"url":"https://wordnik.com/words/kurtosis","timestamp":"2014-04-21T08:21:22Z","content_type":null,"content_length":"34435","record_id":"<urn:uuid:fa562fd3-1204-464f-94c3-8319ad579fc2>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
Circular Logic not worth a Millikelvin Guest post by Mike Jonas A few days ago, on Judith Curry’s excellent ClimateEtc blog, Vaughan Pratt wrote a post “Multidecadal climate to within a millikelvin” which provided the content and underlying spreadsheet calculations for a poster presentation at the AGU Fall Conference. I will refer to the work as “VPmK”. VPmK was a stunningly unconvincing exercise in circular logic – a remarkably unscientific attempt to (presumably) provide support for the IPCC model[s] of climate – and should be retracted. The background to VPmK was outlined as “Global warming of some kind is clearly visible in HadCRUT3 [] for the three decades 1970-2000. However the three decades 1910-1940 show a similar rate of global warming. This can’t all be due to CO2 []“. The aim of VPmK was to support the hypothesis that “multidecadal climate has only two significant components: the sawtooth, whatever its origins, and warming that can be accounted for 99.98% by the AHH law []“, · the sawtooth is a collection of “all the so-called multidecadal ocean oscillations into one phenomenon“, and · AHH law [Arrhenius-Hofmann-Hansen] is the logarithmic formula for CO2 radiative forcing with an oceanic heat sink delay. The end result of VPmK was shown in the following graph Fig.1 – VPmK end result. · MUL is multidecadal climate (ie, global temperature), · SAW is the sawtooth, · AGW is the AHH law, and · MRES is the residue MUL-SAW-AGW. As you can see, and as stated in VPmK’s title, the residue was just a few millikelvins over the whole of the period. The smoothness of the residue, but not its absolute value, was entirely due to three box filters being used to remove all of the “22-year and 11-year solar cycles and all faster phenomena“. If the aim of VPmK is to provide support for the IPCC model of climate, naturally it would remove all of those things that the IPCC model cannot handle. Regardless, the astonishing level of claimed accuracy shows that the result is almost certainly worthless – it is, after all, about climate. The process What VPmK does is to take AGW as a given from the IPCC model – complete with the so-called “positive feedbacks” which for the purpose of VPmK are assumed to bear a simple linear relationship to the underlying formula for CO2 itself. VPmK then takes the difference (the “sawtooth”) between MUL and AGW, and fits four sinewaves to it (there is provision in the spreadsheet for five, but only four were needed). Thanks to the box filters, a good fit was obtained. Given that four parameters can fit an elephant (great link!), absolutely nothing has been achieved and it would be entirely reasonable to dismiss VPmK as completely worthless at this point. But, to be fair, we’ll look at the sawtooth (“The sinewaves”, below) and see if it could have a genuine climate meaning. Note that in VPmK there is no attempt to find a climate meaning. The sawtooth which began life as “so-called multidecadal ocean oscillations” later becomes “whatever its origins“. The sinewaves The two main “sawtooth” sinewaves, SAW2 and SAW3, are: Fig.2 – VPmK principal sawtooths. (The y-axis is temperature). The other two sinewaves, SAW4 and SAW5 are much smaller, and just “mopping up” what divergence remains. It is surely completely impossible to support the notion that the “multidecadal ocean oscillations” are reasonably represented to within a few millikelvins by these perfect sinewaves (even after the filtering). This is what the PDO and AMO really look like: Fig.3 – PDO. (link) There is apparently no PDO data before 1950, but some information here. Fig.4 – AMO. Both the PDO and AMO trended upwards from the 1970s until well into the 1990s. Neither sawtooth is even close. The sum of the sawtooths (SAW in Fig.1) flattens out over this period when it should mostly rise quite strongly. This shows that the sawtooths have been carefully manipulated to “reserve” the 1970-2000 temperature increase for AGW. Fig.5 – How the sawtooth “reserved” the1980s and 90s warming for AGW. VPmK aimed to show that “multidecadal climate has only two significant components”, AGW and something shaped like a sawtooth. But VPmK then simply assumed that AGW was a component, called the remainder the sawtooth, and had no clue as to what the sawtooth was but used some arbitrary sinewaves to represent it. VPmK then claimed to have shown that the climate was indeed made up of just these two components. That is circular logic and appallingly unscientific. The poster presentation should be formally retracted. [Blog commenter JCH claims that VPmK is described by AGU as "peer-reviewed". If that is the case then retraction is important. VPmK should not be permitted to remain in any "peer-reviewed" 1. Although VPmK was of so little value, nevertheless I would like to congratulate Vaughan Pratt for having the courage to provide all of the data and all of the calculations in a way that made it relatively easy to check them. If only this approach had been taken by other climate scientists from the start, virtually all of the heated and divisive climate debate could have been avoided. 2. I first approached Judith Curry, and asked her to give my analysis of Vaughan Pratt’s (“VP”) circular logic equal prominence to the original by accepting it as a ‘guest post’. She replied that it was sufficient for me to present it as a comment. My feeling is that posts have much greater weight than comments, and that using only a comment would effectively let VP get away with a piece of absolute rubbish. Bear in mind that VPmK has been presented at the AGU Fall Conference, so it is already way ahead in public exposure anyway. That is why this post now appears on WUWT instead of on ClimateEtc. (I have upgraded it a bit from the version sent to Judith Curry, but the essential argument is the same). There are many commenters on ClimateEtc who have been appalled by VPmK’s obvious errors. I do not claim that my effort here is in any way better than theirs, but my feeling is that someone has to get greater visibility for the errors and request retraction, and no-one else has yet done so. 118 Responses to Circular Logic not worth a Millikelvin 1. Assume AGW is a flat line and repeat the analysis. When the fit remains near perfect, trumpet the good news that AGW is no more! 2. Another failed attempt to force dynamic, poorly understood, under-sampled, natural process into some kind of linear deterministic logic system. It simply will not work. This foolishness is not worth any further time or effort on the part of serious scientists. 3. What VP has said repeatedly on JC’s site is basically that if you can provide a better fit, please do. Nobody that I’ve seen has yet done so. Now I’m not in any way suggesting that what VP has done is in any way useful science. But still, can you not simply alter the 4 or 5 sines waves to show that you can provide just as good a fit without the AHH curve? If you can, please present it here. And if you cannot, then VP remains uncontested. 4. Why is it that the rationalizations of the Warmistas are beginning to remind me of Ptolemy and The Almagest? 5. I worked in the Banking Industry for most of my adult life. During that time, many people would be applying for finance for this business or that business – maybe a mortgage, maybe a loan. All would arrive with their shiny spreadsheet proving their business model was viable and would soon show profitability I never saw any proposed business plan to the Bank that didn’t show remarkable profit – certainly none ever predicted a loss. Nonetheless, the vast majority of those business plans would fail abysmally. Just goes to show, any spreadsheet can be made to produce whatever results the author wants – just tweak here or tweak there. Now about milli-kelvins ? 6. ‘VPmK was a stunningly unconvincing exercise in circular logic – a remarkably unscientific attempt to (presumably) provide support for the IPCC model[s] of climate – and should be retracted.” 1. its not circular. 2. its not a proof or support for models. 3. you cant retract a poster. 4. This is basically the same approach that many here praise when scafetta does it. basically he is showing that GIVEN the truth of AGW, the temperature series can be explained by a few parameters. GIVEN, is the key you misunderstand the logic of his approach. 7. “[Blog commenter JCH claims that VPmK is described by AGU as "peer-reviewed". If that is the case then retraction is important. VPmK should not be permitted to remain in any "peer-reviewed" Describing VPmk as peer-reviewed is incorrect. Abstracts published by AGU for either poster sessions or presentations made at the meeting are not peer-reviewed. There are quite a few comments at the blog after JCH on this topic. 8. VPmK was a stunningly unconvincing exercise in circular logic – a remarkably unscientific attempt to (presumably) provide support for the IPCC model[s] of climate – and should be retracted. That is over-wrought. Vaughan Pratt described exactly what he did and found, and published the data that he used and his result. If the temperature evolution of the Earth over the next 20 years matches his model, then people will be motivated to find whatever physical process generates the sawtooth. If not, his model will be disconfirmed along with plenty of other models. Lots of model-building in scientific history has been circular over the short-term: in “The Structure of Scientific Revolutions” Thomas Kuhn mentions Ohm’s law as an example, and Einstein’s special relativity; lots of people have noted the tautology of F = dm/dt where m here stands for momentum. Pratt merely showed that, with the data in hand, it is possible to recover the signal of the CO2 effect with a relatively low-dimensional filter. No doubt, the procedure is post hoc. The validity of the approach will be tested by data not used in fitting the functions that he found. 9. Steven Mosher wrote: 4. This is basically the same approach that many here praise when scafetta does it. I agree with that. 10. Steven Mosher: You innumerate four points in your post at December 13, 2012 at 9:04 am. I address each of them in turn. 1. its not circular. (Clearly, it is “circular” in that it removes everything from the climate data except what the climate models emulate then says the result of the removal agrees with what the climate emulate when tuned to emulate it.) 2. its not a proof or support for models. (Agreed, it is nonsense.) 3. you cant retract a poster. (Of course you can! All you do is publish a statement saying it should not have been published, and you publish that statement in one or more of the places where the “poster” was published; e.g. in this case, on Judith Curry’s blog.) 4. This is basically the same approach that many here praise when scafetta does it. (So what! Many others – including me – object when Scafetta does it. Of itself that indicates nothing.) The poster by Vaughan Pratt only indicates that Pratt is a prat: live with it. 11. OOOps! I wrote (Clearly, it is “circular” in that it removes everything from the climate data except what the climate models emulate then says the result of the removal agrees with what the climate emulate when tuned to emulate it.) Obviously I intended to write (Clearly, it is “circular” in that it removes everything from the climate data except what the climate models emulate then says the result of the removal agrees with what the models emulate when tuned to emulate it.) 12. I agree. I commented about it on Curry’s blog and called it worthless. I was particularly annoyed that he used HadCRUT3 which is error-ridden and anthropogenically distorted. I could see that he was using his computer skills to create something out of nothing and did not understand why that sawtooth did not go away. That millikelvin claim is of course nonsense and was simply part of his applying his computer skills without comprehending the data he was working with. Suggested that he write a program to find and correct those anthropogenic spikes in HadCRUT and others. 13. On the sidelines of the V.Pratt’s blog presentation there was a secondary discussion between myself and Dr. Svalgaard about the far more realistic causes of the climate change. Since Dr.S. often does peer review on the articles relating to solar matters, leaving the trivia out, I consider our exchanges as an ‘unofficial peer review of my calculations’, no mechanism is considered in the article, just the calculations. This certainly was not ‘friendly’ review, although result may not be conclusive, I consider it a great encouragement. If there are any scientists who are occasionally involved in the ‘peer review’ type processes, I would welcome the opportunityto submit my calculations. My email as in my blog id followed by 14. Vaughan presented an interesting idea which has been roundly tested by many commenters in a spirit of science and hotly contested views within a framework of courtesy by Vaughan defending his ideas. Personally I’m not convinced that CET demonstrates his theory , in fact I think it shows he is wrong, but if every post, whether here or at climate etc, was discussed in such a thorough manner everyone would gain, whatever side of the fence they are on. 15. vukcevic says: December 13, 2012 at 9:33 am This certainly was not ‘friendly’ review, although result may not be conclusive, I consider it a great encouragement It seems that In the twisted world of pseudo-science even a flat-out rejection is considered a great encouragement. 16. Steveta_uk Re Pratt’s “if you can provide a better fit, please do.” Science progresses by “kicking the tires”. Models are only as robust and the challenges put to them and their ability to provide better predictions when compared against hard data – not politics. The “proof the pudding is in the eating”. Currently the following two models show better predictive performance than IPCC’s models that average 0.2C/decade warming: Relationship of Multidecadal Global Temperatures to Multidecadal Oceanic Oscillations Joseph D’Aleo and Don Easterbrook, Evidence-Based Climate Science. Elsevier 2011, DOI: 10.1016/ Nicola Scafetta, Testing an astronomically based decadal-scale empirical harmonic climate model versus the IPCC (2007) general circulation climate models Journal of Atmospheric and Solar-Terrestrial Physics 80 (2012) 124–137 For others views on CO2, see Fred H. Haynie The Future of Global Climate Change No amount of experimentation can ever prove me right; a single experiment can prove me wrong. Albert Einstein 17. “22-year and 11-year solar cycles and all faster phenomena“. What happened to longer term cycles? Oh, they must be AGW./sarc 18. Curve-fitting is not proof of anything, especially when the input data is filtered. That heat-sink delay also needs some scrutiny. Worse, the data time range is fairly short, in geological terms. On top of that, a four component wave function? Get real. It’s wiggle-matching, with some post hoc logic thrown in. I’m underwhelmed. 19. Mosh: 4. This is basically the same approach that many here praise when scafetta does it. Sorry what N. Scaffeta does is fit all parameters freely and see what results. What Pratt did was fit his exaggerated 3K per doubling model ; see what’s left , then make up a totally unfounded waveform to eliminate it. Having thus eliminated it, he screwed up his maths and managed to also eliminate the huge discrepancy that all 3K sensitivity model have after 1998. Had he got the maths correct it would have been circular logic. As presented to AGU it was AND STILL IS a shambles. Attribution to AMO PDO is fanciful. The whole work is total fiction intended to remove the early 20th c. warming that has always been a show-stopper for CO2 driven AGW. At the current state of the discussion on Curry’s site, he has been asked to state whether he recognises there is an error or stands by the presentation as given to AGU and published on Climate At the time of this posting , no news from Emeritus Prof Vaughan Pratt. 20. The IPCC says that the current total forcing (all sources – RCP 6.0 scenario) is supposed to be about 2.27 W/m2 in 2012. On top of that, we should be getting some water vapour and reduced low cloud cover feedbacks from this direct forcing so that there should be a total of about 5.0 W/m2 right now. The amount of warming, however, (the amount that is accumulating in the Land, Atmosphere, Oceans and Ice) is only about 0.5 W/m2. Simple enough to me. Climate Science is much like the study of Unicorns and their invisibility cloaks. 21. I want to draw people’s attention to the frequency content of VPmK SAW2 and SAW3 wave forms. Just by eye-ball, these appear to be 75 year and 50 year frequencies. As Mike Jonas points out that early in the paper VP posits they come from major natural ocean oscillations but later a more flexible “whatever its origins.” I am not going to debate the origins of the low frequency. Take only from VPmK the temperature record contains significant very low frequency wave forms, wavelengths greater than 25 years, needed to match even heavily filtered temperatures records where three box filters being used to remove all of the “22-year and 11-year solar cycles and all faster phenomena“. All that is left in VPmK data is very low frequency content and there appears to be a lot of it. My comment below takes the importance of low frequency in VPmK and focuses on BEST: Berkley Earth and what to me appears to be minimally discussed wholesale decimation and counterfeiting of low frequency information happening within the BEST process. If you look at what is going on in the BEST process from the Fourier domain, there seems to me to be major losses of critical information content. I first wrote my theoretical objection to the BEST scalpel back in April 2, 2011 in “Expect the BEST, plan for the worst.” I expounded at Climate Audit, Nov. 1, 2011 and some other My summary argument remains unchanged after 20 months: 1. The Natural climate and Global Warming (GW) signals are extremely low frequency, less than a cycle per decade. 2. A fundamental theorem of Fourier analysis is frequency resolution dw/2π Hz = 1/(N*dt) .where dt is the sample time and N*dt is the total length of the digitized signal. 3. The GW climate signal, therefore, is found in the very lowest frequencies, low multiples of dw, which can only come from the longest time series. 4. Any scalpel technique destroys the lowest frequencies in the original data. 5. Suture techniques recreate long term digital signals from the short splices. 6. Sutured signals have in them very low frequency data, low frequencies which could NOT exist in the splices. Therefore the low frequencies, the most important stuff for the climate analysis, must be derived totally from the suture and the surgeon wielding it. From where comes the low-frequency original data to control the results of the analysis ? Have I misunderstood the BEST process? Consider this from Muller (WSJ Eur 10/20/2011) Many of the records were short in duration, … statisticians developed a new analytical approach that let us incorporate fragments of records. By using data from virtually all the available stations, we avoided data-selection bias. Rather than try to correct for the discontinuities in the records, we simply sliced the records where the data cut off, thereby creating two records from one. “Simply sliced the data.” “Avoided data-selection bias” – and by the theorems of Fourier embraced high frequency selection bias and created a bias against low frequencies. There is no free lunch here. Look at what is happening in the Fourier Domain. You are throwing a way signal and keeping the noise. How can you possibly be improving signal/noise ratio? Somehow BEST takes all these fragments lacking low frequency, and “glues” them back together to present a graph of temperatures from 1750 to 2010. That graph has low frequency data – but from where did it come? The low frequencies must be counterfeit – contamination from the gluing process, manufacturing what appears to be a low frequency signal from fitting high frequency from slices. This seems so fundamentally wrong I’d sooner believe a violation of the 1st Law of Thermodynamics. A beautiful example of frequency content that I expect to be found in century scale un-sliced temperature records is found in Lui-2011 Fig. 2. reprinted in WUWT In China there are no hocky sticks Dec. 7, 2011 The grey area on the left of the Fig. 2 chart is the area of low frequency, the climate signal. In the Lui study, a lot of the power is in that grey area. It is this portion of the spectrum that BEST’s scalpel removes! Fig. 4 of Lui-2011 is a great illustration of what happens to a signal as you add first the lowest frequency and successively add higher frequencies. Power vs Phase & Frequency is the dual formulation of Amplitude vs Time. There is a one to one correspondence. If you apply a filter to eliminate low frequencies in the Fourier Domain, and a scalpel does that, where does it ever come back? If there is a process in the Berkley glue that preserves low frequency from the original data, what is it? And were is the peer-review discussion of its validity? If there is no preservation of the low frequencies the scalpel removes, results from BEST might predict the weather, but not explain climate. 22. The real shame here lies in The Academy. As the author did supply all details behind the work when asked, I must assume it was done in good faith. The problem is that PhD’s are being awarded without the proper training/education in statistical wisdom. Anybody can build a model and run the numbers with a knowledge of mathematical nuts and bolts and get statistical “validation.” But a key element that supports the foundation upon which any statistical work stands seems to be increasingly ignored. That element has a large qualitative side to it which makes it more subtle thus less visible. Of course I am speaking of the knowing, understanding, and verifying of all the ASSUMPTIONS (an exercise with a large component of verbal logic) demanded by any particular statistical work to be trustworthy. I had this drilled into me during my many statistical classes at Georgia Tech 30 years ago. Why this aspect seems to be increasingly ignored I can’t say, but I can say, taking assumptions into account can be a large hurdle to any legitimate study, thus very inconvenient. I imagine publish or perish environments and increasing politicization may have much to do here. The resultant fallout and real crime is the population of scientists we are cultivating are becoming less and less able to discriminate between the different types of variation that need to be identified so that GOOD and not BAD decisions are more likely. Until the science community begins to take the rigors of statistics seriously, its output must be considered seriously flawed. To do otherwise risks the great scientific enterprise that has achieved so much. 23. lsvalgaard says: December 13, 2012 at 10:07 am Currently I am only concerned with the calculations, no particular mechanism is considered in my article, just volumes of data, AMO, CET, N.Hemisphere, Arctic, atmospheric pressure, solar activity, the Earth’s magnetic variability, comparisons against other known proxies and reconstructions. Since you couldn’t fail my calculations, you insisted on steering discussion away from the subject (as shown in this condensed version) with all trivia from both sides excluded: Let’s remember: Dr. L. Svalgaard :If the correlation is really good, one can live with an as yet undiscovered mechanism. I do indeed do consider it a great encouragement that you didn’t fail calculations for One step at the time. Thanks for the effort., its appreciated. Soon I’ll email Excel data on the using 350 years of geological records instead of geomagnetic changes .Two reinforce each other. We still don’t exactly understand how gravity works, but maths is 350 years old. I missed your usually ‘razor sharp dissection’ of Dr. Pratt’s hypothesis 24. I don’t understand the objections to simplifying models until the correct outcome is achieved. After all if the sun really had anything to do with temperature it would get colder at night and warmer during the day. 25. vukcevic says: December 13, 2012 at 11:11 am Since you couldn’t fail my calculations Of course, one cannot fail made-up ‘data’. What is wrong with your approach is to compute a new time series from two unrelated time series, and to call that ‘observed data’. 26. Stephen Rasey: re your post at December 13, 2012 at 11:00 am. Every now and then one comes across a pearl shining on the sand of WUWT comments. The pearls come in many forms. Your post is a pearl. Its argument is clear, elegant and cogent. Thankyou. 27. Should it not be milliKelvin? 28. Steveta_uk says: What VP has said repeatedly on JC’s site is basically that if you can provide a better fit, please do.. A “better fit” is not useful. In cases like this the correct model will not provide a better fit, because the correct model has deviations between theory and reality due to noise.. If I have a 100% normally distributed population and take 100 samples then the result will never be an exact normal distribution. I could model a “distribution” that better matched my samples, but it would most certainly not tell me anything useful. In fact it would lead me to believe my actual population was not normal. This is why I am highly suspicious of any model that is trained on old data. It is basically an exercise in wiggle matching, not an exercise in getting the underlying physics correct. The best climate models will have pretty poor fit to old temperatures. 29. Substitute a 1300 year wave length sine with a max at MWP and min at LIA for the AGW function and you will get similar results. 30. lsvalgaard says: December 13, 2012 at 11:21 am vukcevic says: December 13, 2012 at 11:11 am Since you couldn’t fail my calculations Of course, one cannot fail made-up ‘data’. What is wrong with your approach is to compute a new time series from two unrelated time series, and to call that ‘observed data’. Wrong Doc. Magnometer at the Tromso does it every single minute of the day and night. In red are Incoming variable solar magnetic field sitting on the top of the variable Earth’s magnetic field. Rudolf Wolf started it with a compass needle, Gauss did it with a bit more sophisticated apparatus, and today numerous geomagnetic stations do it as you listed dozens in your paper on IDV. So it is OK for Svalgaard of Stanford to derive IDV from changes of two combined magnetic fields, but is not for Vukcevic. Reason plain and obvious, it would show that the SUN DOES IT ! Here is how geomagnetic field (Eart + solar) is measured and illustrated by our own Dr. Svalgaard and he maintains they are not added together in his apparatus. Can anyone spot 3 magnets? Dr. S are you really serious to suggest that no changes in the Earth field are registered by your apparatus? Case closed! 31. Steveta_uk says: December 13, 2012 at 8:55 am What VP has said repeatedly on JC’s site is basically that if you can provide a better fit, please do. Nobody that I’ve seen has yet done so. Now I’m not in any way suggesting that what VP has done is in any way useful science. But still, can you not simply alter the 4 or 5 sines waves to show that you can provide just as good a fit without the AHH curve? If you can, please present it here. And if you cannot, then VP remains uncontested. Let me get this correct. I publish a paper showing that the rise and fall of women’s skirts plus a saw tooth pattern provides a good fit to the curve. Since no one can provide a better ‘fit’ than that the paper has to 32. vukcevic says: December 13, 2012 at 12:22 pm So it is OK for Svalgaard of Stanford to derive IDV from changes of two combined magnetic fields, but is not for Vukcevic. It is OK to derive two time series of the external field driven by the same source, but not to confuse and mix the external and internal fields that have different sources and don’t interact. Case is indeed closed, as you are incapable of learning. 33. As mentioned by Steveta_uk and others …. Rather than engage in histrionics, the way to refute the Vaughan Pratt poster is to create a similar spreadsheet (or modify his spreadsheet) to show a nominal AGW signal. As nearly as I can tell, Dr. Pratt has done everything responsible skeptics ask: - Formulated a hypothesis - Presented all of the supporting data - Published in an accessible forum - Asked for feedback. I have not dropped in on the thread for a couple of days. However, I suspect that if someone has published a spreadsheet model that refutes Dr. Pratt’s, then it would have been mentioned here. I do not care whether four parameters can fit an elephant. I would like to see someone mathematically refute Dr. Pratt’s model. I took a look and realized I do not have the time to reacquire the expertise to do it. (I had the expertise years ago and even have the optimization code I wrote for my AI class that could be adapted to this problem). In my case, I strongly believe there are contradictory cases (it is just math and there are a lot of variables), but until someone devotes the mental sweat to create one (maybe Nick Scafetta has per Steven Mosher @ 9:04 am), Dr. Pratt’s result stands as he has described it. He asks people to show the contradictions. Finally re circularity. I agree that that post hoc curve fitting can be described as circular. All of the GCMs do it to reproduce historical temperature. What Dr. Pratt has done is simplify the curve fitting to a spreadsheet we can all use. 34. I am with Steveta, on this one. Unless someone comes up with better numerology, Professor Pratt’s numerology stands. 35. Steveta_uk says: “What VP has said repeatedly on JC’s site is basically that if you can provide a better fit, please do.” Why would anyone want spend time searching for a “better fit” of an exaggerated exponential, bend down my a broken filter, plus a non physically attributable wiggle to ANYTHING? Please explain the motivation and rewards of such an exercise. 36. RobertInAz: I have not dropped in on the thread for a couple of days. …. Dr. Pratt’s result stands as he has described it. He asks people to show the contradictions. Then you ought to do so before commenting , no? He asks for criticisms but it’s fake openness. He clearly has no intent of admitting even the most blatant errors in his pseudo-paper-poster. Oops is not in the vocabulary of this great scientific authority. 37. We have a few people with logic problems here. Steven Mosher: 1. “its not circular“. Of course it’s circular. Circular Logic is when you claim to have proved something that you assumed in the first place. 2. “its not a proof or support for models“. It certainly looks like an attempt to support the models. But VP’s motives are private, which is why I used “(presumably)“. 3. “you cant retract a poster“. As Richard Courtney has pointed out, of course you can retract a poster. 4 “This is basically the same approach that many here praise when scafetta does it“.. What people think about Scafetta or anyone else is irrelevant to the content of VP’s poster. However, if it illogically helps you to accept what I say now, here is what I once commented on a Loehle and Scafetta paper: “I’m calling BS on this paper.“. http://tinyurl.com/cxxo4lw You say “if you cannot [provide a better fit], then VP remains uncontested“. Utter tosh. I have just contested VP’s poster. Therefore VP’s poster has indeed been contested. You do not have to put up alternatives in order to contest something. Matthew R Marler” You say “That is over-wrought. Vaughan Pratt described exactly what he did and found, and published the data that he used and his result.“. Yes it’s true that VP published his data, methods and result. I congratulated him on publishing all the data and workings, and I truly meant it. If only all climate scientists did that then surely climate science couldn’t have got into its current mess. But the result was still obtained by circular logic. 38. jack hudson says: December 13, 2012 at 11:07 am On statistics – I have notice that since computers and statistical packages became readily available in the 1980′s there has been a shift away from using a trained statistician to do-it-yourself statistics. ‘Six Sigma’ in industry is an example. The statistical training I got from the ‘Six Sigma’ program at work was absolute crap. All they taught was how to use the computer program with not even a basic explanation of different types of distribution to go with it or even the warning to PLOT THE DATA so you could see the shape of the distribution. They did not even get into attributes vs variables! It reminds me of the shift from the use of well trained secretaries who would clean up a technonut’s English and pry the need infor out of him to having everyone write their own reports. My plant manager in desperation insisted EVERYONE in the plant take night courses in English composition. Too bad Universities do not insist that anyone using statistics must take at least three semesters of Stat. 39. lsvalgaard says: December 13, 2012 at 12:43 pm It is OK to derive two time series of the external field driven by the same source, but not to confuse and mix the external and internal fields that have different sources and don’t interact. As this apparatus does: records combined solar and Earth’s fields or as Vukcevic does in here: calculates combined solar and Earth’s fields Do you suggest than the combined field curve that happens to match temperature change in the N. Hemisphere is coincidental, it just appeared by chance ? 40. fhhaynie says: December 13, 2012 at 12:17 pm Substitute a 1300 year wave length sine with a max at MWP and min at LIA for the AGW function and you will get similar results. That is pretty much what the Chinese, Liu Y, Cai Q F, Song H M, et al. did. They used 1324 years. GRAPH: http://jonova.s3.amazonaws.com/graphs/china/liu-2011-cycles-climate-tibet-web.gif Figure 4 Decomposition of the main cycles of the 2485-year temperature series on the Tibetan Plateau and periodic function simulation. Top: Gray line,original series; red line, 1324 a cycle; green line, 199 a cycle; blue line, 110 a cycle. Bottom: Three sine functions for different timescales. 1324 a, red dashed line (y = 0.848 sin(0.005 t + 0.23)); 199 a, green line (y = 1.40 sin(0.032 t – 0.369)); 110 a, blue line (y = 1.875 sin(0.057 t + 2.846)); time t is the year from 484 BC to 2000 AD. 41. @Gail. anyone using statistics must take at least three semesters of Stat. I agree. Three semesters of statistics would be more generally useful throughout adult life than three semesters of Calc. I’m 34 years out of my B.Sc, 31 from my Ph.D. The college texts I return to most often are my Stat books, Johnson & Leone 1977. Not that calc isn’t useful. Not that it isn’t required for “Diff_E_Q”. But statistics is the one of the only courses that by design gives you training in uncertainty, to quantify what you don’t 42. vukcevic says: December 13, 2012 at 1:22 pm records combined solar and Earth’s fields It records the external and internal fields superposed by Nature and thus existing in Nature or as Vukcevic does in here: calculates combined solar and Earth’s fields no, you calculate a field that does not exist in Nature by combining two that are not physically related Do you suggest than the combined field curve that happens to match temperature change in the N. Hemisphere is coincidental, it just appeared by chance ? I say that the quantity you calculate does not exist in Nature and therefore that any correlation is spurious or worse. But I thought your case was closed. Keep it that way, instead of carpet bombing every thread on every blog with it. 43. Can’t you just take the fourier transform of the raw data to find the frequency components and phase shifts and whatever else is left over? I do applaud the guy’s work with sines and exponentials. At least it isn’t the standard linear regression garbage!!! 44. Vaughan Pratt has commented on ClimateEtc that “The game therefore is to come up with two curves we can call NAT and HUM for nature and humans, such that HadCRUT3 = NAT + HUM + shorter-term variations, where NAT oscillates without trending too far in a few centuries, and some physically justifiable relation between HUM and the CDIAC CO2 data can be demonstrated. To within MRES I’ve proposed SAW for NAT, AGW for HUM, [...]“. In order to show that VP’s version of NAT and HUM are unsupported, it is sufficient to show, as I have done, that they are unsupported. It is not necessary to propose a different version. However, others have looked at NAT. eg, http://wattsupwiththat.com/2010/09/30/amopdo-temperature-variation-one-graph-says-it-all/ I haven’t investigated their workings, so I am not in a position to say whether their graph is worth anything, but at least it is using real data on the PDO and AMO. If they have got it right (NB. that’s an “If”), then they have nailed NAT, and HUM looks to be around a flat zero. 45. lsvalgaard says: December 13, 2012 at 1:41 pm I calculate combined effect of two variables, as you could calculate combined effect of wind and temperature on the evaporation, but in my case it happens that both variables are magnetic fields. Are you happy now? I post on other blogs so the readers also should be aware You don’t need to follow me around, if you think it is not worth your attention. Why are you so concerned ? In a way I am pleasantly surprised that you are devoting all your attention to my ‘nonsense’ rather than the ‘brilliant’ work of your Stanford colleague, discussed above. Either you think Dr. Pratt’s work is of a superb quality or utter rubbish, in either case no comment of yours is required. Good night. 46. vukcevic says: December 13, 2012 at 3:01 pm I calculate combined effect of two variables, as you could calculate combined effect of wind and temperature on the evaporation, but in my case it happens that both variables are magnetic fields. Are you happy now? Wind and temperature and evaporation are physically related. Your inputs are not. that they are both magnetic fields is irrelevant, it makes as much sense to combine them as it would the fields of the Sun and Sirius. I post on other blogs so the readers also should be aware I think the readers are ill served with nonsense. You don’t need to follow me around, if you think it is not worth your attention. Why are you so concerned ? Because scientists have an obligation to combat pseudo-science and provide the public with correct scientific information. Even though not all do that. In a way I am pleasantly surprised that you are devoting all your attention to my ‘nonsense’ rather than the ‘brilliant’ work of your Stanford colleague, discussed above. You should be ashamed of peddling your nonsense, not pleased when found out. Either you think Dr. Pratt’s work is of a superb quality or utter rubbish Curve fitting is what it is. If one believes in it has little bearing on the mathematical validity of the fitting procedure. I asked him to make an experiment for me and the result was that what he called the ‘solar curve’ was different in solar data and in CET and HadCRUT3 temperature data and between the latter two as well. This settled the matter for me at least. 47. Mike Jonas: But the result was still obtained by circular logic. In filtering, there is a symmetry: if you know the signal, you can find a filter that will reveal it clearly; if you know the noise, you can design a filter to reveal the signal clearly. Pratt assumed a functional form for the signal (he said so at ClimateEtc), and worked until he had a filter that revealed it clearly. The thought process becomes “circular” if you “complete the circle”, so to speak, and conclude that: since he found what he assumed, then it must be true. My only claim is that, given what he did, the result can be, and should be, tested on future data. I have written about the same regarding the modeling of Vukcevic and Scafetta. I would say the same regarding the curve-fitting of Liu et al cited by Gail Combs above. Elsewhere I have written the same of the modeling of Latif and Tsonis, and of the GCMs. I do not expect any extant model to survive the next 20 years’ worth of data collection, but I think that the data collected to date do not clearly rule out very much — though alarmist predictions made in 1988-1990 look less credible year by year. 48. Mike Jonas: However, others have looked at NAT. eg, http://wattsupwiththat.com/2010/09/30/amopdo-temperature-variation-one-graph-says-it-all/ I haven’t investigated their workings, so I am not in a position to say whether their graph is worth anything, but at least it is using real data on the PDO and AMO. If they have got it right (NB. that’s an “If”), then they have nailed NAT, and HUM looks to be around a flat zero. Well said. 49. RobertInAz – “As nearly as I can tell, Dr. Pratt has done everything responsible skeptics ask: - Formulated a hypothesis” – Yes, he did that. “- Presented all of the supporting data” – Yes, he did that, and I congratulated him for it. “- Published in an accessible forum” – Yes, he did that. “- Asked for feedback.” – Yes, he did that. And just in case you haven’t noticed, I have given him feedback. That feedback stated in no uncertain terms, and with full explanation, using the same data as Dr. Pratt, that he had used circular logic and that his findings were worthless. Matthew R Marler – “The thought process becomes “circular” if you “complete the circle”, so to speak, and conclude that: since he found what he assumed, then it must be true. My only claim is that, given what he did, the result can be, and should be, tested on future data.“. VP’s claimed results flowed from his initial assumptions. That’s what makes it circular. And just in case you didn’t notice, I tested the key part of his result (the sawtooth) against existing data (the PDO and AMO) and found that it did not represent the “multidecadal ocean oscillations” as claimed. So (a) the logic was circular, and (b) it was tested anyway and found wanting. 50. I submit that a simpler and better fit of the unfiltered data is 0.573-0.973sine(x/608+.96)+0.108sine(x/63+1.21)+0.038sine(x/20+1.46) where x=2PI*year. AGW may be covarient with that 608 year cycle and contributes a little bit to the magnitude of the coefficient -.973. Most of the residual looks like a three to five year cycle. 51. Marler & Jonas, Vuk, et al. 8<) The thought process becomes “circular” if you “complete the circle”, so to speak, and conclude that: since he found what he assumed, then it must be true. My only claim is that, given what he did, the result can be, and should be, tested on future data. I have written about the same regarding the modeling of Vukcevic and Scafetta. I would say the same regarding the curve-fitting of Liu et al cited by Gail Combs above. Elsewhere I have written the same of the modeling of Latif and Tsonis, and of the GCMs. I do not expect any extant model to survive the next 20 years’ worth of data collection, but I think that the data collected to date do not clearly rule out very much — though alarmist predictions made in 1988-1990 look less credible year by year. OK, so try this: We have a (one) actual real world temperature record. It has a “lot” of noise in it, but it is the only one that has actual data insideits noise and high-frequency (short-range or month-to-month) variation and its longer-frequency (60 ?? year and (maybe) 800-1000-1200 year variations. Behind the noise of recent changes – somewhere “under” the temperatures between 1945 and 2012 – there “might be” a CAGW HUM signal that “might be” related to CO2 levels: and there “might be” a non-CO2 signal related to UHI effect starting about 1860 for large cities that tails off to a steady value , and starts off between 1930 and 1970 for smaller cities and what are now rural areas. Both UHI “signals” would begin at near 0, ramp up as population increases in the area between 5000 and 25,000, then slows as the area saturates with new buildings and people after 25,000 people. The satellite data must be assumed correct for the whole earth top to bottom. The satellite data varies randomly month-to-month by 0.20 degrees. So it appears you must first put an “error band” of +/-0.20 degrees around your thermometer record BEFORE looking for any trends to analyze. Any data within +/- .20 degrees of any running average can be proven with today’s instruments for the entire earth to be just noise. Then, you try to eliminate the 1000 year longer term cycle – if you see it at all. Then, after all the “gray band” is eliminated, then you can start eliminating the short cycle term. Or looking at the problem differently, start looking for the potential short cycle. 52. So we take a mathematical process we know can fit any curve (I seem to recall Fourrier first showed that a set of sinusoidal oscillations at various frequencies and amplitudes would fit any curve to any accuracy given enough different sine waves; in other words any curve can be expressed as a spectrum) and gasp when it only takes 4 curves to fit to a few mK data that are (a) known to oscillate, so will fit closely with relatively few curves (b) have the non-oscilliatory component arbitrarily removed (see (a)) and (c) don’t actually vary very much, so a few milli Kelvin is proportionally not as precise a fit as it sounds. Of course we don’t mention that none of the data points can possibly be defined to the level that expressing in milli Kelvin has any meaning. 53. RACookPE1978 – What you say seems to generally make sense. One problem is that every time you enter a new factor (UHI, etc, and there are lots of them) into the equation, you also have to widen your error band. What seems reasonable to me is that the ocean oscillations are visible in the last century or so of surface temperature record, and obviously contributed significantly to 20thC warming. UHI is clearly present too – see Watts 2012, though there is probably more of it in the USA than elsewhere. Other factors such as land clearing, sun + GCRs, volcanoes, etc, etc, need to be added to the mix. The amount “left over” for CO2-driven AGW ends up being a small amount with large error bands. The failure of the tropical troposphere to warm faster than the surface does seem to indicate that the IPCC estimate of AGW is way too high. 54. Mike Jonas: VP’s claimed results flowed from his initial assumptions. That’s what makes it circular. When results follow from assumptions that’s logic or mathematics. It only becomes circular if you then use the result to justify the assumption. You probably recall that Newton showed that Kepler’s laws followed from Newton’s assumptions. Where the “circle” was broken was in the use of Newton’s laws to derive much more than was known at the time of their creation. In like fashion, Einstein’s special theory of relativity derived the already known Lorentz-Fitzgerald contractions; that was a really nice modeling result, but the real tests came later. I tested the key part of his result (the sawtooth) against existing data (the PDO and AMO) and found that it did not represent the “multidecadal ocean oscillations” as claimed. Pratt claimed that he did not know the exact mechanism generating the sawtooth. You showed that one possible mechanism does not fit. I think we agree that Pratt’s modeling does not show the hypothesized CO2 effect to be true. At ClimateEtc I wrote that Pratt had “rescued” the hypothesis, not tested it. That’s all. Most of what you have written is basically true but “over-wrought”. There is no reason to issue a retraction. 55. Just because something can be explained by sine waves proves nothing. In VPmK the anthropogenic component could be replaced by another sine wave of long periodicity to represent the rising AGW That said, I am of the opinion that much of the change in global temperatures can indeed be explained by AGW and the AMO (plus TSI (sunspots) and Aerosols (volcanoes)). See for example my discussion of the paper by Zhou and Tung and my own calculations at: The main conclusion of this work is that AOGCMs have overestimated the AGW component of the temperature increase by a factor of 2 (welcomed by sceptics but not by true believers in AGW) but there is still a significant AGW component (welcomed by true believers but not by sceptics). 56. Steven Mosher says: December 13, 2012 at 9:04 am 4. This is basically the same approach that many here praise when scafetta does it. As usual, Mosher continues in his attempts to mislead people about the real merits of my research. The contorted Mosher’s reasoning is also discussed here: I do not hope that my explanation will convince him because he clearly does not want to understand. However, for the general reader this is how the case is. My research methodology does not have anything to do with the curve fitting exercise implemented in Pratt’s poster. My logic follows the typical process used in science, which is as follows. 1) preliminary analysis of the patterns of the data without manipulation based on given hypotheses. 2) identification of specific patterns: I found specific frequency peaks in the temperature record . 3) search of possible natural harmonic generators that produce similar harmonics; I found them in major astronomical oscillations. 4) search of whether the astronomical oscillations hindcast data outside the data interval studied in (1): I initially studied the temperature record since 1850, and tested the astronomical model against the temperature and solar reconstructions during the Holocene! 5) use an high resolution model to hindcast the signal (1): I calibrate the model from 1850 to 1950 and check its performance in hindcasting the oscillations in the period 1950-2012 and 6) the tested harmonic component of the model is used as a first forecast of the future temperature. 7) wait the future to see what happens: for example follow the (at-the moment-very-good) forecasting performance of my model here There is nothing wrong with the above logic. It is the way science is actually done, although Mosher does not know it. My modelling methodology equivalent to the way the ocean tidal empirical models (which are the only geophysical models that really work) have been developed. Pratt’s approach is quite different from mine, it is the opposite. He does not try to give any physical interpretation to the harmonics but interprets the upward trending at surely due to anthropogenic forcing despite the well known large uncertainty in the climate sensitivity to radiative forcing. I did the opposite, I interpret the harmonics first and state that the upward trending could have multiple causes that also include the possibility of secular/millennial natural variability that the decadal/multidecadal oscillations could not predict. Pratt did not tested his model for hindcasting capabilities, and he cannot do it because he does not have a physical interpretation for the harmonics. I did hindcast tests, because harmonics can be used for hindcast tests. Pratt’s model fails to interpret the post 2000 temperature, as all AGW models, which implies that his model is wrong. My model correctly hindcast the post 2000 temperature: see again In conclusion, Mosher does not understand science, but I cannot do anything for him because he does not want to understand it. However, many readers in WUWT may find my explanation useful. 57. Nicola Scafetta says: December 13, 2012 at 9:32 pm 7) wait the future to see what happens: for example follow the (at-the moment-very-good) forecasting performance of my model here It fails around 2010 and you need a 0.1 degree AGW to make it fit. I would say that there doesn’t look to be any unique predictability in your model. A constant temperature the past ~20 years fits even better. 58. anybody know the formula for the sawtooth (which multidecadal series and what factors)as there is certainly no resemblance to the amo… by far the most dominant oscillation. The sawtooth presented looks well planned to me, must have taken a lot of work to construct to get the desired residuals… 59. lsvalgaard says: December 13, 2012 at 9:47 pm It fails around 2010 and you need a 0.1 degree AGW to make it fit. A model cannot fail to predict what it is not supposed to predict. The model is not supposed to predict the fast ENSO oscillations within the time scale of a few years such as the ElNino peak in 2010. That model uses only the decadal and multidecadal oscillations. 60. Nicola Scafetta says: December 13, 2012 at 10:07 pm That model uses only the decadal and multidecadal oscillations. Does that model predict serious cooling the next 20-50 years? 61. lsvalgaard says: December 13, 2012 at 10:29 pm Nicola Scafetta says: December 13, 2012 at 10:07 pm That model uses only the decadal and multidecadal oscillations. Does that model predict serious cooling the next 20-50 years? Yes it does see graph 2. What has that to do with Scafetta? Not much directly, but put in simple terms : Either the solar and the Earth’s internal variability, which we can only judge by observing the surface effects or changes in the magnetic fields as an internal proxy: - sun affects the Earth, since the other way around appears to be negligible. - or caused by common factor, possibly planetary configurations Surface effects correlation: Internal variability (as derived from magnetic fields as a proxy) correlation: As the CET link shows, the best and yhe longest temperature record is not immune to the such sun-Earth link, and neither are relatively reliable shorter recent records from the N. Hemisphere You and Matthew R Marler call it meaningless curve fitting, but as long as data from which the curves are derived it is foolish to dismiss as nonsense.I admit that I can’t explain above in the satisfactory terms, if you whish to totally reject it you got your reasons, and you were welcome to say that in the past as you are now and future. 62. The model used here is fine for interpolation, ie to calculate the temperature at time T-t where T is the present and t is positive. So it would be useful to replace the historic temperature record by a formula. If we need to know temperatures at time T+t then this is an extrapolation which is valid only if the components of the formula represent all the elements of physical reality that detemine the evolution of the climate. But this is precisely what has not been shown! There was a transit of Venus earlier this year and we are told thar the next one will be in 2117. We can be confident in this prediction because we know that the time evolution of the planets is given accurately by the laws of Newton/Einstein. Climate science contains no equivalent body of knowledge 63. Matthew R Marler – Oh the perils of re-editing a comment in a tearing hurry. You correctly point out that what I said “VP’s claimed results flowed from his initial assumptions. That’s what makes it circular” was wrong. The correct statement is “VP’s claimed results are his initial assumptions. That’s what makes it circular.“. What we have is this: He assumed that climate consisted of IPCC-AGW and something else. His finding was that the climate consisted of IPCC-AGW and something else. Now, if we had learned something of value about the ‘something else’, then there could have been merit in his argument. But we didn’t. The ‘something else’s only characteristic was that it could be represented by a combination of arbitrary sinewaves and three box filters. The ‘something else’ began its short life as “all the so-called multidecadal ocean oscillations“, but that didn’t last long because it clearly could not be even remotely matched to the actual ocean oscillations. The ‘something else’ ended its short life as a lame “whatever its origins“. The sum total of VP’s argument is precisely zero. On the ‘something else’ you say of me that “You showed that one possible mechanism does not fit.“. Well, actually I tested the one and only mechanism postulated by VP. There wasn’t anything else that I could test. As I point out above, even VP walked away from that postulated mechanism. I am bemused by your assertion that VP “had “rescued” the hypothesis” and that “There is no reason to issue a retraction.“. There was no rescue of anything, since no argument was presented other than the abovementioned assertions and meaningless curve-fitting. Since the poster has absolutely no merit, the retention of its finding is misleading. The only decent thing for VP do now is to retract it. 64. vukcevic says: December 14, 2012 at 1:26 am it is foolish to dismiss as nonsense. The nonsense part is to make up a data set from two unrelated ones. 65. richardscourtney says: December 13, 2012 at 11:29 am Stephen Rasey: re your post at December 13, 2012 at 11:00 am. Every now and then one comes across a pearl shining on the sand of WUWT comments. The pearls come in many forms. Your post is a pearl. Its argument is clear, elegant and cogent. Thankyou. 66. Nicola Scafetta says: December 13, 2012 at 9:32 pm My modelling methodology equivalent to the way the ocean tidal empirical models (which are the only geophysical models that really work) have been developed. Correct. The tidal models are not calculated from first principles in the fashion that climate models try and calculate the climate. The first principles approach has been tried and tried again and found to be rubbish because of the chaotic behavior of nature. 67. lsvalgaard says: December 14, 2012 at 5:21 am vukcevic says: December 14, 2012 at 1:26 am it is foolish to dismiss as nonsense. The nonsense part is to make up a data set from two unrelated ones. That the data sets are unrelated is an assumption. This can never be know with certainty unless one has infinite knowledge. Something that is impossible for human beings. The predictive power of the result is the test of the assumption. If there is a predictive power greater than chance, then it is unlikely they are unrelated. Rather, they would simply be related in a fashion as yet unknown. 68. ferd berple says: December 14, 2012 at 7:14 am That the data sets are unrelated is an assumption. That they are related is the assumption. Their unrelatedness is derived from what we know about how physics works. 69. lsvalgaard says: December 14, 2012 at 5:21 am The nonsense part is to make up a data set from two unrelated ones. All magneto-meters recordings around the world do it and did it from the time of Rudolf wolf as you yourself show here: to today at Tromso Unrelated ? Your own data show otherwise How does it compare with Pratt’s CO2 millKelvin? More you keep repeating ‘unrelated’ more I think you are trying to suppress this to be more widely known. 70. I have always figured the models were fits to the data. When I took my global warming class at Stanford the head of Lawrence livermores climate modeling team argued at first the models were based on physical formulas but I argued that they keep modifying the models to match the hind casting they do more and more and all the groups do the same. Studies have shown and he admitted readily that none of the models predict any better than each other. In fact only the average of all the models did better than any individual model. Such results are what one expects from a bunch of fits. He acknowledged that they were indeed fits. If what you are doing is fitting the models then a Fourier analysis of the data would produce a model like vpmk which would be much better fit than the computer models and all that vpmk did was demonstrate that if you want to fit the data to any particular set of formulas you can do it easier and with much higher accuracy using a conventional mathematical technique than trying to make a complex iterative computer algorithm with complex formulas match the data. No wonder they need teams of programmers and professors involved since they are trying to make such complex algorithms match the data. Vpmks approach is simpler and way more accurate. The problem with all fits however is that since they don’t model the actual processes involved they are no better at predicting the next data point and can’t be called “science” in the sense that experiments are done and physical facts uncovered and modeled. Instead we have a numerical process of pure mathematics which has no relationship to the actual physics involved. Vpmks “model” conveniently shows the the cyclical responses fading and the exponential curve overtaking. This gives credence to the underlying assumptions he is trying to promulgate but it is no evidence that indeed any physical process is occurring so it is as likely as not to predict the next data point. The idea that the effect of all the sun variations and amo/pdo/Enso have faded is counter to all intuitive and physical evidence. The existence of 16 years of no trend is indicative that whatever effect co2 is having is being completely masked by natural phenomenon vpmk is diminishing yet if the 16 year trend would show that if anything the natural forcings are much higher than before. Instead vpmk attributes more of the heat currently in the system to co2 and reduces the cooling that would be expected by the current natural ocean and sun phenomenon. So just as vpmks model shows natural phenomenon decreasing to zero effect the actual world we know that this is not the case so again another model not corresponding to reality. 71. Steveta_UK “If you can, please present it here”: Here is FOBS without any Multidecadal removed: If you squint closely you might see that there are two lines a red one and a blue one, the standard deviation of the residuals over the 100 year period, 1900 to 2000, is 0.79 mK. For fun, here it is run forward to 2100: -There is a lot to be said for thinking about sawteeth :-) 72. Gail Combs says: [IF] “I publish a paper showing that the rise and fall of women’s skirts plus a saw tooth pattern provides a good fit to the curve. Since no one can provide a better ‘fit’ than that the paper has to stand?” In the spreadsheet: AGW = ClimSens * (Log2CO2y – MeanLog2CO2) + MeanDATA Perhaps I’m a little dense but somebody might have to explain to me the physics behind that formula before I could take any of this seriously. Also in the spreadsheet is a slider bar for climate sensitivity, given enough time to “play” one should be able to slide that to 1 C per 2x CO2 and adjust the sawtooth to arrive at the same results, thereby “proving” (not) climate sensitivity is 1 C / 2X CO2. Seems like a lot of time and effort for nothing to me. 73. “The sawtooth which began life as “so-called multidecadal ocean oscillations” later becomes “whatever its origins“.” Wonderfully scientific approach – we need a fudge factor to save the “AHH theory”, so we’ll simply take the deviation from reality, call it “a sawtooth, whatever its origin” and the theory is PARDON ME? Don’t we give the warmists billions of Dollars? Can’t we expect at least a little bit of effort from them when they construct their con? 74. Stephen Rasey says: December 13, 2012 at 11:00 am “I want to draw people’s attention to the frequency content of VPmK SAW2 and SAW3 wave forms. ” Very powerful! Dang, I didn’t think of that! 75. Here is a note to Dr. Svalgaard, not for his (he knows it, possibly far better than I do) but for benefit of other readers. Relatedness of solar and the Earth’s magnetic fields could be considered in three ways 1. Influence of solar on the Earth’s field, well known but short lasted from few hours to few days 2. Common driving force (e.g. planetary ) – considered possible but insignificant forcing factor. 3. Forces of same kind integrated at point of impact by the receptor. Examples of receptors could be: GCR, saline ions in the ocean currents, interactions between ionosphere and equatorial storms (investigated by NASA), etc Simple example of relatedness through a receptor: Daylight and torch light are unrelated by sources and do not interact, but a photocell will happily integrate them, not only that but there is an interesting process of amplification, which I am very familiar with. In the older type image tubes, before CCDs era (saticon, ledicon and plumbicon) there is an exponential law of response at low light levels from the projected image, which further up the curve is steeper and more linear. A totally ‘unrelated’ light from back of the tube known as ‘backlight or bias light’ is projected at the photosensitive front layer. Effect is a so called ‘black current’ which lifts ‘the image current’ from low region up the response curve, result is more linear response and surprisingly higher output, since the curve is steeper further away from the zero. Two light sources are originally totally unrelated, they do not interact with each other in any way, but they are integrated by the receptor, and further more an ‘amplification’ of the output current from stronger source is achieved by presence of a weaker. I know that our host worked in TV industry and may be familiar with the above. So I suggest to Dr. Svalgaard to abondan ‘unrelated’ counterpoint and consider the science in the ‘new light’ of my finding 76. vukcevic: You and Matthew R Marler call it meaningless curve fitting, I don’t think I said that your modeling was “meaningless”; I have said that that the test of its “truth” will be how well it fits future data. 77. Re: models and harmonics I was asked by one of WUWT participants (we often correspond by email) would it be possible to extrapolate CET by few decades. I had ago. The first step was to separate the summer and the winter data (using two months around the two solstices, to see effect of direct TSI input) result: this graph at a later stage was presented on couple of blogs, but Grant Foster (Tamino), Daniel Bailey (ScSci) and Jan Perlwitz (NASA) fell flat on their faces trying to elucidate, why no apparent warming in 350 years of the summer CET, but gentle warming in the winters for whole of 3.5 centuries. Meteorologists knows it well: the Icelandic Low semi-permanent atmospheric pressure system in the North Atlantic. Its footprint is found in the most climatic events of the N. Hemisphere. The strength of the Icelandic Low is the critical factor in determining path of the polar jet stream over the North Atlantic In the winter the IL is located at SW of Greenland (driver Subpolar Gyre), but in the summer the IL is to be found much further north (most likely driver the North Icelandic Jet, formed by complex physical interactions between warm and cold currents), which as graps show had no major ups or downs. Next step: finding harmonic components separately for the summers and winters, Used one common and one specific to each of the two seasons, all below 90 years. By using the common and two individual components, I synthesized the CET adding average of two linear trends. Result is nothing special, but did indicate that a much older finding of the ‘apparent correlation ‘ between the CET and N. Atlantic geological records now made more sense. I digressed, what about the CET extrapolation? Well, that suggest return to what we had in the 1970s, speculative. Although the CET is 350 years long, I would advise caution, anything longer than 15-20 years is no more than the ‘blind fate’. Note: I am not scientist, in no way climate expert, only models I did are the electronic ones, both designed and built working prototypes. 78. Matthew R Marler since I have quoted you wrongly, I do apologise. 79. I mention retraction in my initial post, but I’ll now make it an explicit request: Vaughan Pratt, please will you issue a formal retraction of your poster “Multidecadal climate to within a millikelvin”. 80. vukcevic says: December 14, 2012 at 9:45 am 3. Forces of same kind integrated at point of impact by the receptor. So I suggest to Dr. Svalgaard to abondan ‘unrelated’ counterpoint and consider the science in the ‘new light’ of my finding There is no integrated effect as the external currents have a short life time and decay rapidly. Your ‘findings’ are not science in any sense of that word. You might try to explain in one sentence how you make up the ‘data’ you correlate with. Other people have asked for that too, but you have resisted answering [your 'paper' on this is incomprehensible, so a brief, one sentence summary here might be useful]. 81. Ha! Mr.Whack-a-mole must have gone to bed ;-) Time for a teeny weeny extrapolation, methinks. The past 1,500 years temperature history, (base data in red) (The five free phase sine waves, as above) 82. First, let me also congratulate the author for having the courage to provide all of the data and calculations. Such transparency is in the best interests of science. I also really liked the very first question raised in this thread – “Assume AGW is a flat line and repeat the analysis” and thought that should be a challenge to take up. What I did may be overly simplistic so please correct my attempt. I downloaded the Excel spreadsheet and reset cell V26 (ClimSens) to a value of zero. As expected, the red AGW line on the graph dropped to flat. I then set up some links to the green parameters so they could be dealt with as a single range (a requirement for the Excel Solver Add-in). I played with a few initial parameters to see what they might do, then fired off Solver with the instruction to modify the parameters below with a goal of maximizing cell U35 (MUL R2). No other constraints were applied. Converging to the parameters below, Solver returned a MUL R2 of 99.992%, very slightly higher than the downloaded result. The gray MRES line in the chart shows very flat. (I think it needs one more constant to bring the two flat lines together but couldn’t find that on the spreadsheet.) Have I successfully fit the elephant? Does this result answer Steveta_uk’s challenge above (Dec 13 at 8:55 am)? Or have I missed something here? Cell name value D26 ToothW 2156.84… G23 Shift 1 1928.48… G26 Scale 1 1489.03… H23 Shift 2 3686.05… H26 Scale 2 1386.24… I23 Shift 3 4356.71… I26 Scale 3 2238.07… J23 Shift 4 3468.56… J26 Scale 4 0 K23 Shift 5 2982.83… K26 Scale 5 781.58… M26 Amp 2235.58… 83. Update: Might have found that constant. Setting cell D32 to a value of -0.1325 roughly centers the MRES line around zero and makes the gray detail chart visible. 84. Chas says: December 14, 2012 at 12:45 pm “Ha! Mr.Whack-a-mole must have gone to bed ;-) Time for a teeny weeny extrapolation, methinks. The past 1,500 years temperature history, (base data in red)” Exactly like in the history books! /sarc Thanks, Chas. Beautiful. 85. Mike, I guess that because you are maximising the R2 you are ending up with two parellel but offset fits (by about 32mK). If you minimised the sum of the residuals you would kill two birds with one stone.This would have to be the sum of the absolute values of the residuals or the sum of the squared residuals to stop the negative residuals cancelling out the positive ones. I get the standard deviation ALL of your residuals to be about 2mK; PV selected his SD from the best 100year period to get the ‘less than a millikelvin’ bit, I think. -I notice that the residuals seem to have a clear sine wave in them with an amplitude of about 5mK whilst at the same time you have SAW4 with an amplitude of zero -I wonder if Solver hasnt This all is on the basis that I have entered your solutions correctly! In some ways your fit ought to make more sense to VP than his fit; you have the first sine wave and he is left wondering why his first wave doesnt exist. 86. Leif Svalgaard says: December 14, 2012 at 12:22 pm how do you make up the ‘data’ ‘Phil Jones from CRU’ syndrome, unable to read the Excel file? How to calculate spectrum of the changes in the Earth’s magnetic field see pages 13 & 14, you can repeat the calculations. Since you are so infuriated by ‘unrelated’ magnetic fields, and my ‘art of making-up the data’ you should closely examine Fig. 26, in case you did miss it. That should make it even more See you. 87. Mike Rossander – Thanks for doing that. Please can you put the result graph online. Regarding my request to Vaughan Pratt to retract. I made the same request on ClimateEtc, to which he replied: “The “emperor has no clothes” gambit. Oh, well played, Mike. Mate in one, how could I have overlooked that? ;) Mike, I would agree with your “simple summary” with two small modifications: replace the first F by F(v) and the second by F(V). My clothes will remain invisible to the denizens of Bill Gates’ case-insensitive world, but Linux hackers will continue to admire my magnificent clothing. Here F is a function of a 9-tuple v of variables, or free parameters v_1 through v_9, while V is a 9-tuple of reals, or values V_1 through V_9 for those parameters (a valuation in the terminology of logic). F(v) is a smooth 9-dimensional space whose points are curves expressible as analytic functions of y (smooth because F is an analytic function of the variables and therefore changes smoothly when the variables do). F(V) is one of those curves. To summarize: 1. I assumed F(v). 2. I found F(V) (just as you said, modulo case) 3. at the surface of F(v) very near F3(HadCRUT3). That’s all I did. As you say, very simple. If needed we can always make the simple difficult as follows. With the additional requirement that “near” is defined by the Euclidean or L2 metric (as opposed to say the L1 or taxicab metric), “near” means “least squares.” The least squares approach to estimation is perhaps the most basic of the wide range of techniques treated by estimation theory, on which there is a vast literature. Least-squares fitting has the downside of exaggerating outliers and the advantage of Euclidean geometry, whose metric is the appropriate one for pre-urban or nontaxicab geometry. Euclidean geometry works just as nicely in higher dimensions as it does in three, thereby leveraging the spatial intuitions of those who think visually rather than verbally. We picture F(v) as a 9-dimensional manifold (i.e. locally Euclidean) embedded in the higher-dimensional manifold of all possible time series for 1850-2010 inclusive. Without F3 the latter would be a 161-dimensional space. F3 cuts this very high-dimensional space down to a mere 7*2 = 14 dimensions, on the premise that 161/7 = 23 years is the shortest period still barely visible above the prevailing noise despite losing 20 dB. F3(HadCRUT3), H for short, is a point in this 14-dimensional space. The geometrical intuition here is that F3(HadCRUT3) is way closer to F(V) than HadCRUT3, not because F3 moved it anywhere but merely because the dimension decreased. Given two points near and far from a castle wall, the nearest point on the wall to either can be estimated much more accurately for the point near the wall than for the one far away. Whence F3. Isn’t geometry wonderful? (The factor of two in 7*2 comes from the fact that the space of all sine waves of a given period 161/n years, for some n from 1 to 7, is two-dimensional, having as its two unit vectors sin and cos for that frequency (as first noticed by Fourier?). Letting our imagination run riot, the same factor of 2 is arrived at via De Moivre’s theorem exp(ix) = cos(x) + i sin(x) but that might be too complex for this blog—when I wrote to Martin Gardner in the mid-1970s to complain that his Scientific American column neglected complex numbers he wrote back to say they were a tad too complex for his Scientific American readers.) We’d like F(V) to be the nearest point of F(v) to H in F(v), i.e the global minimum, though this may require a big search and we often settle for a local minimum, namely a point F(V) in F(v) that is nearest H among points in the neighborhood of F(V). In either case MRES is the vector from F(V) to H, that is, H – F(V). Since manifolds are smooth, MRES is normal to (the surface of) F(v) at F(V). Hence very small adjustments to V will not change the length of MRES appreciably, as one would hope with a local minimum. Hmm, better stop before I spout even more abstract nonsense. ;)“. As a very capable and experienced physicist once said to me: Nonsense dressed up in complicated technical language is still nonsense. The “simple summary” to which he referred is on WUWT here: Vaughan Pratt then added a further shot, it seems it was aimed as much at WUWT as at me: “Mike, you can find my “formal retraction” here (right under the post you just linked to). I wrote “The ‘emperor has no clothes’ gambit. Oh, well played, Mike. Mate in one, how could I have overlooked that? ;)” Feel free to announce on WUWT that I “formally retracted” with those words. In the spirit of full disclosure do please include the point that only Windows fans would consider that a retraction, not Linux hackers. WUWT readers won’t have a clue what that means and will simply assume I retracted, whereas those at RealClimate will have no difficulty understanding my meaning. Climate Etc. may be more evenly split.“. 88. vukcevic says: December 14, 2012 at 2:52 pm my ‘art of making-up the data’ you should closely examine Fig. 26, in case you did miss it. You are ducking the question again. 89. lsvalgaard says: December 14, 2012 at 10:11 pm Let’s summaries: Subject of my article is calculation which shows that the natural temperature variability in the N. Hemisphere is closely correlated to the geomagnetic variability with no particular mechanism is 1. You objected: the data was artificially ‘made up’ - this was rebutted by showing that the ‘new data’ is simple arithmetic sum of two magnetic fields. 2. You said: this is not valid since the fields do not interact. - this was rebutted by showing that interaction is the property of receptor, e.g. magneto-meters do react to both combined fields. Secondary interaction is also recognized via induction of electric currents. 3. You said: currents are of short duration from few hours to up to a few days, therefore effect is insignificant. - this happens on a regular bases and it may be sufficient to alter the average temperature of about 290K by + or – 0.4K. 4. You are returning to the starting point: ‘made up’ data (see item 1) - It is not my intention to go forever in circles. You made more than 20 posts, here and elsewhere, regarding my finding, with a very little or no success to invalidate it. My intention is to get more scientific appraisal, as a next step I emailed Dr. J. Haig (from my old university), her interests include solar contribution to the climate change. She is a firm supporter of the AGW theory, you can contact and join forces, if you whish to do so. Content of the email is posted here: 90. vukcevic says: December 15, 2012 at 4:01 am 1. You objected: the data was artificially ‘made up’ - this was rebutted by showing that the ‘new data’ is simple arithmetic sum of two magnetic fields. Repeating an error is not a rebuttal, and a simple inspection of your graph shows that your made up data is not the sum of two ‘magnetic fields’. So, again, how exactly is the data made up? 91. This is in a way of explanation to Dr. Svalgaards statement ‘you made up data’ you are pseudo scientist ,some would think even possibly fraudster, but I am certain that Dr.S didn’t imply it. Here we go: Since the AMO is trend-less time function (oscillating around zero value) than it is assumed that the signed and normalized SSN (to the AMO values) is an adequate representative of the heliospheric magnetic field at the Earth’s orbit. It is equally possible to use McCracken, Lockwood or Svalgaard & Cliver data, but these do not contain either sufficient resolution and mutually disagree, so it is considered that the SSN, as most familiar and internationally accepted data set is the best for the purpose. Earth Magnetic field has number of strong spectral components. One of them is exactly same as the Hale cycle period (as calculated from the SSN). I could have used it as an non-dumped oscillator (clean Cos wave) , but match to the AMO is not as good as using the signed SSN. This points to the fact that the SSN is more likely factor, unless of course the Earth harmonic has same annual ‘modulation’ in manner of the SSN, which would be an extraordinary finding. Such possibility is considered on page 14 Fig. 25 curve dF(t). For purpose of comparison to the AMO from the Earth spectrum is then taken second component and employed as clean Cos wave. I suspect that this component is due to the feedback ‘bounce’ due to propagation delay in the Earth’s interior (see link to Hide and Dickey paper) but this is speculative.It is huge puzzle why the Earth’s magnetic field oscillation component should have as its main period one which is exactly same as the SSN derived Hale cycle, but of much stronger intensity than the heliospheric field . I do not think, but many solar scientist (including yourself) postulate that the solar dynamo has amplification property. Would something of a kind exist within the Earth’s dynamo than it would explain the strong Earth’s component as well as Antarctic field http://www.vukcevic.talktalk.net/TMC.htm .How could this occur: I speculate that since dept of Earth’s crust is 20-40 km, and the geomagnetic storm induced currents reach down to 100km (Svalgaard). It is possible that a magnetized bubble of liquid metal is formed than amplified by the field of the Earth’s dynamo, in a manner of the solar dynamo amplification. Although highly speculative, and despite you promoting solar dynamo amplification you will reject the geo dynamo amplification, but it would explain a lot. Mathematics of periodic oscillations ‘amplification’ is dead simple Cos A + Cos B = 2 Cos (A+B)/2 x Cos (A–B)/2, and vice versa, result: one short and one long period of oscillation, giving rise to the two AMO’s characteristic periods of 9 and 64 years (see pages 5 & 6).Where this process occurs it is not known (it could be in the magnetic field itself or in the oceans as receptor of the two oscillation. Now to the Excel file: Word sum is used with its more general meaning, to described any of the four arithmetic operations as used in the Excel file, but here is a list: Column1: Year; Column2: SSN; Column3: (+ & – 1 to sign SSN); Column4: Hale cycle- SSN with sign (times); Column5: Normalized SSN to AMO (divide); Column6: – Earth field oscillator (Cos, times, minus, divide); Column7: Geosolar oscillator (times); Column8: Geosolar oscillator moved forward by 15 years; Column9: AMO; Column10: AMO 3yma (plus & divide). So what is all this about: http:// www.vukcevic.talktalk.net/GSC1.htm Annoying fact is that you know all of the above, why do you want it all spelt out god only knows. I am not answering any more questions, have go at your Stanford colleague and his milliKelvins. , instead I shall refer you to appropriate page in my article, Excel file you have and this post. Thank you and good bye. 92. I suggest that Vaughan Pratt and some commentators here read up a bit on Fourier theory. Any (yes ANY) periodic or aperiodic continuous function can be decomposed into sine waves to any precision wanted. So it follows that you can also subtract any arbitrary quantity (for example an IPCC estimate of global warming) from that continuous function and it can still be decomposed into sine waves just as well as before, though they would be slightly different sine waves. However note that there is absolutely no requirement that those sine waves have any physical reason or explanation. 93. vukcevic says: December 15, 2012 at 10:06 am why do you want it all spelt out god only knows. What you describe is a perfect example of fake data, selected and made up to fit the best, based on invalid physics. That you call the ‘data’ is deceptive in the extreme. The one deceived in in first line yourself. What happened to your grandiose plan of sending your stuff to all geophysics departments [before AGU] at all major Universities? 94. vukcevic says: December 15, 2012 at 10:06 am why do you want it all spelt out god only knows What you describe is a perfect example of fake, selected, tortured, and made-up stuff twisted to fit an idea. Calling the result ‘data’ is deceptive in the extreme; the one most deceived being yourself, belying your claim that “ the ‘new data’ is simple arithmetic sum of two magnetic fields. BTW, what happened to your grandiose plan of carpet-bombing [before AGU] geophysics departments at all major universities to drum up support for your ideas? 95. Mike Jonas: Regarding my request to Vaughan Pratt to retract. I made the same request on ClimateEtc, to which he replied: I thought I would mention again that Einstein’s 1905 paper on special relativity showed that: by assuming the speed of light to be constant he could derive the already well-known Lorentz-Fitzgerald contraction, an exercise you would regard as circular because the Lorentz-Fitzgerald contraction was already known, and the mechanism by which the speed of light can be independent of the relative motions of source and receiver (whereas the frequency and wavelength are not so independent) is a mystery. I don’t mean to embarrass Dr Pratt by elevating him into the Pantheon with Einstein, but the logic of the two theoretical derivations is the same in the two cases: the result is known, and the procedure produces it. The suspicion surrounding Einstein’s result was such that the Swedish Academy awarded him the Nobel Prize for a different 1905 paper, and the general theory did not begin to gain widespread acceptance until the Eddington expedition in 1919, and that was the subject of acrimonious debate. There is no more reason for Pratt to withdraw this paper than there would have been for Einstein to withdraw his first paper on relativity. 96. Dr. Svalgaard said: What you describe is a perfect example of fake data, selected and made up to fit the best, based on invalid physics. That you call the ‘data’ is deceptive in the extreme. The one deceived in in first line yourself. Calling the result ‘data’ is deceptive in the extreme; the one most deceived being yourself, belying your claim that “ the ‘new data’ is simple arithmetic sum of two magnetic fields. No need to be so furious, sun matters, you know. Hey, not just the ‘ordinary garden nonsense’ this time, something far more valuable. ‘would need to distill the argument into relatively simple points, show a few key figs’ as another university professor said, and than I’ll dispatch few emails. Have a happy Xmas and N. Year. p.s. Apparently a new fable by Hans Christian Anderson is discovered; try to get a first print copy for your grandchildren. 97. vukcevic says: December 15, 2012 at 12:05 pm No need to be so furious, sun matters, you know. No need to be so evasive. Honesty matters, you know. Hey, not just the ‘ordinary garden nonsense’ this time, something far more valuable D-K effect again. There is nothing valuable at all in your stuff. 98. lsvalgaard says: Nicola Scafetta says: “7) wait the future to see what happens: for example follow the (at-the moment-very-good) forecasting performance of my model here” It fails around 2010 and you need a 0.1 degree AGW to make it fit. I would say that there doesn’t look to be any unique predictability in your model. A constant temperature the past ~20 years fits even better. I recently read an article by a professor at Stanford, one of the top universities in the U.S. , that claimed at 3 deg. / 2XCO2 model was accurate to within one thousandth of a degree. But I’m a bit concerned because he doesn’t know how to do a running mean. Do you think it matters ? 99. lsvalgaard says: A constant temperature the past ~20 years fits even better. The same could be said of 3K per doubling, it sure as hell isn’t within a 1/1000 degree whatever way you spin it. 100. Matthew R Marler says “I thought I would mention again that Einstein’s 1905 paper on special relativity showed that: by assuming the speed of light to be constant he could derive the already well-known Lorentz-Fitzgerald contraction, an exercise you would regard as circular because the Lorentz-Fitzgerald contraction was already known[...]“. That doesn’t look at all logical to me. Circular logic is where your finding is what you assumed in the first place. The fact that you can derive A from B when A is already known doesn’t make the logic circular. Circular is deriving A from A. 101. Mike Rossander and tty – Thanks for your comments. I refer to them on ClimateEtc - - as follows: Thanks, tty. It explains neatly why VP’s finding is completely meaningless without any physical reason or explanation for the sinewaves. Thanks also to Mike Rossander for repeating VP’s exact process but with slightly different parameters. These are the sinewaves generated using VP’s spreadsheet and Mike Rossander’s parameters: and this is the result: Every single argument of VP’s in support of the process that he used with IPCC-AGW applies absolutely equally to the exact same process with AGW = zero. Vaughan Pratt – Now do you see that your argument has no merit? 102. The updated ‘AGW = zero” spreadsheet is here: 103. Mike Rossander says: December 14, 2012 at 1:18 pm Mike Rossander is my hero! 104. By showing that a climate sensitivity to CO2 of 0 C/W/m^2 gives as good (or better) fit than the consensus climate sensitivity, Mike Rossander and Mike Jonas have completely rubbished the Thanks for your efforts! Now assume a climate sensitivity to CO2 of -3 C/W/m^2. That should be fun! I predict an almost perfect fit once again can be achieved with the correct weighting of suitable sinusoids. 105. Scafetta doesnt share his code or his data. he is a fraud 106. Vukcevic says that: he has discovered/invented following formula AMO = SSN x Fmf AMO = Atlantic Multidecadal Oscillation (or de-trended N.H. temp) SSN = Sunspot number with polarity Fmf = frequency of the Earth’s magnetic field ripple (undulation). He calls above arithmetic sum ‘Geo-Solar Cycle’ calculations are accurate, but he is unable to provide valid physical mechanism. Svalgaard says that: Vukcevic – writes nonsense, pseudo science, suffers from Denning-Kruger mental aberration, making up data, deceptive in the extreme (implies fraud), honesty matters (implies dishonest) In the years gone by, where I come from, the above attributes had to be earned. Fortunately that is not the case any more. Similar pronouncements were often repeated by the self-appointed ‘guardians of eternal truth’ regardless of geography or historic epoch. (@ mosher forwarded the relevant Excel file to you too) 107. Mike Jonas: Every single argument of VP’s in support of the process that he used with IPCC-AGW applies absolutely equally to the exact same process with AGW = zero. On that we agree. The test between the two models will be made with the next 20 years’ worth of data. Having found filters and estimated coefficients, they are not free to modify those coefficients willy-nilly to get good fits each time the data disconfirm their model forecast. 108. Matthew R Marler – Neither of the “sawtooths” bears any relationship to the real world. There is no point in testing them. 109. vukcevic says: December 15, 2012 at 10:47 pm AMO = SSN x Fmf AMO = Atlantic Multidecadal Oscillation (or de-trended N.H. temp) SSN = Sunspot number with polarity Fmf = frequency of the Earth’s magnetic field ripple (undulation). He calls above arithmetic sum ‘Geo-Solar Cycle’ Zeroth: Is it AMO or [manipulated] temps? Not the same. First, the formula uses multiplication [I assume that is what the 'x' stands for], so is not a sum. Second, which SSN is used? The International [Zurich] SSN or the Group SSN? Third, the ‘polarity’ of the SSN is nonsense. You might talk about the polarity of the HMF, but then that should go from maximum to maximum [when the polar fields change]. In any case, the polarity that is important for the interaction with the Earth is the North-South polarity which changes from hour to hour. Fourth, ‘frequency’ is not a magnetic field [you said that you added two magnetic fields]. Fifth, ‘ripple’ is what? and undulation? Sixth, ‘Earth’s field’, measured where? And why there? In the years gone by, where I come from, the above attributes had to be earned. With the expertise you perfected back then, you are still earning it in earnest now. Similar pronouncements were often repeated by the self-appointed ‘guardians of eternal truth’ regardless of geography or historic epoch. Nonsense is nonsense no matter where and when. 110. lsvalgaard says: December 16, 2012 at 6:21 am Some of the points you raise are explained beforehand, see my post above The above formula is a summary in its most abstract form. I have also made it clear that all four basic arithmetic calculations for simplicity I always address as ‘sum’, but for your benefit (see the link above) each of the arithmetic calculations has been specifically itemized in the Excel file description. You also have access to my article, which is over 20 pages long, contains 39 illustrations of which 35 are my own product, and many of the questions you pose are elaborated in detail in the article. You also have the Excel file with further information. You will appreciate that the blog is not capable of furnishing full reproduction, therefore reading of the article is a prerequisite, which of course you are welcome to do. Thank you for the note, once the final publication (this is the first draft) is composed the points you made will be fully considered. For time being you may consider snippets of information as ‘unofficial leaks’ from a future publication, which currently is a ‘high vogue’ on the fringes of climate science, and should be treated as such, or else ignored. Thanks again for your attention. 111. lsvalgaard says: December 16, 2012 at 6:21 am May I add, I feel highly privileged and grateful that most if not all, of your attention and time during the last few days, on this blog and elsewhere, was devoted to the ‘ironing out’ of any inadvertent inconsistencies, that may be found in the draft of my article, to the preference of the Dr. Pratt’s paper, which surely must be this year’s the most important contribution to understanding of the anthropogenic warming. Thank you again, sir. 112. vukcevic says: December 16, 2012 at 11:09 am May I add, I feel highly privileged and grateful that most if not all, of your attention and time during the last few days, on this blog and elsewhere, was devoted to the ‘ironing out’ of any inadvertent inconsistencies, that may be found in the draft of my article As I said, your descriptions here on WUWT have been deceptive and your ‘paper’ is incomprehensible. You cannot presume that anybody would try to decipher what you actually mean. The purpose of publication is to make the paper comprehensible to a [scientifically literate] reader with only cursory knowledge of the subject to understand your claims by simply skimming the paper [your version is much too dense with details and loose ends]. You could start by responding to my seven points, right here on WUWT. As things stand, the paper is still nonsense, and you commit the deadly sin of arguing with the referee instead of responding concisely to the points raised. 113. Mike Rossander is the first, and so far only, person to respond to my challenge to provide an alternative description of modern secular (> decadal) climate to the one I presented at the AGU Fall Meeting 12 days ago. I only wish there were more Mike Rossanders so as to increase the chances of obtaining a meaningful such description. Thank you, Mike! Although Mike’s magic number of 99.992% may seem impressive, it’s unclear to me how a single number addresses the conclusion of my poster, which starts as follows. “We infer from this analysis that the only significant contributors to modern multidecadal climate are SAW and AGW, plus a miniscule amount from MRES after 1950.” If one views MRES as merely the “unexplained” portion of modern secular climate then 99.992% does indeed beat 99.99%. However a cursory glance at these charts reveals two clear differences between the upper and lower charts, respectively mine and Mike’s. 1. On the left, the upper chart is “within a millikelvin” (as measured by standard deviation) for the “quiet” century from 1860-1960. It then moves up until 1968, quietens back down to 1990, then moves up again. At no point does it go below the century-long quiet period. (Ignore the decade at each end, secular or multidecadal climate can’t be measured accurately to within a single I justify my “within a millikelvin” title by claiming that, although the fluctuations from 1860 to 1960 are certainly “unexplained” (R2 is defined as unity minus the “unexplained variance” relative to the total variance), the non-negative bumps thereafter admit explanations, namely non-greenhouse impacts of growing population and technology in conjunction with the adoption of emissions controls. Their clear pattern therefore makes it unreasonable to count them as part of the unexplained variance. The lower chart on the other hand simply wiggles up and down throughout the entire period, and moreover with a standard deviation three times as large. It is just as happy to go negative as positive, and it draws absolutely no distinction between the low-tech low-population 19th century and the next century. WW1 consisted largely of firing Howitzers, killing many millions of soldiers with bayonets and machine guns, and dropping bricks from planes, while WW2 consisted largely of blowing cities and dams partially or in some cases completely to smithereens with conventional and nuclear weapons of devastating power. The clear consequences of this trend led to the cancellation of WW3 by mutual agreement. There is not a trace of this progression in Mike’s chart, just oscillations both above and below the line. Moreover they even die down a little near the end—what clearer proof could you ask for that the increasing human population and its technology cannot be having any impact whatsoever on the climate? 2. On the right, the upper chart shows in orange the Hale or magnetic cycle of the Sun. The lower one does the same except that it is much messier and gives no reason to suppose that a cleaner picture is possible. Now let’s look at how far one needs to bend over in order to cope with zero AGW. Here are the ten coefficients Mike and I are using to express the sawtooth shape, expressed in natural units rather than the incomprehensible slider units. For shifts this is the fraction of a sawtooth period t to be shifted by, so for example 0.37t means a shift of 37% of the sawtooth period. For scale it is the attenuation of the harmonics, so for example 0.37X means an attenuation down to 37% of full strength for that harmonic in the case of a perfect sawtooth. 0 and X are synonymous with 0X and 1X respectively. Shifts: 0.092848t 0.268605t 0.335671t 0.246856t 0.198283t Scales: 1.48903X 1.38624X 2.23807X 0 0.78158X (The 0 for Scale4 is clearly a bug in how the problem was presented to Solver.) Shifts: 0 0 0 t/40 t/40 Scales: 0 X X X/8 X/2 If we take the number of bits needed to represent these coefficients as a measure of how much information each of us has to pump into the formulas to force them to fit the data, the difference should be clear to those who can count bits. Also note that all of my shifts are way smaller than all of Mike’s shifts. His five sine waves bear no resemblance whatsoever to the harmonics of a My question to Mike, and to anyone else who shares Mike’s and my interest in modern secular climate analysis, is, can one create a more plausible MRES, a cleaner HALE cycle, and a less obviously contrived collection of coefficients, while still setting climate sensitivity to zero? If this can be done we would have a much stronger case against global warming. (Incidentally the reason the logic looks circular to MIke Jonas is that iterative least-squares fitting is circular: it entails a loop and loops are circular. The correct question is not whether the loop is circular but whether it converges.) 114. Good afternoon, Vaughan. I think you may have misunderstood the intent of my comment. And perhaps of the original criticism in the post above. My analysis was trivial, limited and deeply flawed. It had to be so because it was based on no underlying hypothesis about the factors being modeled (other than the ClimSens control value). It was an exercise that took about 15 min and was intended only to illustrate the dangers of over-fitting one’s data. For example, you argue above that the fact that the shifts in your scenario are smaller is an element in favor of that scenario. Unless there is a physical reason why small shifts should be preferred, that claim is unjustifiable. A shift of zero may be perfectly legitimate – or totally irrelevant. That statistics are only useful to the extent that they illuminate some underlying physical process. By the same token, you can’t say that the coefficients are “contrived” unless you have a physical understanding that indicates what they SHOULD be. The closeness of R2 is also essentially irrelevant. Minor changes to the parameters drove that value off the 0.99x values quite easily. We can reasonably interpret an R2 of 0.2 as “bad” and an R2 of “0.8″ good but the statement that “R2 of 0.99992 is better than 0.9990″ is well beyond the reliability of the statistics to honestly say. On the contrary, given everything we know about the randomness of the input data (and about the known uncertainties of the measurement techniques), an R2 that high is almost certainly evidence that we have gone beyond the underlying data. There’s too little noise left in the solution. I say that because my physical model includes assumptions about the existance and magnitude of human error in data collection, transcription, etc. and that those errors should be random, not systemic. What you need (and what neither of us have done) is a formal test of overfitting. Unfortunately, without an assumed physical model, I don’t know of any reliable way to structure that test. A common approach would be to run the Student’s t Test on each parameter in isolation, comparing the results of the model at the set parameter value vs the hypothesized null value. But your model does not make apparent what the null value ought to be. As noted above, we can not blindly assume that it is zero. An alternate approach would be to restructure the model so you can feed an element of noise into all your data and rerun the analysis a few thousand times. Parameters which remain relevant across the noisy samples are probably more reliable. That’s not an approach that can be easily built in Excel, however. Having said all that, you are certainly more familiar with your model than I ever will be. (The organization of your workbook is well above average but reverse-engineering someone else’s excel spreadsheet is almost always an exercise in futility.) Maybe you see a way to run a proper test against the parameters that I don’t. One last thought. I ran my trivial test with the hypothesis that ClimSens should equal zero. There is no reason why that is necessarily the appropriate null, either. ClimSens really should be its own parameter, also included in the statistical tests of overfitting. 115. Vaughan. I have used a statistical curve fitting approach in testing one component of the AGW model that you assume to be valid.That component is the assumption that anthropogenic emissions has caused all of the atmospheric increase in CO2. Read http://www.retiredresearcher.wordpress.com. 116. Hi Mike. All your points are eminently sensible, particularly as regards the dangers of over-fitting (your main concern). That was also a concern of mine, and is why I locked down 5 degrees of freedom in SAW. One way to structure a test of overfitting is to compare the dimensions of the data and model spaces. To my mind the best protection against overfitting is to keep the latter smaller than the former. When it can be made a lot smaller one can claim to have a genuine theory. When only a little smaller, or barely at all, it is not much better than a mere description from some point of view (namely that of a choice of basis). I would say my Figure 10 was more the latter, argued as follows. For this data, 161 years of annualized HadCRUT3 anomalies is a point in the 160-dimensional space of all anomaly time series centered on the same interval, one dimension being lost to the definition of anomaly. My F3 filter projects this onto the 14-dimensional space of the first seven harmonics of a 161-year period. However it attenuates harmonics 6 and 7 down into the noise, making 10 dimensions (two per harmonic) a more reasonable estimate, perhaps 12 if you can crank up the R2 really high to bring up the signal-to-noise ratio.) In principal my model has 14 dimensions, of which I lock down 5 leaving 9. You locked down the 3 AGW dimensions leaving the 11 SAW dimensions. However two of these only partially benefited you because Solver wanted to drive Scale4 negative. I’m guessing you left the box checked that told Solver not to use negative coefficients, so it stopped moving Scale4 when it hit 0, which in turn made tShift4 ineffective. The Evolutionary solver might have been able to add 1250 (half a period) to tShift4 discontinuously to simulate Scale4 going negative. 9 dimensions for the model space is dangerously close to the 10 dimensions of the model space, so I’m within a dimension of being as guilty of overfitting as you. The real difference is not dimensional however but choice of basis: sine waves in your case, a more mixed basis in mine including 3 dimensions for the evident rise that I’ve modeled as log(1+exp(t)) per the known physics of radiative forcing and increasing CO2. Rather than using a t-test I’d be inclined to move the data and model dimensions apart by dropping the 4th and 5th harmonics altogether (since they aren’t really carrying their weight in increasing R2) while halving the period of F3. The data space would then have 20-24 dimensions and the model space 6. But I would then spend 5-6 dimensions in describing HALE as a smoothly varying sine wave. That would greatly reduce the contribution of HALE to the unexplained variance, at the price of reducing the 6:20 gap to 11-12:20-24. That’s still an 8-13 dimensional gap between the data and model spaces, which I would interpret as not being at serious risk of overfitting. 3 of the HALE dimensions describe the basic 20-year oscillation, which the remaining 2-3 modulate. As Max Manacker insightfully remarked at Climate Etc. a day or so ago, it’s not the number of parameters that counts so much as whether they’re the right ones. Whether parameters are meaningful depends heavily on the choice of basis for the space. Do they have any physical relevance? Physics aside, your suggesting of feeding noise in would be a straightforward way of quantifying the dimension gap empirically that also took into account the attenuation by F3 of harmonics 6 and 7 as well as the role of R2, all in the one test so that would be very nice. I have this on my list of things to look into, hopefully the other things won’t push it down too far. I would have replied sooner except that I spent today following up on the suggestion to try other ClimSens values besides 0 and 2.83, made by both you and “Bill” on CE. Very interesting results, more on this later as this comment is already so deep into tl;dr territory that I ought to submit it for the 2013 literature Nobel. 117. Hi fhaynie. I’m having trouble reading your first figure about dependence on latitude. When I click on it I get only an arctic plot. Is there some way I can blow it up to a readable size? Your emphasis on carbon 13 and 14 is commendable, but I must confess I have thought less about them than the raw CDIAC emissions and land-use-change data since 1750. These can be converted to ppmv contributions using 5140 teratonnes as the mass of the atmosphere and 28.97 as its average molecular weight. One GtC of emitted CO2 therefore contributes 28.97/12/5.14 = 0.47 ppmv CO2 to the CDIAC says that in 2010 the anthropogenic contribution including land use changes was 10.6 GtC. For that year Mauna Loa recorded an increase of 2.13 ppmv. The former translates to a contribution of 10.6 * 0.47 = 4.98 ppmv.. Hence 2.13/4.98 = 42.7% percent of our contribution was retained in the atmosphere in 2010, with the remaining 57.3% being presumably taken up by the ocean, plants, soil, etc. It would be very interesting to compare this with your analysis based on molecular species.of CO2. Do you have an estimate of the robustness of this sort of analysis? I very much like this direction you’re pursuing, it could lead to useful insights. What feedback have you had from those knowledgeable about this sort of approach? And are there more detailed papers I can read on this? 118. Vaughan, I am in the process of using similar techniques doing mass balances dividing the earth’s surface into five regions. If you are interested, you can find my email address at http:// I will gladly share what I am doing as well as thoughts on the mistakes we can make using curve fitting programs. I have had some long email conversations with two individuals that promote the global mass balance you cite. They have a strong vested interest in being right. Most of the favorable comments did not go into any technical detail. This entry was posted in measurement, Temperature and tagged Judith Curry, Vaughan Pratt. Bookmark the permalink.
{"url":"http://wattsupwiththat.com/2012/12/13/circular-logic-not-worth-a-millikelvin/?like=1&source=post_flair&_wpnonce=c49b096ccc","timestamp":"2014-04-16T22:03:08Z","content_type":null,"content_length":"339926","record_id":"<urn:uuid:5dc3bf40-db9d-49c6-bdb6-f4d24830e165>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
Swan like theorem and covering spaces up vote 7 down vote favorite Let $X$ be a finite CW complex. Swan's theorem provide an equivalence \[ Vec(X)~\xrightarrow{\sim} ~ProjMod(hom_{Top}(X,\mathbb{R})) \] between the category of finite dimensional vector bundles over $X$ and the category of finitely generated projective modules over the ring of continuous functions from $X$ to the reals. This isomorphism behaves well with the monoidal structures $\oplus$. There is an intermediate step in this construction: The category $Vec(X)$ of finite dimensional vector bundles over $X$ is equivalent to locally free modules of finite rank over the sheaf $C_X(-)= hom_{Top}(-,\mathbb{R})$ on $X$. The category $Cov(X)$ of covers of $X$ is equivalent to the category of locally constant sheaves of sets on $X$. Is it possible to formulate this analogously to the above correspondence? So maybe locally constant sheaves are somehow special modules over $C_X(-)$ and this category possibly corresponds to some special modules over $C_X(X)$. Maybe, this is also compatible with disjunct unions of coverings and sums of the corresponding modules. Maybe it is also necessary to require, that the covering is regular. (The fat things are edits made, partially based on the answers below.) at.algebraic-topology vector-bundles gt.geometric-topology It might be worth exploring the connection between vector bundles with flat connections and covering spaces via representations of the fundamental groupoid. It then remains to think about what vector bundles with flat connections are modules for, but my guess is that stacks would come in and/or differential cohomology. – David Roberts Oct 27 '10 at 22:31 add comment 2 Answers active oldest votes If I can take only the finite covers, then yes, I think. (After all, Swan's theorem is a characterization of finite-dimensional vector bundles, not all vector bundles.) This is easier to do over $\mathbb{C}$ than over $\mathbb{R}$. In addition to the entire sheaf $C_X(-)$, let $C(X) = C_X(X)$ be the algebra of global continuous functions. Since the module $M$ is locally free, what you want to do is to choose a basis for $C(U) = C_X(U)$ for enough open sets $U$, and such that the bases agree when you restrict to smaller up vote 5 open sets. You could just ask for this directly, but there is an indirect algebraic condition that comes to the same thing. Namely, you can ask for $M$ to not only be finitely generated down vote and projective, but also a semisimple commutative algebra over $C(X)$. This gives you the unordered basis in each fiber. Over $\mathbb{R}$, it's not quite enough to require that $M$ be a semisimple algebra, because you could end up creating $C(Y,\mathbb{C})$ for a finite cover $Y$ of $X$. So, you could also impose the condition that $f^2 + g^2 = 0$ has no non-trivial solutions in $M$. Unfortunately I do not understand your answer. What is $M$ here? If the cover of $X$ is regular, the fibers are discrete groups and in particular normal subgroups of $\pi_1(X)$. I wonder what the action of $C(X)$ is. – roger123 Oct 28 '10 at 9:32 3 In the end $M = C(Y)$, where $Y$ is the covering space of $X$. The projection from $Y$ to $X$ induces a ring homomorphism from $C(X)$ to $C(Y)$, and then $C(X)$ acts on $M$ via this ring homomorphism. – Greg Kuperberg Oct 28 '10 at 10:05 1 What I'm saying is, if $M$ is a $C(X)$-algebra which (a) satisfies Swan's conditions as a module, (b) is a semisimple algebra, and (c) has no solutions to $f^2 + g^2 = 0$, then, conclusion, it is isomorphic to $C(Y)$ for a finite cover $Y$. – Greg Kuperberg Oct 28 '10 at 17:40 1 @roger123 You can construct a locally constant set-theoretic sheaf directly this $C(X)$-algebra $M$. You can first make a sheaf version of $M$, and then you can look at the corresponding sheaf of minimal idempotents. (Recall that an idempotent in a ring is an element $a$ such that $a^2 = 0$. An idempotent is minimal if it is not the sum of two [commuting] idempotents.) Or, if you like, the minimal idempotents are a canonical basis of $M(U)$ for an open set $U$ on which $M$ is trivial. – Greg Kuperberg Oct 29 '10 at 17:57 @roger123 If $Y$ is a finite-sheeted covering of $X$, then the gluing maps between local trivializations of $Y$ are functions from $U \subset X$ to permutations. You can get a vector 1 bundle if you replace these permutations by the same permutation matrices. Or, in the present context, $C(Y)$ satisfies Swan's theorem as a module over $C(X)$ and thus gives you a vector bundle, exactly the same bundle as you would get from using permutation matrices. – Greg Kuperberg Oct 30 '10 at 6:03 show 6 more comments A theorem I found in a paper of Horst Madsen: For every locally connected space $X$ there exists a category equivalence between the category of regular covering spaces of $X$ and the category of Galois extensions of $C(X) up vote 1 down vote I've just found this on the web and don't know how this fits to your question. I would also like to see if there is such a relation as you mentioned. add comment Not the answer you're looking for? Browse other questions tagged at.algebraic-topology vector-bundles gt.geometric-topology or ask your own question.
{"url":"http://mathoverflow.net/questions/43795/swan-like-theorem-and-covering-spaces","timestamp":"2014-04-21T12:42:10Z","content_type":null,"content_length":"62831","record_id":"<urn:uuid:11e35da9-a0f5-475c-9474-7cd4d36769a9>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00652-ip-10-147-4-33.ec2.internal.warc.gz"}
How to mathematically calculate the accuracy and resolution of weighing system If you are only measuring after you remove weight from your system in batch format, you should use the hysteresis specification of your load cells combined with the accuracy of your electronics to determine accuracy. If you measure after adding AND removing weight from your system, you can use the combined error (linearity and hysteresis) of your load cells combined with the accuracy of your electronics to determine system accuracy. You have to take into account the error of your controller, load cells, gravity correction, and verify that there are no other issues that may affect your accuracy reading (like EMI/RFI noise, scale binding, correct load cell placement, etc) and environmental variations (such as humidity, temperature and wind). For example, using a 16,500lb Hardy load cell: Linearity 0.012% = 1.98lb or approximately 1 in 10,000 Hysteresis 0.025% = 4.125lb or approximately 1 in 3000 Combined error 0.02% = 3.3lb or approximately 1 in 5000 The above does not including mechanical, electronic or electrically induced errors. Before designing a system, an engineer should consider carefully what he or she expects from it and then relate this to the component accuracies making up the system. No physical measuring system can be completely accurate. An error band must be defined for a system which gives an indication of any expected deviations from true value. The parameters under which this applies must also be clear and concise. Accuracy terms such as "1 part in 3000" are commonly used. Calculating true weigh system accuracy is very difficult, and many customers do not know what they really require from their system. They often request a "system to be as accurate as possible". Proper installation is critical to maximize a systems accuracy, but other considerations such as connecting pipes and conduits must be taken into account. One thing is certain, good load cells do not make a poor system good, but poor load cells can make a good system poor. Hardy Instruments and load cells are considered by many to be the most accurate available in today’s process weighing market. In the vast majority of applications, weighing occurs in only a small portion of the load cell’s range. Thus non-repeatability is the most important specification for most system designers. LOAD SENSOR NON-REPEATABILITY: A standard non-repeatability for a typical load sensor is 0.01% of Full Scale. This is the equivalent of one part in 10,000 of total system load cell capacity. In a 300 pound example the non-repeatability would be + or - 0.03 pounds. The simple definition for non-repeatability is: the maximum error seen if the same amount of material was repeatedly add to or removed from the same vessel, under the same environmental conditions. This situation is often encountered in batching applications. SYSTEM STATISTICAL ERROR: In a multiple load cell system the “combined error” parameters are not added together but are combined using the following formula: The square root of the sum of load cell number one’s “combined error” squared, plus load cell number two’s “combined error” squared, plus load cell number three’s “combined error” squared, etc. In a system using three 10,000 lb load cells, the “combined error” of each load cell is 3 pounds. Entering this data in the formula: Square root of (32 + 32 + 32) or Square root of (9 + 9 + 9) or Square root of 27 or 5.196 lbs (the square root of 27). SYSTEM RESOLUTION: The stability of the weight reading (often referred to as useful resolution) is affected by electrical noise in the form of both Radio Frequency Interference (RFI), and Electro Magnetic Interference (EMI). These sources of interference affect the signal to noise ration of the load censor input to the weight controller. Following standard electrical wiring procedures a weight stable to 0.3 micro volt is typical for Hardy controllers. Utilizing 2 mV/V load cells and a 5 volt excitation this translates to one part in 30,000. In a 300 lb example, a stable weight reading of + or - 0.01 pound would be obtained. To insure good results attention must be paid to shielding, grounding and cable routing. EQUIPMENT DISPLAYED RESOLUTION: The high internal and displayed resolution of the Hardy Instrument weight controller line allows precise mathematical computations. This resolution yields good results for such features as WAVERSAVER and development of various digital and analog outputs without introducing errors in either displayed or transmitted data. The Internal Resolution is one part in 654,000 for a 2 mV/V load cell and one part in 985,000 for a 3 mV/V load point. SYSTEM ACCURACY: All of the above terms relate only to the electrical specifications of the system. Mechanical errors often introduce system errors. Mechanical errors can at times be difficult to identify. Proper system performance requires that the mechanical system be properly designed. In a properly designed system all of the weight will be vertically applied to the load points. In addition, there will be no redundant load paths from non-flexible connections, such as piping, ducting, tubing, etc. As you can see the proper installation of load cells is critical to a scale systems accuracy. Mechanical errors caused by binding is the number one cause of inaccuracy in a scale system. By using the above terms and formula’s you can calculate your weighing system’s accuracy. Respective topics: Previous:Automated weighing scale keeps materials moving Next:Process Automation System
{"url":"http://www.meterforall.com/technical/760.html","timestamp":"2014-04-18T23:15:20Z","content_type":null,"content_length":"20100","record_id":"<urn:uuid:573f5b79-0fb4-4a02-96c6-de18f9b2bb9f>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Cochlear travelling wave. An epiphenomenon? [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: Cochlear travelling wave. An epiphenomenon? Dear Antony and List: I agree that there is more mathematical backing for the TW concept, but that's because coherent alternatives have just not existed! And it is worth while remembering that mathematical models are only as good as the physical insights they represent. As auditory scientists, we would do well not to let the TW theory become a dogma that permits no other hypothesis to be considered. My modeling has got to the point of being able to explain the shape of the partition's tuning curve, but it would be nice to do more elaborate modeling. I hope that additional mathematical analysis will be developed once the initial concepts have been clarified. Patience is needed, for it took many decades for sophisticated mathematical models of the TW to be developed after Bekesy's initial model was put forward. I am trained as a physicist, not a mathematical modeler, but if anyone out there has a particular interest in exploring the mathematical properties of a resonance theory of the cochlea, I would welcome their interest. Thankyou for your good wishes. Andrew Bell. -----Original Message----- From: AUDITORY Research in Auditory Perception [mailto:AUDITORY@LISTS.MCGILL.CA]On Behalf Of Antony Locke Sent: Thursday, 29 June 2000 8:39 To: AUDITORY@LISTS.MCGILL.CA Subject: Re: Cochlear travelling wave. An epiphenomenon? Dear Andrew Many thanks for your reply that attempted to address the most basic 'level of description' of energy flow within the cochlea; i.e. how stapes vibrational energy is transferred to the characteristic place. Many thanks also to Jont Allen's comments, all of which I agree with. In my opinion, the travelling wave model captures the qualitative and, to a large extent, quantitative features of the cochlear response to sound at this level of description. The model is based on established anatomical, physiological and mechanical observations within a rigorous mathematical framework. Any model that counters the travelling wave model must refer to the same observations and be developed within an equally valid mathematical In conclusion, I reiterate Jont's suggestion: I look forward to reading about your model in a quality peer reviewed journal. Best wishes Antony Locke
{"url":"http://www.auditory.org/mhonarc/2000/msg00220.html","timestamp":"2014-04-17T21:26:19Z","content_type":null,"content_length":"5736","record_id":"<urn:uuid:bba26d74-cebd-458d-bca4-6f1706075357>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
Lagrange Multipliers April 5th 2009, 09:11 AM #1 Jan 2009 Lagrange Multipliers Find the maximum value of log(x) + log(y) + 3log(z) on the octant of the sphere x^2 + y^2 + z^2 = r^2. Deduce that if a,b and c are real numebrs, abc^3<=27[(a+b+c)/(5)]^5 I've done the first part (I think). Not suire at all about the second though.. I guess it uses Lagrange multipliers but cant see what to you as the constraints etc many thanks Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/calculus/82359-lagrange-multipliers.html","timestamp":"2014-04-19T01:08:31Z","content_type":null,"content_length":"28572","record_id":"<urn:uuid:f1366102-8332-42cf-8df1-655cd478e8e9>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the function for this graph? (picture inside) September 2nd 2010, 11:54 AM #1 Aug 2010 what type of function is the attached graph? why?? thank you! p.s: it isn't a parabola.... i was thinking some sort of sin or cosine? i dont know.... also, here's the equation : Last edited by mr fantastic; September 2nd 2010 at 03:53 PM. Reason: Re-titled in lower case. If this is $4x^3-32x^2+64x$ then it's called a cubic equation, a polynomial of degree 3. As Archie Meade indicates, it's a cubic function. Does this answer your question? September 2nd 2010, 12:02 PM #2 MHF Contributor Dec 2009 September 2nd 2010, 02:02 PM #3 Oct 2009
{"url":"http://mathhelpforum.com/algebra/155047-what-function-graph-picture-inside.html","timestamp":"2014-04-18T13:24:32Z","content_type":null,"content_length":"35120","record_id":"<urn:uuid:ca12f0be-a9a8-46c2-abcc-34844c7ad87e>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum Mechanics/The Hydrogen Atom The Hydrogen AtomEdit By now you're probably familiar with the Bohr model of the atom, which was a great help in classifying the position of fundamental atomic specta lines. However, Bohr lucked out in more ways than one. The hydrogen atom turns out to be one of the few systems in Quantum Mechanics that we are able to solve almost precisely. This has made it tremendously useful as a model for other Quantum Mechanical systems, and as a model for the behavior of atoms themselves. We can assume that the hydrogen atom is governed by the Coulomb potential, namely: $V(r) = -\frac{e^2}{4 \pi \epsilon _0} \frac{1}{r}$ such that, $H\Psi = -\frac{\hbar ^2}{2m}\frac{d^2\Psi}{dx^2} - \frac{e^2}{4 \pi \epsilon _0} \frac{1}{r}$ Obviously, simply by inspection, we can see that the Hydrogen Atom is a spherical system. Hence it makes more sense to deal with the Hydrogen atom in spherical coordinates. One should remember at this point that, via Separation of variables, you can obtain the solution to the spherical Laplacian in three-dimensional space: $abla ^2f = {1 \over r^2} {\partial \over \partial r} \left( r^2 {\partial f \over \partial r} \right) + {1 \over r^2 \sin \theta} {\partial \over \partial \theta} \left( \sin \theta {\partial f \ over \partial \theta} \right) + {1 \over r^2 \sin^2 \theta} {\partial^2 f \over \partial \phi^2}$ the solutions to this function when we use separation of variables inside a Hamiltonian gives us two different functions, the Radial Wave functions (not useful now, but good to know): $A \cdot J_l(Z_{nl}\frac{r}{a})$ where $J_l$ are the spherical Bessel functions of type l, and $Z_{nl}$ are the zeroes of said Bessel functions. The other component, the angular component are the Spherical harmonics which are explored in detail on Wikipedia. Essentially, we're ahead of the game at this point. We already know the answers. The hydrogen wave function will have to involve the spherical solutions to the Laplacian and will be related to both the angular and radial components. This is most fortunate; for us to attempt to solve the Laplacian while doing the hydrogen atom would be difficult. However, we have some tasks left. The situation must be normalized, and we must deal with the fact that our potential has a pesky r dependence. We end up with the results that you knew we were going to. We can write down the hydrogen wave equation as: $\Psi _{nlm}(r, \phi, \theta) = R_{nl}(r)Y^l_m(\theta, \phi)$ where $Y^l_m(\theta, \phi)$ are the spherical harmonics. Last modified on 2 November 2007, at 01:40
{"url":"http://en.m.wikibooks.org/wiki/Quantum_Mechanics/The_Hydrogen_Atom","timestamp":"2014-04-17T22:23:05Z","content_type":null,"content_length":"18185","record_id":"<urn:uuid:055275db-6a14-4b29-9da2-a3ef2bc3f417>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
Solving trig January 7th 2010, 02:07 AM Solving trig Need help on how solve the following equation for 0 <= x <= $2\pi$: This is what i have done, but it is incorrect. $tan2x=0$which gets 0 January 7th 2010, 02:22 AM January 7th 2010, 02:38 AM Hello Paymemoney Noting that $\sin2x = 2\sin x \cos x$, we get: $2\sin x \cos x = \cos x$ You need to be a little careful now. Don't just divide both sides by $\cos x$, because it might be zero, and the first commandment of arithmetic is 'Thou shalt not divide by zero'. So you need to $2\sin x \cos x = \cos x$ $\Rightarrow \cos x = 0$ or $2\sin x = 1$ Can you go on to complete this now? January 7th 2010, 02:45 AM thanks i get how to do it now
{"url":"http://mathhelpforum.com/trigonometry/122745-solving-trig-print.html","timestamp":"2014-04-18T18:38:22Z","content_type":null,"content_length":"10188","record_id":"<urn:uuid:3576b701-8ebe-4f8b-b1c4-2375ffff6c81>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00233-ip-10-147-4-33.ec2.internal.warc.gz"}
Strengths of R (was Re: [SciPy-dev] IPython updated (Emacs works now)) Anthony Rossini rossini at u.washington.edu Wed Feb 20 11:39:42 CST 2002 On Tue, 19 Feb 2002, eric wrote: > Hey Tony, > > Note that one thing I'm working on doing is extending ESS (an Emacs mode for > > data analysis, usually R or SAS, but also for other "interactive" data analysis > > stuff) to accomodate iPython, in the hopes of (very slowly) moving towards using > > R and SciPy for most of my work. > What are the benefits of R over Python/SciPy. Is there a philosophical > difference that makes it better suited for statistics (and other things for that > matter), or is it simply that it has more stats functionality and is much more > mature? If there is a different philosophy behind it, can you summarize it? > Maybe we can incorporate some of its strong points into SciPy's stats module. > Travis Oliphant is working on it as we speak. We could definitely do with some > of you stats guys input! John Barnard, who is on this list, should speak up, being one of the few other people that I know of (Doug Bates and some of his students at U Wisc being another exception) that actually use Python for "work" (database or computation). R (and the language it implements, S) is a language intended for primarily interactive data analysis. So, a good bit of thought has gone into data structures (lists/dataframes in R parlance, which are a bit like arrays with row/column labels which can be used interchangeably instead of row/column numbers), data types such as factors (and factor coding -- some analytic approaches are not robust to choice of coding style for nominal (categorical, non-ordered) data. It has a means for handling missing data (similar to the MA extension for Numeric), and it also has a strong modeling structure, i.e. fitting linear models (using least squares or weighted least squares) is done in a language which "looks right", i.e. lm(y ~ x) fits a model which looks like y = b x + e, e following the usual linear models assumptions. as well as smoothing methods (splines, kernels) done in similar fashion. Models are a data object as well, which means that you can act on it appropriately, comparing 2 fitted models, etc, etc. R, as opposed to the commercial version S or S-PLUS, also has a flexible packaging scheme for add-on packages (for things like Expression Array analysis, spatial-temporal data analysis, graphics, and generalized linear models, marginal models, and it seems like hundreds more. It also can call out to C, Fortran, Java, Python, and Perl (and C++, but that's recent, in the last year or so). Database work is simple, as well, though not up to Perl (or Python) interfaces. It also has lexical scoping, and is based originally on scheme (though the syntax is like python). However, it's not a true OO language like python, and some things seem to be hacks. This is mostly an aesthetic problem, not a functional problem. It's worth a look if you do data analysis. In many ways, the strength is in the ease of programming good graphics, analyses, etc, with output which is easily read and intelligible. It has problems, in terms of scope of data and speed. It's not as clean to read as python (i.e. I _LIKE_ meaningful indentation, which makes me weird :-), and isn't as generally glexible (it took me twice as long to write a reader for Flow Cytometry standard data file formats in R than in Python) but annotation of the resulting data is much easier in R than in Python (and default summary statistics, both numerical and graphical, are easier to work with). So, I don't think I'll be giving up R, but I am looking forward to SciPy (esp things like the sparse array work, which is much more difficult to handle in R, in a nice format). One thing that I did write was RPVM, for using PVM (LAN cluster library) with R; and patched up PyPVM so that the previous authors work actually worked :-); that is how I'm thinking of doing the interfacing. In general, R is great for pre- and post-processing small and medium sized datasets, as well as for descriptive and inferential statistics, but for custom analyses, one would still go to C, Fortran, or C++ after prototyping (much like Python). I can try to say more, but it's hard to describe a full language quickly. See http://www.r-project.org/ for more details (and for ESS, http://software.biostat.washington.edu/statsoft/ess/, if anyone is interested). HOWEVER, it doesn't have More information about the Scipy-dev mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-dev/2002-February/000502.html","timestamp":"2014-04-17T01:35:26Z","content_type":null,"content_length":"7362","record_id":"<urn:uuid:ae337ed1-f5a3-4035-a773-c8e095bced33>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
The n-Category Café Brown representability Posted by Mike Shulman Time for more debunking of pervasive mathematical myths. Quick, state the Brown representability theorem! Here’s (roughly) the original version of the theorem that Brown proved: Theorem. Let $\mathcal{H}$ denote the homotopy category of pointed connected CW complexes, and let $F:\mathcal{H}^{op}\to Set_*$ be a functor to pointed sets which takes coproducts to products, and homotopy pushouts to weak pullbacks (i.e. spans with the existence, but not the uniqueness, property of a pullback). Then $F$ is a representable functor. (The restriction to CW complexes is, of course, equivalent to saying we invert weak homotopy equivalences among all spaces.) As a graduate student, it was a revelation to me to realize that Brown representability is closely related, at least in spirit, to the Adjoint Functor Theorem. In fact, although this isn’t always made explicit, an essential step in the usual proof of the (special, dual) AFT is the following: Lemma. Let $\mathcal{C}$ be cocomplete, locally small, and well-copowered, with a small generating set, and let $F:\mathcal{C}^{op}\to Set$ be a functor that takes colimits in $\mathcal{C}$ to limits. Then $F$ is representable. Clearly there is a family resemblance. But how does Brown’s theorem get away with such weaker assumptions on limits? The category $\mathcal{H}$ is certainly not cocomplete — it only has weak colimits — but we still get a strong representability theorem, not merely a “weak” one of some sort. The answer is that Brown uses a stronger generation property, as can be seen from the following abstract version of Brown’s original theorem (also due to Brown): Theorem. Let $\mathcal{K}$ be a category with coproducts, weak pushouts, and weak sequential colimits, which admits a strongly generating set $\mathcal{G}$, closed under finite coproducts and weak pushouts, and such that for $X\in\mathcal{G}$ the hom-functor $\mathcal{K}(X,-)$ takes the weak sequential colimits to actual colimits. Then if $F:\mathcal{K}^{op}\to Set$ takes coproducts to products and weak pushouts to weak pullbacks, it is representable. Recall that $\mathcal{G}$ is a generating set if the hom-functors $\mathcal{K}(X,-)$ are jointly faithful, i.e. detect equality of parallel arrows, and a strongly generating set if these functors are moreover jointly conservative, i.e. detect invertibility of arrows. In the homotopy category of pointed connected spaces, the pointed spheres $\{S^n | n \ge 1 \}$ are of course strongly generating, since by definition, mapping out of them detects homotopy groups. Now in surprisingly many places, you may find Brown’s original representability theorem quoted without the adjective “connected”. You may even be tempted to conjecture that it should also still be true without the adjective “pointed”. However, there is no obvious strongly generating set in either of these cases! If we add $S^0$, then we can detect $\pi_0$-isomorphisms of pointed spaces, but the other pointed spheres only detect higher homotopy groups of the basepoint component. On the other hand, mapping out of something like $(S^n)_+$ doesn’t detect homotopy groups of other components either, since in that case the loops are “free” rather than based — and the unpointed spheres $S^n$ have the same problem in the unpointed homotopy category. (This should, of course, be contrasted with the fact that the $(\infty,1)$-category of unpointed spaces does have a strong generator in the $(\infty,1)$-categorical sense: namely, the point. Strong-generation is not preserved on passage to homotopy categories!) In fact, as pointed out recently on MO by Karol Szumiło (thanks Karol!), Brown representability for non-connected and unpointed spaces is false! This was proven back in 1993 by Peter Freyd and Alex Heller. Their construction begins with the group $G$ that is freely generated by an endo-homomorphism $\phi:G\to G$ and an element $b\in G$ such that $\phi(\phi(x)) = b \cdot \phi(x)\cdot b^{-1}$ for all $x\in G$. (Aside: This is a slightly unusual sort of “free generation”. Exercise for homotopy type theorists in the audience: define this group $G$ as a higher inductive type.) Now $\phi$ induces an endomorphism $B \phi :B G \to B G$, and since free homotopies of continuous maps between classifying spaces of groups correspond to conjugations, $\phi$ is idempotent in the homotopy category of unpointed spaces. Freyd and Heller proved that this idempotent does not split. (I have not attempted to understand their proof of this, but I find it believable.) Now, however, $ \phi$ induces an idempotent of the representable functor $[-,B G]$, which does split (since idempotents split in $Set$); let $F$ be the splitting. Since $F$ is a retract of a representable, which takes (weak) colimits to (weak) limits, it does the same; but it is not itself representable because $\phi$ does not split. Finally, $B \phi _+ :B G_+ \to B G_+$ gives a similar counterexample in pointed, non-connected spaces. Now the original, and probably still most common, use of the Brown representability theorem is to produce spectra which represent cohomology theories. In that case, the suspension axiom $H^n(X) \cong H^{n+1}(\Sigma X)$ (for reduced cohomology) means that the whole thing is determined by its behavior on pointed connected spaces (since $\Sigma X$ is connected), so this is not a problem. There are also different tricks that appear to work for any single functor as long as it lands in (abelian) groups rather than sets. Note also that the collection of spheres $\{S^n | n \in \mathbb{Z}\}$ is strongly generating in the category of all spectra. But outside of those domains, Brown representability is subtler than I used to think. (I was first made aware of this issue several years ago, when Peter May and Johann Sigurdsson ran into it while trying to use the abstract version cited above to produce right adjoints to derived pullback and smash product functors in parametrized homotopy theory. However, I didn’t realize until now that the nonconnected and unpointed versions were actually false even in the classical case of ordinary spaces.) Posted at August 24, 2012 5:55 PM UTC Re: Brown representability Their construction begins with the group $G$ that is freely generated by an endo-homomorphism $&#981;\colon G \to G$ and an element $b \in G$ such that $\phi(\phi(x)) = b \cdot \phi(x) \cdot b^ {-1}$ for all $x \in G$. …and that group is, if I’m not mistaken, Thompson’s group $F$. (Freyd seems to prefer not to call it Thompson’s group, because he believes he discovered it first, and maybe he did. But everyone else calls it Thompson’s group.) Posted by: Tom Leinster on August 25, 2012 1:41 AM | Permalink | Reply to this Re: Brown representability Hah, I was going to point out the appearance of F, but Tom has (fittingly) beaten me to it. Posted by: Yemon Choi on August 25, 2012 9:21 AM | Permalink | Reply to this Re: Brown representability Posted by: David Corfield on August 25, 2012 10:38 PM | Permalink | Reply to this Re: Brown representability Ah, excellent! I wasn’t sure whether this post was worth writing, but you’ve proven that it absolutely was, because otherwise I might never have realized that. Posted by: Mike Shulman on August 26, 2012 1:57 AM | Permalink | Reply to this Re: Brown representability Posted by: David Corfield on August 25, 2012 10:23 PM | Permalink | Reply to this Re: Brown representability Well, there are variants of the classical AFT. E.g. it’s enough for the category to be total, rather than cocomplete with a generating set. And there are versions for indexed categories and enriched categories, and more generally for objects in a Yoneda structure. Posted by: Mike Shulman on August 26, 2012 4:37 AM | Permalink | Reply to this Re: Brown representability That’s a very nice summary of the whole story. However, it occurred to me that the notion of strongly generating set is not the right one here. Namely, the functors represented by spheres are not jointly faithful in the homotopy category of based connected spaces. Think about any nontrivial cohomology operation of nonzero degree, it is represented by a map $K(A, m) \to K(B, n)$ with $m &lt; n$. It induces zero on homotopy groups just like the trivial map but is nontrivial. On the other hand the functors represented by spheres are jointly conservative and it seems to be all that is relevant to the Representability Theorem. I also want to point out that an earlier paper of Heller that I mentioned in the comment to your MO question also gives a counterexample in the non-based case which is even more striking since it is a half-exact functor which is not even a retract of a representable functor. I admit that I don’t really understand the construction, but it seems that the problem is that the fundamental group of the representing space would have to be proper class. Posted by: Karol Szumiło on August 25, 2012 10:41 PM | Permalink | Reply to this Re: Brown representability the notion of strongly generating set is not the right one here An excellent point. It may be worth mentioning at this point, in case some reader is unaware, that there is no unanimity in the literature on the meaning of “generating set”. I like the definitions I gave in the post above, because they are systematic: $\mathcal{G}$ is generating if for all $X$, the family of all morphisms $G\to X$ for $G\in\mathcal{G}$ is jointly epimorphic, and strongly generating if that family is jointly strong-epimorphic (or, more precisely, extremal-epimorphic — so “extremally generating” would be an even better name — but strong and extremal epimorphisms are the same in any category with pullbacks, so people are often sloppy about the difference, and I’ve decided not to try to fight that battle). However, some people say “separating set” and “generating set” where I say “generating set” and “strongly generating set”, and I don’t know whether all of these people require a generating set (in their terminology) to also be separating. (In a category with equalizers, the implication is automatic — but of course the homotopy category lacks equalizers.) It would be nice to have a word for morphisms which satisfy the property that makes an epimorphism strong, but are not necessarily themselves epic, and similarly for the sort of “strongly not-generating sets” that we see here. a counterexample in the non-based case which is even more striking since it is a half-exact functor which is not even a retract of a representable functor. … it seems that the problem is that the fundamental group of the representing space would have to be proper class. Thanks for pointing that out here! Although I’m not sure whether I find it more striking to have a functor that fails to be representable because an idempotent fails to split, or a functor that fails to be representable for cardinality reasons. I think both of them seem like rather “boring” reasons. (-: But I suppose the second example implies that Brown representability also fails in the Cauchy completion of the unbased homotopy category, which is not obvious if you’ve only seen the first example. Posted by: Mike Shulman on August 26, 2012 4:51 AM | Permalink | Reply to this Re: Brown representability It may be worth mentioning at this point, in case some reader is unaware, that there is no unanimity in the literature on the meaning of “generating set”. There is one such notion that is especially relevant to this discussion, namely generating sets in triangulated categories. A set of objects $\mathcal{G}$ of a triangulated category $\mathcal{T}$ is generating if given an object $X$ such that for all $G \in \mathcal{G}$ and $m \in \mathbb{Z}$ we have $\mathcal{T}(\Sigma^m G, X) = 0$, then it follows that $X = 0$. In the presence of coproducts this is (somewhat non-trivially) equivalent to $\mathcal{G}$ generating all objects $\mathcal{T}$ under shifts, cones and coproducts. For example in the stable homotopy category the sphere spectrum $\mathbb{S}$ is generating even though $\{\Sigma^m \mathbb{S}\}$ is not a generating set in the sense of your post because of the existence of non-trivial stable cohomology operations. The relevance is that if $\mathcal{T}$ has coproducts and a set of compact generators, then every cohomological functor $\mathcal{T}^\mathrm{op} \to \mathbf{Ab}$ is representable. The proof is a variation on the usual proof, but it works even more neatly because of the stability. Posted by: Karol Szumiło on August 27, 2012 11:08 AM | Permalink | Reply to this Re: Brown representability Isn’t that the same as this sort of “strong non-generation”, though, because a morphism in a triangulated category is an isomorphism iff its cone is zero, and hom-functors are exact? Posted by: Mike Shulman on August 27, 2012 6:24 PM | Permalink | Reply to this Re: Brown representability You are right of course. I noticed that this material is not very well covered in nLab. Maybe I will try to add something. Posted by: Karol Szumiło on August 28, 2012 10:25 AM | Permalink | Reply to this Re: Brown representability Maybe it’s worth adding another “issue” that disappears when you go stable, say the smallness of the generators. I learned that from the paper “Representability theorems for presheaves of spectra” by Jardine. It contains a version of Brown representability for any compactly generated model category (an object $x$ is compact in $C$ if $colim_i[x,y_i]\to [x,colim_i y_i]$ is a bijection in $Ho(C)$ for every (countable) sequential colimit). However, Brown representability seems to be unknown if the category is $\alpha$-compactly generated for a regular cardinal $\alpha$ (change sequential colimits for $\alpha$-sequential colimits). In the case $C$ is stable, its homotopy category will be well generated and you can proof Brown representability even without any model structure (as done in Neeman’s book). Jardine also shows that this problem disappears if we “enrich” the problem, i.e. if we start with a simplicial $\alpha$-compactly generated (cofibrantly generated pointed) model category $C$ and a simplicial functor $F\colon C^{op}\to sSet_*$ (sending w.e. to w.e and homotopy colimits to homotopy limits), then $F(-)\simeq Map(-,X)$. Posted by: Oriol Raventós on September 5, 2012 4:59 PM | Permalink | Reply to this Re: Brown representability I’d just like to explicitly cite Heller’s paper “On the representability of homotopy functors” J. London Math. Soc. (2) 23 (1981), no. 3, 551–562, which Karol already mentioned on math overflow, as his Theorem 1.3 is the most general positive result that I know, but I don’t think the paper is as well known as it should be. I should also mention that the representability of homology functors (which isn’t representability in the sense of category theory, since it involves a symmetric monoidal structure), is much more Posted by: Dan Christensen on September 26, 2012 5:18 PM | Permalink | Reply to this Re: Brown representability Alex Heller’s papers from that time were full of insights that perhaps now are better appreciated, yet you rarely see references to them. He did a lot of work on homotopy Kan extensions, for Posted by: Tim Porter on September 26, 2012 8:32 PM | Permalink | Reply to this Re: Brown representability Indeed they are! I’ve been discovering that his monograph “Homotopy theories” and later paper on stable homotopy theories are full of useful stuff about what we now call derivators. Posted by: Mike Shulman on September 26, 2012 10:36 PM | Permalink | Reply to this Re: Brown representability Yes, in that memoir Alex had some comment about Grothendieck doing something but it seems to be going in a different direction! I seem to remember telling both of them about the others work way back in 1984 or shortly after, but I do not know if the message was understood. They thought of themselves doing different things. Jean-Marc Cordier and I worked on trying to extend his proofs (including that of Brown representability) to the simplicially enriched context (as we called it then). That part of our results were never published but some of the more basic stuff did appear in the paper (Homotopy Coherent Category Theory, Trans. Amer. Math. Soc. 349 (1997) 1-54.). Posted by: Tim Porter on September 27, 2012 6:45 AM | Permalink | Reply to this Re: Brown representability How about a link to `Alex Heller’ if there is such an entry? And if it’s there, does it due justice to his work? Posted by: jim stasheff on September 27, 2012 1:37 PM | Permalink | Reply to this Re: Brown representability There is now, but it does not do justice to his contribution to the subject. I have made some links but not enough. Posted by: Tim Porter on September 28, 2012 7:56 AM | Permalink | Reply to this
{"url":"http://golem.ph.utexas.edu/category/2012/08/brown_representability.html","timestamp":"2014-04-16T07:57:07Z","content_type":null,"content_length":"60484","record_id":"<urn:uuid:5caed21b-9d51-40d0-8b6a-015a47b648f8>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding a point on a line, given another point on the same line, and knowing its dist June 8th 2010, 03:51 PM #1 May 2010 Finding a point on a line, given another point on the same line, and knowing its dist Hi there. I'm tryin to find a point, lets call it C. I'm working on a Rē. What I know is that the point belongs to the line L: $y=\displaystyle\frac{x}{2}+\displaystyle\frac{1}{2 }$ And that the distance to the point B(1,1), that belongs to L is $\sqrt[ ]{20}$. How can I find it? I know there are two points, cause of the distance over the line. Hi there. I'm tryin to find a point, lets call it C. I'm working on a Rē. What I know is that the point belongs to the line L: $y=\displaystyle\frac{x}{2}+\displaystyle\frac{1}{2 }$ And that the distance to the point B(1,1), that belongs to L is $\sqrt[ ]{20}$. How can I find it? I know there are two points, cause of the distance over the line. Hi Ulysses, A straightforward way to solve is to consider the slope of the line, 1/2, which means that for a rise of 1, we have a run of 2. The hypotenuse length was obtained using the Pythagorean theorem. Then notice that $2 * \sqrt{5} = \sqrt{4*5} = \sqrt{20}$. That means that for a run of 4 (or -4) and rise of 2 (or -2) starting from B we get to C. So C = (5, 3) or C = (-3, -1). Thanks undefined! June 8th 2010, 04:16 PM #2 June 8th 2010, 04:59 PM #3 May 2010
{"url":"http://mathhelpforum.com/algebra/148310-finding-point-line-given-another-point-same-line-knowing-its-dist.html","timestamp":"2014-04-17T19:39:46Z","content_type":null,"content_length":"38036","record_id":"<urn:uuid:e6bfb12a-b0d1-40a8-9d5c-eeb7a3f89d46>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
Factors and Multiples - Hamiltonian Path Date: 11/02/98 at 22:29:02 From: Joey Dawson Subject: Factors and Multiples We have to make a sequence of numbers, all different, each of which is a factor or a multiple of the one preceding it. The numbers in this sequence have to be from 1-100, 1 being the last number in the sequence. An example of this is: 3-15-30-10-5-35-7-21-3. We have to use as many numbers as possible. Everyone in my class has figured out the most number of numbers there has to be in the sequence. Can you help me figure out how many numbers is the most numbers there can be in the sequence? I would appreciate it very much if you could help me. Thank you very much. Your mathematician friend, Joey Dawson Date: 11/04/98 at 12:01:56 From: Doctor Peterson Subject: Re: Factors and Multiples Hi, Joey. This is a pretty big problem. I haven't found any easy solution. I'll just tell you how I've been looking at it and maybe you will be able to finish it up. I see this in terms of what we call graph theory, which has to do with things like maps that show relations between objects. To keep things simple, let's just work with numbers up to 12, rather than all the way to 100. I have drawn a diagram that shows which numbers could be adjacent in your sequence, by drawing a line between any pair of numbers one of which is a divisor of the other: / \ / \ / \ 9 6----2---8 This says that 2 divides 4, 6, 8, 10, and 12; 3 divides 6, 9, and 12; 4 divides 8 and 12 and is a multiple of 2; 5 divides 10; 6 divides 12 and is a multiple of 2 and 3; 8, 9, 10, and 12 don't divide anything but are multiples of other numbers. 7 and 11 divide nothing in this range, so they are disconnected from the rest of this "graph." Of course 1 could be connected to every number, but since it has to be last number in the sequence (the answer would be a little different if that weren't required), we don't need to show it. You can see that 7 and 11 won't be included in any sequence you can make, because there is no number other than 1 that they could be next to. In general, any prime bigger than half the limit (100 in your problem) will be neither a multiple nor a factor of any number in the The problem now is just to draw the longest path that goes through number only once. That's not hard. There's one path that will go through the whole "connected" part of the graph: 3 12---4 / \ / \ 9 6 2---8 The longest sequence, then, is: 5, 10, 2, 8, 4, 12, 6, 3, 9, 1 (or the reverse), and includes all the numbers that could possibly been in such a sequence. (This is called a "Hamiltonian path.") I expanded this to include the numbers 1-25, and found no Hamiltonian path through all the numbers. I had to leave out three numbers in addition to 13, 17, 19, and 23 that are not connected to the rest. But I'm not sure I could prove I found the longest path, without trying possible paths. With 100 numbers, it would be even harder. But at least we know a maximum size for the path (that is, which numbers _have_ to be left out), although we don't know whether we can actually make a path that size until we try. Keep playing with this, and let me know if anyone finds a way other than brute force to find the longest sequence or to prove that you can find a Hamiltonian path in the graph for 1-100. You may find some feature of these particular graphs that makes it easy to find a path. - Doctor Peterson, The Math Forum
{"url":"http://mathforum.org/library/drmath/view/54255.html","timestamp":"2014-04-16T22:13:08Z","content_type":null,"content_length":"8704","record_id":"<urn:uuid:caa33db1-e92d-4d38-bf9b-0414a831866a>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
Causal inference using the algorithmic Markov condition , 2008 "... We describe eight data sets that together formed the CauseEffectPairs task in the Causality Challenge #2: Pot-Luck competition. Each set consists of a sample of a pair of statistically dependent random variables. One variable is known to cause the other one, but this information was hidden from the ..." Cited by 8 (7 self) Add to MetaCart We describe eight data sets that together formed the CauseEffectPairs task in the Causality Challenge #2: Pot-Luck competition. Each set consists of a sample of a pair of statistically dependent random variables. One variable is known to cause the other one, but this information was hidden from the participants; the task was to identify which of the two variables was the cause and which one the effect, based upon the observed sample. The data sets were chosen such that we expect common agreement on the ground truth. Even though part of the statistical dependences may also be due to hidden common causes, common sense tells us that there is a significant cause-effect relation between the two variables in each pair. We also present baseline results using three different causal inference methods. , 2007 "... and their relation to thermodynamics ..." "... We consider two variables that are related to each other by an invertible function. While it has previously been shown that the dependence structure of the noise can provide hints to determine which of the two variables is the cause, we presently show that even in the deterministic (noise-free) case ..." Cited by 6 (2 self) Add to MetaCart We consider two variables that are related to each other by an invertible function. While it has previously been shown that the dependence structure of the noise can provide hints to determine which of the two variables is the cause, we presently show that even in the deterministic (noise-free) case, there are asymmetries that can be exploited for causal inference. Our method is based on the idea that if the function and the probability density of the cause are chosen independently, then the distribution of the effect will, in a certain sense, depend on the function. We provide a theoretical analysis of this method, showing that it also works in the low noise regime, and link it to information geometry. We report strong empirical results on various real-world data sets from different domains. 1 - In UAI , 2011 "... This work addresses the following question: Under what assumptions on the data generating process can one infer the causal graph from the joint distribution? The approach taken by conditional independencebased causal discovery methods is based on two assumptions: the Markov condition and faithfulnes ..." Cited by 3 (1 self) Add to MetaCart This work addresses the following question: Under what assumptions on the data generating process can one infer the causal graph from the joint distribution? The approach taken by conditional independencebased causal discovery methods is based on two assumptions: the Markov condition and faithfulness. It has been shown that under these assumptions the causal graph can be identified up to Markov equivalence (some arrows remain undirected) using methods like the PC algorithm. In this work we propose an alternative by defining Identifiable Functional Model Classes (IFMOCs). As our main theorem we prove that if the data generating process belongs to an IFMOC, one can identify the complete causal graph. To the best of our knowledge this is the first identifiability result of this kind that is not limited to linear functional relationships. We discuss how the IFMOC assumption and the Markov and faithfulness assumptions relate to each other and explain why we believe that the IFMOC assumption can be tested more easily on given data. We further provide a practical algorithm that recovers the causal graph from finitely many data; experiments on simulated data support the theoretical findings. 1 - In NIPS 2008 workshop on causality, volume 7. JMLR W&CP, in press, 2009a "... The NIPS 2008 workshop on causality provided a forum for researchers from different horizons to share their view on causal modeling and address the difficult question of assessing causal models. There has been a vivid debate on properly separating the notion of causality from particular models such ..." Cited by 3 (3 self) Add to MetaCart The NIPS 2008 workshop on causality provided a forum for researchers from different horizons to share their view on causal modeling and address the difficult question of assessing causal models. There has been a vivid debate on properly separating the notion of causality from particular models such as graphical models, which have been dominating the field in the past few years. Part of the workshop was dedicated to discussing the results of a challenge, which offered a wide variety of applications of causal modeling. We have regrouped in these proceedings the best papers presented. Most lectures were videotaped or recorded. All information regarding the challenge and the lectures are found at "... The causal Markov condition (CMC) is a postulate that links observations to causality. It describes the conditional independences among the observations that are entailed by a causal hypothesis in terms of a directed acyclic graph. In the conventional setting, the observations are random variables a ..." Add to MetaCart The causal Markov condition (CMC) is a postulate that links observations to causality. It describes the conditional independences among the observations that are entailed by a causal hypothesis in terms of a directed acyclic graph. In the conventional setting, the observations are random variables and the independence is a statistical one, i.e., the information content of observations is measured in terms of Shannon entropy. We formulate a generalized CMC for any kind of observations on which independence is defined via an arbitrary submodular information measure. Recently, this has been discussed for observations in terms of binary strings where information is understood in the sense of Kolmogorov complexity. Our approach enables us to find computable alternatives to Kolmogorov complexity, e.g., the length of a text after applying existing data compression schemes. We show that our CMC is justified if one restricts the attention to a class of causal mechanisms that is adapted to the respective information measure. Our justification is similar to deriving the statistical CMC from functional models of causality, where every variable is a deterministic function of its observed causes and an unobserved noise term. Our experiments on real data demonstrate the performance of compression based causal inference. 1 , 2009 "... Telling cause from effect based on high-dimensional observations ..." "... Directed acyclic graph (DAG) models are popular tools for describing causal relationships and for guiding attempts to learn them from data. They appear to supply a means of extracting causal conclusions from probabilistic conditional independence properties inferred from purely observational data. I ..." Add to MetaCart Directed acyclic graph (DAG) models are popular tools for describing causal relationships and for guiding attempts to learn them from data. They appear to supply a means of extracting causal conclusions from probabilistic conditional independence properties inferred from purely observational data. I take a critical look at this enterprise, and suggest that it is in need of more, and more explicit, methodological and philosophical justification than it typically receives. In particular, I argue for the value of a clean separation between formal causal language and intuitive causal
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=10431007","timestamp":"2014-04-16T10:54:58Z","content_type":null,"content_length":"33968","record_id":"<urn:uuid:19b42a46-9b9a-4077-8d8c-47c5b7708e88>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: June 2010 [00017] [Date Index] [Thread Index] [Author Index] Re: Expanding Integrals with constants and 'unknown' functions • To: mathgroup at smc.vnet.net • Subject: [mg110068] Re: Expanding Integrals with constants and 'unknown' functions • From: "David Park" <djmpark at comcast.net> • Date: Tue, 1 Jun 2010 04:20:45 -0400 (EDT) The Presentations package ($50) from my web site has functionality for this. The Student's Integral section allows you to write integrate[...] (small i) instead of Integrate[...] and obtain the integral in a held form. There are then commands to manipulate the integral (operating on the integrand, change of variable, integration by parts, trigonometric substitution, and breaking the integral out) and then evaluate using Integrate, NIntegrate, or a custom integral table. Also, you can pass any assumptions at the time you use Integrate, instead of when the integral is initially written, so it always formats nicely. The following code also uses another Presentations routine, MapLevelParts, that allows you to apply an operation to a subset of level parts in an expression - here the two integrals that can be evaluated without asking Mathematica to evaluate the integral with the undefined function. I copied the output as text, with box structures, so if you copy from the email back into a notebook is should format as a regular expression. integrate[a + z + s[z], {z, clow, chigh}] % // BreakoutIntegral % // MapLevelParts[UseIntegrate[], {{1, 2}}] \*SubsuperscriptBox[\(\[Integral]\), \(clow\), \(chigh\)]\(\((a + z + s[z])\) \[DifferentialD]z\)\) a \!\( \*SubsuperscriptBox[\(\[Integral]\), \(clow\), \(chigh\)]\(1 \*SubsuperscriptBox[\(\[Integral]\), \(clow\), \(chigh\)]\(z \*SubsuperscriptBox[\(\[Integral]\), \(clow\), \(chigh\)]\(s[z] chigh^2/2+a (chigh-clow)-clow^2/2+\!\( \*SubsuperscriptBox[\(\[Integral]\), \(clow\), \(chigh\)]\(s[z] David Park djmpark at comcast.net From: Jesse Perla [mailto:jesseperla at gmail.com] I have an integral involving constants and an 'unknown' function. I would like to expand it out to solve for the constants and keep the integrals of the unknown function as expected. Integrate[a + z + s[z], {z, clow, chigh}] I want to get out: (a*chigh + chigh^2/2 - a*clow - clow^2/2) + Integrate[s[z], {z, clow, However, FullSimplify, etc. don't seem to do anything with this. Any
{"url":"http://forums.wolfram.com/mathgroup/archive/2010/Jun/msg00017.html","timestamp":"2014-04-19T02:09:45Z","content_type":null,"content_length":"27397","record_id":"<urn:uuid:9f491cf3-69a1-4b71-b129-9a03883d5be4>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
Fuzzy logics as the logics of chains From Mathfuzzlog Title: Fuzzy logics as the logics of chains Journal: Fuzzy Sets and Systems Volume 157 Number 5 Pages: 604-610 Year: 2006 Download from the publisher Post-publication comments Even though we still consider our arguments valid, it seems that the proposed delimitation of the class of fuzzy logics (in the technical sense) has not become widely accepted in the community of mathematical fuzzy logic. Perhaps a more neutral term for Cintula's class of weakly implicative fuzzy logics would be more acceptable to other researchers---e.g., semilinear logics (Nick Galatos has pointed out to us that semi-X is often used in universal algebra for the property that all irreducibles are X, like in semi-simple). The best behaved among "semilinear weakly implicative logics" seem to be those which are substructural in Ono's sense (i.e., logics of residuated lattices),^[1] as they also internalize the deductive role of conjunction and implication. Since the name substructural fuzzy logics has already been taken for a slightly different concept by Metcalfe and Montagna in their JSL paper, a suitable name for this class of logics (also called deductive fuzzy logics in the paper On the difference between traditional and deductive fuzzy logic) could be "semilinear substructural logics", i.e., the logics of semilinear residuated lattices. These proposals, however, are not intended to suggest replacing the name "mathematical fuzzy logic" for the discipline of logic, and only regard the name of the formally defined class of logics (which is not regarded as coinciding with the agenda of mathematical fuzzy logic by many researchers anyway). -- LBehounek 20:10, 2 September 2008 (CEST) (The comment was inspired by a discussion with Petr Cintula, Nick Galatos, Rosťa Horčík, Petr Hájek, and Carles Noguera in Prague, August 2008. However, it only expresses my own opinion immediately after the discussion; the opinions of other participants of the discussion may differ, and my own opinion may easily change in the future.) References for this page 1. ↑ Ono, H., 2003, "Substructural logics and residuated lattices — an introduction". In F.V. Hendricks, J. Malinowski (eds.): Trends in Logic: 50 Years of Studia Logica, Trends in Logic 20:
{"url":"http://www.mathfuzzlog.org/index.php/Fuzzy_logics_as_the_logics_of_chains","timestamp":"2014-04-21T15:00:31Z","content_type":null,"content_length":"19351","record_id":"<urn:uuid:a2b382ee-0f6f-426a-bbcc-06ff94bab60a>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00613-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: January 2010 [00082] [Date Index] [Thread Index] [Author Index] Re: Difficulty with NDSolve (and DSolve) • To: mathgroup at smc.vnet.net • Subject: [mg106167] Re: Difficulty with NDSolve (and DSolve) • From: Mark McClure <mcmcclur at unca.edu> • Date: Sun, 3 Jan 2010 03:43:31 -0500 (EST) • References: <201001021004.FAA07409@smc.vnet.net> On Sat, Jan 2, 2010 at 5:04 AM, KK <kknatarajan at yahoo.com> wrote: > I am trying to numerically solve a differential equation. However, I > am encountering difficulty with Mathematica in generating valid > numerical solutions even for a special case of that equation. > The differential equation for the special case is: > F'[x] == - (2-F[x])^2/(1-2 x + x F[x]) and > F[1]==1. > These equations are defined for x in (0,1). Moreover, for my context, > I am only interested in solutions with F[x] in the range <1. Of course, the basic problem is that the solution is not one-to-one in a neighborhood of your initial condition. Thus, let's set up an initial condition y0 at x0=1/2 and try to choose y0 so that the solution passes through (1,1) - kind of like the shooting method for solving boundary value problems. We can set up a function that tells us the value of the solution at x=1 as a function of the initial condition at x0=1/2 like as follows: In[1]:= valAt1[y0_?NumericQ] := Quiet[Check[(F[x] /. NDSolve[{F'[x] == -(2 - F[x])^2/(1 - 2 x + x F[x]), F[1/2] == y0}, F[x], {x, 1/2, 1}][[1, 1]]) /. x -> 1, In[2]:= {valAt1[-4], valAt1[-3]} Out[2]= {0.208967, bad} Note that we use Check to check for error messages in case that x=1 is outside the range of the solution. The computation shows that this singularity between y0=-4 and y0=-3. We can use a bisection method to find more precisely where this happens like so: In[3]:= a = -4; In[4]:= b = -3; In[5]:= Do[ c = (a + b)/2; If[valAt1[c] === bad, b = c, a = c], In[6]:= N[a] Out[6]= -3.35669 Now, we find the solution taking y0=a. In[7]:= Clear[F]; In[8]:= F[x_] = F[x] /. NDSolve[{F'[x] == -(2 - F[x])^2/(1 - 2 x + x F[x]), F[1/2] == a}, F[x], {x, 0.001, 1}][[1]]; In[9]:= F[1] Out[9]:= 1. The computation shows that it worked. Hope that helps, Mark McClure • References:
{"url":"http://forums.wolfram.com/mathgroup/archive/2010/Jan/msg00082.html","timestamp":"2014-04-19T04:42:25Z","content_type":null,"content_length":"27250","record_id":"<urn:uuid:c36da26e-7b06-4399-b0ed-b2500ba9d6c6>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
Monte Carlo Inference is a graduate level course taught in serial with Time Series (last 16 lectures). This year it is being offered in the Lent term. It covers topics like random number generation, importance sampling, the bootstrap and jacknife for estimation and hypothesis tests, MCMC, sequential importance sampling, simulated annealing, and the EM algorithm. Course syllabus (including recommended texts) • The revision examples class will be held in MR4 on 18 May, from 4-6pm. The last three years worth of exam questions (except on uniform pseudo-random number generation) will be covered Example sheets: • Example Sheet 1 (12 Feb 2010) Examples class 24 Feb, 4-6pm in MR9 (CMS) -- Solutions • Example Sheet 2 (22 Feb 2010), requires the data file ar3.txt and optionally the code rmultnorm.R Examples class 9 March, 4-6pm in MR13 (CMS) -- Solutions • Example Sheet 3 (8 March 2010) Examples class 4 May, 4-6pm in MR11 (CMS) -- Solutions □ R code to accompany #3: boh.R • reject.R: Rejection sampling from Lecture 2 • rou.R: Ratio of Uniforms sampling from Lecture 3 • is.R: Importance Sampling from Lectures 4 and 5 • resample.R: Resampling methods (Jackknife and Bootstrap) from Lecture 8 • gibbs.R which requires gibbs_norm.R: Gibbs sampling from Lecture 9 • da.R: Data augmentation from Lecture 9 • mh.R which requires mh_norm.R: the Metropolis Hastings algorithm from Lecture 10 • ess.R which requires gibbs_norm.R and mh_norm.R: Effective sample size from Lecture 11 • rjmcmc.R: Reversible Jump MCMC from Lecture 12 • sa.R: Simulated Annealing from Lecture 14 • em.R: Expectation Maximisation from Lecture 16 Robert B. Gramacy -- 2010 straight to the top
{"url":"http://www.statslab.cam.ac.uk/~bobby/teaching.html","timestamp":"2014-04-16T10:16:44Z","content_type":null,"content_length":"17765","record_id":"<urn:uuid:e4e4a3ae-bdf8-4c69-b177-23c841007c29>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00143-ip-10-147-4-33.ec2.internal.warc.gz"}
Prime Curios!: 5 Curios Search: 5 = 12 in base 3. There is no other prime p that can be expressed as p = 12...n in base n+1, because p has n as a factor if n is odd, or n/2, if n is even. [Necula] Submitted: 2004-02-03 17:12:04; Last Modified: 2008-01-30 11:28:00.
{"url":"http://primes.utm.edu/curios/cpage/8188.html","timestamp":"2014-04-19T11:56:49Z","content_type":null,"content_length":"4616","record_id":"<urn:uuid:3c5dee9d-cfdd-419a-84d3-24c5c5325416>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
The Prime Pages: Paul Underwood A titan, as defined by Samuel Yates, is anyone who has found a titanic prime. This page provides data on those that have found these primes. The data below only reflect on the primes currently on the list. (Many of the terms that are used here are explained on another page.) Proof-code(s): p6, x43, p98, p102, p62, L97, L158, L256, p308, p367, p372, c70 E-mail address: paulunderwood@mindless.com Web page: http://www.mersenneforum.org/showpost.php?p=298027&postcount=44 Username: Underwood (entry created on 01/18/2000) Database id: 181 (entry last modified on 04/18/2014) Active primes: on current list: 7.5 (unweighted total: 13), rank by number 112 Total primes: number ever on any list: 1520.78 (unweighted total: 2977) Production score: for current list 46 (normalized: 32), total 46.8134, rank by score 103 Largest prime: 3 · 2^3136255 - 1 (944108 digits) via code L256 on 03/08/2007 Most recent: V(89849) (18778 digits) via code c70 on 01/09/2014 Entrance Rank: mean 78386.92 (minimum 12, maximum 103515) Surname: Underwood (used for alphabetizing and in codes) Unverified primes are omitted from counts and lists until verification completed.
{"url":"http://primes.utm.edu/bios/page.php?id=181","timestamp":"2014-04-18T20:44:19Z","content_type":null,"content_length":"18379","record_id":"<urn:uuid:7b18d7d0-d759-49e6-a888-2433e79b2cf7>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00636-ip-10-147-4-33.ec2.internal.warc.gz"}
Need help differentiating an expression. Answer is given just unsure of the steps. November 11th 2011, 03:00 PM #1 Nov 2009 Need help differentiating an expression. Answer is given just unsure of the steps. Hello, this problem states to differentiate the expression with respect to q. I am not sure of the steps taken to get the final result. This is an economics problem differentiating the variable cost function to get the marginal cost function. Re: Need help differentiating an expression. Answer is given just unsure of the steps So you want to differentiate: $6(\frac{q}{10k^{0.4}})^{1.67}$ with respect to q? In which case, we have: = $\frac{6q^{1.67}}{(10k^{0.4})^{1.67}}$ But the $\frac{6}{(10k^{0.4})^{1.67}}$(everything except the $q^{1.67}$) is just constant. It's a stable value, like 4 or 7 - it isn't a variable. For the sake of simplicity, I'm going to let $\frac{6}{(10k^{0.4})^{1.67}}=c$ When we differentiate $4x^n$ with respect to $x$, we get: $4nx^{n-1}$. When we differentiate $ax^n$ with respect to $x$, we get: $anx^{n-1}$ We have: $c\times~q^{1.67}$ So, differentiating, we get: $1.67c\times~q^{0.67}$ Then, slipping c back in as $\frac{6}{(10k^{0.4})^{1.67}}$, we have: $\frac{1.67\times~6\times~q^{0.67}}{(10k^{0.4})^{1. 67}}$ and $1.67\times~6=10.02$ which is where that number arises. November 11th 2011, 03:31 PM #2
{"url":"http://mathhelpforum.com/calculus/191672-need-help-differentiating-expression-answer-given-just-unsure-steps.html","timestamp":"2014-04-16T11:03:34Z","content_type":null,"content_length":"37387","record_id":"<urn:uuid:baa9c500-7e21-4fbe-ae72-72c113a4954b>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
The graduate student seminar meets from 3:30 to 4:30 pm Thursdays in Goldsmith 317. Want to give a talk? Email Steve Hermes at srhermes@etc... for details. January 17 Organizational Meeting January 24 A Gentle Introduction to A[∞]-Algebras Stephen Hermes In topology, the singular cochain complex of a space comes with some extra structure which allows us to define an associative product called the cup product. This structure of a chain complex with an associative product appears throughout various branches of mathematics, leading us to the notion of a differential graded algebra (dga). Unfortunately, these associative products are not well-behaved with respect to homotopy: a homotopy equivalent chain complex may not inherit an associative product from one that does. In this talk we will discuss how we can alleviate this problem by introducing the notion of A[∞]-algebras. Along the way we will see how to cure all sorts of non-ideal behaviour of dgas. January 31 Ramsey Theory Nick Stevenson By the pigeonhole principle of Dirchlet, if we partition 'many' objects into a 'small' number of classes, there will be 'many' objects contained in a single class. Suppose that these objects are now interrelated in a 'small' number of ways. Given a set of 'many' such objects, will there be 'many' objects all of which are related in a single way? Ramsey Theory studies this question. I will begin by surveying the Ramsey theory of the edge colourings of finite graphs and hypergraphs, then moving on to partition calculus (the Ramsey theory of infinite sets), an important branch of set theory. February 7 Kedlaya's Point Counting Algorithm Andrija Peruničić Let X be a hyperelliptic curve of genus g defined over a finite field. Kedlaya developed an algorithm for (quickly) counting the number of points on this curve. It can also be used for computing its zeta function. The goal of my talk will be to describe this algorithm. First, I will describe how to lift objects associated to the curve to a p-adic setting. The advantage of doing so is that there is a well-behaved cohomology theory, called Monsky-Washnitzer cohomology, which allows us to use the Weil conjectures. With their help, we will be able to explicitly compute the zeta function of the curve under consideration. You will be able to follow the algorithm even if you've never dealt with the p-adic numbers before, and are only now learning about cohomology! February 14 CAT(0) Cube Complexes and Hyperplanes Mike Carr Department policy states that graduate students must see the definitions of the CAT(0) condition and of Right-Angled Artin Groups at least once per semester. In addition to fulfilling our quota, we will describe the relationship between a cube complex and its hyperplanes. Finally, we'll outline an application: a "geometric" proof of a theorem from combinatorial group theory. February 28 No Seminar No Seminar due to Brandeis-Harvard-MIT-Northeastern Colloquium March 7 No Seminar March 14 π Day Chris Ohrt March 21 No Seminar No Seminar due to Brandeis-Harvard-MIT-Northeastern Colloquium April 4 Lower Bounds for Knot Genus via Contact Topology Katherine Raoux Computing the genus of an arbitrary knot can be difficult. Adding a contact structure to R^3, allows us to compute a lower bound for the genus of an arbitrary Seifert surface, of any Legendrian knot. Since any knot can be nicely approximated by a Legendrian knot, this is also a lower bound for the genus of the knot. I will show that the trefoil is not the unknot. April 11 I Got 99 Problems but a Word Ain't One: A Geometric Method to Solve the Word Problem in Right-Angled Artin Groups Matt Cordes Given a finitely generated group, the word problem asks if there is an algorithm to determine if a word in the generators of the group is the identity. The word problem is not solvable in general, but it is in the case of right-angled Artin groups. I will give a geometrically flavored proof of this fact. April 18 Alyson Burchardt April 25 John Bergdall Previous Semesters
{"url":"http://people.brandeis.edu/~srhermes/gss.html","timestamp":"2014-04-19T04:51:42Z","content_type":null,"content_length":"5723","record_id":"<urn:uuid:c01c5caa-98a2-4d1c-9779-e67c0dca6a0a>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00455-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Re: Needing a Random List of Non-repeating Values whose Range and Length Replies: 0 Re: Needing a Random List of Non-repeating Values whose Range and Length Posted: Jan 19, 2013 1:13 AM On 1/18/13 at 12:54 AM, brtubb090530@cox.net wrote: >Consider the following: >ril[range_Integer]:= RandomInteger[{1,range},range] >This function is almost what I want; but I need one which doesn't >include any repeated values. This is intended for use, for example, >for a card deck, or dice, etc. Perhaps you should look at RandomSample. For example, if you associate the integers from 1 to 52 with the 52 possible cards (excluding jokers) in a deck would return 5 values representing a random 5 card draw from the deck
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2430024&messageID=8109711","timestamp":"2014-04-16T11:27:08Z","content_type":null,"content_length":"14228","record_id":"<urn:uuid:59e5ac27-59c5-48b1-9ba4-d023bc6ee767>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: Return to List Dynamics of Nonholonomic Systems &nbsp &nbsp &nbsp &nbsp &nbsp &nbsp &nbsp Translations of The goal of this book is to give a comprehensive and systematic exposition of the mechanics of nonholonomic systems, including the kinematics and dynamics of nonholonomic systems Mathematical with classical nonholonomic constraints, the theory of stability of nonholonomic systems, technical problems of the directional stability of rolling systems, and the general Monographs theory of electrical machines. The book contains a large number of examples and illustrations. 1972; 518 pp; Volume: 33 reprinted 2004 List Price: US$135 Member Price: Order Code: MMONO/
{"url":"http://ams.org/bookstore?fn=20&arg1=mmonoseries&ikey=MMONO-33-S","timestamp":"2014-04-20T16:08:46Z","content_type":null,"content_length":"13976","record_id":"<urn:uuid:49adb929-2677-4bcf-be45-45d110bfefa3>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
Index of /sandbox/latex_directive System Message: ERROR/3 (../sandbox/latex_directive/README.txt, line 5) Unknown interpreted text role "latex". .. role:: m(latex) Role and directive There is a role called latex that can be used for inline latex expressions: :latex:`$\psi(r) = \exp(-2r)$` will produce :m:`$\psi(r)=\exp(-2r)$`. Inside the back-tics you can write anything you would write in a LaTeX ducument. System Message: ERROR/3 (../sandbox/latex_directive/README.txt, line 16); backlink Unknown interpreted text role "m". For producing displayed math (like an equation* environment in a LaTeX document) there is a latex role. If you write: .. latex-math:: $\psi(r) = e^{-2r}$ you will get: System Message: ERROR/3 (../sandbox/latex_directive/README.txt, line 30) Unknown directive type "latex". .. latex:: $\psi(r) = e^{-2r}$
{"url":"http://docutils.sourceforge.net/sandbox/latex_directive/","timestamp":"2014-04-17T03:56:07Z","content_type":null,"content_length":"5519","record_id":"<urn:uuid:d077fd93-be9e-47ce-b6da-e35281092375>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
e^(i*pi) = -1: pi = 0 ? Date: 10/17/97 at 11:29:15 From: John K. Koehler Subject: Weird tricks Dr. Math, I know that from a certain trig identity we get the equation e^(i*pi) = -1. I have been playing around with this equation and have found some disturbing things. I hope that you can help me. e^(2*i*pi) = 1 e^(-2*pi) = 1 (raised both sides to i and 1^n is 1) -2*pi = 0 (took the ln of both sides and ln(1) = 0) pi = 0 ! i*pi = ln(-1) {1} ln(-n) = ln(-1) + ln(n) therefore by {1} ln(-n) = i*pi + ln(n) i^2 = -1 e^(i*pi) = i^2 e^(-pi) = i^(2i) e^(-pi/2) = i^i e^(-pi/2) is a real number and therefore so is i^i Can you please tell me why I get such ridiculus results? I asked my calc2 teacher last year - but all he could tell me was that imaginary numbers don't work like reals, and I still don't see how that would change anything. Thanks in advance, Date: 10/17/97 at 13:08:36 From: Doctor Bombelli Subject: Re: Weird tricks Well, John, not all the results you have are ridiculous! In fact, you and your teacher are both right; complex numbers don't always work like real numbers, but sometimes they do. Complex numbers can be written in many equivalent ways: z = x+iy z = re^*(it) where t is the angle the point (x,y) makes with the positive x axis. t is called the argument of z--arg(z) z = r[cos(t)+i sin(t)] In fact, the definition of e^(it) is cos(t)+i sin(t). This is why we have e^(-ipi) = cos(pi)+i sin(pi)= -1. We would like w = log(z) and z = e^w to match up, just as for real Let z = re^(it) and w = u+iv. z = e^w = e^(u+iv) = e^u*e^(iv) and z is also re(^it) Match the real part and the "i part" (the imaginary part). r = e^u and e^(it) = e^(iv) So u = ln(r) and v can be any value with cos(v) = cos(t) [since e^(it) is cos(t)+i sin(t)]. So we say, for complex numbers, ln(z)=ln(r)+i arg(z)+i2kpi (k an integer). In fact, ln(z) has many values. We also define, in this way, that z^a = e^[a ln(z)] when a is complex. (Check that this works for real numbers a, also.) Note that 1^a = e^[ln(1)] = e^[0+i2kpi], so only one of the answers is e^0 = 1. Here is your problem: e^(2*i*pi) = 1 e^(-2*pi) = 1 (raised both sides to i and 1^n is 1) -2*pi = 0 (took the ln of both sides and ln(1) = 0) pi = 0 In steps 2 and 3 you don't have one-to-one functions any more. (The reason e^x = 1 implies x = 0 is because the real logarithm function is one-to-one. It is kind of like why x^3 = 1 gives x = 1 and x^2 = 1 gives x = 1 or -1. The cube root is one-to-one, but the square root isn't.) Now in your second experiment: i*pi = ln(-1) {1} ln(-n) = ln(-1) + ln(n) therefore by {1} ln(-n) = i*pi + ln(n) This is okay. ln(-n) = ln(n) + i pi + i 2kpi from the definition above. In your third experiment: i^2 = -1 e^(i*pi) = i^2 e^(-pi) = i^(2i) e^(-pi/2) = i^i e^(-pi/2) is a real number and therefore so is i^i i^i is e^[i ln(i)], and ln(i) is ln(1) + i pi/2 + i 2kpi So... i^i = e^[0 + i^2 pi/2+ i^2 2kpi] = e^[- pi/2 - 2kpi] which is a real number. I commend you on playing around with this stuff. That is how new mathematics is learned, on all levels. -Doctor Bombelli, The Math Forum Check out our web site! http://mathforum.org/dr.math/
{"url":"http://mathforum.org/library/drmath/view/53907.html","timestamp":"2014-04-16T17:22:39Z","content_type":null,"content_length":"8387","record_id":"<urn:uuid:4fc767a4-418e-4d6c-8678-c3d9113e656a>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00381-ip-10-147-4-33.ec2.internal.warc.gz"}
fuzzy logic library c code Abstract: 9221 0 Fax: +49 6131 9221 33 Email: info@phytec.de AS FLDE now Fuzzy logic development environment with code generation for C and Siemens 80C991 80C991 12-bit fuzzy logic coprocessor. SYNDESIS , RTRCD-166 RTRCD-166 now Real-Time Remote Cross Debugger for Fuzzy Logic Systems on 16 bit 3 3964R 9221 fuzzyTECH actia C166 D3830 siemens family 80 code for FUZZY Siemens , fuzzy logic design environment for all 16 bit Siemens microcontrollers. Generates optimized Original pages, MICROCONTROLLER PHYTEC c167 fuzzy logic controller FUZZY MICROCONTROLLER fuzzy assembly , specific Software (CAN, Fuzzy, .) now HighTec EDVAS PXprofi PXcan Systeme GmbH PXtcp, 12.91 logic library c code datasheet abstract PXnfs, PXfile ... Kb Abstract: code needed to set up the library calls is about 3000 bytes. Many have argued that using fuzzy logic , hot with a degree of membership of 0.8. Fuzzy Logic, Changing Our View Automotive: MCS®51 Controller Icc Specification Notice Verifying ROM Code on Locked 8XC196Kx/Jx Family Devices 8 Fuzzy Logic , membership function data and the fuzzy system code. The library code is fixed in size. OCR Scan pages, intel 80286 TECHNICAL 80286 schematic datasheet abstract The equation in Figure , Corporation 1 What is Fuzzy Logic? Consider the statement "it is very hot out 4064.5 side." It is not possible to ... Kb Abstract: Providing Comprehensive and Efficient Solutions for Implementing Fuzzy Logic Controls in a Wide Range of Industries M enta logic S Y S T E M S I N C . "Fuzzy Technology in Real Systems ' , tools and advanced control systems using fuzzy logic, neural networks and other intelligent , 12 projects based on fuzzy logic general purpose 3 phases AC industrial code for Implementing Fuzzy Logic Control Systems. OUR PRODUCTS A ll our products come w ith user friendly OCR Scan pages, FUZZY MICROCONTROLLER FUZZY MICROCONTROLLER fuzzy logic control microchip software , product development. M enta logit S Y S T E M S I N C . Fuzzy Controlled AC/DC Drives The 6219.93 datasheet abstract MSI ... Kb Abstract: instruction hence eliminating bit manipulations. OPTIMIZATION OF FUZZY CODE Inform 8XC196 8XC196 fuzzy logic , Fuzzy Logic Control with the Intel 8XC196 8XC196 Embedded Microcontroller Navin intel 80c196 INSTRUCTION SET fuzzy controller fuzzy fuzzy logic motor code Govind Senior Systems Engineer Intel Corporation Chandler, AZ Abstract: Fuzzy logic control is being 11 8XC196 instruction set handbook fuzzy logic fuzzy logic library c code scr increasingly applied to , time code in assembly and C, with short development time, are shown for the Original pages, firing METHODS intel 80c196 microcontroller motor 5 hp DC motor speed control Intel 8XC196 8XC196 , FUZZY LOGIC Fuzzy logic is being increasingly used over a wide range of areas 52.43 using scr FUZZY MICROCONTROLLER code for FUZZY MICROCONTROLLER 8XC196 8XC196 such as industrial ... Kb 8XC196 abstract Abstract: 1.1, W.A.R.P. 2.0, MATLAB and ANSI C DESCRIPTION Adaptive Fuzzy Modeller (AFM) is a tool that easily allows to obtain a model of a system based on Fuzzy Logic data structure, starting from the , describing the membership functions. The generated Fuzzy Logic knowledge base represents an 4 W.A.R.P. 1.1 PLCC68 matlab fuzzy set fuzzy logic library c code fuzzy logic c optimized , files to FUZZYSTUDIOTM project files, MATLAB and C code, in order to use this environment Original pages, code matlab simulator communication datasheet abstract as a support , different sub-menus: LOCAL RULES it allows to add new rules to the fuzzy logic 69.97 knowledge base determined ... Kb Abstract: FUZZYSTUDIOTM1.0 - W.A.R.P.2.0 using FUZZYSTUDIOTM2.0 - MATLAB - C Language - Fu.L.L. (Fuzzy Logic Language). , 1.1, W.A.R.P. 2.0, MATLAB and ANSI C DESCRIPTION Adaptive Fuzzy Modeller (AFM) is a tool that easily allows to obtain a model of a system based on Fuzzy Logic data structure, starting 4 PLCC68 matlab simulator communication matlab fuzzy logic library c code from the , describing the membership functions. The generated Fuzzy Logic knowledge base represents an Original pages, W.A.R.P. 1.1 matlab fuzzy set datasheet abstract optimized , files to FUZZYSTUDIOTM project files, MATLAB and C code, in order to use this environment 88.42 as a support ... Kb Abstract: Disk Information). Q U ESTIO N #1: Which of the following is NOT characteristics of fuzzy logic? A)B)C , a fuzzy logic model can be proven using A)B)C)D)Liapunov's direct method. Nyquist's stability , complex non-linear systems with well understood behavior? A)B)C)Neural networks. Fuzzy 7 logic. Conventional , behavior and difficult to model? A)B)C)Neural networks. Fuzzy logic. OCR Scan pages, fuzzy logic c code datasheet abstract Conventional approaches. , process typically. A)B)C)D)applies rules to crisp applies rules to fuzzy 71.14 applies rules to fuzzy applies ... Kb Abstract: Controller Basic Structure Fuzzy Logic Controller + Fuzzy Knowledge Base (Fuzzy Inference) e solar converter dc to dc diagram SUN POSITION SENSOR light sensor interface , 17. Solar Energy Fuzzy Control System Structure Fuzzy Logic Controller + Fuzzy Knowledge Base , C ds 30 with 8051 Solar Charge Controller driver circuits Solar Sensor Light Intensity 10K 4 19 9 6 7 8 10 C S R D W R C IN LK C O T LK U R /2 EF IN + , it in our Nios II control system. Original pages, Solar Light Controller 4 pin solar tracker circuits solar tracking sensor Fuzzy Logic Control Design Since professor L. A. Zadeh of the , Fuzzy Logic Controller Structure 1274.92 Future scope of UART using Verilog 8051 PROGRAM C step motor solar panel Figure 16 shows the fuzzy logic controller (FLC) structure. The FLC is ... Kb circuit diagram datasheet abstract Abstract: of variables and rules in Fuzzy Logic Language (FLL). The code provided by Compiler depends on , to build projects targeted to W.A.R.P. processor, using the Fuzzy Logic terminology, in a user , 10 ST90E40 processor 80386 25-PIN 20-PIN RS232 mouse diagram fuzzy logic library c files containing a description, in Fuzzy Logic Language of all information regarding variables and , Original pages, code LED simulation Matlab RS232-C RS232-C abstract quite similar to the usage of "MATLAB" exporter also with respect to library functions. C Language , 122.71 fact, a fuzzy project can be stored on the Weight Associative Rule Processor located on the board. ... Kb Abstract: desired height, is transmitted to the Levitating a Beach Ball Using Fuzzy Logic Wanting a hot trade , hovering near the top of a large plastic tube, they do it all with fuzzy logic. hen we took on the task of , fast So, out of desperation, we thought we interface (SCI), pos m ed fast would 12 ultrasonic transducer polaroid PI fuzzy c source code fuzzy logic library pic c put fuzzy logic to the test. We and A/D very fast pos big wanted to see if fuzzy logic would converter OCR Scan pages, code FUZZY pic MICROCONTROLLER pwm datasheet abstract make Table 1- To , RS-232 RS-232 link. The fuzzy logic fuzzy logic control was Inform algorithm, 3336.66 running on the PC, calcu ... Kb
{"url":"http://www.datasheetarchive.com/fuzzy%20logic%20library%20%20c%20code-datasheet.html","timestamp":"2014-04-19T07:36:33Z","content_type":null,"content_length":"53895","record_id":"<urn:uuid:5c2e1908-39d3-40df-8ebc-0aff58fc15ca>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
Braingle: 'Consecutive numbers' Brain Teaser Consecutive numbers Math brain teasers require computations to solve. Puzzle ID: #14553 Category: Math Submitted By: Gerd Between 1000 and 2000 you can get each integer as the sum of nonnegative consecutive integers. For example, 147+148+149+150+151+152+153 = 1050 There is only one number that you cannot get. What is this number? Show Hint Show Answer What Next?
{"url":"http://www.braingle.com/brainteasers/teaser.php?id=14553&comm=0","timestamp":"2014-04-17T01:37:25Z","content_type":null,"content_length":"23205","record_id":"<urn:uuid:5338add8-e6e0-4c18-92e5-d3d94e1cafab>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
Programming project help? 10-08-2008 #1 Registered User Join Date Oct 2008 Programming project help? Hello, I am really confused as to how to start my project that I was given for my C++ course. The instructions are: Write a function declaration for a function that computes interest on a credit card account balance. The function takes arguments for the initial balance, the monthly interest rate, and the number of months for which interest must be paid. The value returned is the interest due. Do not forgetto compound the interest- that is, to charge interest on the interest due. This interest due is added into the balance due, and the interest for the next month is computed using this larger balance. Use a while loop. Embed the function in a program that reads the values for the interest rate, initial account balance, and number of months, then outputs ths interest due. Embed your function definition in a program that lets thge user compute the interest due on the credit account balance. The program should allow the user to repeat the calculation until the user said he or she wants to end the program. The information that I am given through my instructor is: Read the description carefully, insure that you allow the user a choice to run the program or exit. The function prototype is: double interest (double initBalance, double rate, int months); Your initial output to the user should be something like this: Credit card interest Enter: initial balance, monthly interest rate as a decimal fraction, e.g. for 1.5% per month write 0.015 and the number of months the bill has run. I will give you the interest that has accumulated. 100 .1 2 Interest accumulated = $21.00 Y or y repeats, any other character quits Credit card interest Enter: initial balance, monthly interest rate as a decimal fraction, e.g. for 1.5% per month write 0.015 and months the bill has run. I will give you the interest that has accumulated. 100 .1 3 Interest accumulated = $33.10 Y or y repeats, any other character quits Therefore, I am confused as to how to start with the function prototype that he gave me to start with... double interest (double initBalance, double rate, int months); Is there a possibility that anyone can lead me in the right direction to start with? Thank you very much, Debbie Bremer The prototype looks good. Surely you can program a function that compounds interest. I dont know how to type in the code for coompound interest. Basically right now, I have the program to display the instructions...but as far as the math goes for interest...im confused. Can anyone help? Google for "compound interest calculation", perhaps? Compilers can produce warnings - make the compiler programmers happy: Use them! Please don't PM me for help - and no, I don't do help over instant messengers. How would you calculate it by hand? Write it out in non-code, then put it in code. #include <cmath> double interest (double initBalance, double rate, int months) return initBalance * pow(M_E, rate*months); This will do continuously compounded interest... but I am no economics expert so its the formula I know better. Of course this is a start, and just a mere example since this is NOT the formula you were asked to use. For some reason I cannot get my math to work. This is what I have so far: #include <iostream> using namespace std; int main ( ) int months, count = 1; double init_Balance, rate, interest = 0, new_balance, total_Interest = 0, int_Accumulated; char repeats; total_Interest = 0; cout << " Credit card interest\n "; cout << "Enter: Initial balance, monthly interest rate as a decimal fraction, e.g. for 1.5&#37; per month write 0.015, and the number of months the bill has run.\n "; cout << "I will give you the interest that has accumulated.\n "; cin >> init_Balance, rate, months; for ( int count; count <= months; count++) interest = ( rate * init_Balance ); new_balance = ( init_Balance + interest ); total_Interest = ( interest + ); cout << count ++; cout << "Interest accumulated = $\n"; cin >> int_Accumulated; cout << "Y or y repeats, any other character quits. "; }while ( repeats != 'Y' && repeats != 'y' ); return 0; Please indent your code. You cannot possible ask us to read your unreadable code mess. For information on how to enable C++11 on your compiler, look here. よく聞くがいい!私は天才だからね! ^_^ ok sorry about that. Well? Edit your original post and indent. All the buzzt! "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code." - Flon's Law #include <iostream> using namespace std; int main ( ) int months, count = 1; double init_Balance, rate, interest = 0, new_balance, total_Interest = 0, int_Accumulated; char repeats; total_Interest = 0; cout << " Credit card interest\n "; cout << "Enter: Initial balance, monthly interest rate as a decimal fraction, e.g. for 1.5&#37; per month write 0.015, and the number of months the bill has run.\n "; cout << "I will give you the interest that has accumulated.\n "; cin >> init_Balance >> rate >> months; for ( int count; count <= months; count++) interest = ( rate * init_Balance ); new_balance = ( init_Balance + interest ); total_Interest = ( interest + ); cout << count ++; cout << "Interest accumulated = $\n"; cin >> int_Accumulated; cout << "Y or y repeats, any other character quits. "; }while ( repeats != 'Y' && repeats != 'y' ); return 0; The code I just posted will not compute my math in the program. can anyone help me fix my problem? You don't initialize count in the for-loop, which means it has a pretty much random initial value - very likely more than months, so the loop never runs. All the buzzt! "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code." - Flon's Law im a little confused by what you mean. does that mean i usea different variable there? also, im not even sure if my math equations are right? for ( int count; count <= months; count++) Should be: for ( int count = 0; count <= months; count++) Also, if you want to execute the loop once for every month, you need to compare with <, not <=. total_Interest = ( interest + ); This line is meaningless and syntactically incorrect. new_balance = ( init_Balance + interest ); You overwrite new_balance every iteration without ever doing anything to the value. All the buzzt! "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code." - Flon's Law 10-08-2008 #2 10-08-2008 #3 Registered User Join Date Oct 2008 10-08-2008 #4 Kernel hacker Join Date Jul 2007 Farncombe, Surrey, England 10-08-2008 #5 Registered User Join Date Oct 2001 10-08-2008 #6 10-10-2008 #7 Registered User Join Date Oct 2008 10-10-2008 #8 10-10-2008 #9 Registered User Join Date Oct 2008 10-10-2008 #10 10-10-2008 #11 Registered User Join Date Oct 2008 10-10-2008 #12 Registered User Join Date Oct 2008 10-10-2008 #13 10-10-2008 #14 Registered User Join Date Oct 2008 10-10-2008 #15
{"url":"http://cboard.cprogramming.com/cplusplus-programming/107885-programming-project-help.html","timestamp":"2014-04-17T18:56:49Z","content_type":null,"content_length":"101124","record_id":"<urn:uuid:d81e864c-bb18-42ac-a906-f20286f3b0c6>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantitative Methods Optimal Solution a-j 1. 422410 Quantitative Methods Optimal Solution a-j Quality Air conditioning manufactures three hoe air conditioners: an economy model, a standard model, and a deluxe model. The profits per unit are $63, $95, and $135, respectively. The production requirements per unit are as follows: Number of Fans Number of Cooling Coils Manufacturing Time Economy 1 1 8 hours Standard 1 2 12 hours Deluxe 1 4 14 hours For the coming production period, the company has 200 fan motors, 320 cooling coils, and 2400 hours of manufacturing time available. How many economy models (E), standard models (S), and deluxe models (D) should the company produce in order to maximize profit? The linear programming model for the problem is as follows: Max 63E + 95S + 135D 1E + 1S + 1D < 200 Fan motors 1E + 2S + 4D < 320 Cooling coils 8E + 12 S + 14 D < 2400 Manufacturing Time E, S, D, > 0 Use the 100% rule for objective function coefficients and right hand side ranges where appropriate. Do not run the changed model. Assume that any changes given in a of the problem are the only changes being made in the model a) What is the optimal solution? What values of the decision variables lead to it? (3 points) I believe this is the right answer for this first question, the rest I am not sure of. E = 80, S = 120, D = 0. Profit= $16,440 b) Suppose the coefficient of E is changed to 67. Will the optimal solution change? Why or why not?(3 points) c) Suppose the coefficient of D is changed to 167 . Will the optimal solution change? Why or why not? (3 points) d) Suppose the coefficient of E is changed to 70.5, that of S to 90.5 and that of D to 130. Will the optimal solution change? Why or why not? (6 points) e) Suppose the coefficient of E is changed to 60.25, that of S to 97 and that of D to 142. Will the optimal solution change? Why or why not? (6 points) f) Which resource is not completely used? (3 points) g) Suppose the number of available fan motors is increased to 225. By how much will Quality AirĂ¢??s profit increase? (5 points) h) Suppose the number of available cooling coils is increased to 375. By how much will Quality AirĂ¢??s profits increase? (5 points) i) Suppose the number of available fan motors is changed to 215, that of cooling coils to 310 and the number of available manufacturing hours to 2350. Will the dual prices of those constraints change? Why or why not? (7 points) j) Suppose number of available fan motors is changed to 185, the number of cooling coils to 345 and the number of available manufacturing hours to 2275. Will the dual prices of those constraints change? Why or why not? (7 points) Quantitative Methods Optimal Solution a-j
{"url":"https://brainmass.com/math/linear-programming/422410","timestamp":"2014-04-17T16:19:47Z","content_type":null,"content_length":"31846","record_id":"<urn:uuid:23dd2d0c-ed88-43e9-8fa9-7a677186b484>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
Seeding random number generators Java tutorials home java.util.Random Random number generators XORShift High quality random Seeding generators Entropy SecureRandom Random sampling Random simulations and nextGaussian() Seeding random number generators In our discussion of random number generators so far, we've seen the generator's period as a major limiting factor in "how much randomness" it can generate. But an equally important— and sometimes more important— issue is how we seed or initialise a random number generator. Let's take, for example, Java's standard generator java.util.Random, which has a period of 2^48. We can imagine this as a huge wheel with 2^48 randomly-numbered notches. Whenever we create an instance of java.util.Random, the wheel starts in a "random" place, and moves round by one notch every time we generate a number from that instance. Wherever we start from, we'll end up back at the same place after generating 2^48 numbers. If the place where we start is "truly random", then a sequence length of 2^48 may be sufficient for our application. But how do we pick a random place on the wheel to start from? The random place that we start from is in effect the seed or initial state of the generator. For this, we ideally want to pick a number (or some sequence of bits) that is "truly unpredictable". Or put another way, we want to find some source of entropy (or "true unpredictability") available to the program. System clocks and System.nanoTime() The traditional solution used by java.lang.Random is to use one of the system clocks that the computer makes available. In recent versions of the JVM, the measure taken is that reported by System.nanoTime(), which typically gives the number of nanoseconds since the computer was switched on. (In older versions of the JVM, or possibly as a fallback position on some platforms, System.currentTimeMillis() is used, which reports the current "wall clock" time in milliseconds since 1 Jan 1970.) So how good is System.nanoTime()? Well, on the face of it, it's a 64-bit value, so ample for generating starting points for all 2^48 possible sequences. But now consider that there are "only" 10^9 nanoseconds every second. So in the first five minutes that the computer is switched on, "only" 3x10^11— or about 2^38— possible values could be returned by System.nanoTime(). In other words, in the first 5 minutes of the computer being on, only about one thousandth of the possible series of java.lang.Random will ever be generated. (Of course, if Windows takes three of those five minutes to boot up, and/or if the system cannot actually report timings with nanosecond granularity, that reduces the number of possibilities even further...) For many casual applications, 2^38 possibilities or something in that order is still plenty. But we certainly wouldn't want to use System.nanoTime() to seed, say, a random encryption key, or a "serious" game or gambling application where a user's ability to guess the random sequence could result in financial loss. Of course, for such applications, we also shouldn't be using java.lang.Random. But the problem of seed selection still applies. Even if we have a high-quality random number generator with a period of, say, 2^160, that period doesn't really buy us "extra randomness" if we are only able to generate, say, 2^38 distinct seeds and/or an adversary can make further predictions about the seed. Looking for other sources of entropy To get more randomness in the choice of the generator's "initial position", we need to look for more sources of entropy on the local machine. On the next page, we discuss approaches to finding entropy for seeding a random number generator. Written by Neil Coffey. Copyright © Javamex UK 2013. All rights reserved.
{"url":"http://javamex.com/tutorials/random_numbers/seeding.shtml","timestamp":"2014-04-18T20:46:41Z","content_type":null,"content_length":"8975","record_id":"<urn:uuid:cf8efe02-708e-461d-9a0c-638c5fd5132e>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project Second Droz-Farny Circle Let ABC be a triangle. Draw the circles about the midpoints of AB, AC, and BC that pass through the orthocenter (the point of intersection of the triangle's altitudes). These three circles intersect the sides of ABC or their extensions in six points. Then those points lie on a circle (the second Droz-Farny circle).
{"url":"http://demonstrations.wolfram.com/SecondDrozFarnyCircle/","timestamp":"2014-04-19T15:17:56Z","content_type":null,"content_length":"42139","record_id":"<urn:uuid:3c0de290-738a-47b2-9b3a-d8483200de03>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00563-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 12 , 1998 "... . Rational terms (possibly infinite terms with finitely many subterms) can be represented in a finite way via -terms, that is, terms over a signature extended with self-instantiation operators. For example, f ! = f(f(f(: : :))) can be represented as x :f(x) (or also as x :f(f(x)), f(x :f(x)), ..." Cited by 21 (12 self) Add to MetaCart . Rational terms (possibly infinite terms with finitely many subterms) can be represented in a finite way via -terms, that is, terms over a signature extended with self-instantiation operators. For example, f ! = f(f(f(: : :))) can be represented as x :f(x) (or also as x :f(f(x)), f(x :f(x)), . . . ). Now, if we reduce a -term t to s via a rewriting rule using standard notions of the theory of Term Rewriting Systems, how are the rational terms corresponding to t and to s related? We answer to this question in a satisfactory way, resorting to the definition of infinite parallel rewriting proposed in [7]. We also provide a simple, algebraic description of -term rewriting through a variation of Meseguer's Rewriting Logic formalism. 1 Introduction Rational terms are possibly infinite terms with a finite set of subterms. They show up in a natural way in Theoretical Computer Science whenever some finite cyclic structures are of concern (for example data flow diagrams, cyclic te... - CGH , 1997 "... Acyclic Term Graphs are able to represent terms with sharing, and the relationship between Term Graph Rewriting (TGR) and Term Rewrtiting (TR) is now well understood [BvEG + 87, HP91]. During the last years, some researchers considered the extension of TGR to possibly cyclic term graphs, which ..." Cited by 20 (6 self) Add to MetaCart Acyclic Term Graphs are able to represent terms with sharing, and the relationship between Term Graph Rewriting (TGR) and Term Rewrtiting (TR) is now well understood [BvEG + 87, HP91]. During the last years, some researchers considered the extension of TGR to possibly cyclic term graphs, which can represent possibly infinite, rational terms. In [KKSdV94] the authors formalize the classical relationship between TGR and TR as an "adequate mapping" between rewriting systems, and extend it by proving that unraveling is an adequate mapping from cyclic TGR to rational, infinitary term rewriting: In fact, a single graph reduction may correspond to an infinite sequence of term reductions. Using the same notions, we propose a different adequacy result, showing that unraveling is an adequate mapping from cyclic TGR to rational parallel term rewriting, where at each reduction infinitely many rules can be applied in parallel. We also argue that our adequacy result is more , 1999 "... . We present a categorical formulation of the rewriting of possibly cyclic term graphs, based on a variation of algebraic 2-theories. We show that this presentation is equivalent to the well-accepted operational definition proposed by Barendregt et alii---but for the case of circular redexes, fo ..." Cited by 12 (6 self) Add to MetaCart . We present a categorical formulation of the rewriting of possibly cyclic term graphs, based on a variation of algebraic 2-theories. We show that this presentation is equivalent to the well-accepted operational definition proposed by Barendregt et alii---but for the case of circular redexes, for which we propose (and justify formally) a different treatment. The categorical framework allows us to model in a concise way also automatic garbage collection and rules for sharing/unsharing and folding/unfolding of structures, and to relate term graph rewriting to other rewriting formalisms. R'esum'e. Nous pr'esentons une formulation cat'egorique de la r'e'ecriture des graphes cycliques des termes, bas'ee sur une variante de 2-theorie alg'ebrique. Nous prouvons que cette pr'esentation est 'equivalente `a la d'efinition op'erationnelle propos'ee par Barendregt et d'autres auteurs, mais pas dons le cas des radicaux circulaires, pour lesquels nous proposons (et justifions - in Engineering, Communication and Computing 7 , 1993 "... . Dealing properly with sharing is important for expressing some of the common compiler optimizations, such as common subexpressions elimination, lifting of free expressions and removal of invariants from a loop, as source-to-source transformations. Graph rewriting is a suitable vehicle to accommoda ..." Cited by 8 (4 self) Add to MetaCart . Dealing properly with sharing is important for expressing some of the common compiler optimizations, such as common subexpressions elimination, lifting of free expressions and removal of invariants from a loop, as source-to-source transformations. Graph rewriting is a suitable vehicle to accommodate these concerns. In [4] we have presented a term model for graph rewriting systems (GRSs) without interfering rules, and shown the partial correctness of the aforementioned optimizations. In this paper we define a different model for GRSs, which allows us to prove total correctness of those optimizations. Differently from [4] we will discard sharing from our observations and introduce more restrictions on the rules. We will introduce the notion of Bohm tree for GRSs, and show that in a system without interfering and non-left linear rules (orthogonal GRSs), Bohm tree equivalence defines a congruence. Total correctness then follows in a straightforward way from showing that if a program M co... , 1989 "... .We study properties of rewrite systems that are not necessarily terminating, but allow instead for trans#nite derivations that have a limit. In particular, we give conditions for the existence of a limit and for its uniqueness and relate the operational and algebraic semantics of in#nitary theories ..." Cited by 8 (1 self) Add to MetaCart .We study properties of rewrite systems that are not necessarily terminating, but allow instead for trans#nite derivations that have a limit. In particular, we give conditions for the existence of a limit and for its uniqueness and relate the operational and algebraic semantics of in#nitary theories. We also consider su#cient completeness of hierarchical systems. Is there no limit? ---Job 16:3 1. Introduction Rewrite systems are sets of directed equations used to compute by repeatedly replacing equal terms in a given formula, as long as possible. For one approach to their use in computing, see #23#. The theory of rewriting is an outgrowth of the study of the lambda calculus and combinatory logic, and # Preliminary versions #6, 7# of ideas in this paper were presented at the Sixteenth ACM Symposium on Principles of Programming Languages, Austin, TX #January 1989# and at the Sixteenth EATCS International Colloquium on Automata, Languages and Programming, Stresa, Italy #July 1989#. , 1992 "... A methodology for polymorphic type inference for general term graph rewriting systems is presented. This requires modified notions of type and of type inference due to the absence of structural induction over graphs. Induction over terms is replaced by dataflow analysis. 1 Introduction Term graphs ..." Cited by 5 (1 self) Add to MetaCart A methodology for polymorphic type inference for general term graph rewriting systems is presented. This requires modified notions of type and of type inference due to the absence of structural induction over graphs. Induction over terms is replaced by dataflow analysis. 1 Introduction Term graphs are objects that locally look like terms, but globally have a general directed graph structure. Since their introduction in Barendregt et al. (1987), they have served the purpose of defining a rigorous framework for graph reduction implementations of functional languages (Peyton-Jones (1987)). This was the original intention. However the rewriting of term graphs defined in the operational semantics of the model, makes term graph rewriting systems (TGRSs) interesting models of computation in their own right. One can thus study all sorts of issues in the specific TGRS context. Typically one might be interested in how close TGRSs are to TRSs and this problem is examined in Barendregt et al. (19... , 1995 "... We define the notion of transfinite term rewriting: rewriting in which terms may be infinitely large and rewrite sequences may be of any ordinal length. For orthogonal rewrite systems, some fundamental properties known in the finite case are extended to the transfinite case. Among these are the Pa ..." Cited by 4 (1 self) Add to MetaCart We define the notion of transfinite term rewriting: rewriting in which terms may be infinitely large and rewrite sequences may be of any ordinal length. For orthogonal rewrite systems, some fundamental properties known in the finite case are extended to the transfinite case. Among these are the Parallel Moves lemma and the Unique Normal Form property. The transfinite Church-Rosser property (CR 1 ) fails in general, even for orthogonal systems, including such well-known systems as Combinatory Logic. Syntactic characterisations are given of some classes of orthogonal TRSs which do satisfy CR 1 . We also prove a weakening of CR 1 for all orthogonal systems, in which the property is only required to hold up to a certain equivalence relation on terms. Finally, we extend the theory of needed reduction from the finite to the transfinite case. The reduction strategy of needed reduction is normalising in the finite case, but not in the transfinite case. To obtain a normalising str... - Electronic Notes in Theoretical Computer Science , 1995 "... Infinitary rewriting allows infinitely large terms and infinitely long reduction sequences. There are two computational motivations for studying these: the infinite data structures implicit in lazy functional programming, and the use of rewriting of possibly cyclic graphs as an implementation techni ..." Cited by 4 (0 self) Add to MetaCart Infinitary rewriting allows infinitely large terms and infinitely long reduction sequences. There are two computational motivations for studying these: the infinite data structures implicit in lazy functional programming, and the use of rewriting of possibly cyclic graphs as an implementation technique for functional languages. We survey the fundamental properties of infinitary rewriting in orthogonal term rewrite systems, and its relation to cyclic graph rewriting. 1 Introduction Our interest in term and graph rewriting arises from functional languages and their implementation. Functional programs can be seen as term rewrite systems. 2 Terms can be thought of as trees. Representing these trees as graphs allows repeated subterms to be replaced by multiple pointers to the same subgraph. This optimisation has a dramatic effect when rewrite steps are performed. Whenever a variable appears more than once on the right-hand side of a rule, when that rule is applied to a graph multiple poi... "... Term graph rewriting provides a simple mechanism to finitely represent restricted forms of infinitary term rewriting. The correspondence between infinitary term rewriting and term graph rewriting has been studied to some extent. However, this endeavour is impaired by the lack of an appropriate count ..." Cited by 4 (4 self) Add to MetaCart Term graph rewriting provides a simple mechanism to finitely represent restricted forms of infinitary term rewriting. The correspondence between infinitary term rewriting and term graph rewriting has been studied to some extent. However, this endeavour is impaired by the lack of an appropriate counterpart of infinitary rewriting on the side of term graphs. We aim to fill this gap by devising two modes of convergence based on a partial order resp. a metric on term graphs. The thus obtained structures generalise corresponding modes of convergence that are usually studied in infinitary term rewriting. We argue that this yields a common framework in which both term rewriting and term graph rewriting can be studied. In order to substantiate our claim, we compare convergence on term graphs and on terms. In particular, we show that the resulting infinitary calculi of term graph rewriting exhibit the same correspondence as we know it from term rewriting: Convergence via the partial order is a conservative extension of the metric convergence. , 1996 "... We will present a syntactical proof of correctness and completeness of shared reduction. This work is an application of type theory extended with infinite objects and co-induction. ..." Cited by 2 (0 self) Add to MetaCart We will present a syntactical proof of correctness and completeness of shared reduction. This work is an application of type theory extended with infinite objects and co-induction.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2590162","timestamp":"2014-04-20T14:08:14Z","content_type":null,"content_length":"38669","record_id":"<urn:uuid:994ec4c4-b2ab-4fe4-ae76-4a05a2f44fd2>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] About Paradox Theory Taylor Dupuy taylor.dupuy at gmail.com Fri Sep 23 16:20:46 EDT 2011 There is an interesting way to view contradictions in Propositional logic via arithmetization: 1. Convert propositions, with propositional variables X1,X2,... into polynomials over FF_2=ZZ/ 2 ZZ with indeterminates x1, x2, ... X & Y <-> xy X or Y <-> xy + x +y !X <-> x+1 True <-> 1 False <-> 0 2. Satisfiability of a proposition is equivalent existence of solutions of polynomial equations in FF_2. If P(X1,X2,...,Xn) is a proposition and p(x1,x2,...,xn) is its associated polynomial the P is satisfiable if and only if p=1 admits a solution over FF_2. For example the proposition X & !X is clearly not satisfiable which corresponds to the fact that x(x+1)=1 or x^2+x +1 =0 has no solutions over FF_2. One would think then that a canonical way to extend truth values in propostional logic would be to allow values in FF_2[x]/(x^2+x+1) (or the relevant extension for an unsatisfiable proposition) and carry the truth tables back over to logic via (1). I've never heard of a theory but have always wondered if its already been developed and what it would look like. -Does a theory using this idea in propositional logic exist? Can one make proofs make sense with these truth values? -Does a first order theory using ideas like this make sense? I could never get past what the methods of proof should be. Thoughts? -------------- next part -------------- An HTML attachment was scrubbed... URL: </pipermail/fom/attachments/20110923/2187bf78/attachment.html> More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2011-September/015807.html","timestamp":"2014-04-19T09:28:41Z","content_type":null,"content_length":"4054","record_id":"<urn:uuid:92080ff2-3304-4104-acc5-3b1a2812a77d>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
X-Rays, Neutrons and Muons ISBN: 978-3-527-30774-6 248 pages September 2012 Read an Excerpt Spectroscopy is a versatile tool for the characterization of materials, and photons in the visible frequency range of the electromagnetic spectrum have been used successfully for more than a century now. But other elementary particles such as neutrons, muons and x-ray photons have been proven to be useful probes as well and are routinely generated in modern cyclotrons and synchrotrons. They offer attractive alternative ways of probing condensed matter in order to better understand its properties and to correlate material behavior with its structure. In particular, the combination of these different spectroscopic probes yields rich information on the material samples, thereby allowing for a systematic investigation down to atomic resolutions. This book gives a practical account of how well they complement each other for 21st century material characterization, and provides the basis for a detailed understanding of the scattering processes and the knowledge of the relevant microscopic interactions necessary for the correct interpretation of the experimentally obtained spectroscopic data. See More Foreword V Preface XI About the Book XIII 1 Introduction 1 1.1 Some Historical Remarks 1 1.2 The Experimental Methods 2 1.3 The Solid as a Many Body 6 1.4 Survey over the Spectral Region of a Solid 9 References 11 2 The Probes, their Origin and Properties 13 2.1 Origin 13 2.1.1 The Photon 13 2.2 Properties 15 2.3 Magnetic Field of the Probing Particles 25 3 Interaction of the Probes with the Constituents of Matter 27 3.1 The Nuclear Interaction of Neutrons 27 3.2 Interaction of X-Rays with Atomic Constituents 36 3.3 Magnetic Interaction 49 3.4 Corollar 55 References 58 4 Scattering on (Bulk-)Samples 59 4.1 Introduction 59 4.2 The Sample as a Thermodynamic System 59 4.3 The Scattering Experiment 66 4.4 Properties of the Scattering and Correlation Function 83 4.5 General Form of Spin-Dependent Cross-Sections 88 4.6 Summary and Conclusions 123 References 125 5 General Theoretical Framework 127 5.1 Time Development of the Density Operator 127 5.2 Generalized Suspectibility 142 5.3 Dielectric Response Function and Sum Rules 157 References 173 Appendix A: Principles of Scattering Theory 175 A.1 Potential Scattering (Supporting Section 3.1.1) 175 A.2 Two Particle Scattering (Supporting Section 3.1.2) 179 A.3 Abstract Scattering Theory (Supporting Section 3.2) 180 A.4 Time-Dependent Perturbation 183 A.5 Scattering of Light on Atoms 186 A.6 Polarization and its Analysis 188 References 190 Appendix B: Form Factors 191 References 195 Appendix C: Reminder on Statistical Mechanics 197 C.1 The Statistical Operator P 197 C.2 The Equation of Motion 198 C.3 Entropy 198 C.4 Thermal Equilibrium – The Canonical Distribution 199 C.5 Thermodynamics 200 Appendix D: The Magnetic Matrix-Elements 203 D.1 The Trammell Expansion 203 D.2 The Matrix Elements 205 D.3 Conclusion 210 References 211 Appendix E: The Principle of a mSR-Experiment 213 Appendix F: Reflection Symmetry and Time-Reversal Invariance 217 F.1 Invariance Under Space Inversion Q 217 F.2 Invariance Under Time Reversal 218 Appendix G: Phonon Coupling to Heat Bath 221 References 223 Further Reading 225 Index 227 See More Walter E. Fischer (1939-2008) was the former head of the Department of Condensed Matter Research with Neutrons and Muons (NUM) at the Paul Scherrer Institute (PSI) in Villigen, Switzerland. He pioneered in establishing the spallation neutron source SINQ at PSI which went into operation in the mid-1990s. Later he foundes a condensed matter theory group to complement the experimental work at the neutron source. See More Buy Both and Save 25%! X-Rays, Neutrons and Muons (US $85.00) -and- Elements of Modern X-ray Physics, 2nd Edition (US $68.00) Total List Price: US $153.00 Discounted Price: US $114.75 (Save: US $38.25) Cannot be combined with any other offers. Learn more.
{"url":"http://www.wiley.com/WileyCDA/WileyTitle/productCd-3527307745,subjectCd-CH92.html","timestamp":"2014-04-16T14:59:59Z","content_type":null,"content_length":"47659","record_id":"<urn:uuid:64f86e9c-8207-4150-8989-1e707aa1e385>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00143-ip-10-147-4-33.ec2.internal.warc.gz"}