combined_ftest_5x2cv: 5x2cv combined F test for classifier comparisons - mlxtend

from mlxtend.evaluate import combined_ftest_5x2cv

The 5x2cv combined F test is a procedure for comparing the performance of two models (classifiers or regressors) that was proposed by Alpaydin [1] as a more robust alternative to Dietterich's 5x2cv paired t-test procedure [2] (see paired_ttest_5x2cv.md). Dietterich's 5x2cv method was in turn designed to address shortcomings in other methods such as the resampled paired t test (see paired_ttest_resampled) and the k-fold cross-validated paired t test (see paired_ttest_kfold_cv).

In the 5x2cv procedure, a 2-fold cross-validation is repeated 5 times, so that in each replication the performance difference between model A and model B is measured on each of the two folds, giving two difference estimates p^{(1)} and p^{(2)}. Then, we estimate the mean and variance of the differences:

\overline{p} = \frac{p^{(1)} + p^{(2)}}{2},

s^2 = (p^{(1)} - \overline{p})^2 + (p^{(2)} - \overline{p})^2.

The F statistic proposed by Alpaydin (see the paper for its justification) is then computed as

f = \frac{\sum_{i=1}^{5} \sum_{j=1}^{2} \big(p_i^{(j)}\big)^2}{2 \sum_{i=1}^{5} s_i^2},

which is approximately F distributed with 10 and 5 degrees of freedom. Using the F statistic, the p value can be computed and compared with a previously chosen significance level, e.g., \alpha = 0.05. If the p value is smaller than \alpha, we reject the null hypothesis that the two models perform equally well.

Example 1 - 5x2cv combined F test

clf1 = LogisticRegression(random_state=1, solver='liblinear', multi_class='ovr')

Note that the accuracy values of the fitted models are not used in the combined F test procedure, since new train/test splits are generated during the resampling; such values serve only to build intuition. Assuming a significance level \alpha = 0.05 for rejecting the null hypothesis that both algorithms perform equally well on the dataset, we conduct the 5x2cv F test:

f, p = combined_ftest_5x2cv(estimator1=clf1,
                            estimator2=clf2,
                            X=X, y=y,
                            random_seed=1)
print('F statistic: %.3f' % f)

F statistic: 1.053

Since p > \alpha, we cannot reject the null hypothesis. For two models that differ substantially, the same test might instead return, e.g., F statistic: 34.934 with p < 0.001; since p < \alpha, we would then reject the null hypothesis.

References

[1] Alpaydin, E. (1999). Combined 5×2 cv F test for comparing supervised classification learning algorithms. Neural Computation, 11(8), 1885-1892.
[2] Dietterich, T. G. (1998). Approximate statistical tests for comparing supervised classification learning algorithms. Neural Computation, 10(7), 1895-1923.
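The statistic above is simple to compute by hand from the ten fold-wise score differences. Below is a minimal plain-Python sketch (independent of mlxtend; the function name and input layout are our own):

```python
def combined_ftest_5x2cv_statistic(diffs):
    """Alpaydin's combined 5x2cv F statistic.

    `diffs` is a list of 5 pairs (p_i^(1), p_i^(2)): the score
    differences between the two models on the two folds of each
    of the 5 repetitions of 2-fold cross-validation.
    """
    assert len(diffs) == 5 and all(len(pair) == 2 for pair in diffs)
    # numerator: sum of all ten squared differences
    numerator = sum(p1 ** 2 + p2 ** 2 for p1, p2 in diffs)
    # denominator: twice the sum of the per-repetition variance estimates
    denominator = 0.0
    for p1, p2 in diffs:
        p_bar = (p1 + p2) / 2.0                              # mean difference
        denominator += (p1 - p_bar) ** 2 + (p2 - p_bar) ** 2  # s_i^2
    return numerator / (2.0 * denominator)                    # ~ F(10, 5) under H0

# Example: identical score differences in every repetition
f = combined_ftest_5x2cv_statistic([(0.1, 0.0)] * 5)  # → 1.0
```

The p value would then come from the survival function of the F(10, 5) distribution (e.g., `scipy.stats.f.sf(f, 10, 5)` if SciPy is available).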
What Is the Power Rule? | Outlier How to Use the Power Rule More Examples of the Power Rule The Power Rule is one of the first derivative rules that we come across when we’re learning about derivatives. It gives us a quick way to differentiate—that is, to take the derivative of—functions like x^2 and x^3 , and since functions like that are ubiquitous throughout calculus, we use it frequently. Combining the Power Rule with the rules for differentiating sums and constant multiples of functions, we can differentiate a polynomial function like 2x^3 - 5x^2 + x - 1 without too much hassle. Additionally, we’ll see later that we can even use the Power Rule to differentiate some functions that aren’t written explicitly as powers of x , such as \sqrt{x} and \frac{1}{x^3} . Understanding the Power Rule opens up a decently sized library of functions that you can start using to explore the more conceptual or applied sides of derivatives as well. Here’s the Power Rule in its most general form: if f(x) = x^k , then f’(x) = kx^{k-1} . Based on that formula, we can break down the process for using the Power Rule into three steps: Write the function that you want to differentiate with the Power Rule in the standard form of f(x)=x^k . (Sometimes this has already been done for you.) Make sure you know what k is before computing! To form the derivative, write the power in the original function ( k ) as a coefficient. Next to that, write the independent variable ( x in this case) with a power that is one less than it was before. To differentiate f(x)=x^7 , we don’t have to do anything for the first step since it’s already in the form of a power function. For the derivative, we write down the initial power ( 7 ) as a coefficient, then write x with a decreased power: f’(x)=7x^{7-1} = 7x^6 . Next, let’s compute the derivative of g(x) = \sqrt{x} . 
We first need to rewrite the function as a power function: g(x) = x^{\frac{1}{2}} . Here k=\frac{1}{2} , so that will be the number that we write as a coefficient and decrease by one to obtain the new power: g’(x) = \frac{1}{2} x^{\frac{1}{2} - 1} = \frac{1}{2} x^{-\frac{1}{2}} = \frac{1}{2} \cdot \frac{1}{\sqrt{x}} (That last simplification step is optional, but you’ll often see this particular derivative written that way.) Using slightly different notation and a different variable from the previous examples, let’s compute \frac{d}{da} \frac{1}{a^3} . As in the previous example, we need to first write the function we’re differentiating as a power function: \frac{1}{a^3} = a^{-3} . We can then apply the Power Rule as we have previously, using k=-3 : \frac{d}{da} a^{-3} = (-3)a^{(-3)-1} = -3a^{-4} . As a final example, we can use the Power Rule in conjunction with the sum and constant multiple rules for derivatives to differentiate polynomials: \frac{d}{dx} (2x^3 - 5x^2 + x - 1) = 2 \frac{d}{dx} x^3 - 5 \frac{d}{dx} x^2 + \frac{d}{dx} x - \frac{d}{dx} 1 = 2(3x^2) - 5(2x) + 1x^0 - 0 = 6x^2 - 10x + 1 . Outlier Instructor Tim Chartier Explains the Power Rule Finally, it can be useful to see how the Power Rule works on a sequence of similar functions. In the following table we’ve used the Power Rule to differentiate the first several (whole number) powers of x : x → 1, x^2 → 2x, x^3 → 3x^2, x^4 → 4x^3, x^5 → 5x^4. Here’s a similar table for negative integer powers of x : x^{-1} → -x^{-2}, x^{-2} → -2x^{-3}, x^{-3} → -3x^{-4}. You don’t need to memorize these tables in order to be successful in calculus; it’s more important to internalize the process behind applying the Power Rule. But on the other hand, studying the patterns in the derivatives of those functions may help you build a better understanding of how the Power Rule works.
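The rule is also easy to sanity-check numerically. Here is a short Python sketch (helper names are our own) that compares the Power Rule answer against a central-difference approximation for a few powers, including the fractional and negative ones from the examples above:

```python
def power_rule_derivative(k):
    """Return the derivative of x**k as a function, via the Power Rule."""
    return lambda x: k * x ** (k - 1)

def numerical_derivative(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Check the rule for whole, fractional, and negative powers at x = 2
for k in (7, 0.5, -3):
    f = lambda x, k=k: x ** k
    exact = power_rule_derivative(k)(2.0)
    approx = numerical_derivative(f, 2.0)
    assert abs(exact - approx) < 1e-4
```

For example, `power_rule_derivative(7)(2.0)` gives 7 · 2^6 = 448, matching the f'(x) = 7x^6 example above.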
Single spin asymmetry - RHIC Spin Group
Single spin asymmetry
Last modified by Dmitri Smirnov on 16-08-2012

1 Measurements of A_N in pp → pp
1.1 ~1990, E704 at FNAL, √s = 19.4 GeV
1.2 2003, PP2PP at BNL (RHIC Run 3), √s = 200 GeV
1.3 2004, H-jet at BNL (RHIC Run 4), √s = 13.7 GeV
1.4 2004, H-jet at BNL (RHIC Run 4), √s = 6.8 GeV
1.8 2009, PP2PP at STAR BNL (RHIC Run 9), √s = 200 GeV
1.9 2011, H-jet at BNL (RHIC Run 11), √s = 21.7 GeV
1.10 2012, H-jet at BNL (RHIC Run 12), √s = 6.8 GeV
1.11 2012, H-jet at BNL (RHIC Run 12), √s = 13.7 GeV
2 Measurements of A_N in pC → pC

Measurements of A_N in pp → pp

~1990, E704 at FNAL, √s = 19.4 GeV
Published: References#Akchurin:1993xd
First measurement of A_N in the CNI region at √s = 19.4 GeV. A 200 GeV/c polarized proton beam, obtained from the decay of Λ hyperons, was used; 0.001 < |t| < 0.032 (GeV/c)².

2003, PP2PP at BNL (RHIC Run 3), √s = 200 GeV
Published: References#Bultmann:2005na
√s = 200 GeV; σ_tot = 51.6 mb, ρ = 0.13; 0.01 < |t| < 0.03 (GeV/c)².
Re r5 = −0.033 ± 0.035 and Im r5 = −0.43 ± 0.56.

2004, H-jet at BNL (RHIC Run 4), √s = 13.7 GeV
Published: References#Okada:2005gu
100 GeV/c polarized proton beam on H-jet target; √s = 13.7 GeV; σ_tot = 38.4 mb, ρ = −0.08, δ_C = 0.02, b = 12 (GeV/c)⁻²; 0.001 < |t| < 0.032 (GeV/c)².
Re r5 = −0.0008 ± 0.0091 and Im r5 = −0.015 ± 0.029 (stat.-only and stat. + syst. uncertainties are tabulated separately). 
2004, H-jet at BNL (RHIC Run 4), √s = 6.8 GeV
Published: References#Alekseev:2009zza (also includes results for the double spin asymmetry A_NN)
24 GeV/c polarized proton beam on H-jet target; √s = 6.8 GeV; σ_tot = 38.4 mb; 0.001 < |t| < 0.032 (GeV/c)².
√s = 7.7 GeV: Re r5 = −0.055 ± x.x, Im r5 = −0.016 ± x.x. These data are from Sasha; I don't have the numbers to check the fits.
√s = 13.7 GeV
√s = 21.7 GeV

2009, PP2PP at STAR BNL (RHIC Run 9), √s = 200 GeV
√s = 200 GeV; σ_tot = 51.6 mb, ρ = 0.13.
Re r5 = 0.00167 ± 0.0017 ± 0.0061, Im r5 = 0.00722 ± 0.030 ± 0.049.

2011, H-jet at BNL (RHIC Run 11), √s = 21.7 GeV
250 GeV beams; √s = 21.7 GeV.

2012, H-jet at BNL (RHIC Run 12), √s = 6.8 GeV
√s = 6.8 GeV; √s = 21.9 GeV.

Measurements of A_N in pC → pC

E950 at AGS, BNL (2002), References#Tojo:2002wb
√s = 21.7 GeV; 0.009 < |t| < 0.041 (GeV/c)².
Re r5 = −0.088 ± 0.058 and Im r5 = −0.161 ± 0.226.

p-Carbon polarimeter at RHIC, BNL (2004), √s = 100 GeV (unpublished)

References#Nachtmann:2003ik - a theoretical connection between the Regge theory and QCD physics
References#Buttimore:1998rj
http://nuclth02.phys.ulg.ac.be/compete/predictor/
Moment of force (torque): symbol M, τ; SI unit newton metre (N·m); 1 N·m = 1 kg·m²/s². The unit is dimensionally equivalent to the unit of energy, the joule, but the joule should not be used as an alternative for the newton metre.
Linear momentum: symbol p; SI unit kg·m/s or N·s.
(Linear) impulse: symbol J; SI unit N·s or kg·m/s.
Angular momentum: symbol L; SI unit kg·m²/s or N·m·s.
Normal stress, shear stress: symbol σ, τ; SI unit pascal (Pa); 1 Pa = 1 N/m² = 1 kg/(m·s²). Named after Blaise Pascal.
Choose the right option: A plane electromagnetic wave of energy U is reflected from a surface. Then the momentum transferred to the surface is?

Navidul Haque answered this:

An electromagnetic wave of energy U carries momentum p = U/c (from the relation E = pc for radiation). If the surface is a perfect reflector, such as a mirror, and the incidence is normal, then the momentum transported to the surface is twice that carried by the incident wave, because the wave reverses direction on reflection.

Let K be the momentum transferred to the surface. After striking the surface, the electromagnetic wave travels in the direction opposite to its initial direction. From conservation of momentum,

U/c + 0 = K − U/c, so K = 2U/c.

That is, the momentum delivered by the incoming light is p = U/c, and the reflected light delivers another p = U/c. Therefore, K = 2U/c is the momentum transferred by the electromagnetic wave to the surface.
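The arithmetic above is a one-liner; a small Python sketch (the function name is our own) covering both the reflecting and the absorbing case:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def momentum_transfer(energy_joules, perfectly_reflecting=True):
    """Momentum delivered to a surface by light of energy U at normal incidence.

    p = U/c for a perfect absorber, 2U/c for a perfect reflector.
    """
    p = energy_joules / C
    return 2 * p if perfectly_reflecting else p

# 1 J of light reflected at normal incidence transfers about 6.67e-9 kg·m/s
k = momentum_transfer(1.0)
```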
-Step Sum and -Step Gap Fibonacci Sequence
Maria Adam, Nicholas Assimakis, " -Step Sum and -Step Gap Fibonacci Sequence", International Scholarly Research Notices, vol. 2014, Article ID 374902, 7 pages, 2014. https://doi.org/10.1155/2014/374902
Maria Adam 1 and Nicholas Assimakis 1,2
1 Department of Computer Science and Biomedical Informatics, University of Thessaly, 2-4 Papasiopoulou street, 35100 Lamia, Greece
2 Department of Electronic Engineering, Technological Educational Institute of Central Greece, 3rd km Old National Road Lamia-Athens, 35100 Lamia, Greece
Academic Editor: H. Deng

For two given integers, we introduce the step-sum and step-gap Fibonacci sequence by presenting a recurrence formula that generates the nth term as the sum of successive previous terms, starting the sum at a given previous term. Known sequences, like the Fibonacci, tribonacci, tetranacci, and Padovan sequences, are derived for specific values of the two steps. Two limiting properties concerning the terms of the sequence are presented; the limits are related to the spectral radius of the associated matrix.

It is well known that the Fibonacci sequence, the Lucas sequence, the Padovan sequence, the Perrin sequence, the tribonacci sequence, and the tetranacci sequence are very prominent examples of recursive sequences, which are defined as follows. The Fibonacci numbers are derived by the recurrence relation F_n = F_{n-1} + F_{n-2}, n ≥ 3, with F_1 = F_2 = 1 [1], [2, A000045]. The Lucas numbers are derived by the recurrence relation L_n = L_{n-1} + L_{n-2}, n ≥ 3, with L_1 = 1 and L_2 = 3 [1], [2, A000032]. The Padovan numbers are derived by the recurrence relation P_n = P_{n-2} + P_{n-3}, n ≥ 3, with P_0 = P_1 = P_2 = 1 [2, A000931]. The Perrin numbers are derived by the recurrence relation R_n = R_{n-2} + R_{n-3}, n ≥ 3, with R_0 = 3, R_1 = 0, and R_2 = 2 [2, A001608]. Both Fibonacci and Lucas numbers as well as both Padovan and Perrin numbers satisfy the same recurrence relation with different initial conditions. Extending the above definitions, the n-step Fibonacci sequences are derived [3]. 
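These recursive definitions, and the sum/gap generalization introduced in this paper, are easy to prototype. Below is a sketch in Python; the parameter names `s` (number of summed terms) and `g` (gap to the first summed term) and the indexing convention are our own choices for illustration, with 1's as initial conditions as in the paper:

```python
def step_sum_gap_fibonacci(s, g, n_terms):
    """First n_terms of the sequence whose nth term is the sum of s
    successive previous terms, starting at the gth previous term,
    with all initial terms equal to 1."""
    seq = [1] * max(s + g - 1, 1)          # initial conditions: all ones
    while len(seq) < n_terms:
        n = len(seq)
        seq.append(sum(seq[n - g - i] for i in range(s)))
    return seq[:n_terms]

# Special cases (in this convention) recover known sequences:
fib = step_sum_gap_fibonacci(2, 1, 10)   # Fibonacci-type: a_n = a_{n-1} + a_{n-2}
pad = step_sum_gap_fibonacci(2, 2, 10)   # Padovan-type:   a_n = a_{n-2} + a_{n-3}
tri = step_sum_gap_fibonacci(3, 1, 10)   # tribonacci-type, all-ones start
```

With this convention, `(2, 1)` yields 1, 1, 2, 3, 5, 8, ... and `(2, 2)` yields 1, 1, 1, 2, 2, 3, 4, 5, 7, 9, ..., matching the Padovan recurrence with all-ones initial conditions.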
The tribonacci numbers are derived by the recurrence relation T_n = T_{n-1} + T_{n-2} + T_{n-3}, with T_0 = T_1 = 0 and T_2 = 1 [3–5], [2, A000073]. The tetranacci numbers are derived by the recurrence relation T_n = T_{n-1} + T_{n-2} + T_{n-3} + T_{n-4}, with T_0 = T_1 = T_2 = 0 and T_3 = 1 [3], [2, A000078]. In this paper, we introduce the step-sum and step-gap Fibonacci sequence, where the nth term of the sequence is the sum of successive previous terms, starting at a given previous term, using 1's as initial conditions. Further, the closed formula of the nth term of the sequence is given, and the ratio of two successive terms is shown to tend to the spectral radius of the associated matrix. 2. Definition of the Step-Sum and Step-Gap Fibonacci Sequence. For two given integers, we define the step-sum and step-gap Fibonacci sequence, whose nth term is given by the recurrence relation (1) with initial conditions (2). Combining (1) and (2), notice that all the terms of the sequence are positive integers and that each term is the sum of successive terms starting at a fixed previous term; thus, (1) can be written equivalently as (3). Remark 1. (i) From (2)-(3) it is evident that in the trivial case all the terms of the sequence are equal to one; hereafter we exclude this trivial case. (ii) Otherwise, (3) and (2) give the nth term of the sequence, which is formulated as (4) with initial values (5). Remark 2. The sequence gives known sequences for various values of the steps: (i) the well-known Fibonacci sequence; (ii) the tribonacci sequence [2, A000213]; (iii) the tetranacci sequence [2, A000288]; (iv) the Padovan sequence [2, A000931]. In the following, the Dirac delta function (or δ function) and the Heaviside step function (or unit step function) are used. The nth number of the sequence then follows immediately from (2) and (3), using these functions and considering that the terms with negative index are equal to zero, which is formulated in the following proposition. Proposition 3. 
For the given integers, for all n, the nth number of the sequence is given by the recurrence relation (7), with initial values as in (6). In the following, we demonstrate a close link between matrices and the numbers in (3) with initial values in (2). To this end, one can write a linear system in which (3) constitutes the first equation: (8). Hence, using a vector, the linear system in (8) can be formed as (9), whereby it is obvious that the sequence can be represented by a matrix, which is a block matrix of the form (10), where the first row consists of two vector-matrices; the entries of the first vector are equal to zero and the remaining entries are equal to one; the middle block is the identity matrix, and the entries of the last vector are equal to zero. Working as above and using (4) with initial values in (5), we can write the following linear system: (11). The matrix of the coefficients of the above system is defined as in (12), where the entries of the first vector are equal to one, the middle block is the identity matrix, and the entries of the last vector are equal to zero. Remark 4. (i) The well-known sequences presented in Remark 2 correspond to the matrix in (12) for suitable sizes: (a) the Fibonacci sequence; (b) the tribonacci sequence; (c) the tetranacci sequence. (ii) The Padovan sequence corresponds to the matrix given by (10). (iii) The matrix in (12) has been defined, and its determinant investigated, in [6]; some results on matrices related to Fibonacci numbers and Lucas numbers have been investigated in [7], and the transpose matrix of the general Q-matrix in [8]. Proposition 5. The characteristic polynomial of the matrix in (10) is given by (13). Proof. The proof of (13) is based on the induction method. In the base case, the characteristic polynomial is easily computed and satisfies (13). 
Now let a fixed size be given and assume that the formula in (13) is true for it; that is, (14) holds. Then, the characteristic polynomial of the next larger matrix can be computed by using the Laplace expansion along the last column together with the induction hypothesis. Thus, we obtain (15); hence, (13) holds for the next size, too, and the result follows by the induction method. The set of all eigenvalues of a matrix is called its spectrum, and the nonnegative real number equal to the largest eigenvalue modulus is called its spectral radius. Here, the spectral radius is itself an eigenvalue of the matrix, since the entries of the matrix are 0 or 1 [11]; furthermore, the bounds in (16) hold [11], [12, Theorem 7], [13]. Notice that if a complex number is an eigenvalue of the matrix, then so is its conjugate, because the characteristic polynomial has real coefficients. Further, since the polynomial in (13) has a constant term of modulus one, it is evident that (18) holds; hence, the matrix is nonsingular and all the eigenvalues are nonzero. Remark 6. Notice that (i) the characteristic polynomial of the matrix in (12) is formulated by (13), which was presented in [9, 10]; (ii) the authors in [10] have shown bounds for the spectral radius, and the lower bound there is more accurate than the associated bound in (16), namely (17); (iii) the determinant is computed by (18), giving the same result as in [6]. Example 7. Consider the well-known Fibonacci sequence, as in Remark 2. According to Remark 4(i), the matrix is derived by (12). It is evident that the characteristic polynomial is x^2 - x - 1 and its roots are (1 + √5)/2 and (1 − √5)/2, the former being the number well known as the golden ratio. Example 8. By (2)-(3) the associated sequence is formed as 1, 1, 1, 2, 2, 3, 4, 5, 7, 9, ..., which is well known as the Padovan sequence (see Remark 2). According to Remark 4(ii), the associated matrix is given by (10), and its characteristic polynomial is given by (13) as x^3 - x - 1. For the steps under consideration, it is worth noting that, since the entries of a suitable power of the matrix are positive integers, the matrix is irreducible [11]; it follows that the spectral radius is a positive, simple (without multiplicity) eigenvalue of the matrix [11, Theorem 8.4.4]. 
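Since the matrices involved are nonnegative and primitive in the cases of interest, the spectral radius can be approximated by power iteration. A small self-contained sketch (the helper name is ours), checked on the 2x2 companion matrix of the Fibonacci recurrence, whose spectral radius is the golden ratio:

```python
def power_iteration(matrix, iterations=100):
    """Estimate the spectral radius of a nonnegative primitive matrix
    by repeated multiplication and normalization (power iteration)."""
    n = len(matrix)
    v = [1.0] * n
    radius = 0.0
    for _ in range(iterations):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        radius = max(abs(x) for x in w)   # normalization factor -> rho
        v = [x / radius for x in w]
    return radius

# Companion matrix of the Fibonacci recurrence a_n = a_{n-1} + a_{n-2}
Q = [[1, 1],
     [1, 0]]
rho = power_iteration(Q)   # ≈ (1 + 5**0.5) / 2, the golden ratio
```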
In addition, the entries of a power of the matrix are positive integers; thus the matrix is a primitive matrix [11, Corollary 8.5.9]; that is, the spectral radius is the unique eigenvalue with maximum modulus [11]. Hence, in the following, we denote by the spectral radius and the remaining distinct eigenvalues all the eigenvalues of the matrix, for which the inequality (22) holds. Furthermore, rewriting (7) as (23) and applying the z-transform on both sides of (23) yields (24). From (24) it is worth noting that the poles of the transform are the eigenvalues of the matrix, which are all simple (distinct), and the complex eigenvalues occur in conjugate pairs; furthermore, the degrees of the numerator and denominator polynomials coincide. Thus, the partial-fraction decomposition of (24) is given by (25), where the coefficients corresponding to the real eigenvalues are real and the other coefficients are complex or real numbers. In the following theorem, we present the closed formula for the terms of the sequence, which depends on all the eigenvalues of the matrix. Theorem 9. Let the eigenvalues of the matrix and the fixed steps be given. The nth number of the sequence is given by (26), where the coefficients are those determined by the partial-fraction decomposition in (25). Proof. The inverse z-transform on both sides of (25) yields the stated expression for all n. The closed formula in (26) then follows from this and the definitions of the Dirac delta and Heaviside step functions. 3. Limiting Properties of the Step-Sum and Step-Gap Fibonacci Sequence. The spectral radius of the matrix in (10) is a characteristic quantity, which appears in (26) and, for some cases, is computed in Table 1 (the spectral radius with respect to the two steps). From the values in Table 1, observe that the spectral radius (i) increases as the number of summed terms increases while the gap remains constant; (ii) decreases as the gap increases while the number of summed terms remains constant; (iii) lies in the interval given in (16). Note that in the tribonacci case the spectral radius is the tribonacci constant and in the tetranacci case it is the tetranacci constant [2]. The significance of the spectral radius is presented in the following theorem. Theorem 10. 
For the fixed steps, the positive numbers of the sequence in (26) satisfy the limit properties (28) and (29), where the common limit is the spectral radius of the matrix in (10). Proof. Write the determined coefficients in (25) in polar form, and similarly all the eigenvalues except the spectral radius. Substituting these polar forms into (26) yields (30). Using (30) and the maximality of the spectral radius from (22), we can write (31). Since the trigonometric factors are bounded sequences, and the inequality (22) implies that the ratio of every other eigenvalue to the spectral radius has modulus less than one, it is obvious that (32) holds. Thus, the validity of (28) follows from (31) and (32). Furthermore, it is well known that, for a sequence of nonzero complex numbers, if the ratio of successive terms converges, then the sequence of nth roots converges to the same limit [14, Chapter 1]; hence, for our sequence of positive integers, the equality (29) follows immediately from (28). Remark 11. Notice that for every n the formulas for the nth term in (26) and (30) are equivalent. Additionally, notice the following. (i) If the degree is odd, then the characteristic polynomial in (13) has one real root, the spectral radius, and the other roots are complex conjugate pairs. Thus, the complex eigenvalues and the corresponding coefficients in (25) appear in complex conjugate pairs, and, using the complex conjugate properties, (30) becomes (33). (ii) If the degree is even, then the characteristic polynomial in (13) has two real roots and the other roots are complex conjugate pairs. The one real root is the unique real positive root, the spectral radius; it lies in the interval given by (16) and has maximum modulus. The other real root is negative (see the Acknowledgments). Thus, the complex eigenvalues and the coefficients in (25) appear in complex conjugate pairs, and, using the complex conjugate properties, (30) becomes (34). Example 12. Consider the Padovan sequence of Example 8. Notice that the degree is odd. The eigenvalues are given in Example 8: one real root and a complex conjugate pair. Since the real root has maximum modulus, it is evident that the inequality (22) is verified. 
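The two limits in Theorem 10 are easy to observe numerically. A short sketch under our own naming, using the Fibonacci recurrence (ratio tending to the golden ratio, the positive root of x^2 = x + 1) and the Padovan recurrence a_n = a_{n-2} + a_{n-3} (ratio tending to the plastic number, the real root of x^3 = x + 1):

```python
# Fibonacci-type case: ratio of successive terms tends to the golden ratio
fib = [1, 1]
for _ in range(60):
    fib.append(fib[-1] + fib[-2])
r = fib[-1] / fib[-2]          # ≈ (1 + 5**0.5) / 2

# Padovan-type case: ratio tends to the plastic number, root of x^3 = x + 1
pad = [1, 1, 1]
for _ in range(80):
    pad.append(pad[-2] + pad[-3])
q = pad[-1] / pad[-2]          # satisfies q**3 - q - 1 ≈ 0
```

Each empirical ratio is the spectral radius of the corresponding companion matrix, illustrating (28); raising the last term to the power 1/n gives the same limit, illustrating (29).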
The partial-fraction decomposition as in (25) yields the corresponding coefficients. Thus, the nth number of the Padovan sequence is computed by (33) and given by (35). The limiting properties of the Padovan sequence are then derived from (28) and (29). Example 13. Consider the 2-step sum and 2-step gap Fibonacci sequence. Notice that the degree is even. The eigenvalues of the matrix are two real roots and a complex conjugate pair, and the partial-fraction decomposition as in (25) yields the corresponding coefficients. Thus, the nth number of the sequence is computed by (34), and its limiting properties are derived from (28) and (29). The step-sum and step-gap Fibonacci sequence was introduced. A recurrence formula was presented generating the nth term of the sequence as the sum of successive previous terms, starting the sum at a given previous term. It was noticed that known sequences, like the Fibonacci, tribonacci, tetranacci, and Padovan sequences, are derived for specific values of the steps. A closed formula for the nth term of the sequence was given. The limiting properties concerning the ratio of two successive terms as well as the nth root of the nth term of the sequence were presented. It was shown that these two limits are equal to each other and are related to the spectral radius of the associated matrix. These limits can be regarded as the step-sum and step-gap Fibonacci sequence constants, like the tribonacci constant and the tetranacci constant. The authors thank Dr. Aristides Kechriniotis for his valuable comments about the roots of the characteristic polynomial in (13), verifying that the unique real positive root has maximum modulus and that the second real root is negative in the case where the degree is even. T. Koshy, Fibonacci and Lucas Numbers with Applications, John Wiley & Sons, New York, NY, USA, 2001. “The On-Line Encyclopedia of Integer Sequences,” http://oeis.org/. http://rosettacode.org/wiki/Fibonacci_n-step_number_sequences. M. 
Elia, “Derived sequences, the Tribonacci recurrence and cubic forms,” The Fibonacci Quarterly, vol. 39, no. 2, pp. 107–115, 2001. B. Tan and Z. Wen, “Some properties of the Tribonacci sequence,” European Journal of Combinatorics, vol. 28, no. 6, pp. 1703–1719, 2007. E. Karaduman, “An application of Fibonacci numbers in matrices,” Applied Mathematics and Computation, vol. 147, no. 3, pp. 903–908, 2004. X. Fu and X. Zhou, “On matrices related with Fibonacci and Lucas numbers,” Applied Mathematics and Computation, vol. 200, no. 1, pp. 96–100, 2008. J. Ivie, “A general Q-matrix,” The Fibonacci Quarterly, vol. 10, no. 3, pp. 255–264, 1972. D. Kalman, “Generalized Fibonacci numbers by matrix methods,” The Fibonacci Quarterly, vol. 20, no. 1, pp. 73–76, 1982. G.-Y. Lee, S.-G. Lee, and H.-G. Shin, “On the k-generalized Fibonacci matrix {Q}_{k} D. Noutsos, “Perron-Frobenius theory and some extensions, Como, Italy, May 2008,” http://www.math.uoi.gr/~dnoutsos/Papers_pdf_files/slide_perron.pdf. D. Noutsos, “On Perron-Frobenius property of matrices having some negative entries,” Linear Algebra and Its Applications, vol. 412, no. 2-3, pp. 132–153, 2006. E. Stein and R. Shakarchi, Complex Analysis, Princeton Lectures in Analysis, Princeton University Press, Princeton, NJ, USA, 2003. Copyright © 2014 Maria Adam and Nicholas Assimakis. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Impact of Correlated Noises on Additive Dynamical Systems
Chujin Li, Jinqiao Duan, "Impact of Correlated Noises on Additive Dynamical Systems", Mathematical Problems in Engineering, vol. 2014, Article ID 678976, 6 pages, 2014. https://doi.org/10.1155/2014/678976
Chujin Li 1 and Jinqiao Duan 1,2

The impact of correlated noises on dynamical systems is investigated by considering Fokker-Planck type equations under the fractional white noise measure, which correspond to stochastic differential equations driven by fractional Brownian motions with Hurst parameter H. Firstly, by constructing the fractional white noise framework, a small noise limit theorem is proved, which provides an estimate for the deviation of random solution orbits from the corresponding deterministic orbits. Secondly, numerical experiments are conducted to examine the probability density evolutions of two special dynamical systems as the Hurst parameter varies, and certain behaviors of the probability density functions are observed.

Dynamical systems arising from financial, biological, physical, or geophysical sciences are often subject to random influences. These random influences may be modeled by various stochastic processes, such as Brownian motions, Lévy motions, or fractional Brownian motions. A fractional Brownian motion B^H_t, t ≥ 0, in a probability space, with Hurst parameter H, is a continuous-time Gaussian process with mean zero, starting at zero and having the correlation function

E[B^H_t B^H_s] = (1/2)(t^{2H} + s^{2H} − |t − s|^{2H}).

In particular, when H = 1/2 it is just the standard Brownian motion. The time derivative of a fractional Brownian motion, as a generalized stochastic process, has nonvanishing correlation [1, 2] and is thus called a correlated noise or colored noise. In the special case H = 1/2, this noise is uncorrelated and is called white noise [3]. Correlated noises appear in the modeling of some geophysical systems [4–6]. 
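The covariance structure above is easy to state in code. A minimal sketch (helper names are ours) verifying the stationary-increment property Var(B^H_t − B^H_s) = |t − s|^{2H}, which follows directly from the correlation function:

```python
def fbm_covariance(s, t, hurst):
    """Covariance E[B^H_s B^H_t] of fractional Brownian motion."""
    H2 = 2 * hurst
    return 0.5 * (abs(s) ** H2 + abs(t) ** H2 - abs(t - s) ** H2)

def increment_variance(s, t, hurst):
    """Var(B^H_t - B^H_s), which should equal |t - s|**(2H)."""
    return (fbm_covariance(t, t, hurst)
            - 2 * fbm_covariance(s, t, hurst)
            + fbm_covariance(s, s, hurst))

# Stationary increments: the variance depends only on |t - s|
for H in (0.5, 0.7, 0.9):
    assert abs(increment_variance(1.0, 3.0, H) - 2.0 ** (2 * H)) < 1e-12
```

For H = 1/2 the covariance reduces to min(s, t), the standard Brownian motion case.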
For systematic discussions of fractional Brownian motions and their stochastic calculus, we refer to [7–12] and the references therein. Fractional Brownian motions have stationary increments and are Hölder continuous with any exponent less than H, but they are no longer semimartingales, nor even Markovian. They possess other significant properties, such as long range dependence and self-similarity, which result in wide applications in fields such as hydrology, telecommunications, and mathematical finance. During the last decade or so, several reasonable stochastic integrations with respect to fractional Brownian motions were developed. See, for example, Lin [13], Duncan et al. [14], Decreusefond and Üstünel [15], and the references mentioned therein. Stochastic differential equations (SDEs) driven by fractional Brownian motions have also been attracting more attention recently [1, 10, 16–18]. In this paper, we consider the following scalar stochastic differential equation (SDE):

dX_t = b(X_t) dt + ε dB^H_t,

where the drift b is a Lipschitz continuous function on ℝ, ε is the noise intensity, B^H is a fractional Brownian motion with Hurst parameter H, and the initial state X_0 is assumed to be independent of the natural filtration of B^H. Since this system has a unique solution [17, 19], we intend here to understand some of the impact of correlated noises on this additive dynamical system as the Hurst parameter varies. This paper is organized as follows. In Section 2, we set up a fractional white noise analysis framework, which represents correlated noises as functionals of standard white noises, and prove a small noise limit theorem, which implies the stochastic continuity of the system with respect to the noise intensity. In Section 3, we show that the probability density function of the solution satisfies a Fokker-Planck type partial differential equation with respect to the fractional white noise measure. Then, we implement numerical experiments to examine the probability density evolutions as the Hurst parameter varies. 
As to one linear system and one double-well system, certain behaviors of the probability density functions are observed. 2. Analysis Framework and Small Noise Limit. The white noise framework is a natural and flexible approach to stochastic analysis, and fractional white noise analysis treats correlated noise as a functional of standard white noise. This approach has been shown to be very effective in investigating distributions and path properties of stochastic processes. In the following, we describe the fractional white noise analysis framework. Let the Schwartz space of rapidly decreasing smooth functions on ℝ and its dual, the space of tempered distributions, be given, together with the corresponding dual pairing. Define the operator in (3), where B(·,·) is the beta function. Lemma 1. The operator defined in (4) is the dual map of the operator in (3), in the sense of (5). For now we can only prove that this linear map is continuous from the Schwartz space to the space of tempered distributions; since the original operator is not continuous on the Schwartz space (indeed, it is not even a proper operator there), we cannot obtain a dual map by duality alone. By using Itô's regularization theorem, we construct a unique tempered-distribution-valued random variable satisfying (6), which extends the map in view of (5). Theorem 2. Let the image measure of the white noise measure induced by the map T be given. Then, for any test function, the distribution under the image measure is the same as under the original measure. In particular, the constructed process is a fractional Brownian motion with Hurst parameter H. Moreover, it can be represented via the standard Brownian motion as in (8). (See the proof in [20].) Let the filtrations generated by the fractional and the standard Brownian motions be given, respectively. Then, in view of (8), the former is contained in the latter, and for any time they coincide almost surely. So the filtered probability space of the standard motion is an extension of that of the fractional one. Thus, stochastic analysis with respect to the fractional measure can be reduced to the standard white noise framework naturally. Therefore, we choose the standard white noise measure as the reference measure rather than the fractional one; this treatment is more useful and more convenient for applications. For more details, we refer to [20] and the references therein. 2.2. 
Small Noise Limit

Now, we consider the SDE (2) in the fractional white noise framework, and investigate the impact of noise on the corresponding deterministic dynamical system (10), dz/dt = f(z), which is solvable on any finite time interval [0, T]. We have the following result.

Theorem 3. As the noise intensity ε tends to zero, the solution of (2) converges in probability to the solution of (10) uniformly on any finite time interval [0, T].

Proof. First, we rewrite the equation in integral form. Then, since the drift f satisfies a Lipschitz condition with Lipschitz constant L, the Gronwall inequality yields a pathwise bound on the deviation between the two solutions. Hence, for any small enough δ > 0, the probability of a deviation exceeding δ vanishes, which completes the proof as ε → 0. In the final step, we have used the self-similarity of the fractional Brownian motion.

This theorem provides an estimate for the deviation of random solution orbits from the corresponding deterministic orbits. Note that the expectation in the above theorem corresponds to the fractional white noise measure; henceforth, we take all expectations with respect to the fractional white noise measure (i.e., for simplicity, we omit the subscript mentioned above).

3. Probability Density Evolution

For an SDE such as (2), the probability density function of the solution carries significant dynamical information. We study it here by examining a fractional Fokker-Planck type equation. The key step in the derivation is the application of Itô's formula for SDEs driven by fractional Brownian motion, within the fractional white noise analysis framework [1, 10, 16, 20, 21]. We sketch the derivation here. By Itô's formula ([10], Theorem 6.3.6), for a twice continuously differentiable function φ with compact support, we obtain an expansion of φ(X_t). Taking expectations on both sides, letting p(x, t) denote the probability density function of the solution X_t of system (2), and integrating by parts (using that p and its derivative vanish as |x| → ∞), we obtain the Fokker-Planck type equation

∂p/∂t = −∂(f(x) p)/∂x + ε² H t^{2H−1} ∂²p/∂x².   (17)

In the following, we numerically simulate this partial differential equation for two special cases, a double-well drift and a linear drift, with finite noise intensity (for simplicity we take ε = 1).
Through these two special cases, we expect to illustrate the impact of correlated noises on additive dynamical systems as the Hurst parameter varies. Here, we apply the popular Crank-Nicolson scheme in Matlab to (17) with zero boundary values; the grid size is 0.05, there are 801 grid points in total, and the time step size is 0.01. The initial probability density function is taken to be standard normal, that is, p(x, 0) = (2π)^{−1/2} e^{−x²/2}. Since the resulting linear system is tridiagonal, it can be solved efficiently by the Thomas algorithm. Moreover, the method also applies smoothly for other initial conditions and other drift coefficients, for instance an initial uniform distribution.

3.1. Numerical Simulation

We first simulate the dynamical evolution of the probability density function for the stochastic differential equation (2) with the double-well drift, for various values of the Hurst parameter H. The double-well system is a rich and typical model for understanding numerous physical or geophysical systems [22, 23], with attention focused on the maxima (minima), symmetry, kurtosis, and so forth. As observed in Figure 1, the probability density function evolves from a unimodal (one peak) shape to a flat top and then to a bimodal (two peaks) shape, for various Hurst parameter values, as time increases. At the same time, the effect of the Hurst parameter on the dynamics is significant: as H increases, the plateau of the density becomes lower once time is large enough.

Figure 1. Plot of the probability density function for the double-well drift at several values of H and several times.

Now, for comparison, we investigate the dynamical evolution of the probability density function for the stochastic differential equation (2) with the linear drift, a rich toy example for understanding dynamical systems. As observed in Figure 2, at given time instants the density's peak becomes higher as H increases. This illustrates the significant and distinguishing influence of the Hurst parameter on the dynamics as time evolves.
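The Crank-Nicolson/Thomas-algorithm solve described above can be sketched in a few lines. The sketch below is not the authors' Matlab code: it assumes the Fokker-Planck type equation has the form ∂p/∂t = −∂(f(x)p)/∂x + H t^{2H−1} ∂²p/∂x² (the diffusion coefficient coming from the variance t^{2H} of fractional Brownian motion, with ε = 1), takes the double-well drift to be f(x) = x − x³, and uses a smaller domain and time step than the paper for robustness:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def evolve_density(H=0.7, T=1.0, dt=0.001, dx=0.05, L=4.0):
    x = np.arange(-L, L + dx / 2, dx)
    fx = x - x**3                                # assumed double-well drift
    p = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # standard normal initial density
    n = len(x)
    for k in range(int(round(T / dt))):
        t_mid = (k + 0.5) * dt                   # midpoint time of the CN step
        D = H * t_mid**(2 * H - 1)               # time-dependent diffusion (H > 1/2)
        r = D * dt / (2 * dx**2)
        s = dt / (4 * dx)
        # (dt/2)*A with A p = -(f p)_x + D p_xx, central differences, p = 0 at the ends
        sub = s * fx[:-2] + r                    # acts on p_{i-1}
        sup = -s * fx[2:] + r                    # acts on p_{i+1}
        rhs = (1 - 2 * r) * p[1:-1] + sub * p[:-2] + sup * p[2:]
        p[1:-1] = thomas(-sub, np.full(n - 2, 1 + 2 * r), -sup, rhs)
        p[0] = p[-1] = 0.0
    return x, p
```

With these settings the computed density stays symmetric and conserves total probability to a few parts per thousand, and the qualitative behavior matches the discussion of Figure 1.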
A larger H makes the values of the solution of (2) more concentrated, but the long time effect shows that the values of the solution become more dispersed.

Figure 2. Plot of the probability density function for the linear drift at several values of H and several times.

Chujin Li acknowledges the support of Self-Renovation Project HUST2013QN171.

C. Bender, “An Itô formula for generalized functionals of a fractional Brownian motion with arbitrary Hurst parameter,” Stochastic Processes and their Applications, vol. 104, no. 1, pp. 81–106, 2003. J. Duan, C. Li, and X. Wang, “Modeling colored noise by fractional Brownian motion,” Interdisciplinary Mathematical Sciences, vol. 8, pp. 119–130, 2009. L. Arnold, Stochastic Differential Equations, John Wiley & Sons, New York, NY, USA, 1974. A. Du and J. Duan, “A stochastic approach for parameterizing unresolved scales in a system with memory,” Journal of Algorithms & Computational Technology, vol. 3, no. 3, pp. 393–405, 2009. J. Duan, “Stochastic modeling of unresolved scales in complex systems,” Frontiers of Mathematics in China, vol. 4, no. 3, pp. 425–436, 2009. B. Chen and J. Duan, “Stochastic quantification of missing mechanisms in dynamical systems,” Interdisciplinary Mathematical Sciences, vol. 8, pp. 67–76, 2009. D. Nualart, “Stochastic calculus with respect to the fractional Brownian motion and applications,” Contemporary Mathematics, vol. 336, pp. 3–39, 2003. E. Alòs and D. Nualart, “Stochastic integration with respect to the fractional Brownian motion,” Stochastics and Stochastics Reports, vol. 75, no. 3, pp. 129–152, 2003. A. Chronopoulou and F. Viens, “Hurst index estimation for self-similar processes with long-memory,” Interdisciplinary Mathematical Sciences, vol. 8, pp. 91–118, 2009. Y. S.
Mishura, Stochastic Calculus for Fractional Brownian Motion and Related Processes, Springer, New York, NY, USA, 2008. C. A. Tudor and F. G. Viens, “Variations and estimators for self-similarity parameters via Malliavin calculus,” The Annals of Probability, vol. 37, no. 6, pp. 2093–2134, 2009. S. J. Lin, “Stochastic analysis of fractional Brownian motions,” Stochastics and Stochastics Reports, vol. 55, no. 1-2, pp. 121–140, 1995. T. E. Duncan, Y. Hu, and B. Pasik-Duncan, “Stochastic calculus for fractional Brownian motion. I. Theory,” SIAM Journal on Control and Optimization, vol. 38, no. 2, pp. 582–612, 2000. L. Decreusefond and A. S. Üstünel, “Stochastic analysis of the fractional Brownian motion,” Potential Analysis, vol. 10, no. 2, pp. 177–214, 1999. F. Baudoin and L. Coutin, “Operators associated with a stochastic differential equation driven by fractional Brownian motions,” Stochastic Processes and Their Applications, vol. 117, no. 5, pp. 550–574, 2007. M. A. Diop and Y. Ouknine, “A linear stochastic differential equation driven by a fractional Brownian motion with Hurst parameter H > 1/2,” Statistics & Probability Letters, vol. 81, no. 8, pp. 1013–1020, 2011. B. Saussereau, “A stability result for stochastic differential equations driven by fractional Brownian motions,” International Journal of Stochastic Analysis, vol. 2012, Article ID 281474, 13 pages, 2012. L. Denis, M. Erraoui, and Y.
Ouknine, “Existence and uniqueness for solutions of one dimensional SDE’s driven by an additive fractional noise,” Stochastics and Stochastics Reports, vol. 76, no. 5, pp. 409–427, 2004. Z. Huang and C. Li, “On fractional stable processes and sheets: white noise approach,” Journal of Mathematical Analysis and Applications, vol. 325, no. 1, pp. 624–635, 2007. G. Ünal, “Fokker-Planck-Kolmogorov equation for fBM: derivation and analytical solution,” in Proceedings of the 12th Regional Conference, pp. 53–60, Islamabad, Pakistan, 2006. D. Farrelly and J. E. Howard, “Double-well dynamics of two ions in the Paul and Penning traps,” Physical Review A, vol. 49, no. 2, pp. 1494–1497, 1994. E. Kierig, U. Schnorrberger, A. Schietinger, J. Tomkovic, and M. K. Oberthaler, “Single-particle tunneling in strongly driven double-well potentials,” Physical Review Letters, vol. 100, no. 19, Article ID 190405, 2008. Copyright © 2014 Chujin Li and Jinqiao Duan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Sandoh, Hiroaki; Larke, Roy

The present study proposes a theoretical model to test sales velocity for new products introduced in small format retail stores. The model is designed to distinguish fast moving products within a relatively short period. Under the proposed model, the sales of a newly introduced product are monitored for a prespecified period T, e.g., one week, and if the number of items sold over T is equal to or greater than a prespecified integer k, the product is considered a fast moving product and is carried over to the following sales periods. A slow moving product can be quickly replaced with alternative merchandise in order to make the best use of shelf space. The paper first presents definitions of fast and slow moving products; a sales test policy based on the model is then formulated, in which the expected loss is minimized with respect to the integer k. Numerical examples based on actual data collected from a convenience store in Japan are also presented to illustrate the theoretical underpinnings of the proposed sales test model.

Keywords: sales test, fast moving product, slow moving product, expected loss

Sandoh, Hiroaki; Larke, Roy. A theoretical model for testing new product sales velocity at small format retail stores. RAIRO - Operations Research - Recherche Opérationnelle, Tome 36 (2002) no. 2, pp. 157-172. doi: 10.1051/ro:2002009. http://www.numdam.org/articles/10.1051/ro:2002009/
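To make the threshold policy described above concrete, here is a small illustrative sketch. It is not from the paper: the Poisson demand assumption, the prior probability q of a product being a fast mover, and the two loss constants are all invented for illustration. The integer k is chosen to minimize the expected loss from dropping a fast mover (fewer than k sold) or keeping a slow mover (k or more sold):

```python
from math import exp, factorial

def pois_cdf(k, mu):
    # P(N <= k) for N ~ Poisson(mu); k < 0 gives probability 0
    return sum(exp(-mu) * mu**i / factorial(i) for i in range(k + 1)) if k >= 0 else 0.0

def best_threshold(lam_fast, lam_slow, T, q, c_drop_fast, c_keep_slow, k_max=50):
    """Pick the integer k minimizing the expected loss:
    drop a fast mover if N < k, keep a slow mover if N >= k."""
    best = None
    for k in range(k_max + 1):
        miss_fast = pois_cdf(k - 1, lam_fast * T)        # fast mover sells < k
        keep_slow = 1.0 - pois_cdf(k - 1, lam_slow * T)  # slow mover sells >= k
        loss = q * c_drop_fast * miss_fast + (1 - q) * c_keep_slow * keep_slow
        if best is None or loss < best[1]:
            best = (k, loss)
    return best
```

With well-separated demand rates the optimal k lands between the two means, near the point where the two Poisson likelihoods cross.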
An Explicit Single-Step Nonlinear Numerical Method for First Order Initial Value Problems (IVPs)

Omolara Fatimah Bakre1, Ashiribo Senapon Wusu2, Moses Adebowale Akanbi2
1Department of Mathematics & Statistics, Federal College of Education (Technical), Lagos, Nigeria.

Interest in the construction of efficient methods for solving initial value problems whose formulation or solution has peculiar properties has recently been gaining wide popularity. Based on the assumption that the solution is locally representable by a nonlinear rational expression, this work presents an explicit single-step nonlinear method for solving first order initial value problems whose solution possesses a singularity. The stability and convergence properties of the constructed scheme are also presented. Implementation of the new method on some standard test problems, compared with methods discussed in the literature, demonstrates its accuracy and efficiency.

Keywords: Ordinary Differential Equations, First Order, Initial Value Problems, Nonlinear, Singularities

Bakre, O., Wusu, A. and Akanbi, M. (2020) An Explicit Single-Step Nonlinear Numerical Method for First Order Initial Value Problems (IVPs). Journal of Applied Mathematics and Physics, 8, 1729-1735. doi: 10.4236/jamp.2020.89130.

Many of the numerical methods for obtaining the solution of the first order ordinary differential equation {y}^{\prime }=f\left(x,y\left(x\right)\right), x\in \left[{x}_{0},X\right], y\left({x}_{0}\right)=\eta (1) are based on the assumption that the solution is locally representable by a polynomial. However, when a given initial value problem or its theoretical solution y\left(x\right) is known to possess a singularity, it is particularly inappropriate to represent y\left(x\right) in the neighbourhood of the singularity by a polynomial [1] [2].
This is evident as Runge-Kutta type methods, Obrechkoff methods and general linear multistep methods usually produce very poor solutions around singularity points [1] [3] [4] [5]. The authors in [4] were the first to develop quadrature formulas based on rational interpolating functions. On the other hand, the rational interpolation scheme proposed in [2] was seen to be effective in the neighbourhood of the singularity and even beyond, as reiterated in [5]. The work of the authors in [4] was modified by Luke et al. [2], who replaced the general rational function of [4] by F\left(x\right)=\frac{{P}_{m}\left(x\right)}{{Q}_{n}\left(x\right)}, where {P}_{m}\left(x\right) and {Q}_{n}\left(x\right) are polynomials of degree m and n, respectively. The resulting schemes require analytic generation of first and higher order derivatives; this is their major limitation. Since rational functions are more appropriate than polynomials for representing functions close to singularities, the limitation is overcome by a local representation of the theoretical solution with a rational expression. Interestingly, this approach appears to be promising, as several methods are now being constructed in this direction [6] [7] [8] [9] [10]. The works of the authors in [6] [7] [9] [10] [11] [12] showed that solutions around singularity points are well approximated by this approach. In this work, an explicit single-step nonlinear method involving higher derivatives of the state function for solving (1) is presented. The local truncation error and absolute stability of the new method are also discussed. We assume that the theoretical solution y\left(x\right) of (1) can locally be represented by the rational interpolant r\left(x\right)=\frac{{a}_{0}+{a}_{1}x+{a}_{2}{x}^{2}+{a}_{3}{x}^{3}+{a}_{4}{x}^{4}}{{b}_{0}+x}. (2)
To construct an explicit single-step method with (2) for (1), it requires that r\left(x\right) \begin{array}{l}r\left({x}_{n+j}\right)={y}_{n+j},\text{ }j=0,1,\hfill \\ {r}^{\left(i\right)}\left({x}_{n+j}\right)={y}_{n+j}^{\left(i\right)},\text{ }j=0,\text{ }i=0,1,2,3,4,5.\hfill \end{array}\right\} Substituting for expressions and simplifying (3) yields \begin{array}{l}{y}_{n}=\frac{{a}_{4}{x}_{n}^{4}+{a}_{3}{x}_{n}^{3}+{a}_{2}{x}_{n}^{2}+{a}_{1}{x}_{n}+{a}_{0}}{{b}_{0}+{x}_{n}}\hfill \\ {y}_{n+1}=\frac{{a}_{4}{\left(h+{x}_{n}\right)}^{4}+{a}_{3}{\left(h+{x}_{n}\right)}^{3}+{a}_{2}{\left(h+{x}_{n}\right)}^{2}+{a}_{1}\left(h+{x}_{n}\right)+{a}_{0}}{{b}_{0}+h+{x}_{n}}\hfill \\ {{y}^{\prime }}_{n}=\frac{4{a}_{4}{b}_{0}{x}_{n}^{3}+3{a}_{3}{b}_{0}{x}_{n}^{2}+2{a}_{2}{b}_{0}{x}_{n}+{a}_{1}{b}_{0}+3{a}_{4}{x}_{n}^{4}+2{a}_{3}{x}_{n}^{3}+{a}_{2}{x}_{n}^{2}-{a}_{0}}{{\left({b}_{0}+{x}_{n}\right)}^{2}}\hfill \\ {y}_{n}^{\left(2\right)}=2\left(\frac{{b}_{0}\left({b}_{0}\left({a}_{4}{b}_{0}^{2}-{a}_{3}{b}_{0}+{a}_{2}\right)-{a}_{1}\right)+{a}_{0}}{{\left({b}_{0}+{x}_{n}\right)}^{3}}-{a}_{4}{b}_{0}+3{a}_{4}{x}_{n}+{a}_{3}\right)\hfill \\ {y}_{n}^{\left(3\right)}=6{a}_{4}-\frac{6\left({b}_{0}\left({b}_{0}\left({a}_{4}{b}_{0}^{2}-{a}_{3}{b}_{0}+{a}_{2}\right)-{a}_{1}\right)+{a}_{0}\right)}{{\left({b}_{0}+{x}_{n}\right)}^{4}}\hfill \\ {y}_{n}^{\left(4\right)}=\frac{24\left({a}_{4}{b}_{0}^{4}-{a}_{3}{b}_{0}^{3}+{a}_{2}{b}_{0}^{2}-{a}_{1}{b}_{0}+{a}_{0}\right)}{{\left({b}_{0}+{x}_{n}\right)}^{5}}\hfill \\ {y}_{n}^{\left(5\right)}=\frac{120\left({a}_{4}{b}_{0}^{4}-{a}_{3}{b}_{0}^{3}+{a}_{2}{b}_{0}^{2}-{a}_{1}{b}_{0}+{a}_{0}\right)}{{\left({b}_{0}+{x}_{n}\right)}^{6}}\hfill \end{array}\right\} Eliminating the undetermined coefficients {a}_{0} {a}_{1} {a}_{2} {a}_{3} {a}_{4} {b}_{0} in (4) results in {y}_{n+1}={y}_{n}+h{{y}^{\prime 
}}_{n}+\frac{1}{2}{h}^{2}{{y}^{″}}_{n}+\frac{1}{6}{h}^{3}{y}_{n}^{\left(3\right)}-\frac{5{h}^{4}{\left({y}_{n}^{\left(4\right)}\right)}^{2}}{24\left(h{y}_{n}^{\left(5\right)}-5{y}_{n}^{\left(4\right)}\right)}. (5) The resulting method (5) is explicit, self-starting and nonlinear. We shall refer to (5) as NLM4, the method proposed in this work. The new method NLM4 is suitable for solving initial value problems whose solution possesses singularities.

3. Local Truncation Error and Absolute Stability of the Constructed Method

In this section, the local truncation error (LTE) and the absolute stability properties of the new method are considered. Local Truncation Error: The local truncation error {T}_{n+1} at {x}_{n+1} of the general explicit one step method {y}_{n+1}={y}_{n}+h\varphi \left({x}_{n},{y}_{n},h\right) is {T}_{n+1}=y\left({x}_{n+1}\right)-y\left({x}_{n}\right)-h\varphi \left({x}_{n},y\left({x}_{n}\right),h\right), where y\left({x}_{n}\right) is the theoretical solution. Using this definition, the local truncation error of the constructed one step method can be written as {T}_{n+1}=y\left({x}_{n+1}\right)-{y}_{n+1}.

3.2. Order of a Numerical Method

A numerical method is said to be of order p if p is the largest integer for which {T}_{n+1}=\mathcal{O}\left({h}^{p+1}\right) for every n and p\ge 1. Following this definition, the local truncation error of the method constructed in this work is obtained as the residual when {y}_{n+1} is compared with the Taylor expansion of y\left({x}_{n+1}\right). The local truncation error of the constructed method is {T}_{n+1}=\frac{1}{600{y}^{\left(4\right)}}\left({h}^{6}{\left({y}^{\left(5\right)}\right)}^{2}\right). A scheme is said to be consistent if the difference equation of the integrating formula exactly approximates the differential equation it intends to solve as the step size approaches zero.
In order to establish the consistency property of the constructed method, it is sufficient to show that \underset{h\to 0}{\mathrm{lim}}\frac{{y}_{n+1}-{y}_{n}}{h}={{y}^{\prime }}_{n}. Indeed, \underset{h\to 0}{\mathrm{lim}}\frac{{y}_{n+1}-{y}_{n}}{h}=\underset{h\to 0}{\mathrm{lim}}\left({{y}^{\prime }}_{n}+\frac{1}{2}h{{y}^{″}}_{n}+\frac{1}{6}{h}^{2}{y}_{n}^{\left(3\right)}-\frac{5{h}^{3}{\left({y}_{n}^{\left(4\right)}\right)}^{2}}{24\left(h{y}_{n}^{\left(5\right)}-5{y}_{n}^{\left(4\right)}\right)}\right)={{y}^{\prime }}_{n}, which indicates that the constructed scheme satisfies the consistency property. To determine the stability behaviour of the constructed scheme, the scheme is applied to the standard test problem {y}^{\prime }=\lambda y, Re\left(\lambda \right)<0, and the stability function R\left(z\right)=\frac{{y}_{n+1}}{{y}_{n}}, with z=\lambda h, is obtained. The stability function of (5) is R\left(z\right)=\frac{{y}_{n+1}}{{y}_{n}}=\frac{-{z}^{4}-8{z}^{3}-36{z}^{2}-96z-120}{24\left(z-5\right)}, and the region of absolute stability is shown in Figure 1.

Figure 1. Region of absolute stability of (5).

The first problem considered in this work is the nonlinear initial value problem {y}^{\prime }=1+{y}^{2}; y\left(0\right)=1, whose theoretical solution is y\left(x\right)=\mathrm{tan}\left(x+\frac{\pi }{4}\right). For this problem, the absolute errors of the results obtained by the method proposed in this work are first compared with those of the nonlinear one-step methods of [7] and the derivative-free methods proposed in [11], as shown in Figure 2. A comparison of the maximum absolute errors obtained by the proposed method against those produced by the methods of the authors in [4] [6] [7] [9] [13] is also presented in Figure 3.

Figure 2. Logarithm of absolute errors for the solutions of Problem 1 with step-size h = 0.01.

Figure 3.
Log plot of maximum absolute errors for Problem 1 as a function of the step-size h=\frac{0.8}{{2}^{k}},k=4\left(1\right)6.

Figure 4. Logarithm of absolute errors for the solutions of (16) with step-size h = 0.01.

The second test problem considered is {y}^{\prime }={y}^{2}; y\left(0\right)=1, whose theoretical solution is y\left(x\right)=\frac{1}{1-x}. The logarithm of absolute errors for the solutions obtained is compared with other methods discussed in [12], as shown in Figure 4. The explicit single-step nonlinear method constructed in this work is consistent and absolutely stable. Its region of absolute stability is larger than those of the methods discussed in the literature, and the method gave more accurate results on the standard test problems than the other methods discussed. Hence, the method is suitable for solving problems whose solution possesses a singularity.

[1] Lambert, J.D. (1973) Computational Methods in Ordinary Differential Equations. Academic Press, Cambridge. [2] Luke, Y.L., Fair, W. and Wimp, J. (1975) Predictor Corrector Formulas Based on Rational Interpolants. Computers & Mathematics with Applications, 1, 3-12. [3] Fatunla, S.O. (1991) Numerical Methods for IVPs in ODEs. Academic Press, New York. [4] Lambert, J.D. and Shaw, B. (1965) On the Numerical Solution of y’ = f(x,y) by a Class of Formulae Based on Rational Approximation. Mathematics of Computation, 19, 456-462. [5] Fatunla, S.O. (1986) Numerical Treatment of Singular Initial Value Problems. An International Journal of Computers and Mathematics with Applications, 128, 1109-1115. [6] Ikhile, M.N.O. (2001) Coefficients for Studying One-Step Rational Schemes for IVPs in ODEs. Computers and Mathematics with Applications, 41, 769-781. [7] van Niekerk, F.D. (1987) Non-Linear One-Step Methods for Initial Value Problems. Journal of Computational and Applied Mathematics, 13, 367-371. [8] van Niekerk, F.D.
(1988) Rational One Step Methods for Initial Value Problems, Computers & Mathematics with Applications, 16, 1035-1039. [9] Teh, Y.Y. and Yaacob, N. (2013) One-Step Exponential-Rational Methods for the Numerical Solution of First Order Initial Value Problems. Sains Malaysiana, 42, 456-462. [10] Fatunla, S.O. (1982) Nonlinear Multistep Methods for Initial Value Problems. An International Journal of Computers and Mathematics with Applications, 8, 231-239. [11] Nkatse, T. and Tshelametse, R. (2015) Analysis of Derivative Free Rational Scheme. MATEMATIKA, 31, 135-142. [12] Tasneem, A., Asif, A.S. and Sania, Q. (2018) Development of a Nonlinear Hybrid Numerical Method. Advances in Differential Equations and Control Process, 19, 275-285. https://doi.org/10.17654/DE019030275 [13] Ying, T.Y., Omar, Z. and Mansor, K.H. (2014) Modified Exponential-Rational Methods for the Numerical Solution of First Order Initial Value Problems. Sains Malaysiana, 43, 1951-1959.
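The scheme (5) is easy to reproduce. The sketch below is not the authors' code; it applies NLM4 to the first test problem y' = 1 + y², y(0) = 1 (whose solution tan(x + π/4) has a pole at x = π/4), generating the higher total derivatives by hand:

```python
import math

def nlm4_step(y, h, derivs):
    """One step of method (5):
    y_{n+1} = y + h y' + h^2/2 y'' + h^3/6 y''' - 5 h^4 (y4)^2 / (24 (h y5 - 5 y4))."""
    y1, y2, y3, y4, y5 = derivs(y)
    return (y + h * y1 + h**2 * y2 / 2 + h**3 * y3 / 6
            - 5 * h**4 * y4**2 / (24 * (h * y5 - 5 * y4)))

def derivs_riccati(y):
    # total derivatives along solutions of y' = 1 + y^2
    y1 = 1 + y * y
    y2 = 2 * y * y1
    y3 = 2 * y1 * y1 + 2 * y * y2
    y4 = 6 * y1 * y2 + 2 * y * y3
    y5 = 6 * y2 * y2 + 8 * y1 * y3 + 2 * y * y4
    return y1, y2, y3, y4, y5

h, y = 0.01, 1.0
for _ in range(50):          # integrate from x = 0 to x = 0.5; the pole is at pi/4
    y = nlm4_step(y, h, derivs_riccati)
exact = math.tan(0.5 + math.pi / 4)
error = abs(y - exact)
```

Expanding the last term of (5) in powers of h reproduces the Taylor terms h⁴y⁽⁴⁾/24 and h⁵y⁽⁵⁾/120, so the scheme matches the Taylor series through h⁵, consistent with the truncation error quoted in the paper; the error at x = 0.5 with h = 0.01 is well below 10⁻⁵ even this close to the pole.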
Abney-McPeek, Fiona1; An, Serena2; Ng, Jakin S.2
1 Harvard University, Cambridge, MA, USA. 2 Massachusetts Institute of Technology, Cambridge, MA, USA.

The Schur polynomials {s}_{\lambda } are essential in understanding the representation theory of the general linear group. They also describe the cohomology ring of the Grassmannians. For \rho =\left(n,n-1,\cdots ,1\right) a staircase shape and \mu \subseteq \rho a subpartition, the Stembridge equality states that {s}_{\rho /\mu }={s}_{\rho /{\mu }^{T}}. This equality provides information about the symmetry of the cohomology ring. The stable Grothendieck polynomials {G}_{\lambda } and the dual stable Grothendieck polynomials {g}_{\lambda }, developed by Buch, Lam, and Pylyavskyy, are variants of the Schur polynomials and describe the K-theory of the Grassmannians. Using the Hopf algebra structure of the ring of symmetric functions and a generalized Littlewood–Richardson rule, we prove that {G}_{\rho /\mu }={G}_{\rho /{\mu }^{T}} and {g}_{\rho /\mu }={g}_{\rho /{\mu }^{T}}, the analogues of the Stembridge equality for the skew stable and skew dual stable Grothendieck polynomials.

Keywords: Stembridge equality, Grothendieck polynomial, Young tableau, Hopf algebra.

Abney-McPeek, Fiona; An, Serena; Ng, Jakin S. The Stembridge equality for skew stable Grothendieck polynomials and skew dual stable Grothendieck polynomials. Algebraic Combinatorics, Volume 5 (2022) no. 2, pp. 187-208. doi: 10.5802/alco.199.
https://alco.centre-mersenne.org/articles/10.5802/alco.199/

[1] Alwaise, Ethan; Chen, Shuli; Clifton, Alexander; Patrias, Rebecca; Prasad, Rohil; Shinners, Madeline; Zheng, Albert. Coincidences among skew stable and dual stable Grothendieck polynomials, Involve, Volume 11 (2018) no. 1, pp. 143-167. [2] Buch, Anders S. A Littlewood–Richardson rule for the K-theory of Grassmannians, Acta Math., Volume 189 (2002) no. 1, pp. 37-78. [3] Fomin, Sergey; Kirillov, Anatol N. Grothendieck polynomials and the Yang–Baxter equation, Formal power series and algebraic combinatorics/Séries formelles et combinatoire algébrique, DIMACS, Piscataway, NJ, pp. 183-189. [4] Galashin, Pavel. A Littlewood–Richardson rule for dual stable Grothendieck polynomials, J. Combin. Theory Ser. A, Volume 151 (2017), pp. 23-35. [5] Grinberg, Darij; Reiner, Victor. Hopf Algebras in Combinatorics (2020), https://arxiv.org/abs/1409.8356. [6] Lam, Thomas; Pylyavskyy, Pavlo. Combinatorial Hopf algebras and K-homology of Grassmannians, Int. Math. Res. Not. (2007) no. 24, Paper no. rnm125, 48 pages. [7] Reiner, Victor; Shaw, Kristin M.; van Willigenburg, Stephanie. Coincidences among skew Schur functions, Adv. Math., Volume 216 (2007) no. 1, pp. 118-152. [8] Stanley, Richard P. Enumerative combinatorics. Volume 1, Cambridge Studies in Advanced Mathematics, 49, Cambridge University Press, 2012, xiv+626 pages (second edition). [9] Yeliussizov, Damir. Duality and deformations of stable Grothendieck polynomials, J. Algebraic Combin., Volume 45 (2017) no. 1, pp. 295-344.
Scattering theory

Top: the real part of a plane wave travelling upwards. Bottom: the real part of the field after inserting into the path of the plane wave a small transparent disk with an index of refraction higher than that of the surrounding medium. This object scatters part of the wave field, although at any individual point the wave's frequency and wavelength remain intact.

Conceptual underpinnings

Composite targets and range equations

When the target is a set of many scattering centers whose relative positions vary unpredictably, it is customary to think of a range equation whose arguments take different forms in different application areas. In the simplest case, consider an interaction that removes particles from the "unscattered beam" at a uniform rate proportional to the incident flux I (the number of particles per unit area per unit time), i.e.

dI/dx = −Q I,

so that over a thickness Δx the beam attenuates as

I = I₀ e^{−QΔx} = I₀ e^{−Δx/λ} = I₀ e^{−σηΔx} = I₀ e^{−ρΔx/τ},

where λ = 1/Q is the mean free path, σ is the interaction cross-section, η is the number density of scatterers (so Q = ση), ρ is the mass density, and τ = ρλ is the mean free path expressed as a mass thickness.

In theoretical physics

Elastic and inelastic scattering

The term "elastic scattering" implies that the internal states of the scattering particles do not change, so they emerge unchanged from the scattering process. In inelastic scattering, by contrast, the particles' internal state is changed; this may amount to exciting some of the electrons of a scattering atom, or the complete annihilation of a scattering particle and the creation of entirely new particles.

The mathematical framework

^ R. F. Egerton (1996) Electron energy-loss spectroscopy in the electron microscope (Second Edition, Plenum Press, NY) ISBN 0-306-45223-5
^ Ludwig Reimer (1997) Transmission electron microscopy: Physics of image formation and microanalysis (Fourth Edition, Springer, Berlin) ISBN 3-540-62568-2
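The range equation above is straightforward to evaluate numerically. A minimal sketch (the function and parameter names are ours, not from the article): the transmitted fraction of a beam through a slab follows from the cross-section σ, the number density of scatterers η, and the thickness Δx, with mean free path λ = 1/(ση):

```python
import math

def transmitted_fraction(sigma, eta, dx):
    """I / I_0 = exp(-sigma * eta * dx) for a slab of thickness dx,
    scatterer number density eta, and interaction cross-section sigma."""
    return math.exp(-sigma * eta * dx)

def mean_free_path(sigma, eta):
    """lambda = 1 / Q with Q = sigma * eta."""
    return 1.0 / (sigma * eta)

# toy numbers: a slab one mean free path thick transmits a fraction 1/e
frac = transmitted_fraction(sigma=1e-28, eta=1e28, dx=1.0)
```

The same function covers the other forms of the range equation, since Q, 1/λ, ση, and ρ/τ are just different parameterizations of the same attenuation coefficient.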
Rational Equations | Brilliant Math & Science Wiki

A rational equation is an equation containing at least one fraction whose numerator and denominator are polynomials, \frac{P(x)}{Q(x)}. These fractions may be on one or both sides of the equation. A common way to solve these equations is to reduce the fractions to a common denominator and then solve the equality of the numerators. While doing this, we have to make sure to note cases where indeterminate or undefined forms like \frac{0}{0} or \frac{1}{0} may arise.

Solve \frac{1}{x} = 2. Looking at the equation, we can see that it is asking for the number whose reciprocal is 2. That number is \frac{1}{2}, and we can conclude that it is the solution. _\square

While it is possible to use this inspection method, it is easier to use a more general method. In general, if an equation is in the form of an irreducible proportion \frac{a}{b}=\frac{c}{d}, one can cross multiply to obtain the polynomial equation ad - bc = 0. This polynomial can then be solved using whatever method is appropriate, while noting that b \neq 0 and d \neq 0.

Solve \frac{2-x}{3+x} = \frac{1}{2}. Using the cross-multiplying method described above gives \begin{aligned} 3 + x &= 2(2-x) \\ 3 + x &= 4 - 2x \\ 3x& = 1 \\ x &= \frac{1}{3}.\ _\square \end{aligned}

This method can be extended to any rational equation. However, for expressions with more terms, instead of cross-multiplying we multiply both sides of the equation by the LCM of the denominators.

Find all the solutions of \frac{1}{x} + \frac{2}{1-x} = \frac{11}{x} + \frac{3}{x(2x+3)}. Note that x \neq 0, x \neq 1, x \neq \frac{-3}{2}, as these all lead to a zero denominator. Multiplying the whole equation by the LCM of the denominators, x(1-x)(2x+3), gives \begin{aligned} (1-x)(2x+3) + 2x(2x+3) &= 11(1-x)(2x+3) + 3(1-x)\\ -2x^2-x+3+4x^2+6x&=-22x^2-11x+33+3-3x\\ 24x^2 + 19x - 33 &= 0.
\end{aligned} Using the quadratic formula to solve this equation, we get x = \frac{-19 \pm \sqrt{3529}}{48}.\ _\square Solve \frac{1}{x-2}=\frac{1}{8}. Multiplying both sides by 8(x-2) gives \begin{aligned} 8 =& x - 2 \\ 10 =& x. \end{aligned} x=10 satisfies the given equation, so the answer is 10. _\square Solve \frac{1}{2x+3}=\frac{1}{x-5}. Multiplying both sides by (2x+3)(x-5) gives \begin{aligned} x-5 =& 2x + 3 \\ -8 =& x. \end{aligned} x=-8 satisfies the given equation, so the answer is -8. _\square Solve \frac{1}{(x+3)(x-2)}=\frac{1}{x-6}. Multiplying both sides by (x+3)(x-2)(x-6) gives \begin{aligned} x-6 =& (x+3)(x-2) \\ x-6 =& x^2 +x -6 \\ 0 =& x^2. \end{aligned} x=0 satisfies the given equation, so the answer is 0. _\square Solve \frac{x^2 + 3x}{x + 2}=\frac{-2x -6}{x + 2}. Multiplying both sides by x+2 gives \begin{aligned} x^2 + 3x =& -2x -6 \\ x^2 +5x +6 =& 0 \\ (x+2)(x+3) =& 0 . \end{aligned} x=-2 is not a solution because it makes the denominator of the given equation zero, so the answer is x=-3 . _\square If \frac{-4x^2 -4x + a}{2x + 1}=\frac{4x + 1}{2x + 1} has only one solution, what is a? Multiplying both sides by 2x+1 gives \begin{aligned} -4x^2 -4x + a =& 4x + 1 \\ -4x^2 - 8x + a - 1 =& 0. \end{aligned} The discriminant is \begin{aligned} \frac{D}4 =& (-4)^2 -(-4)(a-1) \\ =& 16 + 4a - 4 \\ =& 4a + 12 \\ =& 4(a+3). \end{aligned} If a = -3, then \begin{aligned} -4x^2 - 8x + a - 1 =& -4x^2 - 8x - 4 \\ =& -4 (x^2 +2x +1) \\ =& -4(x+1)^2 \\&= 0,\\ \end{aligned} and the double root x=-1 satisfies the given equation. If a > -3, then -4x^2 - 8x + a - 1 = 0 has two solutions. However, if one of them is -\frac{1}{2}, only the other solution remains, because substituting x = -\frac{1}{2} makes the denominators in the given equation zero. So assume one solution is x = -\frac{1}{2}: \begin{aligned} -4x^2 - 8x + a - 1 =& -1 + 4 + a - 1 \\ =& a +2 \\ =& 0. \end{aligned} If a = -2, then we again have only one valid solution. Therefore a = -2 \text{ or } -3.\ _\square If \frac{x^2 + 6x }{x^2 + 7x + 10}=\frac{-2x + a}{x^2 + 7x + 10} has only one solution, what is a? Multiplying both sides by x^2 + 7x + 10 gives \begin{aligned} x^2 + 6x =& -2x + a \\ x^2 + 8x - a =& 0. \end{aligned} The discriminant is \begin{aligned} \frac{D}4 = 4^2 + a = a + 16 . \end{aligned} If a = -16, then \begin{aligned} x^2 + 8x - a = x^2 + 8x + 16 = (x+4)^2 . 
\end{aligned} and the double root x=-4 satisfies the given equation. If a > -16, then x^2 + 8x - a = 0 has two solutions. However, if one of them is -2 or -5, only the other solution remains, because substituting x = -2 \text{ or } -5 makes the denominators in the given equation zero. First assume one solution is x = -2: \begin{aligned} x^2 + 8x - a =& 4 -16 - a \\ =& -12 - a \\ =& 0. \end{aligned} If a = -12, then we have only the one solution x = -6. Now assume one solution is x = -5: \begin{aligned} x^2 + 8x - a =& 25 - 40 - a \\ =& -15 - a \\ =& 0. \end{aligned} If a = -15, then we have only the one solution x = -3. Therefore a = -12, -15, -16.\ _\square Cite as: Rational Equations. Brilliant.org. Retrieved from https://brilliant.org/wiki/rational-equations/
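The pattern running through these examples, solve the polynomial obtained by clearing denominators and then discard any root that zeroes a denominator, can be sketched in plain Python (the helper names below are mine, not from the wiki):

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0 (assumes a != 0)."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []
    s = math.sqrt(disc)
    return sorted({(-b - s) / (2 * a), (-b + s) / (2 * a)})

def solve_cleared(a, b, c, denominators):
    """Solve the cleared quadratic, dropping roots that zero a denominator."""
    return [r for r in quadratic_roots(a, b, c)
            if all(abs(d(r)) > 1e-12 for d in denominators)]

# (x^2 + 3x)/(x + 2) = (-2x - 6)/(x + 2)  clears to  x^2 + 5x + 6 = 0
sols = solve_cleared(1, 5, 6, [lambda x: x + 2])
print(sols)  # [-3.0]
```

The extraneous root x = -2 is produced by the cleared polynomial but rejected by the denominator check, matching the hand calculation above.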
Demonstration of the functionality and normal operation of the Doubly-fed Induction Generator (DFIG) Wind Turbine model example The doubly-fed induction generator (DFIG) with the back-to-back converter is a system frequently used in wind turbines. Traditional wind turbines have fixed turning speeds, while the DFIG enables wind turbines to operate over a wide range of speeds. The back-to-back converter is connected to the rotor of the DFIG, and its purpose is to feed the rotor with currents of varying frequency, in order to reach the desired rotor speeds. This application note demonstrates the implementation of a DFIG wind turbine with a back-to-back converter controller. The simulation cases presented in this document cover the dynamic response of the DFIG to changes in wind speed and during the turbine braking process. The power contained in the form of kinetic energy in the wind, Pv, is expressed by: {P}_{v}=\frac{1}{2}\rho \pi {R}^{2}{V}_{v}^{3} where Vv is the average wind speed in the swept area A=\pi {R}^{2} , R is the rotor blade length (the radius of the swept area), and ρ is the air density. The wind turbine can recover only a part of that power (Pt): {P}_{t}=\frac{1}{2}\rho \pi {R}^{2}{V}_{v}^{3}{C}_{p} The power coefficient Cp is a dimensionless parameter that expresses the effectiveness of the wind turbine in the transformation of the kinetic energy of the wind into mechanical energy. This coefficient is a function of the wind speed, the speed of the rotor blades, and the pitch angle. 
For our Wind Turbine with DFIG model, the rotor blade length is set to R = 50m, while the air density is set to \rho =1.225\frac{\mathit{kg}}{{m}^{3}} . The pitch angle is automatically regulated in such a way that Cp changes as presented in Figure 1: Figure 1. Power coefficient (Cp) as a function of wind speed As mentioned before, the rotor blades can revolve over a range of different speeds. The curve of the high-speed shaft angular speed as a function of the wind speed can be observed in Figure 2. This graph can be divided into zones, where in each zone the change in angular speed is different. A different control of the DFIG is implemented based on the zone in which the machine currently operates. Note: The wind turbine contains a high-speed and a low-speed shaft, connected through a gearbox (where the low-speed shaft is attached to the rotor hub and the high-speed shaft is attached to the generator). Since the mechanical part is irrelevant for our model (and thus both the low-speed shaft angular speed and the gearbox ratio are also irrelevant), we will observe only the high-speed shaft and the torque applied to it. Figure 2. High-speed shaft angular speed as a function of wind speed Based on the wind speed and the power coefficient, we can calculate the mechanical torque on the high-speed shaft. The rotor-side converter control is set up so that it adjusts the rotor speed based on the current mechanical torque. That dependency can be observed in Figure 3. Figure 3. Mechanical torque as a function of high-speed shaft angular speed The nominal ratings (the machine's mechanical torque and angular speed) are shown in Figure 3 as the maximum point of both values, which occurs when the nominal mechanical torque is 20006 Nm and the nominal angular speed of the generator is 105.56\frac{\mathit{rad}}{s} . 
These nominal ratings are reached when the wind speed is 22\frac{m}{s} , and are maintained until the wind speed is considered too strong for the machine (in this case, 25\frac{m}{s} ). The DFIG with the back-to-back converter model is presented in Figure 4. The Stator Contactor can be seen on the stator side (bottom left) while the Converter Contactor is visible on the back-to-back converter side (bottom right). Their purpose is to disconnect the DFIG and back-to-back converter from the grid and bring the rotor speed to zero when the wind speed is out of the operating range or when the braking is activated. The contactors are controlled in the Contactor control subsystem. Figure 4. DFIG machine with back-to-back converter The Wind Turbine Control subsystem contains a C function which calculates the mechanical torque and speed reference, based on the wind speed. It also determines in which operating mode the DFIG is working. Figure 5. Rotor-side Converter Control subsystem In Figure 5, we see the Rotor-side Converter Control subsystem where the vector control is implemented. This control is in charge of supplying the rotor with voltages that take a different amplitude and frequency at steady state in order to reach the required rotor speed. The input ‘omega_r’ feeds this control as an output of a C function based on the wind speed input. The most important nominal values are provided in Table 1. Table 1. Nominal values of variables in the model Grid line voltage 690 V DFIG nominal power 2250 kW Adequate wind speed 3 - 25 m/s Nominal rotor speed 105.56 rad/s Nominal stator current 2241 A Back-to-back DC link range 950 - 2000 V Back-to-back inverter carrier frequency 4000 Hz No. of processing cores 3 Max. time slot utilization 53.12 % Simulation step, circuit solver 1e-6 s Execution rate, signal processing 100e-6 s The first simulation example observes the impact of a change in wind speed from 6\frac{m}{s} to 12\frac{m}{s} on the state of the model. 
You can change the Wind Speed in the SCADA panel, and the change in the speed of the rotor shaft, the power output, and the disturbance in Vbus can be seen in the Trace graphs in HIL SCADA (Figure 6). There is also a possibility to enable and disable the Wind Generator at any moment by unchecking the Enable machine checkbox. The state of the contactors (stator and back-to-back contactor) can also be observed in the SCADA panel. PI parameters of the regulators can be changed through SCADA, by changing the value in the Text box widgets. Figure 6. SCADA panel during the transition from one wind speed to another In Figure 7, we can observe the changes in power output, rotor and stator currents, and the invariable line voltage, measured by the three-phase meter component. Figure 7. Wind turbine response during the increase of wind speed from 6 to 12 meters per second If the wind speed becomes too high, or we disable the DFIG manually, an inverse mechanical torque is applied to the shaft in order to bring its speed to zero (by which we simulate braking). In addition, the DFIG and the back-to-back converter are disconnected from the grid, and Vbus begins to fall to zero. This process can be observed in Figure 8, where the DFIG is manually disabled while the wind speed is at 6\frac{m}{s} . Figure 8. Wind turbine response during wind turbine shutdown examples\models\grid-connected converters\back to back dfig wind back to back dfig wind.tse , back to back dfig wind.cus [1] Dimitrije Jelic
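As a quick numeric cross-check of the turbine power relation Pt = ½ρπR²Vv³Cp with the model's stated parameters (R = 50 m, ρ = 1.225 kg/m³), the sketch below uses an illustrative fixed Cp of my own choosing; in the actual model Cp varies with wind speed, rotor speed, and pitch angle as in Figure 1.

```python
import math

def turbine_power(v_wind, r=50.0, rho=1.225, cp=0.45):
    """Pt = 0.5 * rho * pi * R^2 * V^3 * Cp (cp=0.45 is an arbitrary example value)."""
    return 0.5 * rho * math.pi * r**2 * v_wind**3 * cp

# Power grows with the cube of wind speed: doubling 6 m/s -> 12 m/s gives 8x power,
# which is why the pitch regulation of Cp matters near the machine's 2250 kW rating.
p6, p12 = turbine_power(6.0), turbine_power(12.0)
print(p6 / 1e6, p12 / 1e6)   # in MW
```

The cubic dependence is the key takeaway: the wind-speed step from 6 to 12 m/s simulated above is an eightfold step in available wind power.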
Climate change/Risk Literacy - Wikiversity This learning resource starts with a statement that should be analysed by the learner. This analysis is used to trigger further exploration of the topic Climate Change and Risk Literacy. "Scientists mentioned in the context of climate change that we should limit the global warming to {\displaystyle 2^{o}C} . But I am exposed to a {\displaystyle 30^{o}C} temperature difference every year between summer and winter. So what is the problem?" (Misconception) Analyse the statement above and explain the misconception in the context of Risk Literacy. (Timespan between Activity and Impact/Outcome) Long time scales and human activities for risk mitigation. If something hurts immediately (e.g. touching a hot oven plate), our response is rapid (e.g. remove the hand from the hot oven plate). If we look at longer time scales (e.g. smoking and lung cancer), human activities do not "hurt" directly after the activity (e.g. smoking one cigarette). Explain the problem for risk awareness if outputs or outcomes are visible or observable only a long time after the activities that caused the ultimate changes. The outcome (change) seems to be decoupled from the activity due to the elapsed time. Identify the scientific evidence in psychology for this human behaviour (e.g. smoking) and apply these psychological results to Climate Change and Global Warming. (Probability and Uncertainty) In general, projections and models for the future are associated with an uncertainty. Explain how uncertainty and probability affect behavioural changes in the context of climate change. (Short-term goals/long-term goals) Risk literacy based on scientific evidence about climate change is a long-term goal for the human population. If more than 90% of the human population on earth battles to survive the next week, month or year (physically or through their workload for short-term goals), there might be no capacity left for efforts to accomplish long-term goals. 
Explain how accomplishing short-term goals (e.g. public security, food security and sustainability, a sustainable income to feed the family, ...) is necessary before long-term goals can be accomplished. Retrieved from "https://en.wikiversity.org/w/index.php?title=Climate_change/Risk_Literacy&oldid=2011753"
Volume 2022 Issue 1 | Lithosphere | GeoScienceWorld January - Volume 2022, Number 1 February - Volume 2022, Number Special 10 February - Volume 2022, Number Special 8 February - Volume 2022, Number Special 9 March - Volume 2022, Number Special 11 March - Volume 2022, Number Special 12 Teresa Jordan; Andrés Quezada; Nicolás Blanco; Arturo Jensen; Paulina Vásquez; Fernando Sepúlveda Lithosphere January 18, 2022, Vol.2022, 1024844. doi:https://doi.org/10.2113/2022/1024844 Evolution of Fault-Zone Hydromechanical Properties in Response to Different Cementation Processes C. R. Romano; R. T. Williams The Impact of Sulfate-Driven Anaerobic Oxidation of Methane on the Carbon Isotopic Composition in Marine Sediments: A Case Study from Two Sites in the Shenhu Area, Northern South China Sea Bin Wang; Huaiyan Lei; Fanfan Huang; Yuan Kong; Xijie Yin Lithosphere March 24, 2022, Vol.2022, 1985935. doi:https://doi.org/10.2113/2022/1985935 A Pore-Geometry-Based Thermodynamic Model for the Nanoconfined Phase Behavior in Shale Condensate Reservoirs Min Zhang; Renjing Liu; Jinghong Hu; Yuan Zhang Lithosphere April 11, 2022, Vol.2022, 1989358. doi:https://doi.org/10.2113/2022/1989358 C. Rodriguez Piceda; M. Scheck-Wenderoth; J. Bott; M. L. Gomez Dacal; M. Cacace; M. Pons; C. B. Prezzi; M. R. Strecker Lithosphere May 13, 2022, Vol.2022, 2237272. doi:https://doi.org/10.2113/2022/2237272 Xinglin Lu; Xuquan Hu; Zhengyu Xu; Xian Liao; Longhuan Liu; Zhihong Fu Elemental and Sr-Nd-Pb-Hf Isotope Signatures of Early Cretaceous Magmatic Rocks in the Wulian Area, Eastern Shandong: Implications for Crust-Mantle Interaction at the Edge of the Sulu Col... Qing Du; Fanchao Meng; Andrew C. 
Kerr; Yong Chen; Yulu Tian; Zhiping Wu; Yaoqi Zhou Chengshi Gan; Xin Qian; Yuejun Wang; Qinglai Feng; Yuzhi Zhang; Junaidi Bin Asis Fracturing Optimization Design of Fractured Volcanic Rock in Songliao Basin Based on Numerical Research and Orthogonal Test Zhaoyi Liu; Zhejun Pan; Shibin Li; Ligang Zhang; Fengshan Wang; Changhao Wang; Lingling Han; Yuanyuan Ma; Hao Li; Peng Wang; Yize Huang Progressive Development of a Distributed Ductile Shear Zone beneath the Patagonian Retroarc Fold-Thrust Belt, Chile Paul Betka; Sharon Mosher; Keith Klepeis Yuchen Liu; Bo Liu; Jian Fu; Le Kang; Siqi Li Recognition of Detrital Carbonaceous Material in the Ryoke Metamorphic Belt by Using Raman Thermometry: Implications for Thermal Structure and Detrital Origin Ken Yamaoka; Simon R. Wallis; Akira Miyake; Yui Kouketsu Mengfan Jiang; Weijia Sun; Jiamin Hu; Qingya Tang The Impact of Fracture Geometries on Heterogeneity and Accuracy of Upscaled Equivalent Fracture Models Willy Lemotio; Evariste Ngatchou; Adiang M. Cyrille; Alain-Pierre K. Tokam; Nguiya Sévérin; Philippe Njandjock Nouck Lithosphere February 07, 2022, Vol.2022, 5596233. doi:https://doi.org/10.2113/2022/5596233 Permian Remelting and Maturity of Continental Crust Revealed by the Daqing Peraluminous Granitic Batholith, Inner Mongolia Jialiang Li; Jingao Liu; Chen Wu; Di-Cheng Zhu Analysis and Research on Borehole Shrinkage Mechanisms and Their Control in Directional Wells in Deep Composite Salt Formations Shiyuan Li; Chao Fang; Chaowei Chen; Degui Xiang; Fuyao Li; Hao Huang Md. Sakawat Hossain; Songjian Ao; Tridib Kumar Mondal; Arnab Sain; Md. Sharif Hossain Khan; Wenjiao Xiao; Pengpeng Zhang Qiang Liu; Jianjun Liu; Bing Liang; Weiji Sun; Jie He; Yun Lei Nancy Hui-Chun Chen; Peter A. 
Cawood; Yoshiyuki Iizuka Recurrence and Clustering of Large Earthquakes along the Northern Boundary of Ordos Block: Constraining Paleoearthquakes by an Improved Multiple Trench Constraining Method Hui Peng; Dongli Zhang; Wenjun Zheng; Zhuqi Zhang; Haiyun Bi; Shumin Liang; Jingjun Yang Ordovician and Silurian Formations of the Baltic Syneclise (NE Poland): An Organic Geochemistry Approach P. Kosakowski; A. Zakrzewski; M. Waliczek Electron Backscatter Diffraction Study of Ultrahigh-Pressure Tso Morari Eclogites (Trans-Himalayan Collisional Zone): Implications for Strain Regime Transition from Constrictional to Plan... Alosree Dey; Koushik Sen; Manish A. Mamtani R. Soucy La Roche; S. C. Dyer; A. Zagorevski; J. M. Cottle; F. Gaidies Chao Li; Shengli Wang; Yanjun Wang; Zhiyuan He; Dongtao Wei; Dong Jia; Yan Chen; Guohui Chen; Fei Xue; Yunjian Li Origin of the Miocene Adakitic Rocks and Implication for Tectonic Transition in the Himalayan Orogen: Constraints from Kuday Granitoid Porphyry in Southern Tibet Yunsong Fan; Jinjiang Zhang; Chao Lin; Haibing Wang; Daxiang Gu; Xiaoxian Wang; Bo Zhang Initial Cenozoic Exhumation of the Northern Chinese Tian Shan Deduced from Apatite (U-Th)/He Thermochronological Data Jingxing Yu; Dewen Zheng; Huiping Zhang; Yizhou Wang; Yuqi Hao; Chaopeng Li High-Detail Fault Segmentation: Deep Insight into the Anatomy of the 1983 Borah Peak Earthquake Rupture Zone (Mw 6.9, Idaho, USA) Simone Bello; Carlo Andrenacci; Daniele Cirillo; Chelsea P.
Scott; Francesco Brozzetti; J Ramon Arrowsmith; Giusy Lavecchia Deterioration Characteristic and Constitutive Model of Red-Bed Argillaceous Siltstone Subjected to Drying-Wetting Cycles Fusheng Zha; Kai Huang; Bo Kang; Xianguo Sun; Jingwen Su; Yunfeng Li; Zhitang Lu Multiple Rodingitization Stages in Alkaline, Tholeiitic, and Calc-Alkaline Basaltic Dikes Intruding Exhumed Serpentinized Tethyan Mantle from Evia Island, Greece Christos Karkalis; Andreas Magganas; Petros Koutsovitis; Panagiotis Pomonis; Theodoros Ntaflos Variability of the Early Summer Temperature in the Southeastern Tibetan Plateau in Recent Centuries and the Linkage to the Indian Ocean Basin Mode Jinjian Li; Liya Jin; Zeyu Zheng; Ningsheng Qin Lingling Han; Xizhe Li; Wei Guo; Wei Ju; Yue Cui; Zhaoyi Liu; Chao Qian; Weijun Shen
m n Matrix (or 2-dimensional Array), then it is assumed to contain m
with(SignalProcessing):
audiofile := cat(kernelopts(datadir), "/audio/stereo.wav"):
Spectrogram(audiofile, compactplot)
Spectrogram(audiofile, channel = 1, includesignal = [color = "Navy"], includepowerspectrum, colorscheme = ["Orange", "SteelBlue", "Navy"])
The Traditional Addition Algorithm - Global Math Week How does this dots-and-boxes approach to addition compare to the standard algorithm most people know? Let’s go back to the example 358 + 287 . Most people are surprised by the straightforward left-to-right answer 5|13|15 . Write on a piece of paper the traditional way to perform this addition problem. Did you perform an explosion? First with the 15 and then with the 13? Did you perform a second explosion? Play with the machine to see if you can make sense of matters for yourself, or read my thoughts below. The traditional algorithm has us work from right to left, looking at 8 + 7 first. But in the algorithm we don’t write down the answer 15 . Instead, we explode ten dots right away and write on paper a 5 in the answer line together with a small 1 tacked on to the middle column. People call this carrying the one and it – correctly – corresponds to adding an extra dot in the tens position. Now we attend to the middle boxes. Adding gives 14 dots in the tens box ( 5 + 8 gives thirteen dots, plus the extra dot from the previous explosion). Now we perform another explosion. On paper, one writes 4 in the answer line, in the tens position, with another little 1 placed in the next column over. This matches the idea of the dots-and-boxes picture precisely. And now we finish the problem by adding the dots in the hundreds position. So the traditional algorithm works right to left and does explosions (“carries”) as one goes along. On paper it is swift and compact and this might be why it has been the favored way of doing long addition for centuries. The Exploding Dots approach works left to right, just as we are taught to read in English, and leaves all the explosions to the end. It is easy to understand and kind of fun. Both approaches, of course, are good and correct. It is just a matter of taste and personal style which one you choose to do. (But feel free to come up with your own new, and correct, approach!)
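The left-to-right, explode-at-the-end strategy described above can be sketched in a few lines of Python (function names are mine, not from the article): add the digit columns without carrying, then resolve all the "explosions" in one pass.

```python
def add_columns(a, b):
    """Column-wise sums, left to right, with no carrying: 358 + 287 -> [5, 13, 15]."""
    da, db = [int(c) for c in str(a)], [int(c) for c in str(b)]
    width = max(len(da), len(db))
    da = [0] * (width - len(da)) + da
    db = [0] * (width - len(db)) + db
    return [x + y for x, y in zip(da, db)]

def explode(columns):
    """Resolve all carries ('explosions') at the end, right to left."""
    result, carry = [], 0
    for col in reversed(columns):
        col += carry
        result.append(col % 10)
        carry = col // 10
    while carry:
        result.append(carry % 10)
        carry //= 10
    return int(''.join(str(d) for d in reversed(result)))

cols = add_columns(358, 287)
print(cols)            # [5, 13, 15]
print(explode(cols))   # 645
```

The traditional algorithm interleaves the two steps, carrying inside each column as it goes; this version keeps them separate, exactly as the dots-and-boxes picture does.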
The class advisor was helping students plan an end-of-year trip. The students were surveyed about their choices. The results are shown in the circle graph at right. What percent of the students chose the water park? Find the unknown percentage by subtracting the known percentages from 100\% : 100\%-23\%-29\%-18\% = 30\% . Which two results are very close? Which two trips have the highest percentages? Write a recommendation to the class advisor regarding what the next step would be. What should the class advisor do since there are two trips in high demand?
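The subtraction above can be verified mechanically; this trivial check is included only to make the circle-graph arithmetic concrete (the percentages are read off the graph in the problem):

```python
known = [23, 29, 18]            # percentages of the other three trip choices
water_park = 100 - sum(known)   # circle graph sectors must total 100%
print(water_park)               # 30
```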
LinearRegression: An implementation of ordinary least-squares linear regression - mlxtend Example 2 - QR decomposition method Example 3 - SVD method An implementation of Ordinary Least Squares simple and multiple linear regression. from mlxtend.regressor import LinearRegression Illustration of a simple linear regression model: In Ordinary Least Squares (OLS) Linear Regression, our goal is to find the line (or hyperplane) that minimizes the vertical offsets. Or in other words, we define the best-fitting line as the line that minimizes the sum of squared errors (SSE) or mean squared error (MSE) between our target variable (y) and our predicted output over all samples in our dataset of size n . Now, LinearRegression implements a linear regression model for performing ordinary least squares regression using one of the following five approaches: the closed-form solution (normal equations), the QR Decomposition Method, the SVD (Singular Value Decomposition) method, Gradient Descent (GD), and Stochastic Gradient Descent (SGD). The closed-form solution should be preferred for "smaller" datasets where calculating (a "costly") matrix inverse is not a concern. For very large datasets, or datasets where the inverse of [X^T X] may not exist (the matrix is non-invertible or singular, e.g., in case of perfect multicollinearity), the QR, SVD or gradient descent approaches are to be preferred. In the linear model, y is the predicted response, \mathbf{x} is an m -dimensional sample vector, and \mathbf{w} is the weight vector (vector of coefficients). Note that w_0 represents the y-axis intercept of the model and therefore x_0=1 . Stable OLS via QR Factorization The QR decomposition method offers a more numerically stable alternative to the closed-form, analytical solution based on the "normal equations," and it can be used to compute the inverse of large matrices more efficiently. The QR decomposition method decomposes a given matrix into two matrices for which an inverse can be easily obtained. 
For instance, for a given matrix X \in \mathbb{R}^{n \times m} , the QR decomposition into two matrices is X = QR, where Q \in \mathbb{R}^{n \times m} is an orthonormal matrix, such that Q^\top Q = QQ^\top = I , and the second matrix R \in \mathbb{R}^{m \times m} is an upper triangular matrix. The weight parameters of the ordinary least squares regression model can then be computed as follows [1]: w = R^{-1} Q^\top y. Stable OLS via Singular Value Decomposition Another alternative way for obtaining the OLS model weights in a numerically stable fashion is by Singular Value Decomposition (SVD), which, for a given matrix \mathbf{X} , is defined as X = U \Sigma V^\top. Then, it can be shown that the pseudo-inverse of X , X^+ , can be obtained as follows [1]: X^+ = V \Sigma^{+} U^\top, where \Sigma is the diagonal matrix consisting of the singular values of \mathbf{X} and \Sigma^{+} is the diagonal matrix consisting of the reciprocals of the singular values. The model weights can then be computed as w = X^+ y. Please note that this OLS method is computationally the most inefficient. However, it is a useful approach when the direct method (normal equations) or QR factorization cannot be applied or the normal equations (via \mathbf{X}^T \mathbf{X} ) are ill-conditioned [3]. [1] Chapter 3, page 55, Linear Methods for Regression. Trevor Hastie; Robert Tibshirani; Jerome Friedman (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction (2nd ed.). New York: Springer. (ISBN 978-0-387-84858-7) [2] G. Strang, Linear Algebra and Its Applications, 2nd Ed., Orlando, FL, Academic Press, Inc., 1980, pp. 139-142. [3] Douglas Wilhelm Harder. Numerical Analysis for Engineering. 
Section 4.8, Ill-Conditioned Matrices.

import numpy as np
import matplotlib.pyplot as plt
from mlxtend.regressor import LinearRegression

X = np.array([1.0, 2.1, 3.6, 4.2, 6.0])[:, np.newaxis]
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# helper for plotting the data and the fitted line
def lin_regplot(X, y, model):
    plt.scatter(X, y, c='blue')
    plt.plot(X, model.predict(X), color='red')

ne_lr = LinearRegression()
ne_lr.fit(X, y)
print('Intercept: %.2f' % ne_lr.b_)
print('Slope: %.2f' % ne_lr.w_[0])
lin_regplot(X, y, ne_lr)

qr_lr = LinearRegression(method='qr')
qr_lr.fit(X, y)
print('Intercept: %.2f' % qr_lr.b_)
print('Slope: %.2f' % qr_lr.w_[0])
lin_regplot(X, y, qr_lr)

svd_lr = LinearRegression(method='svd')
svd_lr.fit(X, y)
print('Intercept: %.2f' % svd_lr.b_)
print('Slope: %.2f' % svd_lr.w_[0])
lin_regplot(X, y, svd_lr)

gd_lr = LinearRegression(method='sgd', minibatches=1)  # minibatches=1 -> batch gradient descent
gd_lr.fit(X, y)
print('Intercept: %.2f' % gd_lr.b_)
print('Slope: %.2f' % gd_lr.w_[0])
lin_regplot(X, y, gd_lr)

# Visualizing the cost to check for convergence and plotting the linear model:
plt.plot(range(1, gd_lr.epochs+1), gd_lr.cost_)

sgd_lr = LinearRegression(method='sgd', minibatches=len(y))  # one sample per batch -> SGD
sgd_lr.fit(X, y)
print('Intercept: %.2f' % sgd_lr.b_)
print('Slope: %.2f' % sgd_lr.w_[0])
lin_regplot(X, y, sgd_lr)
plt.plot(range(1, sgd_lr.epochs+1), sgd_lr.cost_)

sgd_lr = LinearRegression(method='sgd', minibatches=3)  # minibatch learning with 3 batches
sgd_lr.fit(X, y)
print('Intercept: %.2f' % sgd_lr.b_)
print('Slope: %.2f' % sgd_lr.w_[0])
lin_regplot(X, y, sgd_lr)
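As a standalone sketch using numpy directly (not part of the mlxtend docs), the QR and SVD routes to the OLS weights described above can be compared on a tiny design matrix; both should agree with each other and with the least-squares solution:

```python
import numpy as np

# Tiny design matrix with an intercept column x0 = 1
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
y = np.array([1.1, 1.9, 3.2, 3.8])

# QR route: X = QR, so  w = R^{-1} Q^T y  (solve the triangular system instead of inverting)
Q, R = np.linalg.qr(X)
w_qr = np.linalg.solve(R, Q.T @ y)

# SVD route: w = X^+ y, where X^+ = V Sigma^+ U^T is the pseudo-inverse
w_svd = np.linalg.pinv(X) @ y

print(w_qr, w_svd)   # both match the least-squares solution
```

Solving the triangular system with R rather than forming (XᵀX)⁻¹ is exactly the numerical-stability point made in the text.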
Atlas (topology) — Wikipedia Set of charts that describes a manifold For other uses, see Fiber bundle and Atlas (disambiguation). In mathematics, particularly topology, one describes a manifold using an atlas. An atlas consists of individual charts that, roughly speaking, describe individual regions of the manifold. If the manifold is the surface of the Earth, then an atlas has its more common meaning. In general, the notion of atlas underlies the formal definition of a manifold and related structures such as vector bundles and other fibre bundles. "Coordinate patch" redirects here. Not to be confused with Surface patch. See also: Topological manifold § Coordinate charts The definition of an atlas depends on the notion of a chart. A chart for a topological space M (also called a coordinate chart, coordinate patch, coordinate map, or local frame) is a homeomorphism {\displaystyle \varphi } from an open subset U of M to an open subset of a Euclidean space. The chart is traditionally recorded as the ordered pair {\displaystyle (U,\varphi )} . An atlas for a topological space {\displaystyle M} is an indexed family {\displaystyle \{(U_{\alpha },\varphi _{\alpha }):\alpha \in I\}} of charts on {\displaystyle M} which covers {\displaystyle M} (that is, {\displaystyle \textstyle \bigcup _{\alpha \in I}U_{\alpha }=M} ). If the codomain of each chart is the n-dimensional Euclidean space, then {\displaystyle M} is said to be an n-dimensional manifold. 
The plural of atlas is atlases, although some authors use atlantes.[1][2] An atlas {\displaystyle \left(U_{i},\varphi _{i}\right)_{i\in I}} on an {\displaystyle n} -manifold {\displaystyle M} is called an adequate atlas if the image of each chart is either {\displaystyle \mathbb {R} ^{n}} or {\displaystyle \mathbb {R} _{+}^{n}} , {\displaystyle \left(U_{i}\right)_{i\in I}} is a locally finite open cover of {\displaystyle M} , and {\displaystyle M=\bigcup _{i\in I}\varphi _{i}^{-1}\left(B_{1}\right)} , where {\displaystyle B_{1}} is the open ball of radius 1 centered at the origin and {\displaystyle \mathbb {R} _{+}^{n}} is the closed half space. Every second-countable manifold admits an adequate atlas.[3] Moreover, if {\displaystyle {\mathcal {V}}=\left(V_{j}\right)_{j\in J}} is an open covering of the second-countable manifold {\displaystyle M} , then there is an adequate atlas {\displaystyle \left(U_{i},\varphi _{i}\right)_{i\in I}} on {\displaystyle M} such that {\displaystyle \left(U_{i}\right)_{i\in I}} is a refinement of {\displaystyle {\mathcal {V}}} .[3] Given a manifold {\displaystyle M} with overlapping chart domains {\displaystyle U_{\alpha }} and {\displaystyle U_{\beta }} carrying maps {\displaystyle \varphi _{\alpha }} and {\displaystyle \varphi _{\beta }} , the transition maps {\displaystyle \tau _{\alpha ,\beta }} and {\displaystyle \tau _{\beta ,\alpha }} relate open subsets of {\displaystyle \mathbf {R} ^{n}} to open subsets of {\displaystyle \mathbf {R} ^{n}} . Two charts on a manifold, and their respective transition map A transition map provides a way of comparing two charts of an atlas. To make this comparison, we consider the composition of one chart with the inverse of the other. This composition is not well-defined unless we restrict both charts to the intersection of their domains of definition. (For example, if we have a chart of Europe and a chart of Russia, then we can compare these two charts on their overlap, namely the European part of Russia.) 
To be more precise, suppose that {\displaystyle (U_{\alpha },\varphi _{\alpha })} {\displaystyle (U_{\beta },\varphi _{\beta })} are two charts for a manifold M such that {\displaystyle U_{\alpha }\cap U_{\beta }} is non-empty. The transition map {\displaystyle \tau _{\alpha ,\beta }:\varphi _{\alpha }(U_{\alpha }\cap U_{\beta })\to \varphi _{\beta }(U_{\alpha }\cap U_{\beta })} is the map defined by {\displaystyle \tau _{\alpha ,\beta }=\varphi _{\beta }\circ \varphi _{\alpha }^{-1}.} {\displaystyle \varphi _{\alpha }} {\displaystyle \varphi _{\beta }} are both homeomorphisms, the transition map {\displaystyle \tau _{\alpha ,\beta }} is also a homeomorphism. One often desires more structure on a manifold than simply the topological structure. For example, if one would like an unambiguous notion of differentiation of functions on a manifold, then it is necessary to construct an atlas whose transition functions are differentiable. Such a manifold is called differentiable. Given a differentiable manifold, one can unambiguously define the notion of tangent vectors and then directional derivatives. If each transition function is a smooth map, then the atlas is called a smooth atlas, and the manifold itself is called smooth. Alternatively, one could require that the transition maps have only k continuous derivatives in which case the atlas is said to be {\displaystyle C^{k}} Very generally, if each transition function belongs to a pseudogroup {\displaystyle {\mathcal {G}}} of homeomorphisms of Euclidean space, then the atlas is called a {\displaystyle {\mathcal {G}}} -atlas. If the transition maps between charts of an atlas preserve a local trivialization, then the atlas defines the structure of a fibre bundle. Smooth frame ^ Jost, Jürgen (11 November 2013). Riemannian Geometry and Geometric Analysis. Springer Science & Business Media. ISBN 9783662223857. Retrieved 16 April 2018 – via Google Books. ^ Giaquinta, Mariano; Hildebrandt, Stefan (9 March 2013). 
Calculus of Variations II. Springer Science & Business Media. ISBN 9783662062012. Retrieved 16 April 2018 – via Google Books. ^ a b Kosinski, Antoni (2007). Differential manifolds. Mineola, N.Y: Dover Publications. ISBN 978-0-486-46244-8. OCLC 853621933. Dieudonné, Jean (1972). "XVI. Differential manifolds". Treatise on Analysis. Pure and Applied Mathematics. Vol. III. Translated by Ian G. Macdonald. Academic Press. MR 0350769. Loomis, Lynn; Sternberg, Shlomo (2014). "Differentiable manifolds". Advanced Calculus (Revised ed.). World Scientific. pp. 364–372. ISBN 978-981-4583-93-0. MR 3222280. Husemoller, D (1994), Fibre bundles, Springer , Chapter 5 "Local coordinate description of fibre bundles".
Home : Support : Online Help : Statistics and Data Analysis : DataFrames and DataSeries : DataFrame Commands : Tabulate insert a DataFrame as a worksheet Table Tabulate(df, opts) options for the DocumentTools[Tabulate] command, as described on that help page The Tabulate command, applied to a DataFrame object, inserts the DataFrame as a worksheet Table into the worksheet. The column labels are shown in a row inserted before the rows of actual data in the DataFrame. Similarly, the row labels are shown in a column inserted before the columns of data. By default, they are shown with a different background color than the other values. The options for this command are discussed on the help page DocumentTools[Tabulate]. They include options for changing the foreground and background color for the cells, showing table borders, as well as options to control the width of the table and of the individual columns. \mathrm{df}≔\mathrm{DataFrame}⁡\left(〈〈1,2〉|〈3,4〉|〈5,6〉〉,\mathrm{columns}=[a,b,c],\mathrm{rows}=[d,e]\right) \textcolor[rgb]{0,0,1}{\mathrm{DataFrame}}\textcolor[rgb]{0,0,1}{⁡}\left(\left[\begin{array}{rrr}1& 3& 5\\ 2& 4& 6\end{array}\right]\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{rows}}\textcolor[rgb]{0,0,1}{=}\left[\textcolor[rgb]{0,0,1}{d}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{e}\right]\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{columns}}\textcolor[rgb]{0,0,1}{=}\left[\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{b}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{c}\right]\right) \mathrm{Tabulate}⁡\left(\mathrm{df}\right): 1 3 5 2 4 6 If we want to specify the relative width of the columns, we can do that by specifying the weights option, which specifies relative weights for the widths of the table columns. Note that, while the DataFrame has three columns, the Table has four: there is an extra column for the row labels. Thus, we specify four weights. 
We also use the width option to specify that the total width of the table should be 50% of the width of the worksheet.
Tabulate(df, weights = [1, 2, 2, 2], width = 50.):
The returned identity of the Table can be used to modify Table properties following insertion.
id := Tabulate(df, weights = [1, 2, 2, 2], width = 50.):
id
    "Tabulate1"
DocumentTools:-SetProperty(id, 'fillcolor'[2 .., 2 ..], "LightGreen")
The Tabulate command was introduced in Maple 2016.
Solve design optimization problem - MATLAB sdo.optimize
sdo.optimize solves design optimization problems of the form
\min_{p} F(p) \quad \text{subject to} \quad C_{leq}(p) \le 0, \quad C_{eq}(p) = 0, \quad A \times p \le B, \quad A_{eq} \times p = B_{eq}, \quad lb \le p \le ub
For example, minimize the objective f\left(x\right)={x}^{2} subject to the constraints {x}^{2}-4x+1\le 0 and \frac{2x}{3}-3\le 0 .
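sdo.optimize itself is a MATLAB function, but the small example problem above can be sanity-checked outside MATLAB. The sketch below is a rough, standard-library-only Python illustration (the function names are ours, and brute-force grid search is not what sdo.optimize does internally): the feasible set of the two constraints is the interval [2 - sqrt(3), 2 + sqrt(3)], so the constrained minimizer of x^2 is the left endpoint.

```python
import math

# Hedged sketch (not sdo.optimize): solve
#   min f(x) = x^2  subject to  x^2 - 4x + 1 <= 0  and  2x/3 - 3 <= 0
# by brute-force grid search over [0, 5]. Since x^2 is increasing on the
# feasible interval, the minimizer is its left endpoint, x = 2 - sqrt(3).

def feasible(x):
    return x**2 - 4 * x + 1 <= 0 and 2 * x / 3 - 3 <= 0

grid = (i / 10_000 for i in range(50_001))          # step 1e-4 on [0, 5]
best = min((x for x in grid if feasible(x)), key=lambda x: x * x)

print(best, 2 - math.sqrt(3))
```

A real solver would of course exploit gradients and constraint structure rather than enumerate a grid; the point here is only to make the example problem concrete.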
Definite Integrals: What Are They and How to Calculate Them | Outlier Knowing how to find definite integrals is an essential skill in calculus. In this article, we’ll learn the definition of definite integrals, how to evaluate definite integrals, and practice with some examples. Defining Definite Integrals Definite Integrals vs. Indefinite Integrals How to Calculate Definite Integrals Properties of Definite Integrals and Key Equations 3 Practice Exercises and Solutions What is a definite integral? Definite integrals are used to calculate the area between a curve and the x-axis on a specific interval. (If you need to review, see our beginner's guide to integrals.) If we want to evaluate the definite integral of a real-valued function f(x) on the interval [a, b], where a and b are real numbers with a \leq b , we use the following notation: \int_{a}^{b} f(x)dx = A In this notation, the curved integral sign \int indicates the operation of taking an integral. The rest of this notation is composed of three parts:
The integrand f(x)
The integral bounds a and b : a is the lower bound and b is the upper bound. These are also referred to as limits.
dx , which tells us that we are integrating f with respect to x
Altogether, this notation represents the area enclosed by f(x) , the x-axis, and the vertical lines x=a and x=b . Graphically, we can visualize \int_{a}^{b} f(x)dx as the region under the curve between those two vertical lines. Before we learn exactly how to solve definite integrals, it’s important to understand the difference between definite and indefinite integrals. Definite integrals find the area between a function’s curve and the x-axis on a specific interval, while indefinite integrals find the antiderivative of a function. Finding the indefinite integral and finding the definite integral are operations that output different things. Calculating the indefinite integral takes in one function and outputs another function: the antiderivative of f(x) , notated by F(x) . This output function is accompanied by an arbitrary constant C and does not involve lower and upper boundaries.
By contrast, calculating the definite integral always outputs a real number, which represents the area under the curve on a specific interval. You can see the difference in their notations below:
The indefinite integral: \int f(x)dx = F(x) + C
The definite integral: \int_{a}^{b} f(x)dx = A , for some real number A.
Given a function f(x) , the indefinite integral answers the question, “What function, when differentiated, gives us f(x) ?” The indefinite integral gives us a family of functions F(x) + C , since infinitely many functions satisfy this question. Thus, the indefinite integral gives us an “indefinite” answer. The definite integral gives us a real number: a unique “definite” answer. You can learn more about the difference with this lesson sample on indefinite integrals by one of our instructors, Dr. Hannah Fry. To find the definite integral of a function, we can use the Fundamental Theorem of Calculus, which states: If F is an antiderivative of f , then \int_{a}^{b} f(x)dx = [F(x)]^b_a = F(b) - F(a) This means that to find the definite integral of a function on the interval [a, b], we simply take the difference between the antiderivative evaluated at b and the antiderivative evaluated at a . We can break this process down into four steps:
1. Find the antiderivative F(x) . You can use the Rules of Integration that you learned with indefinite integrals to help with this part.
2. Compute F(b) by plugging the upper bound b into the antiderivative found in Step 1.
3. Compute F(a) by plugging the lower bound a into the antiderivative.
4. Take the difference F(b) - F(a) .
Let’s do one example together: the definite integral of the function f(x) = 4x^3-2x on the interval [1, 2]. We'll follow the four steps given above.
\int (4x^3-2x) dx = x^4 - x^2 = F(x)
F(2) = 2^4-2^2 = 16-4 = 12
F(1) = 1^4-1^2 = 1-1 = 0
F(2)-F(1) = 12 - 0 = 12
So \int_{1}^{2} (4x^3-2x) dx = 12 . Let’s review some of the key properties of definite integrals. These will be useful for solving more complex integral problems.
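The worked example can be double-checked numerically. The sketch below (plain Python; the helper names are ours) compares F(2) - F(1) from the Fundamental Theorem of Calculus against a midpoint Riemann sum of f:

```python
# Check the worked example: the integral of 4x^3 - 2x on [1, 2] is 12.
# F(x) = x^4 - x^2 is the antiderivative found in Step 1; a midpoint
# Riemann sum of f should converge to the same value.

def f(x):
    return 4 * x**3 - 2 * x

def F(x):
    return x**4 - x**2

def midpoint_sum(f, a, b, n=10_000):
    """Approximate the definite integral of f on [a, b] with n midpoints."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

print(F(2) - F(1))                      # Fundamental Theorem of Calculus: 12
print(round(midpoint_sum(f, 1, 2), 6))  # Riemann sum agrees to 6 decimals
```

The agreement between the two values is exactly what the Fundamental Theorem of Calculus promises.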
In the following properties, assume that f and g are continuous functions, and let k be a constant.
Zero-Length Interval Rule
When a=b, the interval has length 0, and so the definite integral of a function on [a, a] is 0. \int_{a}^{a} f(x)dx = 0
Reverse Bounds Rule
To find the definite integral of a function on [a, b] where a > b , we can simply reverse the sign of \int_{b}^{a} f(x)dx : \int_{a}^{b} f(x)dx = -\int_{b}^{a} f(x)dx
Adding Intervals Rule
If a , b , and c are real numbers on a closed interval, then \int_{a}^{c} f(x)dx can be found by adding integrals as follows: \int_{a}^{c} f(x)dx = \int_{a}^{b} f(x)dx + \int_{b}^{c} f(x)dx
Constant Multiplier Rule
You can pull constants outside of an integral. \int_{a}^{b} kf(x)dx = k\int_{a}^{b} f(x)dx
Sum and Difference Rule
The integral of the sum or difference of two functions is the sum or difference of their integrals. \int_{a}^{b} [f(x) \pm g(x)]dx = \int_{a}^{b} f(x)dx \pm \int_{a}^{b} g(x)dx
Integral of a Constant Rule
The integral of a constant k over [a, b] is equal to the constant multiplied by the difference b-a : \int_{a}^{b} kdx = k(b-a)
Comparison Properties of Definite Integrals
If f(x) \geq 0 on [a, b], then \int_{a}^{b} f(x)dx \geq 0 . If f(x) \leq 0 on [a, b], then \int_{a}^{b} f(x)dx \leq 0 . If f(x) \geq g(x) on [a, b], then \int_{a}^{b} f(x)dx \geq \int_{a}^{b} g(x)dx
Average Value of a Function
The average value of a function on [a, b] is defined by: f_{avg}=\frac{1}{b-a}\int_{a}^{b} f(x)dx=\frac{F(b)-F(a)}{b-a}
The Mean Value Theorem for Integrals tells us that there’s at least one point c inside the open interval (a,b) at which f(c) will be equal to the average value of the function over [a, b]. That is, there exists a c on (a, b) such that: f(c) = \frac{1}{b-a}\int_{a}^{b} f(x)dx , or equivalently f(c)(b-a) = \int_{a}^{b} f(x)dx
Here are three exercises for you to practice how to do a definite integral and their solutions.
Exercise 1: Calculate the definite integral of the function f(x) = \cos{(x)} on [0, \frac{\pi}{2}] .
Solution: \int_{0}^{\frac{\pi}{2}} \cos{(x)}dx = [\sin{(x)}]^{\frac{\pi}{2}}_0 = \sin{(\frac{\pi}{2})} - \sin{(0)} = 1 - 0 = 1
Exercise 2: Determine the average value of f(x) = 12x^2-2x on [2, 4] .
Solution: f_{avg}=\frac{1}{4-2}\int_{2}^{4} (12x^2-2x)dx =\frac{1}{2}\left[\frac{12x^3}{3}-\frac{2x^2}{2}\right]^4_2 =\frac{1}{2}\left[4x^3-x^2\right]^4_2 =\frac{1}{2}((4 \cdot 4^3-4^2)-(4 \cdot 2^3-2^2)) =\frac{1}{2}((256-16)-(32-4)) =\frac{1}{2}(240-28) =\frac{1}{2}(212) =106
Exercise 3: Given \int_{3}^{10} f(x)dx = 17 and \int_{7}^{10} f(x)dx = 9 , find \int_{3}^{7} f(x)dx .
Solution: By the Adding Intervals Rule, \int_{3}^{10} f(x)dx = \int_{3}^{7} f(x)dx + \int_{7}^{10} f(x)dx , so \int_{3}^{7} f(x)dx = \int_{3}^{10} f(x)dx - \int_{7}^{10} f(x)dx = 17 - 9 = 8
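All three answers can be verified numerically with a midpoint Riemann sum (plain Python; the helper name is ours, not part of the article):

```python
import math

# Verify the three practice answers numerically.

def midpoint_sum(f, a, b, n=10_000):
    """Approximate the definite integral of f on [a, b] with n midpoints."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Exercise 1: integral of cos(x) on [0, pi/2] should be 1.
ex1 = midpoint_sum(math.cos, 0, math.pi / 2)

# Exercise 2: average value of 12x^2 - 2x on [2, 4] should be 106.
ex2 = midpoint_sum(lambda x: 12 * x**2 - 2 * x, 2, 4) / (4 - 2)

# Exercise 3 is the Adding Intervals Rule rearranged: 17 - 9.
ex3 = 17 - 9

print(round(ex1, 6), round(ex2, 6), ex3)
```

Running this reproduces the answers 1, 106, and 8 to within numerical error.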
Structure and Bonding | Organic Chemistry 1 The structure and bonding in organic molecules are studied in this chapter: atomic structure and electron configurations, ionic and covalent bonds (octet rule), formal charges, hybridization types, resonance forms, Lewis structures and other molecular representations.
Shorthand Representations
X: Symbol of the element
Z: Atomic number = number of protons
A: Mass number = number of protons + number of neutrons
Number of electrons = number of protons - charge
Z is always the same for a specific element; A can differ: {}_{\mathrm{Z}}{}^{\mathrm{A}}\mathrm{X} and {}_{\mathrm{Z}}{}^{\mathrm{A}\text{'}}\mathrm{X} are two isotopes of the same element.
Electron configuration: The arrangement of electrons in the atomic orbitals of an atom. An atomic orbital can hold a maximum of 2 electrons. The electronic configuration of an element can easily be determined by using the diagonal mnemonic, in which the diagonal lines give the filling order of the subshells. The maximum number of electrons in each subshell is as follows: 2 e- in the s-subshells, 6 e- in the p-subshells, 10 e- in the d-subshells, 14 e- in the f-subshells.
Oxygen ⇒ Z = 8 and neutral atom ⇒ 8 electrons ⇒ 1s2 2s2 2p4
Iron ⇒ Z = 26 and neutral atom ⇒ 26 electrons ⇒ 1s2 2s2 2p6 3s2 3p6 4s2 3d6
The octet rule is the general rule governing the bonding process for second-row elements (the main elements in organic chemistry): atoms tend to form molecules in such a way that they reach an octet in the valence shell and a noble-gas configuration.
Bonding is a process of joining 2 atoms that results in a decrease in energy and an increase in stability: the atoms get a complete outer shell of valence electrons. There are 2 main types of bonds:
Ionic bonds are based on the strong electrostatic attraction between 2 ions of opposite charges. These bonds result from the transfer of electrons from one element to another in order to follow the octet rule.
Covalent bonds are 2-electron bonds.
These bonds result from the sharing of electrons between 2 atoms (especially those in the middle of the periodic table). The electrons are shared to allow the atoms to reach noble-gas configurations.
Expected number of covalent bonds around an atom = 8 - number of valence electrons
Carbon: Z = 6 ⇒ 6 e- and neutral ⇒ 1s2 2s2 2p2 ⇒ 4 e- in its valence shell. Carbon needs 4 more e- to get the configuration of neon and will therefore form 4 covalent bonds with other atoms ⇒ the carbon atom is tetravalent.
Nitrogen: Z = 7 ⇒ 7 e- and neutral ⇒ 1s2 2s2 2p3 ⇒ 5 e- in its valence shell. Nitrogen needs 3 more e- to get the configuration of neon and will therefore form 3 covalent bonds with other atoms ⇒ nitrogen is trivalent.
Formal charge: The charge assigned to individual atoms in a Lewis structure.
Formal charge = number of valence electrons in free atom - number of valence electrons in bound atom
Number of valence electrons in bound atom = number of unshared electrons + \frac{1}{2} number of shared electrons
What are the formal charges in the CH3NO2 molecule?
N: 1s2 2s2 2p3 = [He] 2s2 2p3 ⇒ 5 valence electrons in free atom. 4 bonds: 8 shared electrons ⇒ 4 valence electrons in bound atom. Formal charge = 5 - 4 = +1
O (double-bonded): 1s2 2s2 2p4 = [He] 2s2 2p4 ⇒ 6 valence electrons in free atom. 2 bonds + 2 lone pairs: 4 shared e- + 4 unshared e- ⇒ 6 valence e- in bound atom. Formal charge = 6 - 6 = 0
O (single-bonded): 1 bond + 3 lone pairs: 2 shared e- + 6 unshared e- ⇒ 7 valence e- in bound atom. Formal charge = 6 - 7 = -1
Hybridization: The combination of 2 or more atomic orbitals to form the same number of hybrid orbitals:
sp3 hybridization: the combination of 1 s-orbital and 3 p-orbitals to form 4 sp3 hybrid orbitals
sp2 hybridization: the combination of 1 s-orbital and 2 p-orbitals to form 3 sp2 hybrid orbitals
sp hybridization: the combination of 1 s-orbital and 1 p-orbital to form 2 sp hybrid orbitals
The number of electron domains (atoms or lone pairs of electrons) around an atom determines its geometry and hybridization:
2 electron domains ⇒ linear (bond angle: 180°) ⇒ sp hybridization
3 electron domains ⇒ trigonal planar (bond angle: 120°) ⇒ sp2 hybridization
4 electron domains ⇒ tetrahedral (bond angle: 109.5°) ⇒ sp3 hybridization
Resonance structures: A group of Lewis structures with the same placement of the atoms but a different placement of their π or nonbonded electrons ⇒ their single bonds remain the same but the position of their multiple bonds and nonbonded electrons differ.
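The formal-charge bookkeeping shown above for CH3NO2 reduces to one line of arithmetic. Here is a minimal sketch (the helper name is ours) that encodes the formula from the text and reproduces the three results:

```python
# formal charge = valence e- (free atom) - unshared e- - (shared e-) / 2,
# exactly the formula stated in the text.

def formal_charge(valence, unshared, shared):
    return valence - unshared - shared // 2

# N in CH3NO2: 5 valence e-, no lone pairs, 4 bonds (8 shared e-)  -> +1
print(formal_charge(5, 0, 8))
# Double-bonded O: 6 valence e-, 2 lone pairs (4 e-), 2 bonds (4 shared e-) -> 0
print(formal_charge(6, 4, 4))
# Single-bonded O: 6 valence e-, 3 lone pairs (6 e-), 1 bond (2 shared e-) -> -1
print(formal_charge(6, 6, 2))
```

The three printed charges (+1, 0, -1) sum to 0, matching the overall neutrality of CH3NO2.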
Resonance structures must be valid Lewis structures. The different resonance forms of a substance are not all equal: the form with the most bonds and the fewest charges has the largest contribution to the resonance hybrid.
Selecting the best resonance structure (the one that contributes the most to the resonance hybrid):
Lower formal charges (positive or negative) are preferable to higher charges
Formal charges on adjacent atoms are not desirable
A more negative formal charge should reside on a more electronegative atom
Difference between isomers and resonance structures: 2 isomers differ by the arrangement of their atoms and electrons, while 2 resonance structures differ only by the arrangement of their electrons.
Lewis structure: A representation of the arrangement of atoms and the position of all valence electrons in a molecule or polyatomic ion. Shared electron pairs are represented by lines between 2 atoms, and lone pairs are represented by pairs of dots on individual atoms. We always try to satisfy the octet rule (or duet rule for hydrogen) when writing Lewis structures.
Dot: one nonbonding electron
Pair of dots: lone electron pair (lone pair)
Line: two shared electrons (bond)
Lewis structure of NH3: N: 5 valence electrons ⇒ needs 3 shared electrons ⇒ 3 covalent bonds; H: 1 valence electron ⇒ needs 1 shared electron ⇒ 1 covalent bond
Lewis structure of CO2: C: 4 valence electrons ⇒ needs 4 shared electrons ⇒ 4 covalent bonds; O: 6 valence electrons ⇒ needs 2 shared electrons ⇒ 2 covalent bonds
How to write Lewis structures:
Count the total number of valence electrons. Add or subtract electrons if you have a negative or a positive charge.
Determine the number of covalent bonds / lone pairs the molecule will have by dividing the number of valence electrons by 2.
Use the molecular formula to draw the skeletal structure.
Distribute the remaining valence electrons to satisfy the octet rule, completing the octet of the more electronegative atoms first.
Include double or triple bonds if necessary.
Check the number of valence electrons in the drawn molecule.
Assign formal charges to all atoms.
Example: draw a Lewis structure for methanol, CH3OH.
Count the valence electrons: 1 C x 4 e- + 4 H x 1 e- + 1 O x 6 e- = 14 valence electrons ⇒ 7 bonds and lone pairs
Arrange the atoms, then add bonds: 5 bonds ⇒ 10 electrons
Then add lone pairs: 5 bonds + 2 lone pairs ⇒ 14 e-
Shorthand methods are used to abbreviate the structure of organic molecules. The 2 main types of shorthand representations are:
Condensed structure: The main carbon chain is written horizontally. The atoms are drawn next to the atoms to which they are bonded. Covalent bonds and lone pairs are omitted. Parentheses are used around similar groups bonded to the same atom; if different substituents are bonded to the same atom, vertical lines can be used.
Skeletal structure: Carbon atoms are not shown: it is assumed that a carbon is at the junction of 2 lines and at the end of any line. The hydrogens around each carbon are not drawn, but we assume that there are enough hydrogens for the carbons to follow the octet rule. All the heteroatoms are drawn, as well as the hydrogens that are directly bonded to them.
Check your knowledge about this Chapter
How to write the nuclear notation?
How to find the number of neutrons in an atom? The number of neutrons in an atom is equal to the mass number (A) of the atom minus the number of protons (Z).
How to find the number of electrons? The number of electrons in an atom is equal to the number of protons (Z) minus the charge of the atom.
Can an element have the same number of protons as another element? No: the number of protons determines the element and is therefore specific to it. The mass number (A) can differ, as is the case between 2 isotopes.
How many electrons can the subshells hold? An atomic orbital can hold a maximum of 2 electrons.
Thus:
s subshell: 1 s orbital ⇒ 2 electrons
p subshell: 3 p orbitals ⇒ 6 electrons
d subshell: 5 d orbitals ⇒ 10 electrons
f subshell: 7 f orbitals ⇒ 14 electrons
What is the correct order of increasing orbital energies? The order of orbital energies is: 1s < 2s < 2p < 3s < 3p < 4s < 3d < 4p < 5s ... You can easily remember this order by using the diagonal mnemonic.
What is the octet rule? The octet rule states that atoms tend to form molecules in such a way that they reach an octet in the valence shell and a noble-gas configuration. The octet rule applies to almost all compounds made up of second-period elements.
What is bonding? Bonding is a process of joining 2 atoms that results in a decrease in energy and an increase in stability: the atoms get a complete outer shell of valence electrons. There are 2 main types of bonds: ionic bonds and covalent bonds.
What is the difference between ionic bonds and covalent bonds? Ionic bonds are based on the strong electrostatic attraction between 2 ions of opposite charges and result from the transfer of electrons from one element to another. Covalent bonds are bonds with 2 electrons; they result from the sharing of electrons between two atoms (especially those in the middle of the periodic table), which allows the atoms to reach noble-gas configurations.
How many covalent bonds can an atom have? The expected number of covalent bonds around an atom is equal to 8 minus the number of valence electrons of the atom.
How to find the formal charge of an atom? The formal charge of an atom is the number of valence electrons in the free atom minus the number of valence electrons in the bound atom. The number of valence electrons in the bound atom is equal to the number of unshared electrons plus half the number of shared electrons.
What is hybridization? What are the types of hybrid orbitals formed from s- and p-orbitals? Hybridization is the combination of 2 or more atomic orbitals to form the same number of hybrid orbitals: sp3 (1 s + 3 p ⇒ 4 sp3 orbitals), sp2 (1 s + 2 p ⇒ 3 sp2 orbitals), and sp (1 s + 1 p ⇒ 2 sp orbitals).
How do you predict hybridization and geometry?
What is meant by resonance structure in chemistry?
Resonance structures are a group of Lewis structures with the same placement of the atoms but a different placement of their π or nonbonded electrons ⇒ their single bonds remain the same but the position of their multiple bonds and nonbonded electrons differ.
Do all resonance structures contribute in the same way? The different resonance forms of a substance are not all equal: the form with the most bonds and the fewest charges has the largest contribution to the resonance hybrid.
How to choose the resonance structure that contributes the most to the resonance hybrid? Lower formal charges (positive or negative) are preferable to higher charges; formal charges on adjacent atoms are not desirable; and a more negative formal charge should reside on a more electronegative atom.
What is the difference between resonance and isomerism? 2 isomers differ by the arrangement of their atoms and electrons, while 2 resonance structures differ only by the arrangement of their electrons.
What is a Lewis structure in chemistry? A Lewis structure is a representation of the arrangement of atoms and the position of all valence electrons in a molecule or polyatomic ion. Shared electron pairs are represented by lines between 2 atoms, and lone pairs are represented by pairs of dots on individual atoms.
How to write a Lewis structure? Count the total number of valence electrons, adding or subtracting electrons if you have a negative or positive charge, then follow the steps outlined above.
How to write a condensed structure?
How to draw a skeletal structure?
Search results for: S. Lakshminarayana
Study of optical switching characteristics in nano doped liquid crystal
K. Gouthami, D V N. Sukanya, S. Lakshminarayana, Y. Usha Devi
2016 Thirteenth International Conference on Wireless and Optical Communications Networks (WOCN) > 1 - 5
Nano doped liquid crystals are thermotropic, exhibiting various phase transitions under the influence of an electric field. The molecules behave differently at each phase. Parameters such as refractive index, optical polarization, dielectric constant, switching times and other physical parameters differ between liquid crystal phases. For the purpose of studying the electro-optic properties...
Compositional characterization of Kakinada Bay sediment by INAA and IBA methods: preliminary study
K. B. Dasari, M. Ratna Raju, S. Lakshminarayana
Elemental concentrations of contaminated sediment samples were determined by instrumental neutron activation analysis, particle induced X-ray emission and particle induced gamma-ray emission. Sediment samples were collected from the Godavari estuary, Kakinada Bay, East Coast of India. A total of 35 elements were determined through the aforementioned analytical techniques. International Atomic Energy...
Understanding the constructional features of materialistic atoms in the light of strong nuclear gravitational coupling
U.V.S. Seshavatharam, P. Kalyanai, B. Ramanuja Srinivas, T. Rajavardhanarao, more
Materials Today: Proceedings > 2016 > 3 > 10 Part B > 3976-3981
At the fundamental level, understanding the constructional features of materialistic atoms in a unified approach is very complicated, and the success of any model depends on its ability to cover a broad range of physics in a simplified approach; this is the secret of final unification.
In this context, by extending Abdus Salam’s idea of ‘nuclear strong gravitational coupling’ and by considering two very large...
S Lakshminarayana, Anjul
2014 Power and Energy Systems Conference: Towards Sustainable Energy (PESTSE) > 1 - 6
The relatively static, slow-changing power transmission and distribution market is finding itself at the confluence of energy, telecommunications and information technology (IT) markets, driving necessary change and innovation in support of a 21st century intelligent utility network, a “Smart Grid.” This paper serves to provide clarification of what the Smart Grid is, from end-to-end, and where it's...
Knowledge-Based Systems > 2009 > 22 > 1 > 100-104
With the advent of technology, man is endeavoring for relevant and optimal results from the web through search engines. Retrieval performance can often be improved using several algorithms and methods. Abundance in the web has impelled the development of better search systems. Categorization of web pages helps considerably in addressing this issue. The anatomy of the web pages, links, categorization of text and their...
Kα X-ray satellite spectra of Ti, V, Cr and Mn induced by photons
M. V. R. Murti, S. S. Raju, B. Seetharami Reddy, V. Koteswara Rao, more
Pramana > 2008 > 70 > 4 > 747-752
K X-ray emission spectra of Ti, V, Cr and Mn generated by photon excitation have been studied with a crystal spectrometer. The measured energy shifts of the Kα satellite relative to the diagram line are compared with values obtained by electron excitation and with different theoretical estimates. The present experimental values of KαL1/KαL0 relative intensities are compared with values obtained by electron...
Estimation of trace elements in some anti-diabetic medicinal plants using PIXE technique
G.J. Naga Raju, P. Sarita, G.A.V. Ramana Murty, M.
Ravi Kumar, more Applied Radiation and Isotopes > 2006 > 64 > 8 > 893-900 Trace elemental analysis was carried out in various parts of some anti-diabetic medicinal plants using PIXE technique. A 3MeV proton beam was used to excite the samples. The elements Cl, K, Ca, Ti, Cr, Mn, Fe, Ni, Cu, Zn, Br, Rb and Sr were identified and their concentrations were estimated. The results of the present study provide justification for the usage of these medicinal plants in the treatment... G.J. Naga Raju, P. Sarita, M. Ravi Kumar, G.A.V. Ramana Murty, more Nuclear Inst. and Methods in Physics Research, B > 2006 > 247 > 2 > 361-367 Particle induced X-ray emission technique was used to study the variations in trace elemental concentrations between normal and malignant human breast tissue specimens and to understand the effects of altered homeostasis of these elements in the etiology of breast cancer. A 3MeV proton beam was used to excite the biological samples of normal and malignant breast tissues. The elements Cl, K, Ca, Ti,... G. S. Murthy, M. V. Sivaiah, S. S. Kumar, V. N. Reddy, more The applicability of zirconium phosphate-ammonium molybdophosphate (ZrP-AMP) for the efficient removal of cesium from aqueous acidic solutions by adsorption has been investigated. The adsorption data analysis was carried out using the Freundlich, Dubinin-Raduskevich (D-R) and Langmuir isotherms for the uptake of Cs in the initial concentration range of 3.75.10-5-7.52.10-3 mol.dm-3 on the ZrP-AMP exchanger... Multiple ionization effects on L X-ray intensity ratios in Hf, Ta, Re, Ir, Pt, Au and Pb due to proton bombardment at energies 1-5 MeV G. J. Naga Raju, G. A. V. Ramana Murty, B. Seetharami Reddy, T. Seshi Reddy, more The European Physical Journal D > 2004 > 30 > 2 > 171-179 The L X-ray intensity ratios in the elements Hf, Ta, Re, Ir, Pt, Au and Pb due to proton bombardment at energies from 1 to 5 MeV are measured and compared with the ECPSSR theoretical intensity ratios. 
The L_{\alpha}/L_{\ell} intensity ratios obtained in the present work are in good agreement with theoretical values while the L_{\beta}/L_{\alpha} and L_{\gamma}/L_{\beta} intensity ratios are consistently...
Magnetic expression of some major lineaments and Cretaceous quiet zone in the Bay of Bengal
A.S. Subrahmanyam, K.S.R. Murthy, S. Lakshminarayana
Oceanographic Literature Review > 1998 > 45 > 3 > 493
Magnetic data over the eastern continental margin of India and adjacent Bengal fan demarcate two major lineaments. A high amplitude N-S-trending lineation of the Cauvery offshore Basin corresponds to the offshore fragment of the 80°E lineament recorded onland. A N-S lineation of very high amplitude anomaly off Chilka lake considered as the possible northward extension of the 85°E ridge delineated,...
Marine magnetic anomalies as a link between the granulite belts of east coast of India and Enderby Land of Antarctica
K.S.R. Murthy, M.M.M. Rao, K. Venkateswarlu, A.S. Subrahmanyam, more
Journal of African Earth Sciences > 1998 > 26 > 2 > IX-X
Magnetic data of the eastern continental margin of India (ECMI) helped in demarcating the offshore extension of the granulite belts of east coast of India and their possible link to those of the East Antarctica. Similarity in the trends of the magnetic anomalies and the granulite facies on the east coast of India and their correlation with the granulite belt of Enderby Land of Antarctica supports...
A. S. Subrahmanyam, K. S. R. Murthy, S. Lakshminarayana, M. M. Malleswara Rao, more
Abstract: Magnetic data over the eastern continental margin of India and adjacent Bengal fan demarcate two major lineaments. A high amplitude N-S-trending lineation of the Cauvery offshore Basin corresponds to the offshore fragment of the 80°E lineament recorded onland. A N-S lineation of very high amplitude anomaly off Chilka lake considered as the possible northward extension of the 85°E ridge delineated,...
A. S. Subrahmanyam, S. Lakshminarayana, D. V.
Chandrasekhar, K. S. R. Murthy, more Detailed analysis of magnetic and bathymetric data over shelf and slope regions of Cauvery basin demarcated three major offshore lineaments. The N-S lineation has been interpreted as due to dyke intrusions. The NE-SW lineament reflects the offshore extension of a major basement depression, viz., the Pondicherry depression. The E-W lineation, south of Porto Novo reveals a basement high suggesting... K. S. R. Murthy, A. S. Subrahmanyam, S. Lakshminarayana, D. V. Chandrasekhar, more Analysis of magnetic data of the Krishna-Godavari offshore basin provides new information on the evolution of this basin since the breakup of Peninsular India in the late Jurassic from the erstwhile Gondwanaland. The results establish the offshore extension of two major onshore cross trends viz., the Chintalapudi and Avanigadda cross trends. While the onshore basin is characterized by NE-SW ridges... Detailed analysis of magnetic data of the Krishna-Godavari offshore basin provides new information on the evolution of this basin since the breakup of Peninsular India in the late Jurassic from the erstwhile Gondwanaland. The results establish the offshore extension of two major onshore cross trends viz, the Chintalapudi and Avanigadda cross trends (CCT and ACT). While the onshore basin is characterized...
Derivatives in Math: Definition and Rules | Outlier How Do Derivative Rules Work? The derivative is one of the fundamental operations that we study in calculus. We use derivatives to measure rates of change of functions, which makes them useful in every scientific field, from physics to economics to engineering to astronomy. Because of their ubiquity and usefulness, the ability to quickly differentiate (that is, to compute the derivative of) a function is a handy skill to have if you’re working in a math-based discipline. However, when you’re first starting out with calculus, derivatives can feel complicated and tedious to compute. Fortunately, there are a set of derivative rules that you can use to break down any derivative computation into manageable pieces. Once you master these rules, you’ll be able to compute derivatives in mere seconds. This article will walk you through the most common derivatives and derivative rules, and will give you the tools that you need to start improving your differentiation skills. Before we jump into derivative rules, let’s briefly discuss what a derivative is and why we might want to compute them in the first place. In short, the derivative of a function tells us about its rate of change. As an example, suppose you’re driving along an interstate highway and the function f\left(t\right) measures the distance (in miles) from your starting location as a function of the time t (in hours) since you started driving. If you were to look at your car’s speedometer two hours into the trip, it would tell you how fast your car was traveling (in miles per hour) at that moment. The derivative of the function f tells us the same information: f'\left(2\right) is the instantaneous rate of change of f at t = 2. If we know the derivative f'\left(t\right) at every time t, we can compute or estimate a lot of interesting properties of the motion of the car. In practice, to compute the derivative of f we need some information about the function f.
This often comes in the form of a formula, like f\left(t\right)=30t^\frac32+20\ln\left(t+1\right) . We’ll see shortly that derivative rules give us a way to take a seemingly complicated differentiation task like this one and break it up into smaller and simpler derivatives. For derivative rules to be useful, we first need to know how to compute derivatives of a handful of base functions. Here are the derivatives of the functions that you’ll frequently run into in calculus and other scientific contexts. These functions appear frequently in almost every mathematical setting due to their simplicity, and their derivatives are connected by a common rule that we’ll cover in the next section. The exponential and logarithmic functions play key roles throughout both theoretical and applied mathematics. One of the defining characteristics of the natural exponential function e^x is that it is its own derivative. Trigonometric functions are useful in any situation that involves periodic behavior, where a function takes on the same values in repeating intervals. Note that the variable x in these functions is implicitly in radians. These are included for completeness, but note that these functions and their derivatives rarely appear outside of calculus itself. It’s useful to know that they exist, but arguably you can get by without memorizing these derivatives. Once you have a grasp of the basic derivatives, the derivative rules provide a means of differentiating more complicated functions by breaking them into pieces. For each rule, we’ve provided a few examples to show how the rules work. In the tables above we showed some derivatives of “power functions” like x^2 and x^3 ; the Power Rule provides a formula for differentiating any power function: \frac d{dx}x^k=kx^{k-1} This works even if k is a negative number or a fraction.
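As a quick numerical sanity check (plain Python; the helper name is ours), a central difference quotient agrees with kx^{k-1} for an integer, a negative, and a fractional exponent:

```python
# Numerically confirm the power rule d/dx x^k = k x^(k-1) at x = 2.

def numeric_derivative(f, x, h=1e-6):
    """Central difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 2.0
for k in (5, -2, 1 / 3):
    approx = numeric_derivative(lambda t: t**k, x)
    exact = k * x**(k - 1)
    print(k, round(approx, 6), round(exact, 6))
```

For each exponent the two columns match to six decimal places, which is all a finite-difference check can promise.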
It’s common to remember the power rule as a process: to differentiate x^k , bring the power k down in front of the expression, then decrease the old exponent by one. Here are a couple examples: \frac d{dx}x^5=5x^{5-1}=5x^4 \frac d{dx}x^\frac13=\frac13x^{\frac13-1}=\frac13x^{-\frac23} The Sum/Difference Rules The Sum Rule says that we can differentiate the sum of two functions simply by differentiating the two functions separately and then adding the results together. The same goes for the difference of two functions: \frac d{dx}\left(f\left(x\right)+g\left(x\right)\right)=\frac d{dx}f\left(x\right)+\frac d{dx}g\left(x\right) \frac d{dx}\left(f\left(x\right)-g\left(x\right)\right)=\frac d{dx}f\left(x\right)-\frac d{dx}g\left(x\right) A quick way to remember this is “the derivative of the sum is the sum of the derivatives.” Here are some examples; note that we can differentiate the individual functions using the power rule or by looking back at the tables in the previous section. \frac d{dx}\left(x^2+3\right)=\frac d{dx}x^2+\frac d{dx}3=2x+0=2x \frac d{dx}\left(e^x-\sin x\right)=\frac d{dx}e^x-\frac d{dx}\sin x=e^x-\cos x When differentiating, we can “pull” constants outside of the derivative operator: \frac d{dx}\left(cf\left(x\right)\right)=c\frac d{dx}f\left(x\right) In other words, to differentiate a function multiplied by a constant, we can differentiate the function first, then multiply the result by the constant.
Here’s an example: \frac d{dx}\left(6\cos x\right)=6\frac d{dx}\cos x=6\left(-\sin x\right)=-6\sin x By combining the Sum Rule, the Constant Multiple Rule, and the Power Rule, we’re able to compute the derivative of any polynomial function: \frac d{dx}\left(4x^4-7x^2+3x+10\right) =\frac d{dx}\left(4x^4\right)-\frac d{dx}\left(7x^2\right)+\frac d{dx}\left(3x\right)+\frac d{dx}10 =4\frac d{dx}x^4-7\frac d{dx}x^2+3\frac d{dx}x+\frac d{dx}10 =4\left(4x^3\right)-7\left(2x\right)+3\left(1\right)+0 =16x^3-14x+3 Things start to get a little tricky here, because the derivative of the product of two functions is not the product of the derivatives. It’s a little more complicated than that: \frac d{dx}\left(f\left(x\right)\cdot g\left(x\right)\right) =\frac d{dx}f\left(x\right)\cdot g\left(x\right)+f\left(x\right)\cdot\frac d{dx}g\left(x\right) The Product Rule says that the derivative of the product of two functions is the derivative of the first function times the second function plus the first times the derivative of the second—try saying that five times fast. 
It’s a little easier to read and remember the Product Rule if we write the derivatives in a different notation: \left(f\cdot g\right)'=f'\cdot g+f\cdot g' Here are a couple examples of the Product Rule in action: \frac d{dx}\left(e^x\sin x\right)=\frac d{dx}\left(e^x\right)\cdot\sin x+e^x\cdot\frac d{dx}\left(\sin x\right) =e^x\cdot\sin x+e^x\cdot\cos x=e^x\left(\sin x+\cos x\right) \left(x^3\ln x\right)'=\left(x^3\right)'\cdot\ln x+x^3\left(\ln x\right)'=\left(3x^2\right)\cdot\ln x+x^3\cdot\left(\frac1x\right) =3x^2\ln x+x^2 Similar to products, differentiating the quotient of two functions is more involved than simply dividing one derivative by the other: \frac d{dx}\left(\frac{f\left(x\right)}{g\left(x\right)}\right)=\frac{\displaystyle\frac d{dx}f\left(x\right)\cdot g\left(x\right)-f\left(x\right)\cdot\frac d{dx}g\left(x\right)}{\displaystyle\left(g\left(x\right)\right)^2} \left(\frac fg\right)'=\frac{f'\cdot g-f\cdot g'}{g^2} The numerator in the formula for the quotient rule looks a lot like the product rule, but note the minus sign. This one is harder to remember, but you can use a mnemonic like this one or even come up with your own. Here are some example applications of the quotient rule: \frac d{dx}\left(\frac{\tan x}{x-1}\right)=\frac{\displaystyle\frac d{dx}\left(\tan x\right)\cdot\left(x-1\right)-\tan x\cdot\frac d{dx}\left(x-1\right)}{\displaystyle\left(x-1\right)^2} =\frac{\displaystyle\left(\sec^2x\right)\left(x-1\right)-\left(\tan x\right)\left(1-0\right)}{\displaystyle\left(x-1\right)^2} =\frac{\displaystyle\left(x-1\right)\sec^2x-\tan x}{\displaystyle\left(x-1\right)^2} \left(\frac1{x^3}\right)'=\frac{\left(1\right)'\cdot\left(x^3\right)-1\cdot\left(x^3\right)'}{\left(x^3\right)^2}=\frac{0\cdot x^3-1\cdot3x^2}{x^6} =\frac{-3x^2}{x^6}=-\frac3{x^4} (Notice that we could have also done that last example with the Power Rule: \left(\frac1{x^3}\right)'=\left(x^{-3}\right)'=-3x^{-3-1}=-3x^{-4}=-\frac3{x^4} .
In math there are often multiple ways to approach a single problem.) We save the best for last: The Chain Rule is arguably the most important of the derivative rules, as it shows up very frequently and has wide-reaching applications in calculus. The Chain Rule enables us to differentiate compositions of functions, or “functions inside other functions.” There are two ways in which the Chain Rule is typically presented: \left(f\left(g\left(x\right)\right)\right)'=f'\left(g\left(x\right)\right)\cdot g'\left(x\right) \frac{dy}{dx}=\frac{dy}{du}\cdot\frac{du}{dx} The first formulation of the Chain Rule above is more explicit, but the second is a little bit easier to remember. They both say the same thing: to differentiate a composition of two functions, we differentiate the “outside” function and plug the “inside” function into it, giving f'\left(g\left(x\right)\right) , and then multiply that by the derivative of the inside function, g'\left(x\right) . When applying the Chain Rule, it’s often helpful to assign names to the outside and inside functions to help you plug them into the formula.
Here’s an example of that process: to compute \frac d{dx}\cos\left(x^2\right) , let f\left(u\right)=\cos u be the “outside” function and g\left(x\right)=x^2 be the “inside” function, so that f\left(g\left(x\right)\right)=\cos\left(x^2\right) . The derivative of the “outside” function is f'\left(u\right)=-\sin u , and we plug the “inside” function into that to get f'\left(g\left(x\right)\right)=-\sin\left(x^2\right) . The derivative of the “inside” function is g'\left(x\right)=2x . The Chain Rule then says that \frac d{dx}\cos\left(x^2\right)=f'\left(g\left(x\right)\right)\cdot g'\left(x\right)=\left(-\sin\left(x^2\right)\right)\left(2x\right)=-2x\sin\left(x^2\right) Here’s another example using the fractional derivative notation: \frac d{dx}e^{\sin x} Break the function into pieces: let y=e^u and u=\sin x . Compute the derivatives: \frac{dy}{du}=\frac d{du}e^u=e^u \frac{du}{dx}=\frac d{dx}\left(\sin x\right)=\cos x Then \frac d{dx}e^{\sin x}=\frac{dy}{dx}=\frac{dy}{du}\cdot\frac{du}{dx}=e^u\cdot\cos x We aren’t quite done yet. Our original function was in terms of x, so we need to write the derivative completely in terms of x. Plug u=\sin x into the derivative we just computed to complete the process: \frac d{dx}e^{\sin x}=e^u\cdot\cos x=e^{\sin x}\cdot\cos x
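The rules above can be spot-checked numerically. Below is a short Python sketch (not part of the original article) that compares each rule's closed-form answer against a central-difference estimate of the derivative; the test point x = 1.3 is arbitrary:

```python
import math

def deriv(f, x, h=1e-6):
    """Central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.3  # arbitrary test point

# Power Rule: (x^5)' = 5x^4
power = 5 * x**4
# Product Rule: (e^x sin x)' = e^x (sin x + cos x)
product = math.exp(x) * (math.sin(x) + math.cos(x))
# Quotient Rule (via the last example): (1/x^3)' = -3/x^4
quotient = -3 / x**4
# Chain Rule: (cos(x^2))' = -2x sin(x^2)
chain = -2 * x * math.sin(x**2)

for f, expected in [
    (lambda t: t**5, power),
    (lambda t: math.exp(t) * math.sin(t), product),
    (lambda t: 1 / t**3, quotient),
    (lambda t: math.cos(t**2), chain),
]:
    print(deriv(f, x), expected)  # each pair agrees to several decimal places
```

Running the loop shows the finite-difference estimate and the rule-based answer matching closely for all four rules.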
This user how-to guide is related to C-HIL: Field-oriented control of PMSM using Texas Instruments TMS320F2808 card. All tests in the document are done without using the TI card. In electric drive applications, rectifiers are prone to DC voltage overshoot issues. In this case, we will test the potential occurrence of these issues using an uncontrolled Three-Phase Diode Rectifier, as shown in Figure 1. The input impedance can be calculated and analyzed in the model by means of a Bode diagram. The peak of the DC link voltage can destroy the diodes in the three-phase diode rectifier and the IGBTs in the Three-phase two-level inverter/rectifier if they are not rated for the overshoot voltage. The main purpose of this application note is to show how Python libraries can be included to resolve issues when designing the power converter. The model of the rectifier part of the drive system is presented in the figure below. Figure 1. Rectifier part of the DRIVE Figure 2. Overshoot with default parameters Using the Model Initialization function in the code block below, we can generate a Bode diagram using the scipy and matplotlib Python libraries. The most important functions from the scipy library are scipy.signal.TransferFunction and scipy.signal.bode. The model initialization script is shown below:
# Numpy module is imported as 'np'
# Scipy module is imported as 'sp'
# Parameters that can be changed (L, C, and R are the input inductance,
# DC-link capacitance, and series resistance defined in the model schematic)
Rprech = 0
Ts = 100e-6
RloadAnalys = 100
# Solution no. 1: fix with inductor
#L = C * R**2 * 0.9**2
#mdl.info(L)
# Solution no. 2: fix with capacitor
#fsw = 100
#C = 1/(2*np.pi*0.3*fsw)**2/L
#mdl.info("C = {} F".format(C))
# Solution no. 3: add precharge resistor
Q = 0.707
Rprech = np.sqrt(2)*(np.sqrt(L/C) - 1/np.sqrt(2)*R)
mdl.info("Rprech = {} ohm".format(Rprech))
# Bode diagram analysis
# Resonant frequency
f0 = 1/(2*np.pi*np.sqrt(L*C))
mdl.info("f0 = {} Hz".format(f0))
w0 = 2*np.pi*f0
# Absolute Q factor
Qab = 1/(R+Rprech)*np.sqrt(L/C)
mdl.info("Qab = {}".format(Qab))
Qdb = 20 * np.log10(Qab)  # Q factor in dB
# Bode plot using sp.signal.lti
s1 = sp.signal.lti([1], [1/w0**2, 1/(w0*Qab), 1])
w, mag, phase = sp.signal.bode(s1)
# Bode magnitude plot
plt.semilogx(1/(2*np.pi)*w, mag)
# Bode phase plot (separate figure)
plt.figure(2)
plt.semilogx(1/(2*np.pi)*w, phase)
Figure 3. Bode diagram of input transfer parameters From this Bode diagram and from calculation, we get {f}_{0}=\frac{1}{2\pi \sqrt{{L}_{a}{C}_{dc}}}=102.3 Hz . We can see that the resonant frequency and the switching frequency of the diode bridge (100 Hz) are almost the same. This implies there will be some resonance and overshoot in the DC voltage. The switching frequency cannot be changed because a diode rectifier is used; instead, the parameters of the input impedance must be changed. Since increasing the model resistance (R) would lead to increased losses, it is better to focus on the inductor by decreasing the quality (Q) factor. In this section we explore two possible solutions for addressing the voltage overshoot on the DC link: decreasing the inductance and increasing the capacitance. Solving the issue by decreasing the inductance First, let's find the input transfer function. The transfer function is defined from the circuit in Figure 1: {G}_{\left(s\right)}= \frac{1}{1+s{R}_{1}{C}_{dc}+{s}^{2}{L}_{a}{C}_{dc}} An alternative standard, normalized form of the transfer function is: {G}_{\left(s\right)}=\frac{1}{1+\frac{s}{Q{\omega }_{0}}+{\left(\frac{s}{{\omega }_{0}}\right)}^{2}} The parameter Q is called the quality factor of the circuit, and is a measure of the dissipation in the system.
The parameter {\omega }_{0} represents the resonant angular frequency. From the input transfer function, we can find the resonant frequency, which is: {f}_{0}=\frac{1}{2\pi \sqrt{{L}_{a}{C}_{dc}}} Now it is easy to find the Q factor with the following formula: Q=\frac{1}{{R}_{1}}\sqrt{\frac{{L}_{a}}{{C}_{dc}}} The result of this analysis will give us the value of the inductor needed to reduce the Q factor. Let's try decreasing the value of the inductor by half. In Figure 4, we can see that we manage to decrease the overshoot of the DC voltage. Figure 4. Overshoot with the inductance decreased by half Now, let's try decreasing the inductance to one third of its original value. In Figure 5, we can see that we manage to decrease the DC voltage (Cdc) below our 400 V target. Figure 5. Overshoot with the inductance decreased to one third A second-order Butterworth filter (i.e., the continuous-time filter with the flattest passband frequency response) has a quality factor of 0.707. Therefore, to reduce overshoot in the voltage of the DC capacitor, you need to set the Q factor to 0.707. Now, let's calculate the inductor. The equation is {L}_{ac}={C}_{dc}{R}_{1}^{2}{Q}^{2} , from which we calculate {L}_{ac}= 1.782e-05 H. In Figure 6, we can see that we manage to fix this issue. Figure 6. Issue fixed with calculated inductance In the Bode diagram in Figure 7, we can see that we manage to decrease the Q factor. Figure 7. Bode diagram of input transfer function with decreased inductance Solving the issue by increasing the capacitor While in theory decreasing the inductance solves the problem, it is often not practical, since inductors are much more expensive than capacitors. Additionally, if the inductance in the model is due to the source inductance (i.e., grid impedance), it is impossible to change it. In these cases, we can instead change the value of the capacitor. This is done by moving the resonant frequency far from the switching frequency.
For example, by setting {f}_{0}={0.3f}_{sw} , we get {C}_{dc}=\frac{1}{{\left(2\pi {0.3f}_{sw}\right)}^{2}{L}_{a}}= 0.0255 F With this new capacitor, we get attenuation at the switching frequency. In Figure 8, you can find the new Bode diagram: Figure 8. Bode diagram with new value of capacitor In Figure 9, we can see the new response of the voltage on the DC link. By increasing the DC capacitor, the response is a little slower than when we changed the inductance. Figure 9. Response of voltage on the DC link Overvoltage on the DC link can also be resolved using a pre-charge circuit; its implementation is shown in Figure 10. Figure 10. DC-link pre-charge circuit In this case, the pre-charge resistor can be calculated by setting the Q factor: Q=\frac{1}{\sqrt{2}} The following is the calculation for {R}_{prech}: Q=\frac{1}{{R}_{1}+{R}_{prech}}\sqrt{\frac{{L}_{a}}{{C}_{dc}}}=\frac{1}{\sqrt{2}} {R}_{prech}= \sqrt{\frac{2{L}_{a}}{{C}_{dc}}}-{R}_{1} Figure 11 shows the pre-charge resistor effects caused by reducing the Q factor. Figure 12 shows the response of the voltage on the DC link. It is important to note that in this case there is a reduced peak of the incoming current on the AC side of the rectifier. Figure 11. Bode diagram with pre-charge circuit Figure 12. Voltage response with pre-charge circuit Using this approach, we manage to troubleshoot issues in the design of various converters and applications in power electronics, and specifically to avoid voltage overshoot on the DC link. In Figure 12, you can see a high peak of the inductor current. To reduce this current, we need to choose a different value of the pre-charge resistor. The resistance of the pre-charge circuit is chosen based on the capacitance of the load and the desired pre-charge time. The pre-charge surge current decays with time constant \tau ={R}_{prech}C , reaching 1/e of its initial value after one time constant and a manageable value after approximately 5\tau . Choosing a desired total pre-charge time T therefore gives:
{R}_{prech}=\frac{T}{5C} Figure 13. Test of pre-charge circuit with the new resistor Here we can see how we manage to decrease the inrush current. Figure 14. Bode plot with the new resistor In practice, if the design allows it, solving the overvoltage with a larger capacitor would likely be cheaper than adding a pre-charge circuit; the results above suggest such a design would be feasible. examples\models\hardware in the loop\ti pmsm sensored foc ti pmsm sensored foc.tse, ti pmsm sensored foc.cus, 20140220_pmsm3_1_TMS3202808.out, settings.runx
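The sizing steps used throughout this guide (resonant frequency, Q factor, and the pre-charge resistor that sets Q = 1/√2) can be sketched in plain Python. The component values below are illustrative placeholders only: L is the inductance computed in the text, while C and R are assumed values, not the model's actual parameters:

```python
import math

# Hypothetical component values for illustration only
L = 1.782e-5    # input inductance La [H] (value calculated in the text)
C = 1.0e-3      # DC-link capacitance Cdc [F] (assumed)
R = 0.05        # series resistance R1 [ohm] (assumed)

# Resonant frequency and quality factor of the input impedance
f0 = 1 / (2 * math.pi * math.sqrt(L * C))
Q = (1 / R) * math.sqrt(L / C)

# Pre-charge resistor that brings the Q factor down to 1/sqrt(2)
Rprech = math.sqrt(2 * L / C) - R

# Check: quality factor with the pre-charge resistor in place
Q_new = (1 / (R + Rprech)) * math.sqrt(L / C)
print(f0, Q, Rprech, Q_new)
```

The final value confirms the algebra: adding R_prech = √(2L/C) − R brings the quality factor to exactly 1/√2, independent of the component values chosen.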
Switching Model Representation - MATLAB & Simulink Example - MathWorks Model Type Conversions Caution About Switching Back and Forth Between Representations You can convert models from one representation to another using the same commands that you use for constructing LTI models (tf, zpk, ss, and frd). For example, you can convert the state-space model: sys = ss(-2,1,1,3); to a zero-pole-gain model by typing: zpksys = zpk(sys) zpksys = 3 (s+2.333) / (s+2) Similarly, you can calculate the transfer function of sys by typing tf(sys). Conversions to FRD require a frequency vector: f = logspace(-2,2,10); frdsys = frd(sys,f) frdsys displays a table of the ten frequency points and the corresponding complex responses (for example, 3.0002 - 0.0100i at the last frequency, 100 rad/s). Note that FRD models cannot be converted back to the TF, ZPK, or SS representations (such conversion requires the frequency-domain identification tools available in System Identification Toolbox). All model type conversion paths are summarized in the diagram below. Some commands expect a specific type of LTI model. For convenience, such commands automatically convert incoming LTI models to the appropriate representation. For example, in the sample code [num,den] = tfdata(sys), the function tfdata automatically converts the state-space model sys to an equivalent transfer function to obtain its numerator and denominator data. Conversions between the TF, ZPK, and SS representations involve numerical computations and can incur loss of accuracy when overused. Because the SS and FRD representations are best suited for numerical computations, it is good practice to convert all models to SS or FRD and only use the TF and ZPK representations for construction or display purposes. For example, convert the ZPK model G = zpk([],ones(10,1),1,0.1) , which represents the discrete-time transfer function 1/(z-1)^10, to TF and then back to ZPK: G1 = zpk(tf(G)); Now compare the pole locations for G and G1: pzmap(G,'b',G1,'r') legend('G','G1') Observe how the pole of multiplicity 10 at z=1 in G is replaced by a cluster of poles in G1.
This occurs because the poles of G1 are computed as the roots of the polynomial \left(z-1{\right)}^{10}={z}^{10}-10{z}^{9}+45{z}^{8}-120{z}^{7}+210{z}^{6}-252{z}^{5}+210{z}^{4}-120{z}^{3}+45{z}^{2}-10z+1 and an o(eps) error on the last coefficient of this polynomial is enough to move the roots by o\left({ϵ}^{1/10}\right)=o\left(3×1{0}^{-2}\right). In other words, the transfer function representation is not accurate enough to capture the system behavior near z=1, which is also visible in the Bode plot of G vs. G1: bode(G,'b',G1,'r--'), grid This illustrates why you should avoid unnecessary model conversions.
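The root-sensitivity effect described above is easy to reproduce outside MATLAB as well. Here is a numpy sketch (not from the original page) that expands (z-1)^10 and then recovers its roots from the expanded coefficients:

```python
import numpy as np

# Coefficients of (z - 1)^10, a polynomial with a root of multiplicity 10 at z = 1
coeffs = np.poly(np.ones(10))   # [1, -10, 45, -120, 210, -252, 210, -120, 45, -10, 1]

# Recovering the roots from the expanded coefficients scatters them around z = 1,
# since an O(eps) coefficient error moves the roots by O(eps**(1/10))
roots = np.roots(coeffs)
max_dev = np.abs(roots - 1).max()
print(max_dev)   # typically on the order of 1e-2
```

The maximum deviation from z = 1 is many orders of magnitude larger than machine precision, mirroring the pole cluster seen in the pzmap comparison.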
MultilayerPerceptron: A simple multilayer neural network - mlxtend Example 1 - Classifying Iris Flowers Example 2 - Classifying Handwritten Digits from a 10% MNIST Subset Implementation of a multilayer perceptron, a feedforward artificial neural network. from mlxtend.classifier import MultiLayerPerceptron Although the code is fully working and can be used for common classification tasks, this implementation is not geared towards efficiency but clarity – the original code was written for demonstration purposes. The neurons x_0 and a_0 represent the bias units ( x_0=1 , a_0=1 ). The i th superscript denotes the i th layer, and the j th subscript stands for the index of the respective unit. For example, a_{1}^{(2)} refers to the first activation unit after the bias unit (i.e., the 2nd activation unit) in the 2nd layer (here: the hidden layer). Each layer (l) in a multi-layer perceptron, a directed graph, is fully connected to the next layer (l+1) . We write the weight coefficient that connects the k th unit in the l th layer to the j th unit in layer l+1 as w^{(l)}_{j, k} . For example, the weight coefficient that connects the units a_0^{(2)} \rightarrow a_1^{(3)} is written as w_{1,0}^{(2)} . In the current implementation, the activations of the hidden layer(s) are computed via the logistic (sigmoid) function \phi(z) = \frac{1}{1 + e^{-z}}. (For more details on the logistic function, please see classifier.LogisticRegression; a general overview of different activation functions can be found here.) Furthermore, the MLP uses the softmax function in the output layer; for more details on the softmax function, please see classifier.SoftmaxRegression. D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Nature, 323:533-536, 1986. C. M. Bishop. Neural networks for pattern recognition. Oxford University Press, 1995. T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning, 2nd edition. Springer, 2009.
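Before the usage examples, the forward pass described above (logistic sigmoid activations in the hidden layer, softmax in the output layer) can be sketched in a few lines of numpy. This is a generic illustration with arbitrary layer sizes and random weights, not mlxtend's actual implementation; the bias units are represented here as separate vectors b1 and b2:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n_features, n_hidden, n_classes = 4, 50, 3  # sizes chosen for illustration

# W1[j, k] connects unit k of the input layer to unit j of the hidden layer,
# mirroring the w^(l)_{j,k} notation above
W1 = rng.normal(scale=0.1, size=(n_hidden, n_features))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_classes, n_hidden))
b2 = np.zeros(n_classes)

def forward(X):
    A_hidden = sigmoid(X @ W1.T + b1)     # logistic activations in the hidden layer
    return softmax(A_hidden @ W2.T + b2)  # class probabilities from the output layer

X = rng.normal(size=(5, n_features))
probs = forward(X)
```

Each row of `probs` is a probability distribution over the three classes, which is exactly what the softmax output layer guarantees.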
Load 2 features from Iris (sepal length and petal width) for visualization purposes: X = X[:, [0, 3]] # standardize training data X_std = (X - X.mean(axis=0)) / X.std(axis=0) Train the neural network for 3 output flower classes ('Setosa', 'Versicolor', 'Virginica') with regular gradient descent (minibatches=1), 50 hidden units, and no regularization. Setting minibatches to 1 will result in gradient descent training; please see Gradient Descent vs. Stochastic Gradient Descent for details. from mlxtend.classifier import MultiLayerPerceptron as MLP nn1 = MLP(hidden_layers=[50], l2=0.00, decrease_const=0.0, minibatches=1, print_progress=3) nn1 = nn1.fit(X_std, y) Iteration: 150/150 | Cost 0.06 | Elapsed: 0:00:00 | ETA: 0:00:00 fig = plot_decision_regions(X=X_std, y=y, clf=nn1, legend=2) plt.title('Multi-layer Perceptron w. 1 hidden layer (logistic sigmoid)') plt.plot(range(len(nn1.cost_)), nn1.cost_) print('Accuracy: %.2f%%' % (100 * nn1.score(X_std, y))) Setting minibatches to n_samples will result in stochastic gradient descent training; please see Gradient Descent vs. Stochastic Gradient Descent for details. minibatches=len(y), nn2.fit(X_std, y) Iteration: 5/5 | Cost 0.11 | Elapsed: 00:00:00 | ETA: 00:00:00 Continue the training for 25 epochs... nn2.epochs = 25 Iteration: 25/25 | Cost 0.07 | Elapsed: 0:00:00 | ETA: 0:00:00 Load a 5000-sample subset of the MNIST dataset (please see data.loadlocal_mnist if you want to download and read in the complete MNIST dataset).
from mlxtend.data import mnist_data X, y = mnist_data() X, y = shuffle_arrays_unison((X, y), random_seed=1) Visualize a sample from the MNIST dataset to check if it was loaded correctly: def plot_digit(X, y, idx): img = X[idx].reshape(28,28) plt.imshow(img, cmap='Greys', interpolation='nearest') plt.title('true label: %d' % y[idx]) plot_digit(X, y, 3500) Standardize pixel values: X_train_std, params = standardize(X_train, columns=range(X_train.shape[1]), return_params=True) X_test_std = standardize(X_test, columns=range(X_test.shape[1]), Initialize the neural network to recognize the 10 different digits (0-9) using 300 epochs and mini-batch learning. nn1 = MLP(hidden_layers=[150], minibatches=100, Learn the features while printing the progress to get an idea about how long it may take. nn1.fit(X_train_std, y_train) print('Train Accuracy: %.2f%%' % (100 * nn1.score(X_train_std, y_train))) print('Test Accuracy: %.2f%%' % (100 * nn1.score(X_test_std, y_test))) Please note that this neural network has been trained on only 10% of the MNIST data for technical demonstration purposes, hence, the lousy predictive performance. MultiLayerPerceptron(eta=0.5, epochs=50, hidden_layers=[50], n_classes=None, momentum=0.0, l1=0.0, l2=0.0, dropout=1.0, decrease_const=0.0, minibatches=1, random_seed=None, print_progress=0) Multi-layer perceptron classifier with logistic sigmoid activations epochs : int (default: 50) Passes over the training dataset. Prior to each epoch, the dataset is shuffled if minibatches > 1 to prevent cycles in stochastic gradient descent. hidden_layers : list (default: [50]) Number of units per hidden layer. By default 50 units in the first hidden layer. At the moment only 1 hidden layer is supported. n_classes : int (default: None) A positive integer to declare the number of class labels if not all class labels are present in a partial training set. Gets the number of class labels automatically if None.
l1 : float (default: 0.0) L1 regularization strength momentum : float (default: 0.0) Momentum constant. Factor multiplied with the gradient of the previous epoch t-1 to improve learning speed w(t) := w(t) - (grad(t) + momentum * grad(t-1)) decrease_const : float (default: 0.0) Decrease constant. Shrinks the learning rate after each epoch via eta / (1 + epoch*decrease_const) minibatches : int (default: 1) Divide the training data into k minibatches for accelerated stochastic gradient descent learning. Gradient Descent Learning if minibatches = 1 Stochastic Gradient Descent learning if minibatches = len(y) Minibatch learning if minibatches > 1 random_seed : int (default: None) Set random state for shuffling and initializing the weights. print_progress : int (default: 0) Prints progress in fitting to stderr. 0: No output 1: Epochs elapsed and cost 2: 1 plus time elapsed 3: 2 plus estimated time until completion w_ : 2d-array, shape=[n_features, n_classes] Weights after fitting. b_ : 1D-array, shape=[n_classes] Bias units after fitting. cost_ : list List of floats; the mean categorical cross entropy cost after each epoch. For usage examples, please see http://rasbt.github.io/mlxtend/user_guide/classifier/MultiLayerPerceptron/ fit(X, y, init_params=True) init_params : bool (default: True) Re-initializes model parameters prior to fitting. Set False to continue training with weights from a previous model fitting. Predict targets from X. target_values : array-like, shape = [n_samples] Predict class probabilities of X from the net input. Class probabilities : array-like, shape= [n_samples, n_classes] Compute the prediction accuracy. Target values (true class labels). acc : float The prediction accuracy as a float between 0.0 and 1.0 (perfect score).
PHC4101-Summer2017 Environmental Issues in Public Health \sigma_P^2=\sigma_G^2+\sigma_E^2 H^2={{\sigma^2_G}\over {\sigma^2_P}} A study trying to assess the association between exposure to magnetic fields and childhood asthma The authors obtained geocodes for all the participants and linked them to data from electrical utility companies, which include all existing and historical 132-400 kV overhead transmission lines. - they assigned each home an exposure level of 0 μT, 0.1 μT, 0.2 μT, or 0.4 μT Advantages of environmental measurements - the measurement procedure proper Advantages of environmental measurements (continued) - pre-definition of strata presumptively homogeneous with respect to the exposure and sampling randomly within the strata Records of past measurements of relevant exposures can be sought. Built Environment and Social Environment Encompasses all man-made surroundings that provide the setting for human activity - sidewalks/trails - parks/recreational services US Census Geographic Entities Read the article below (https://ehp.niehs.nih.gov/125-A65/). What do you think about population-level vs. individual-level environmental interventions? Discuss their strengths and limitations. Slides for the Guest Lecture, Summer 2017, PHC4101 Public Health Concepts
Sockets Package Close - terminate an open TCP/IP connection Close(sid) The procedure Close is used to close an open socket that was created by using either the Sockets[Open] or Sockets[Serve] routines in the Sockets package. This terminates an open connection on the socket and releases any resources used internally by the socket. The argument sid identifies the socket to be closed. It must be a valid and open socket ID. You should always close any sockets that you open as soon as you are done with them. Most systems have a hard limit on the number of open sockets that any one process may have at one time. The Sockets package itself imposes such a limit (in most cases, coincident with the limit imposed by the underlying operating system). Closing a socket releases the resources it uses and enables new sockets to be created without overflowing the maximum limit. (On some systems, a socket is just a file, and so each open socket also contributes to the number of open files held by a process, which is also usually limited by the operating system.) When Close is called, it attempts to shut down the connection, flushing any buffered data written to the socket. The remote endpoint of the connection is notified that no further data will be accepted at the local endpoint. Once the connection has been successfully shut down, Close returns the value true. In the event of an error condition, the value false is returned. All open socket connections are shut down when the Sockets package itself is garbage collected by the system, or when the Maple process in which it is running terminates normally. Although it is guaranteed that this will eventually occur, there is no user-level control over when it will happen, so this automatic cleanup must not be relied upon for normal termination of socket connections; call Close explicitly.
with(Sockets):
sid := Open("localhost", "echo")
0
Close(sid)
true
See Also: Sockets[Serve]
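The same discipline (close sockets as soon as you are done with them so their descriptors are released) applies outside Maple as well. As an illustrative aside, here is a Python sketch using the standard library's socketpair, which needs no network setup:

```python
import socket

# Create a connected pair of sockets (no network setup required)
a, b = socket.socketpair()
try:
    a.sendall(b"hello")
    data = b.recv(1024)   # receive what the peer sent
finally:
    # Close both endpoints as soon as we are done with them, releasing
    # their file descriptors toward the per-process open-socket limit
    a.close()
    b.close()

print(data)  # b'hello'
```

As in the Maple package, relying on interpreter shutdown or garbage collection to close sockets is possible but should not replace an explicit close.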
An $\ell _1$-oracle inequality for the Lasso in finite mixture gaussian regression models Meynet, Caroline We consider a finite mixture of Gaussian regression models for high-dimensional heterogeneous data, where the number of covariates may be much larger than the sample size. We propose to estimate the unknown conditional mixture density by an ℓ1-penalized maximum likelihood estimator. We shall provide an ℓ1-oracle inequality satisfied by this Lasso estimator with the Kullback-Leibler loss. In particular, we give a condition on the regularization parameter of the Lasso to obtain such an oracle inequality. Our aim is twofold: to extend the ℓ1-oracle inequality established by Massart and Meynet [12] in the homogeneous Gaussian linear regression case, and to present a complementary result to Städler et al. [18], by studying the Lasso for its ℓ1-regularization properties rather than considering it as a variable selection procedure. Our oracle inequality shall be deduced from a finite mixture Gaussian regression model selection theorem for ℓ1-penalized maximum likelihood conditional density estimation, which is inspired by Vapnik's method of structural risk minimization [23] and by the theory of model selection for maximum likelihood estimators developed by Massart in [11]. Classification : 62G08, 62H30 Keywords: finite mixture of gaussian regressions model, Lasso, ℓ1-oracle inequalities, model selection by penalization, ℓ1-balls Meynet, Caroline. An $\ell _1$-oracle inequality for the Lasso in finite mixture gaussian regression models. ESAIM: Probability and Statistics, Tome 17 (2013), pp. 650-671. doi : 10.1051/ps/2012016.
http://www.numdam.org/articles/10.1051/ps/2012016/ [1] P.L. Bartlett, S. Mendelson and J. Neeman, ℓ1-regularized linear regression: persistence and oracle inequalities, Probability and related fields. Springer (2011). [2] J.P. Baudry, Sélection de Modèle pour la Classification Non Supervisée. Choix du Nombre de Classes. Ph.D. thesis, Université Paris-Sud 11, France (2009). [3] P.J. Bickel, Y. Ritov and A.B. Tsybakov, Simultaneous analysis of Lasso and Dantzig selector. Ann. Stat. 37 (2009) 1705-1732. | MR 2533469 | Zbl 1173.62022 [4] S. Boucheron, G. Lugosi and P. Massart, A non Asymptotic Theory of Independence. Oxford University Press (2013). | MR 3185193 | Zbl 1279.60005 [5] P. Bühlmann and S. Van De Geer, On the conditions used to prove oracle results for the Lasso. Electr. J. Stat. 3 (2009) 1360-1392. | MR 2576316 [6] E. Candes and T. Tao, The Dantzig selector: statistical estimation when p is much larger than n. Ann. Stat. 35 (2007) 2313-2351. | MR 2382644 | Zbl 1139.62019 [7] S. Cohen and E. Le Pennec, Conditional Density Estimation by Penalized Likelihood Model Selection and Applications, RR-7596. INRIA (2011). [8] B. Efron, T. Hastie, I. Johnstone and R. Tibshirani, Least Angle Regression. Ann. Stat. 32 (2004) 407-499. | MR 2060166 | Zbl 1091.62054 [9] M. Hebiri, Quelques questions de sélection de variables autour de l'estimateur Lasso. Ph.D. Thesis, Université Paris Diderot, Paris 7, France (2009). [10] C. Huang, G.H.L. Cheang and A.R. Barron, Risk of penalized least squares, greedy selection and ℓ1-penalization for flexible function libraries. Submitted to the Annals of Statistics (2008). | MR 2711791 [11] P. Massart, Concentration inequalities and model selection. Ecole d'été de Probabilités de Saint-Flour 2003. Lect. Notes Math. Springer, Berlin-Heidelberg (2007). | MR 2319879 | Zbl 1170.60006 [12] P. Massart and C. Meynet, The Lasso as an ℓ1-ball model selection procedure. Electr. J. Stat. 5 (2011) 669-687. | MR 2820635 | Zbl 1274.62468 [13] C.
Maugis and B. Michel, A non asymptotic penalized criterion for Gaussian mixture model selection. ESAIM: PS 15 (2011) 41-68. | Numdam | MR 2870505 [14] G. Mclachlan and D. Peel, Finite Mixture Models. Wiley, New York (2000). | MR 1789474 | Zbl 0963.62061 [15] N. Meinshausen and B. Yu, Lasso type recovery of sparse representations for high dimensional data. Ann. Stat. 37 (2009) 246-270. | MR 2488351 | Zbl 1155.62050 [16] R.A. Redner and H.F. Walker, Mixture densities, maximum likelihood and the EM algorithm. SIAM Rev. 26 (1984) 195-239. | MR 738930 | Zbl 0536.62021 [17] P. Rigollet and A. Tsybakov, Exponential screening and optimal rates of sparse estimation. Ann. Stat. 39 (2011) 731-771. | MR 2816337 | Zbl 1215.62043 [18] N. Städler, B.P. Hlmann, and S. Van De Geer, ℓ1-penalization for mixture regression models. Test 19 (2010) 209-256. | Zbl 1203.62128 [19] R. Tibshirani, Regression shrinkage and selection via the Lasso. J. Roy. Stat. Soc. Ser. B 58 (1996) 267-288. | MR 1379242 | Zbl 0850.62538 [20] M.R. Osborne, B. Presnell and B.A. Turlach, On the Lasso and its dual. J. Comput. Graph. Stat. 9 (2000) 319-337. | MR 1822089 [21] M.R. Osborne, B. Presnell and B.A Turlach, A new approach to variable selection in least squares problems. IMA J. Numer. Anal. 20 (2000) 389-404. | MR 1773265 | Zbl 0962.65036 [22] A. Van Der Vaart and J. Wellner, Weak Convergence and Empirical Processes. Springer, Berlin (1996). | MR 1385671 | Zbl 0862.60002 [23] V.N. Vapnik, Estimation of Dependencies Based on Empirical Data. Springer, New-York (1982). | MR 672244 | Zbl 0499.62005 [24] V.N. Vapnik, Statistical Learning Theory. J. Wiley, New-York (1990). | MR 1641250 | Zbl 0935.62007 [25] P. Zhao and B. Yu On model selection consistency of Lasso. J. Mach. Learn. Res. 7 (2006) 2541-2563. | MR 2274449 | Zbl 1222.62008
PrincipalComponentAnalysis: Principal component analysis (PCA) for dimensionality reduction - mlxtend Example 1 - PCA on Iris Example 2 - Plotting the Variance Explained Ratio Example 3 - PCA via SVD Example 4 - Factor Loadings Example 5 - Feature Extraction Pipeline Example 6 - Whitening Implementation of Principal Component Analysis for dimensionality reduction from mlxtend.feature_extraction import PrincipalComponentAnalysis The sheer size of data in the modern age is not only a challenge for computer hardware but also a main bottleneck for the performance of many machine learning algorithms. The main goal of a PCA analysis is to identify patterns in data; PCA aims to detect the correlation between variables. Attempting to reduce the dimensionality only makes sense if a strong correlation between variables exists. In a nutshell, this is what PCA is all about: finding the directions of maximum variance in high-dimensional data and projecting it onto a smaller-dimensional subspace while retaining most of the information. Often, the desired goal is to reduce the dimensions of a d -dimensional dataset by projecting it onto a k -dimensional subspace (where k\;<\;d ) in order to increase the computational efficiency while retaining most of the information. An important question is "what is the size of k that represents the data 'well'?" Later, we will compute eigenvectors (the principal components) of a dataset and collect them in a projection matrix. Each of those eigenvectors is associated with an eigenvalue, which can be interpreted as the "length" or "magnitude" of the corresponding eigenvector. If some eigenvalues have a significantly larger magnitude than others, then the reduction of the dataset via PCA onto a smaller-dimensional subspace by dropping the "less informative" eigenpairs is reasonable. A Summary of the PCA Approach Obtain the eigenvectors and eigenvalues from the covariance matrix or correlation matrix, or perform Singular Value Decomposition.
Sort eigenvalues in descending order and choose the k eigenvectors that correspond to the k largest eigenvalues, where k is the number of dimensions of the new feature subspace ( k \le d ). Construct the projection matrix \mathbf{W} from the selected k eigenvectors. Transform the original dataset \mathbf{X} via \mathbf{W} to obtain the k -dimensional feature subspace \mathbf{Y} . Pearson, Karl. "LIII. On lines and planes of closest fit to systems of points in space." The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 2.11 (1901): 559-572. pca = PrincipalComponentAnalysis(n_components=2) plt.scatter(X_pca[y==lab, 0], X_pca[y==lab, 1], pca = PrincipalComponentAnalysis(n_components=None) pca.e_vals_ pca.e_vals_normalized_ tot = sum(pca.e_vals_) var_exp = [(i / tot)*100 for i in sorted(pca.e_vals_, reverse=True)] cum_var_exp = np.cumsum(pca.e_vals_normalized_*100) While the eigendecomposition of the covariance or correlation matrix may be more intuitive, most PCA implementations perform a Singular Value Decomposition (SVD) to improve the computational efficiency. Another advantage of using SVD is that the results tend to be more numerically stable, since we can decompose the input matrix directly without the additional covariance-matrix step. pca = PrincipalComponentAnalysis(n_components=2, solver='svd') If we compare this PCA projection to the previous plot in example 1, we notice that they are mirror images of each other. Note that this is not due to an error in either of those two implementations; the reason for this difference is that, depending on the eigensolver, eigenvectors can have either negative or positive signs. If v is an eigenvector of a matrix \Sigma and \lambda is our eigenvalue, then -v is also an eigenvector that has the same eigenvalue, since \Sigma(-v) = -\Sigma v = -\lambda v = \lambda(-v) . After invoking the fit method, the factor loadings are available via the loadings_ attribute. In simple terms, the loadings are the unstandardized values of the eigenvectors.
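The summary steps above can be sketched with plain NumPy. This is an illustrative sketch on synthetic data, not mlxtend's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 4))            # stand-in for an (n_samples, d) dataset

X_centered = X - X.mean(axis=0)          # PCA assumes mean-centered data
cov = np.cov(X_centered.T)               # d x d covariance matrix

e_vals, e_vecs = np.linalg.eigh(cov)     # eigendecomposition (ascending order)
order = np.argsort(e_vals)[::-1]         # sort eigenvalues in descending order
e_vals, e_vecs = e_vals[order], e_vecs[:, order]

k = 2
W = e_vecs[:, :k]                        # projection matrix from the top-k eigenvectors
Y = X_centered @ W                       # k-dimensional feature subspace
print(Y.shape)                           # (150, 2)
```

The covariance of the projected data is diagonal, with the top-k eigenvalues on the diagonal, which is exactly the "directions of maximum variance" property described above.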
Or in other words, we can interpret the loadings as the covariances (or correlations, in case we standardized the input features) between the input features and the principal components (or eigenvectors), which have been scaled to unit length. By having the loadings scaled, they become comparable by magnitude and we can assess how much variance in a component is attributed to the input features (as the components are just a weighted linear combination of the input features). solver='eigen') pca.fit(X); xlabels = ['sepal length', 'sepal width', 'petal length', 'petal width'] ax[0].bar(range(4), pca.loadings_[:, 0], align='center') ax[0].set_ylabel('Factor loading onto PC1') ax[0].set_xticks(range(4)) ax[0].set_xticklabels(xlabels, rotation=45) For instance, we may say that most of the variance in the first component is attributed to the petal features (although the loading of sepal length on PC1 is not much smaller in magnitude). In contrast, the remaining variance captured by PC2 is mostly due to the sepal width. Note that we know from Example 2 that PC1 explains most of the variance, and based on the information from the loading plots, we may say that the petal features combined with sepal length may explain most of the spread in the data. X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=123, test_size=0.3, stratify=y) pipe_pca = make_pipeline(StandardScaler(), PrincipalComponentAnalysis(n_components=3), KNeighborsClassifier(n_neighbors=5)) pipe_pca.fit(X_train, y_train) print('Transf. training accuracy: %.2f%%' % (pipe_pca.score(X_train, y_train)*100)) print('Transf. test accuracy: %.2f%%' % (pipe_pca.score(X_test, y_test)*100)) Transf. training accuracy: 96.77% Transf. test accuracy: 96.30% Certain algorithms require the data to be whitened. This means that the features have unit variance and the off-diagonals of the covariance matrix are all zero (i.e., the features are uncorrelated).
PCA already ensures that the features are uncorrelated; hence, we only need to apply a simple scaling to whiten the transformed data. For instance, for a given transformed feature X'_i , we divide it by the square root of the corresponding eigenvalue \lambda_i . Whitening via the PrincipalComponentAnalysis can be achieved by setting whitening=True during initialization. Let's demonstrate that with an example. Regular PCA pca1 = PrincipalComponentAnalysis(n_components=2) X_train_transf = pca1.fit(X_train_scaled).transform(X_train_scaled) plt.scatter(X_train_transf[y_train==lab, 0], X_train_transf[y_train==lab, 1], print('Covariance matrix:\n') np.cov(X_train_transf.T) As we can see, the features are uncorrelated after the transformation but don't have unit variance. PCA with Whitening pca1 = PrincipalComponentAnalysis(n_components=2, whitening=True) As we can see above, whitening ensures that all features now have unit variance; i.e., the covariance matrix of the transformed features becomes the identity matrix.
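The scaling step behind whitening can be illustrated with a small NumPy sketch (synthetic data; not mlxtend's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic data with correlated features
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 3))
X_centered = X - X.mean(axis=0)

e_vals, e_vecs = np.linalg.eigh(np.cov(X_centered.T))
Z = X_centered @ e_vecs            # rotated scores: uncorrelated, variance = eigenvalue
Z_white = Z / np.sqrt(e_vals)      # divide each score by sqrt(eigenvalue) -> whitened

print(np.cov(Z_white.T).round(6))  # ~ identity matrix
```

Dividing each decorrelated score by the square root of its eigenvalue rescales every direction to unit variance, so the covariance matrix of the whitened data is (numerically) the identity.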
Flutter Suppression of Long-Span Bridges Using Suboptimal Control Lingjun Zhuo1, Yunjin Dong2, Xinyu Xu3 1Research Center for Wind Engineering, Southwest Jiaotong University, Chengdu, China. 2Xihua University, Chengdu, China. 3China Railway Eryuan Engineering Group Co., Ltd., Chengdu, China. Based on Theodorsen's theory of the aerodynamic forces on a wing-aileron section, Scanlan's theory is extended to a deck-flap system. A new forced vibration method is suggested for acquiring the aerodynamic derivatives of this deck-flap system theoretically. After obtaining the wind-induced forces, a deck-flap equation of motion in the time domain is established to investigate its control law. Numerical simulation results indicate that a suboptimal control law for the deck-flap system can suppress flutter effectively, and the flutter speed can be raised to a desired value. Aerodynamic Force, Deck-Flap System, Flutter Suppression, Control Law Zhuo, L., Dong, Y. and Xu, X. (2018) Flutter Suppression of Long-Span Bridges Using Suboptimal Control. World Journal of Engineering and Technology, 6, 34-40. doi: 10.4236/wjet.2018.62B004. Flutter is a self-excited motion that can eventually lead to catastrophic damage in bridge structures. As more and more long-span bridges emerge, finding ways to suppress flutter and increase stability can address severe wind-induced problems. Stiffening the girder and applying mechanical dampers are common ways to improve a bridge's aerodynamic behavior; using active control is a newer way to solve these problems. Some researchers placed the flaps away from the deck in order to avoid aerodynamic interference between the deck and the flaps, so that Theodorsen's theory of aerodynamic forces could be applied directly [1]. However, this interference cannot be neglected, and it can in fact improve the aerodynamic behavior [2] [3]. In this paper, the active control system consists of a deck and flaps symmetrically mounted adjacent to the deck.
A deck-flap equation of motion in the time domain is established. Along with a new forced vibration method, the aerodynamic forces can be calculated theoretically. Finally, a numerical simulation helps to investigate its control law.

2. Equation of Motion in Time Domain

Flutter analysis is usually done in the frequency domain. The frequency-dependent motion-induced forces should be transformed to time-dependent ones so that they can be applied in the active control analysis. Based on Scanlan's theory, the two-dimensional equation of motion can be expressed as:

m\ddot{h} + c_h \dot{h} + k_h h = L = L_d + L_f

I\ddot{\alpha} + c_\alpha \dot{\alpha} + k_\alpha \alpha = M = M_d + M_f

where m = mass of the system, I = moment of inertia of the system, c_h , c_\alpha = damping of the vertical and torsional motion, respectively, k_h , k_\alpha = stiffness of the vertical and torsional motion, respectively, L, M = total lift and moment, respectively, L_d , M_d = motion-induced lift and moment of the deck, respectively, and L_f , M_f = motion-induced lift and moment of the flaps, respectively. The motion-induced forces of the deck can be expressed as:

L_d = \frac{1}{2}\rho U^2 (2B)\left(K_h H_1^* \frac{\dot{h}}{U} + K_h H_2^* \frac{B\dot{\alpha}}{U} + K_h^2 H_3^* \alpha + K_h^2 H_4^* \frac{h}{B}\right)

M_d = \frac{1}{2}\rho U^2 (2B^2)\left(K_\alpha A_1^* \frac{\dot{h}}{U} + K_\alpha A_2^* \frac{B\dot{\alpha}}{U} + K_\alpha^2 A_3^* \alpha + K_\alpha^2 A_4^* \frac{h}{B}\right)

where \rho = air density, U = wind velocity, B = deck width, h, \alpha = vertical and torsional displacement, respectively, K_h = \omega_h B / U , K_\alpha = \omega_\alpha B / U , and H_i^*, A_i^* \; (i = 1,\dots,4) = aerodynamic derivatives of the deck. The aerodynamic flaps can be driven on both sides.
Figure 1 shows that when flutter of the system is detected, the trailing flap is actuated and the leading flap is locked to the deck. In this way, Theodorsen's theory of wing-aileron forces can be applied [4]:

L_f = \frac{1}{2}\rho U^2 (2B)\left(K_h H_5^* \frac{B\dot{\beta}}{U} + K_h^2 H_6^* \beta\right)

M_f = \frac{1}{2}\rho U^2 (2B^2)\left(K_\alpha A_5^* \frac{B\dot{\beta}}{U} + K_\alpha^2 A_6^* \beta\right)

where \beta = torsional displacement of the trailing flap, and H_i^*, A_i^* \; (i = 5, 6) = aerodynamic derivatives of the trailing flap.

Figure 1. Deck-flap system [2].

To obtain the aerodynamic derivatives of the deck-flap system, a forced vibration method is proposed. When the system is forced to rotate sinusoidally, the displacements of the system can be assumed as:

h = 0, \quad \alpha = \alpha_0 e^{i\omega t}, \quad \beta = \beta_0 e^{i(\omega t + \phi)}

The total lift and moment are:

L = \frac{1}{2}\rho U^2 (2B)\left[K\left(H_2^* + l_1 H_5^* + l_2 H_6^*\right)\frac{B\dot{\alpha}}{U} + K^2\left(H_3^* + l_3 H_5^* + l_4 H_6^*\right)\alpha\right] = \frac{1}{2}\rho U^2 (2B)\left(K\hat{H}_2 \frac{B\dot{\alpha}}{U} + K^2 \hat{H}_3 \alpha\right)

M = \frac{1}{2}\rho U^2 (2B^2)\left[K\left(A_2^* + m_1 A_5^* + m_2 A_6^*\right)\frac{B\dot{\alpha}}{U} + K^2\left(A_3^* + m_3 A_5^* + m_4 A_6^*\right)\alpha\right] = \frac{1}{2}\rho U^2 (2B^2)\left(K\hat{A}_2 \frac{B\dot{\alpha}}{U} + K^2 \hat{A}_3 \alpha\right)

where l_i, m_i \; (i = 1, 2, 3, 4) = combinations of the flap's amplitude and its phase angle, and
\hat{H}_i, \hat{A}_i \; (i = 2, 3) = new derivatives of the system in the forced vibration method. In the same way as in the regular forced vibration method, the new derivatives can be acquired from:

\hat{H}_2 = \frac{2}{\rho B^3 \omega^2 \alpha_0}\,\mathrm{Im}\left[F(L)\right], \quad \hat{H}_3 = \frac{2}{\rho B^3 \omega^2 \alpha_0}\,\mathrm{Re}\left[F(L)\right]

\hat{A}_2 = \frac{2}{\rho B^4 \omega^2 \alpha_0}\,\mathrm{Im}\left[F(M)\right], \quad \hat{A}_3 = \frac{2}{\rho B^4 \omega^2 \alpha_0}\,\mathrm{Re}\left[F(M)\right]

And their relations can be written as:

\begin{bmatrix} \hat{H}_2 \\ \hat{H}_3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & l_1 & l_2 \\ 0 & 1 & l_3 & l_4 \end{bmatrix} \begin{bmatrix} H_2 & H_3 & H_5 & H_6 \end{bmatrix}^T

\begin{bmatrix} \hat{A}_2 \\ \hat{A}_3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & m_1 & m_2 \\ 0 & 1 & m_3 & m_4 \end{bmatrix} \begin{bmatrix} A_2 & A_3 & A_5 & A_6 \end{bmatrix}^T

As seen from the above, only a few tests are needed to assemble a solvable system of equations for the aerodynamic derivatives.
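The Im/Re extraction can be sanity-checked numerically. The sketch below uses entirely hypothetical values (not the paper's example): it synthesizes a lift signal from a known pair \hat{H}_2, \hat{H}_3 and recovers them from the complex Fourier coefficient of L at the driving frequency, taking F(L) to be that coefficient:

```python
import numpy as np

# Hypothetical parameters, for illustration only
rho, B, alpha0 = 1.225, 40.0, 0.02
omega = 2 * np.pi * 1.0                      # driving frequency, rad/s
H2_hat_true, H3_hat_true = -0.3, 1.2

# Simulate alpha = alpha0*cos(omega*t) over an integer number of periods
t = np.arange(0, 8.0, 0.01)
amp = rho * B**3 * omega**2 * alpha0
L = amp * (H3_hat_true * np.cos(omega * t) - H2_hat_true * np.sin(omega * t))

# Complex Fourier coefficient of L at +omega (the F(L) of the text)
F_L = np.sum(L * np.exp(-1j * omega * t)) / len(t)

H2_hat = 2 / (rho * B**3 * omega**2 * alpha0) * F_L.imag
H3_hat = 2 / (rho * B**3 * omega**2 * alpha0) * F_L.real
print(round(H2_hat, 4), round(H3_hat, 4))    # -0.3 1.2
```

The factor of 2 appears because the Fourier coefficient of a real sinusoid at +omega carries half its amplitude; both derivatives are recovered exactly under these assumptions.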
Thus, inserting Equations (2) and (3) into Equation (1) and transforming it into the Laplace domain with zero initial conditions gives:

\left[\bar{M}s^2 + \bar{C}s + \bar{K}\right]\tilde{q} = \tilde{F}

where

\bar{M} = \begin{bmatrix} mB & \\ & I \end{bmatrix}, \quad \bar{C} = \begin{bmatrix} c_h B & \\ & c_\alpha \end{bmatrix}, \quad \bar{K} = \begin{bmatrix} k_h B & \\ & k_\alpha \end{bmatrix}, \quad \tilde{q} = \begin{bmatrix} \tilde{h}/B \\ \tilde{\alpha} \end{bmatrix}, \quad \tilde{F} = \begin{bmatrix} \tilde{Q}_L \\ \tilde{Q}_M \end{bmatrix}

A common way to transform the wind-induced forces into the time domain is the rational function approximation by Roger [5]; the details can be found in aeroelasticity textbooks. Each coefficient Q_{ij} is approximated as:

Q_{ij} = A_0^{ij} + A_1^{ij}\bar{s} + \sum_{m=2}^{N} A_m^{ij}\frac{\bar{s}}{\bar{s} + \gamma_{m-1}}

Combined with the aerodynamic derivatives acquired from the forced vibration method, each coefficient can be obtained through the rational function approximation.
The equation of motion can be rewritten as:

\bar{M}\ddot{q} + \bar{C}\dot{q} + \bar{K}q - q_d\left(A_0 + A_1\bar{s} + \sum_{m=2}^{N} A_m\frac{\bar{s}}{\bar{s} + \gamma_{m-1}}\right)q = q_d\left(B_0 + B_1\bar{s} + \sum_{m=2}^{N} B_m\frac{\bar{s}}{\bar{s} + \gamma_{m-1}}\right)\beta

In order to study the control law for stabilization, the equation of motion can be put in state-space form:

\dot{x} = Ax + Bu, \quad y = Cx

with

x = \begin{bmatrix} X \\ \dot{X} \\ X_{a3} \\ \vdots \\ X_{am} \end{bmatrix}, \quad A = \begin{bmatrix} 0 & I & 0 & \cdots & 0 \\ -M^{-1}K & -M^{-1}C & q_d M^{-1}A_3 & \cdots & q_d M^{-1}A_m \\ 0 & I & -\frac{U\gamma_1}{B}I & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & I & 0 & \cdots & -\frac{U\gamma_{m-2}}{B}I \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ M^{-1}G \\ 0 \\ \vdots \\ 0 \end{bmatrix}

u = \beta_c, \quad y = \left[X\right], \quad C = \begin{bmatrix} I & & & \\ & 0 & & \\ & & \ddots & \\ & & & 0 \end{bmatrix}

M = \bar{M} - q_d\frac{B^2}{U^2}A_2, \quad C = \bar{C} - q_d\frac{B}{U}A_1, \quad K = \bar{K} - q_d A_0, \quad G = \begin{bmatrix} 0 \\ 0 \\ k_c k_\beta \end{bmatrix}

Optimal output control is a mature approach [3] [6], but it requires the full state vector to be measured, which is not easy in the deck-flap system. On the other hand, suboptimal output control can be exerted through only a few state variables, which is more practical in wind tunnel experiments [7]. Suppose that the control is generated via output linear feedback gains:

u = -K_{con}y

where K_{con} = the feedback gain matrix to be determined.
To solve the suboptimal output control problem is to optimize the averaged performance index:

J = \frac{1}{2}\int_0^\infty \left(x^T Q x + u^T R u\right) dt

where Q, R = appropriate weighting matrices. Inserting the output equation from the state-space form and performing a simple mathematical operation yields

J = \frac{1}{2}\int_0^\infty \left(x^T Q x + x^T C^T K_{con}^T R K_{con} C x\right) dt = \frac{1}{2}\int_0^\infty x^T\left(Q + C^T K_{con}^T R K_{con} C\right)x\, dt = \frac{1}{2}\int_0^\infty x^T Q_1 x\, dt

K_{con} is the solution of three equations:

\min J = \min \frac{1}{2}\int_0^\infty x^T Q_1 x\, dt

\dot{x} = A_{con}x = \left(A - BK_{con}C\right)x

PA_{con} + A_{con}^T P + Q_1 = 0

A plate deck with flaps on both sides is simulated to examine the results of applying active control. The deck is 40 meters wide, the mass of the deck is m = 20000 \ \mathrm{kg/m} , I = 4.5\times 10^6 \ \mathrm{kg\cdot m^2/m} , \omega_h = 0.1788 \ \mathrm{Hz} , \omega_\alpha = 0.5028 \ \mathrm{Hz} , the air density is \rho = 1.225 \ \mathrm{kg/m^3} , and the width of each flap is 3 m. First, the aerodynamic forces are transformed through the Roger approach, as shown in Figure 2. The aim of the active control is to suppress flutter up to 159 m/s. When using optimal control, the value of the gain is:

K_{op} = \left[\begin{array}{cccccccccccc} -0.5373 & 2.8299 & 3.6154 & -1.8600 & 0.0069 & 1.0017 & 2.5807 & -6.5471 & -1.6502 & 3.5028 & -1.0229 & -4.3174 \end{array}\right]

Figure 3 shows the responses of the deck-flap system under optimal control at a wind speed of 159 m/s when the system is subjected to external forces. The value of the gain after applying suboptimal control is:

K_{con} = \left[\begin{array}{cc} -0.0098 & 0.6779 \end{array}\right]

And its responses at a wind speed of 159 m/s, when subjected to external forces, are shown in Figure 4.
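As an illustration of the last equation, the Lyapunov equation P A_{con} + A_{con}^T P + Q_1 = 0 can be solved numerically for a candidate output-feedback gain. The matrices below are hypothetical stand-ins, not the paper's deck-flap model:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical 2-state system standing in for the deck-flap state-space model
A = np.array([[0.0, 1.0],
              [-4.0, -0.4]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])          # only one output is measured
Q = np.eye(2)
R = np.eye(1)
K_con = np.array([[0.5]])           # candidate output-feedback gain

A_con = A - B @ K_con @ C           # closed-loop state matrix
Q1 = Q + C.T @ K_con.T @ R @ K_con @ C

# Solve P A_con + A_con^T P + Q1 = 0, i.e. A_con^T P + P A_con = -Q1
P = solve_continuous_lyapunov(A_con.T, -Q1)

x0 = np.array([[1.0], [0.0]])
J = 0.5 * (x0.T @ P @ x0).item()    # performance index for this initial state
```

Because the closed-loop matrix here is stable and Q_1 is positive definite, P comes out symmetric positive definite, and J = \frac{1}{2}x_0^T P x_0 gives the cost for a given initial state; a suboptimal design would search over K_{con} to minimize an average of this cost.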
The numerical example shows that both control laws can suppress flutter at high wind speed. Despite the quick response under optimal control, it is hard to track all the variables of the state vector. Suboptimal control provides a simple strategy to suppress flutter with only a few state-vector variables. In this case, the control law depends on the vertical and torsional displacements; for wind tunnel experiments, these two displacements are very common and easy to track. The modeling of the deck-flap system is studied in this paper. A state-space form of the equations of motion in the time domain is obtained through Roger's approach. Considering the interference between the deck and the flaps, a forced vibration method is proposed. Theoretically, this method can help researchers obtain the aerodynamic derivatives of the deck-flap system with only a few wind tunnel tests, but its accuracy needs to be checked. The numerical simulation shows that the aerodynamic flaps can suppress flutter at the desired wind speed. Although the feedback gain from the optimal control effectively stabilizes the system at high wind speed, its state vector is not easy to track. The suboptimal control can greatly suppress the vibration while relying on only a few state-vector variables.

Figure 2. Rational approximation for motion-induced forces. Figure 3. Time-history of system displacements under optimal control. Figure 4. Time-history of system displacements under suboptimal control.

[1] Kobayashi, H. and Nagaoka, H. (1992) Active Control of Flutter of a Suspension Bridge. Journal of Wind Engineering and Industrial Aerodynamics, 41, 143-151. https://doi.org/10.1016/0167-6105(92)90402-V [2] Hansen, H.I. and Thoft-Christensen, P. (1998) Active Vibration Control of Long Suspension Bridges Using Flaps. Proceedings of the 2nd World Conference on Structural Control, Wiley, Singapore. [3] Guo, Z.W. and Ge, Y.J. (2012) A New State-Space Model for Self-Excited Forces and Flutter Automatic Analysis of Long Span Bridges.
7th International Colloquium on Bluff Body Aerodynamics and Applications (BBAA7), Shanghai, China. [4] Theodorsen, T. (1935) General Theory of Aerodynamic Instability and the Mechanism of Flutter. Aerodynamic Flutter, American Institute of Aeronautics and Astronautics, No. 496. [5] Dowell, E.H., Clark, R., Curtiss Jr., H.C., et al. (2004) A Modern Course in Aeroelasticity. 4th Edition, Kluwer Academic Publishers. [6] Wilde, K. and Fujino, Y. (1998) Aerodynamic Control of Bridge Deck Flutter by Active Surfaces. Journal of Engineering Mechanics, 124, 718-727. https://doi.org/10.1061/(ASCE)0733-9399(1998)124:7(718) [7] Librescu, L. and Marzocca, P. (2005) Advances in the Linear/Nonlinear Control of Aeroelastic Structural Systems. Acta Mechanica, 178, 147-186. https://doi.org/10.1007/s00707-005-0222-6
cochrans_q: Cochran's Q test for comparing multiple classifiers - mlxtend Example 1 - Cochran's Q test Cochran's Q test for comparing the performance of multiple classifiers. from mlxtend.evaluate import cochrans_q Cochran's Q test can be regarded as a generalized version of McNemar's test that can be applied to evaluate multiple classifiers. In a sense, Cochran's Q test is analogous to ANOVA for binary outcomes. To compare more than two classifiers, we can use Cochran's Q test, which has a test statistic Q that is approximately (similar to McNemar's test) distributed as chi-squared with L-1 degrees of freedom, where L is the number of models we evaluate (since L=2 for McNemar's test, McNemar's test statistic approximates a chi-squared distribution with one degree of freedom). More formally, Cochran's Q test tests the hypothesis that there is no difference between the classification accuracies [1]. Let \{D_1, \dots , D_L\} be a set of classifiers that have all been tested on the same dataset. If the L classifiers don't perform differently, then the following Q statistic is distributed approximately as "chi-squared" with L-1 degrees of freedom:

Q_C = (L-1) \frac{L \sum_{i=1}^{L} G_i^2 - T^2}{LT - \sum_{j=1}^{N_{ts}} (L_j)^2}

Here, G_i is the number of objects out of N_{ts} correctly classified by D_i , i = 1, \dots, L ; L_j is the number of classifiers out of L that correctly classified object \mathbf{z}_j \in \mathbf{Z}_{ts} , where \mathbf{Z}_{ts} = \{\mathbf{z}_1, ..., \mathbf{z}_{N_{ts}}\} is the test dataset on which the classifiers are tested; and T is the total number of correct votes among the L classifiers [2]:

T = \sum_{i=1}^{L} G_i = \sum_{j=1}^{N_{ts}} L_j

To perform Cochran's Q test, we typically organize the classifier predictions in a binary N_{ts} \times L matrix. The ij\text{th} entry of such a matrix is 0 if a classifier D_j has misclassified a data example (vector) \mathbf{z}_i and 1 otherwise (if the classifier predicted the class label l(\mathbf{z}_i) correctly) [2]. The following example, taken from [2], illustrates how the classification results may be organized.
For instance, assume we have the ground truth labels of the test dataset y_true and the following predictions by 3 classifiers (y_model_1, y_model_2, and y_model_3): The table of correct (1) and incorrect (0) classifications may then look as follows: D_1 D_2 D_3 Accuracy 84/100*100% = 84% 92/100*100% = 92% 92/100*100% = 92% By plugging the respective values into the previous equation, we obtain the following Q value [2]: (Note that the Q value in [2] is listed as 3.7647 due to a typo; as discussed with the author, the value 7.5294 is the correct one.) Now, the Q value (approximating \chi^2 ) corresponds to a p-value of approx. 0.023 assuming a \chi^2 distribution with L-1 = 2 degrees of freedom. Assuming that we chose a significance level of \alpha=0.05 , we would reject the null hypothesis that all classifiers perform equally well, since 0.023 < \alpha . In practice, if we successfully rejected the null hypothesis, we could perform multiple post hoc pairwise tests -- for example, McNemar tests with a Bonferroni correction -- to determine which pairs have different population proportions. [1] Fleiss, Joseph L., Bruce Levin, and Myunghee Cho Paik. Statistical methods for rates and proportions. John Wiley & Sons, 2013. Choosing \alpha=0.05 and letting p_i denote the accuracy of the i th model, we test the null hypothesis H_0: p_1 = p_2 = \cdots = p_L . q, p_value = cochrans_q(y_true, print('Q: %.3f' % q) The resulting p-value can then be compared with the chosen \alpha . Lastly, let's illustrate that Cochran's Q test is indeed just a generalized version of McNemar's test: chi2, p_value = cochrans_q(y_true, print('Cochran\'s Q Chi^2: %.3f' % chi2) print('Cochran\'s Q p-value: %.3f' % p_value) Cochran's Q Chi^2: 5.333 Cochran's Q p-value: 0.021 chi2, p_value = mcnemar(mcnemar_table(y_true, y_model_2), corrected=False) print('McNemar\'s Chi^2: %.3f' % chi2) print('McNemar\'s p-value: %.3f' % p_value) McNemar's Chi^2: 5.333 McNemar's p-value: 0.021
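The Q statistic can also be computed directly from the 0/1 correctness matrix. The sketch below uses a small made-up matrix (5 test objects, 3 classifiers), not the 100-example dataset from [2]:

```python
import numpy as np
from scipy.stats import chi2

# rows = test objects, columns = classifiers; 1 = correct, 0 = incorrect
Z = np.array([[1, 1, 0],
              [1, 0, 0],
              [1, 1, 1],
              [1, 1, 0],
              [0, 1, 1]])

L = Z.shape[1]          # number of classifiers
G = Z.sum(axis=0)       # correct classifications per classifier (column sums)
Lj = Z.sum(axis=1)      # correct votes per object (row sums)
T = Z.sum()             # total number of correct votes

Q = (L - 1) * (L * np.sum(G**2) - T**2) / (L * T - np.sum(Lj**2))
p = chi2.sf(Q, df=L - 1)
print(Q, round(p, 4))   # 2.0 0.3679
```

For this toy matrix, Q = 2.0 with a p-value well above 0.05, so the null hypothesis of equal accuracies would not be rejected.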
905.1 Traffic Data Collection - Engineering_Policy_Guide To perform a traffic study, certain data should be collected. A good source of data can be obtained through MoDOT's tools, which include: Transportation Management System (TMS) - has inventories for roadway information as well as crash information. Interactive AADT Map - shows traffic volumes; some locations are associated with actual count data, and others carry estimated volumes (not based on count data). This map breaks down traffic by both generalized vehicle classifications and directional, hourly volume breakdowns. For some traffic studies, data from TMS or the interactive AADT map may be enough. For other studies, field data will be required. EPG 905.1 Traffic Data Collection shows how to "right-size data collection" based on the purpose and need of the study. Refer to EPG 905.3.3 Data Collection for how to collect data in a Transportation Impact Analysis. 905.1.1 Transportation Management System (TMS) 905.1.1.1 Crash Summary The Crash Summary TMS application ties roadway data to crash data. In TMS, select the menu options Applications + Safety Management System + Crash Summary. An example of the Crash Summary application may look like the following: From the application, the following information can be found: Crash rate (intersection or range) State Crash Rate (by route designation & by roadway type) Crashes by type.
The Crash Summary application provides a quick review of crash severity, comparative crash rates, and crashes by classification. For range studies, crash rates calculated by TMS are frequency-based, determined by the total number of crashes per 100 million vehicle miles traveled. Below is the equation used to calculate range crash rates: {\displaystyle {\mbox{Crash rates for ranges}}={\frac {\mbox{No. of Accidents x 100,000,000}}{\mbox{No. of Years x AADT x Range Length x 365 days/year}}}} Range lengths of more than one mile should be used in the calculation, if practical. As seen in the rate calculation, a crash rate for a route less than one mile long will be artificially high. For intersection studies, crash rates calculated by TMS are also frequency-based but are determined by the total number of crashes per 1 million entering vehicles. Below is the equation used to calculate intersection crash rates: {\displaystyle {\mbox{Crash rates for intersection}}={\frac {\mbox{No. of Accidents x 1,000,000}}{\mbox{No. of Years x Entering AADT x 365 days/year}}}} Crash rates are a good indication of the level of operation of an intersection. It is not easy to determine whether an individual crash rate is good or bad by itself. For this reason, there is a State Rate available for comparison. For range studies, the Crash Summary application compares the selected route to statewide routes of similar route designation (IS, US, MO, RT, etc.) and route type (freeway, expressway, two-lane, etc.). For intersection studies, a similar concept is in place to compare the intersection with intersections of similar geometric design and entering volumes. Due to roadway data constraints at this time, not all intersections are comparable to statewide rates. Additional information available through Crash Summary includes the Crash Types, Statewide Average Crash Rates, and ARAN video. The Crash Type field provides a general description of the crash, i.e. rear-end, head-on, left turn, etc.
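The two rate equations above can be captured in a pair of helper functions; the input numbers in the example are made up for illustration and are not from the EPG:

```python
def range_crash_rate(crashes, years, aadt, length_mi):
    """Crashes per 100 million vehicle miles traveled for a route range."""
    return crashes * 100_000_000 / (years * aadt * length_mi * 365)

def intersection_crash_rate(crashes, years, entering_aadt):
    """Crashes per 1 million entering vehicles for an intersection."""
    return crashes * 1_000_000 / (years * entering_aadt * 365)

# e.g., 12 crashes over 3 years on a 2.5 mile range with an AADT of 20,000:
print(round(range_crash_rate(12, 3, 20000, 2.5), 2))      # 21.92
# e.g., 10 crashes over 3 years at an intersection with entering AADT of 15,000:
print(round(intersection_crash_rate(10, 3, 15000), 3))    # 0.609
```

Note how the range length appears in the denominator: shrinking the range length while holding the crash count fixed inflates the rate, which is the artificially-high-rate effect described above for routes shorter than one mile.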
The Statewide Average Crash Rate application can be launched to see a more in-depth review of available crash rates. The ARAN video is available for most routes, allowing the user to view the road as driven by the ARAN van. 905.1.1.2 Crash Browser The Crash Browser application can be launched from the Crash Summary application or from the TMS menu Applications + Safety Management System + Crash Browser. An example of the Crash Browser application may look like the following: Crash image number Crash relation to roadway (range/intersection/interchange) This information can be used to determine when crashes are occurring and the best times to go to the field to collect data and observe traffic conditions. Additionally, the collision diagram software and ARAN video can be launched from the Crash Browser application. 905.1.1.3 Statewide Average Crash Rates The Statewide Average Crash Rates application can be launched from the TMS Menu or the Crash Summary application. The application allows the user to view crash rates by state, district or county for many combinations of crash severity and roadway information. The Statewide Average Crash Rates are used to determine if the studied range is above or below the statewide average. 905.1.1.4 Intersection Expected Values The Intersection Expected Crash Rates application can be launched from the TMS Menu. The application allows the user to compare the intersection being studied with other intersections in the state with similar roadway and traffic characteristics. 905.1.1.5 Queries & Reports Queries & Reports are located under the TMS menu. Within the Queries & Reports menu are applications that will provide pavement reports, safety reports, state of the system (SOS) reports and traffic reports. The safety reports include the Crash Statistics Manual, High Severity Lists and various other crash reports by severity. The SOS reports provide roadway inventory information.
The traffic reports provide AADT information for all state routes.

905.1.1.6 ArcMAP Queries

ArcMAP allows the user to map roadway and traffic data. Queries are available within the ArcMAP tool to map basic information. The software allows users to perform database queries from other query applications and import the data into ArcMAP for viewing. This mapping tool is useful for displaying data at public meetings.

905.1.1.7 Direct access to TMS Databases

Authorized TMS users have direct access to the TMS databases. Linking databases using software such as Microsoft Access allows TMS users to query data in a specialized manner. While the applications in TMS allow for most combinations of data, they cannot account for every scenario. When such a scenario occurs, specialized queries must be prepared by those with expertise in query languages and the details of the TMS database.

905.1.2 Non-TMS Field Data

Installing a permanent traffic counter

While some field data may be available in TMS, it is often useful to obtain current data from the field. The TMS applications are data intensive, and very few state routes' data are updated every year. Traffic patterns can change quickly with new developments. For these reasons, obtaining field data for traffic studies is encouraged.

905.1.2.1 Traffic Counts

As mentioned, traffic patterns can change quickly. There are several ways to review traffic data that may be important to a study. For intersections, it is common to perform a 12-hour turning movement count. When counting an intersection for 12 hours, track the turning movements in 15-minute intervals. By tracking vehicle actions in 15-minute intervals, the intersection can be reviewed for peak hour(s) movements, which also assists in signal timing plans. When the traffic study involves a range location, count devices can be placed to gather AADT and sometimes speed. Due to new developments and the TMS count cycle, it may be a couple of years before a road is counted.
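As a sketch of how 15-minute turning movement counts support a peak-hour review: the peak hour is the four consecutive intervals with the largest total, and the peak hour factor (PHF, a standard traffic-engineering measure not defined in this article) compares the hourly volume with four times the busiest 15-minute interval. The counts below are hypothetical:

```python
# Hypothetical 15-minute approach counts from a turning movement count.
counts = [110, 125, 160, 180, 210, 195, 170, 150]

def peak_hour(counts):
    """Return (start index, peak-hour volume, peak hour factor)."""
    # Sum every window of four consecutive 15-minute intervals.
    sums = [sum(counts[i:i + 4]) for i in range(len(counts) - 3)]
    start = max(range(len(sums)), key=sums.__getitem__)
    hour = counts[start:start + 4]
    phf = sum(hour) / (4 * max(hour))   # PHF = hourly volume / (4 x peak 15-min)
    return start, sum(hour), phf

start, volume, phf = peak_hour(counts)   # peak hour starts at interval 3, volume 755
```

A PHF close to 1.0 indicates steady flow across the hour, while a low PHF flags a sharp within-hour surge worth examining for signal timing.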
It is also good to confirm the information in TMS. While in the field gathering traffic counts, it is a good idea to perform a spot speed study as well.

905.1.2.2 Speed Study

A speed study is performed to determine the prevailing speed at a location. The prevailing speed is used to determine whether the speed limit is set appropriately and to check for proper sight distances along the roadway.

905.1.2.3 Traffic Flow

While performing a traffic study for either an intersection or a range, take notes about the traffic flow. Look for erratic and illegal maneuvers or other operational problems, as these may indicate signing/striping needs or other operational deficiencies.

905.1.2.4 Condition Diagram

As mentioned in the S-HAL, the Condition Diagram is a drawing (to scale, if practical) of the existing roadway, control device locations and major features in the nearby environment.

905.1.2.5 Sight Distance Measurements

Sight distances should be measured for at-grade intersections and ranges. The basic sight distance measurements that will be obtained include Decision Sight Distance and Passing Sight Distance. Sight distances will be measured for:
- At-grade intersections – driveways and entrances
- At-grade intersections with stop and yield control
- At-grade intersections with signal control.

If sight distance is being reviewed due to an entrance permit, refer to EPG 941.7 Sight Distance for Entrances.

905.1.2.6 Measuring Degree of Curvature

Sometimes horizontal degrees of curvature are used to make engineering decisions; an example is the article on chevron placement. Using equations involving the curve's radius, length of chord (LC), degree of curve, and middle ordinate, a simple relationship can be derived. The relationship assumes a chord length of 62 ft. With a 62-ft. chord, the middle ordinate (measured in inches) equals the degree of curvature, so a simple field check using a 62-ft. length of rope can be used to measure the degree of curvature.
Components of a simple circular curve:

PI = point of intersection of the back tangent and forward tangent
PC = point of curvature, the point of change from back tangent to circular curve
PT = point of tangency, the point of change from circular curve to forward tangent
LC = total chord length, or long chord, from PC to PT in ft. for the circular curve
D = degree of curvature, the central angle that subtends a 100-ft. arc. The degree of curvature is determined by the appropriate design speed (anticipated posted speed).
Δ = total intersection (or delta) angle between back and forward tangents
T = tangent distance in ft., the distance between the PC and PI or the PI and PT
L = total length in ft. of the circular curve from PC to PT measured along its arc
E = external distance (radial distance) in ft. from PI to the midpoint of the circular curve

The calculation proceeds as follows:
1. Assume LC (62 ft.).
2. Calculate R from sin(D/2) = 50/R.
3. Calculate Δ from LC = 2R sin(Δ/2).
4. Calculate MO in ft.: MO = R(1 − cos(Δ/2)).
5. Convert MO to inches.

LC (ft.)  D (deg.)  D (rad.)     R (ft.)      Δ (rad.)     MO (ft.)  MO (in.)
62        1         0.017453293  5729.650674  0.010820957  0.08      1.01
62        4         0.069813170  1432.685417  0.043278753  0.34      4.03
62        6         0.104719755  955.3661305  0.064907979  0.50      6.04
62        7         0.122173048  819.0204120  0.075718276  0.59      7.04
62        8         0.139626340  716.7793513  0.086525016  0.67      8.05
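The calculation steps above can be checked numerically. The short sketch below (an illustration, not part of the EPG) reproduces the table and confirms that, with a 62-ft. chord, the middle ordinate in inches roughly equals the degree of curvature:

```python
import math

def middle_ordinate_inches(d_deg, chord_ft=62.0):
    """Middle ordinate (in inches) of a chord laid across a curve of degree D."""
    r = 50.0 / math.sin(math.radians(d_deg) / 2)   # sin(D/2) = 50/R
    delta = 2 * math.asin(chord_ft / (2 * r))      # LC = 2R sin(Delta/2)
    mo_ft = r * (1 - math.cos(delta / 2))          # MO = R(1 - cos(Delta/2))
    return mo_ft * 12                              # convert ft. to inches

# For an 8-degree curve the middle ordinate is about 8.05 in.,
# matching the last row of the table.
mo_8 = middle_ordinate_inches(8)
```

The near equality MO (in.) ≈ D (deg.) holds because, for these small deflection angles, both quantities grow almost linearly with curvature.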
Scalar Form Factor of the Pion in the Kroll-Lee-Zumino Field Theory C. A. Dominguez, M. Loewe, M. Lushozi, "Scalar Form Factor of the Pion in the Kroll-Lee-Zumino Field Theory", Advances in High Energy Physics, vol. 2015, Article ID 803232, 4 pages, 2015. https://doi.org/10.1155/2015/803232 C. A. Dominguez,1 M. Loewe,1,2 and M. Lushozi1 1Centre for Theoretical & Mathematical Physics and Department of Physics, University of Cape Town, Rondebosch 7700, South Africa 2Instituto de Física, Pontificia Universidad Católica de Chile, Casilla 306, 22 Santiago, Chile The renormalizable Kroll-Lee-Zumino field theory of pions and a neutral rho-meson is used to determine the scalar form factor of the pion in the space-like region at next-to-leading order. Perturbative calculations in this framework are parameter-free, as the masses and the rho-pion-pion coupling are known from experiment. Results compare favorably with lattice QCD calculations. The scalar form factor of the pion [1, 2], and particularly its quadratic radius, plays an important role in chiral perturbation theory (CHPT) [3, 4]. This form factor is defined as the pion matrix element of the QCD scalar current ; that is, where . The associated quadratic scalar radius is given by where is the pion sigma term The scalar radius fixes , one of the low energy constants of CHPT, through the relation where MeV is the physical pion decay constant [5]. The low energy constant , in turn, determines the leading contribution in the chiral expansion of the pion decay constant; that is, where is the pion decay constant in the chiral limit. This scalar form factor is not accessible experimentally, but it has been determined from lattice QCD (LQCD) [6–8], or hadronic models [9, 10]. Theoretically, the ideal tool to study this form factor, independently from LQCD, is the Kroll-Lee-Zumino Abelian renormalizable field theory of pions and a neutral -meson [11, 12]. 
This provides the appropriate field theory platform for the phenomenological vector meson dominance (VMD) model [13, 14], allowing for a systematic calculation of higher order quantum corrections [16]. (Note that [15] has a misprint in equation (15): the sign of the first term in curly brackets should be negative, with the remaining equations being correct. The electromagnetic square radius of the pion quoted in that paper is also incorrect; the correct value is fm2, in much better agreement with data than naive (single ) VMD.) Due to the renormalizability of the theory, predictions are parameter-free, as the strong coupling, , is known from experiment. In spite of this coupling being a strong interaction quantity, perturbative calculations in the scheme make sense because the effective expansion parameter turns out to be . The KLZ theory has been used to compute the next-to-leading order (NLO) correction to the tree level (VMD) electromagnetic form factor of the pion in the space-like region, with very good results [15]. In fact, it agrees with data up to GeV2 with a chi-squared per degree of freedom , as opposed to VMD, which gives . In addition, the mean-squared radius at NLO is fm2, compared with the experimental result [5] fm2 and the VMD value fm2. In this note we compute in this framework the scalar form factor of the pion at NLO in the space-like region and compare with current results from LQCD. The KLZ Lagrangian is given by where is a vector field describing the meson (), is a complex pseudoscalar field describing the mesons, is the usual field strength tensor, , and is the current, . In spite of the explicit presence of the mass term in the Lagrangian, the theory is renormalizable because the neutral vector meson is coupled to a conserved current [11, 12]. Figures 1 and 2 show, respectively, the LO and the NLO diagrams, where the cross indicates the coupling of the current to the two pions.
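For orientation only (this is the textbook single-rho VMD estimate, not the paper's NLO KLZ calculation): a monopole form factor F(q²) = M²/(M² − q²) has mean-squared radius ⟨r²⟩ = 6 dF/dq² at q² = 0, i.e. 6/M², which for the ρ(770) evaluates to roughly 0.39 fm²:

```python
# Textbook single-rho VMD estimate of the pion mean-squared radius
# (an orientation sketch; the paper's NLO result is not reproduced here).
HBARC = 0.19733   # hbar*c in GeV*fm, converts GeV^-2 to fm^2
M_RHO = 0.77526   # rho(770) mass in GeV (PDG value)

# For F(q^2) = M^2/(M^2 - q^2): <r^2> = 6 dF/dq^2 |_{q^2=0} = 6/M^2.
r2_vmd = 6 * (HBARC / M_RHO) ** 2   # about 0.39 fm^2
```

This is the tree-level number the NLO correction discussed in the text improves upon.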
Notice that while the Lagrangian (6) contains a quartic coupling, this term only contributes in this application at NNLO and beyond. Leading order (LO) contribution to the scalar form factor of the pion. The cross indicates the coupling of the scalar current to two pions. Next-to-leading order (NLO) contribution to the scalar form factor. Using the Feynman propagator for the -meson, and in dimensions, the unrenormalized vertex function in Figure 2 in dimensional regularization is given by where is defined as In the scheme and renormalizing the vertex function at the point , the NLO contribution in Figure 2 is [16] with For details on the renormalization procedure for the fields, masses, and coupling, see [15]. The result of a numerical evaluation of (9), using from the measured width of the -meson [5], is shown in Figure 3. Regarding the scalar radius, defined in (2), we confirm the NLO result obtained in [16] with a negligible error due to the strong coupling. The normalized scalar form factor (8), to NLO in the space-like region. This value is smaller than typical values in the literature [6–10]. However, it must be kept in mind that the NLO result is expected to be a lower bound; that is, with , the NNLO would reduce , thus increasing the radius. A rough order of magnitude estimate of the size of the NNLO contribution suggests a correction of some 20% to the NLO term (the NNLO calculation is quite formidable and beyond the scope of this note). This is obtained by estimating a typical two-loop diagram, for example, the -meson propagator at NNLO and comparing it with the NLO result. The Feynman integrals in the variables at NLO and NNLO are of order in the range explored here. We find the total contribution from this diagram to be over 20% of the NLO, thus increasing the radius to fm2. 
It should be clear that this result (11) would be affected by (unknown) systematic uncertainties arising, for example, from hadronic contributions from fields absent from the KLZ Lagrangian (6). A comparison of the KLZ form factor itself at low GeV2 with LQCD results read from figures in [6, 8] shows good agreement. It should be mentioned, though, that LQCD results from [6] are for light-quark masses in the range from to , while those from [8] are for MeV. These LQCD determinations find values for the scalar radius higher than in this analysis (11), that is, fm2 from [6] and fm2 from [8]; these results for the radius are determined from, for example, chiral extrapolations to the physical pion mass. Our results for the form factor also agree to within less than 10% with a CHPT calculation [17] in the range GeV2. Potential contributions to the form factor and its radius from other scalar degrees of freedom absent from the KLZ Lagrangian should be understood as part of the systematic uncertainties. The excellent agreement with data of the KLZ results for the electromagnetic pion form factor [15] might be a result of the absence of other hadronic contributions with the same quantum numbers as the neutral ρ-meson. In the case of the scalar form factor, such additional contributions are potentially present and thus could become part of the systematic uncertainties.

This work was supported in part by FONDECyT (Chile) under Grants 1130056 and 1120770, by NRF (South Africa), and by the University of Cape Town URC. The authors wish to thank Gary Tupper for valuable discussions on KLZ and Hartmut Wittig for enlightening exchanges on LQCD.

T. N. Truong and R. S. Willey, “Branching ratios for decays of light Higgs bosons,” Physical Review D, vol. 40, no. 11, pp. 3635–3640, 1989.
J. F. Donoghue, J. Gasser, and H. Leutwyler, “The decay of a light Higgs boson,” Nuclear Physics B, vol. 343, no. 2, pp. 341–368, 1990.
S. Scherer, “Introduction to chiral perturbation theory,” Advances in Nuclear Physics, vol. 27, p. 277, 2003.
J. Gasser, “Light-quark dynamics,” in Lectures on Flavor Physics, vol. 629 of Lecture Notes in Physics, pp. 1–35, Springer, Berlin, Germany, 2004.
S. Aoki, T. W. Chiu, H. Fukaya et al., “Pion form factors from two-flavor lattice QCD with exact chiral symmetry,” Physical Review D, vol. 80, Article ID 034508, 2009.
S. Aoki, Y. Aoki, C. Bernard et al., “Review of lattice results concerning low energy particle physics,” http://arxiv.org/abs/1310.8555.
V. Gülpers, G. von Hippel, and H. Wittig, “The scalar pion form factor in two-flavor lattice QCD,” Physical Review D, vol. 89, Article ID 094503, 2014.
J. A. Oller and L. Roca, “Scalar radius of the pion and zeros in the form factor,” Physics Letters B, vol. 651, no. 2-3, pp. 139–146, 2007.
L. Roca, J. A. Oller, and C. Schat, “Scalar radius of the pion and γγ → ππ,” PoS EFT, vol. 09, p. 031, 2009.
N. M. Kroll, T. D. Lee, and B. Zumino, “Neutral vector mesons and the hadronic electromagnetic current,” Physical Review, vol. 157, no. 5, pp. 1376–1399, 1967.
J. H. Lowenstein and B. Schroer, “Gauge invariance and Ward identities in a massive-vector-meson model,” Physical Review D, vol. 6, no. 6, pp. 1553–1571, 1972.
J. J. Sakurai, “Theory of strong interactions,” Annals of Physics, vol. 11, no. 1, pp. 1–48, 1960.
J. J. Sakurai, Currents and Mesons, University of Chicago Press, Chicago, Ill, USA, 1969.
C. A. Dominguez, J. I. Jottar, M. Loewe, and B. Willers, “Pion form factor in the Kroll-Lee-Zumino model,” Physical Review D, vol. 76, no. 9, Article ID 095002, 2007.
C. A. Dominguez, M. Loewe, and B. Willers, “Scalar radius of the pion in the Kroll-Lee-Zumino renormalizable theory,” Physical Review D, vol. 78, Article ID 057901, 2008.
A. Jüttner, “Revisiting the pion's scalar form factor in chiral perturbation theory,” Journal of High Energy Physics, vol. 2012, no. 1, article 7, 2012.

Copyright © 2015 C. A. Dominguez et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The publication of this article was funded by SCOAP3.
Fabrication of uniform 4H-SiC mesopores by pulsed electrochemical etching | Nanoscale Research Letters

Jia-Hui Tan1, Zhi-zhan Chen1, Wu-Yue Lu1, Hong He1, Yi-Hong Liu1, Yu-Jun Sun1 & Gao-Jie Zhao1

In this letter, uniform 4H silicon carbide (SiC) mesopores were fabricated by a pulsed electrochemical etching method. The length of the mesopores is about 19 μm, with a diameter of about 19 nm. The introduction of a pause time (Toff) is crucial to forming uniform 4H-SiC mesopores: the pore diameter does not change when etching proceeds with Toff. The decrease in hole concentration at the pore tips during Toff is the main reason for the uniformity. Silicon carbide (SiC) is a promising semiconductor for high-temperature, high-power, and high-frequency electronic devices, due to its wide bandgap, high breakdown electric field, and high electron saturation velocity [1]. Recently, the high surface-to-volume ratio of SiC porous semiconductors has drawn significant attention. SiC porous structures have found applications in gas sensors [2], supercapacitors [3], and RF planar inductors [4]. Cao et al. [5] fabricated a porous layer in amorphous SiC thin films by constant-current anodic etching in an electrolyte of aqueous diluted HF; the morphology of the porous layer depended on the anodic current density. Ke et al. [6] fabricated variable porous structures on 6H-SiC crystalline faces by anodic etching under constant voltage with UV assistance; nano-columnar pores formed on the C face. With simplified experimental conditions, Gautier et al. [7] formed columnar mesopores on 4H-SiC under constant current density without UV assistance, but the pore size was not uniform: it was about 10 nm near the surface and between 20 and 50 nm at a 40-μm depth when the current density was 25.5 mA/cm2.
Uniform pore structures can enhance optical performance and can be applied in the optoelectronics field [8]. In this paper, a uniform longitudinal distribution of SiC mesopores on a 4H-SiC substrate, with a thin cap and transition layer, was first fabricated by pulsed electrochemical etching. A uniform pore diameter of about 19 nm throughout the 4H-SiC mesopores was obtained through the introduction of a pause time. The 4H-SiC mesopores were fabricated on the C face of an n-type 4H-SiC wafer with an on-axis surface. The wafer is 340 μm thick and double polished, and its resistivity is about 0.015 ~ 0.028 Ω · cm. All samples were prepared in a size of 1 cm × 1 cm. The C face was chosen as the etching face. Samples 1, 2, 3, 4, and 7 (#1, #2, #3, #4, #7) were given Schottky contacts by attaching a 0.2-mm-thick copper foil with conductive silver glue. A 200-nm-thick nickel layer was sputtered on the Si face of samples 5 and 6 (#5, #6) and then annealed in an N2 atmosphere for 3 min to form an Ohmic contact. A Keithley 2635A semiconductor characterization system (Keithley, Cleveland, USA) was used to acquire I-V curves to determine whether the backside contact was Ohmic or Schottky. A schematic drawing of the anodic etching system is shown in Figure 1. The anodization was performed in a simple, open Teflon cell with an O-ring seal (201 mm2 active area) at constant room temperature. A copper disc with a copper wire, pressed to the metal contact side of the sample, served as a conductor to integrate the sample into the anodic etching system. A Pt mesh served as the counter electrode. The etching solution was HF (49%):C2H5OH (99%):H2O2 (30%) in volume ratios of 3:6:1. The etching time for all samples was 7 min. All experiments were performed under room light. After anodization, the samples were rinsed and then cracked along the <1000> axis.
The morphology of the cross sections was observed with a HITACHI S-4800 scanning electron microscope (SEM) (Hitachi High-Technologies Corporation, Tokyo, Japan). The schematic drawing of electrochemical etching: the pulsed current cycle time and the pause time are represented by “T” and “Toff,” respectively. The current pulse shapes from the power source were measured with an oscilloscope; they are approximately rectangular for the longer T (10 ms) and triangular for the shorter T (0.2 ms). To modulate T and Toff, a constant pulsed current/voltage source is applied. The etching was performed in galvanostatic mode for samples #1 to #6 and in potentiostatic mode for sample #7. The detailed experimental parameters are listed in Table 1. The porous layer structure consists of two parts: the cap and transition layer, and the mesopores. The characteristics of the different porous layer structures are summarized in Table 2. The thickness of the cap and transition layer is about 1 μm, and the length of the mesopores is about 19 μm for all samples. The mesopore diameters are 19 nm in the middle and more than 30 nm at the bottom for samples #1, #3, #5, and #7; they are always about 19 nm from the middle to the bottom for samples #2, #4, and #6. Tolerances of all parameters are given in Table 2. Here, we selected about 50 mesopores in different parts of each sample to obtain the average pore diameter and its tolerance. The mesopore wall thickness is almost the same for different cycle times and pause times and becomes thinner when the current density increases to a higher value [5–7]. Table 2 The characteristics of the porous layer structure. Comparing #1 with #3, we note that the pores of #1 look discontinuous, with a diameter of about 19 nm in the middle and 33 nm at the bottom. In contrast, continuous pores, with the diameter enlarging from the surface to the bottom as in #1, are formed in sample #3 (Figure 2).
In fact, the pore distributions of both samples are the same; the discontinuous-looking pore of #1 is just the pore wall belonging to a pore that grew in front of the pore primarily being viewed, as can be found in other porous semiconductors. In conclusion, varying T from 0.2 to 10 ms does not obviously influence the pore morphology. The cross-section morphology of #3. The whole view (a) and the magnified field of the cap and transition layer (b), mesopores in the middle (c), and mesopores in the bottom (d). The schematic of the cross-section structure is shown on the left (e). The samples with Toff have a uniform longitudinal pore distribution: the diameters in the middle and at the bottom are the same for #2, #4, and #6, while the diameters are enlarged at the bottom of the other samples (#1, #3, #5, #7) without Toff. The cross-section morphologies of #3 and #4 are chosen as examples to clarify the role of Toff, as shown in Figures 2 and 3. The cross-section morphology of #4. The whole view (a) and the magnified field of the cap and transition layer (b), mesopores in the middle (c), and mesopores in the bottom (d). The plane view (e) at 1-μm depth, realized by RIE and SEM. The schematic cross-section structure (f). When a constant voltage is applied to #7, columnar mesopores also form, but their longitudinal profile is not uniform, which also confirms the role of Toff in slowing down the tendency of pore diameter enlargement. In all etching processes, the voltage increased with time due to the increasing specific surface area. Comparing #6 with #4 and #5 with #3, we find almost the same morphologies in the two groups. The conclusion is that the Ohmic contact plays the same role as the Schottky contact in our experiments, which should be attributed to the high contact resistance of the Ohmic contact, close to that of the Schottky contact in the 10- to 20-V region. The effect of H2O2 is expected to facilitate a uniform etching process similar to that in the Si system [9].
What is more, HF/H2O2 does have an effect on enhancing uniformity on 6H-SiC [10]. The electrochemical etching of SiC without H2O2, as in other works, can be written as follows [11] (h+ denotes a hole):

SiC + 4H2O + 8h+ → SiO2 + CO2↑ + 8H+ (1)
SiC + 2H2O + 4h+ → SiO + CO↑ + 4H+ (2)
SiO2 + 6HF → (SiF6)2− + 2H2O + 2H+ (3)
SiO + 6HF → (SiF6)2− + H2O + 4H+ (4)

When H2O2 is added to the etching solution, the following reaction can occur:

SiC + 2H2O + 4h+ + 2O− → SiO2 + CO2↑ + 4H+ (5)

Equation (5) enhances the oxide formation rate. The kinetic balance between oxide formation and dissolution can therefore be quickly achieved, which favors the formation of columnar pores. Last but not least, the effect of H2O2 here is to ensure pore uniformity in the plane perpendicular to the etching direction. The mechanism of forming longitudinally uniform mesopores can be stated as follows, emphasizing the role of Toff in the etching process. If etching proceeds without Toff, the diffusion limitation of the reactive species, due to the increasing aspect ratio of the pores, results in a depletion of the reactive species at the pore tips; the resulting reduced etching rate and accumulation of holes at the pore tips widen the pore tips. This is why the pore diameter became larger with deeper etching, as reported by Gautier et al. [7].
If etching proceeds with Toff, the concentration of holes at the pore tips decreases, and the oxide formation velocity is therefore reduced, while the reactive species continue to diffuse from the etching solution to the pore tips. The oxide removal velocity is unchanged in this case, which prevents the pore diameter from being enlarged. With different T or Toff at the same current density, the pore diameter and pore density are almost the same, which means they do not influence the thickness of the pore walls in our experiment. Additionally, the constant-voltage experiment results in an inhomogeneous pore structure, because the electric field distribution is not homogeneous on the SiC surface. In this paper, the effects of cycle time and pause time on the mesopore morphology are clarified. Cycle times ranging from 0.2 to 10 ms do not produce distinguishable pore morphologies, and the different metal contact types have no influence on the morphology either, but the pause time of the constant pulsed current does make a difference in preventing the pore diameters from widening at the bottom. Zhuang D, Edgar JH: Wet etching of GaN, AlN, and SiC: a review. Mater Sci Eng 2005, 48(1):1–46. 10.1016/j.mser.2004.11.002 Korotcenkov G, Cho BK: Porous semiconductors: advanced material for gas sensor applications. Crit Rev Solid State Mater Sci 2010, 35(1):1–37. 10.1080/10408430903245369 Tsai WY, Gao PC, Daffos B, Taberna PL, Perez CR, Gogotsi Y, Favier F, Simon P: Ordered mesoporous silicon carbide-derived carbon for high-power supercapacitors. Electrochem Commun 2013, 34: 109–112. Gautier G, Capelle M, Billoué J, Cayrel F, Poveda P: RF planar inductor electrical performances on n-type porous 4H silicon carbide. IEEE Electron Device Letters 2012, 33(4):477–479. Cao AT, Luong QNT, Dao CT: Influence of the anodic etching current density on the morphology of the porous SiC layer. AIP Advance 2014, 4: 037105.
10.1063/1.4869017 Ke Y, Devaty RP, Choyke WJ: Comparative columnar porous etching studies on n-type 6H SiC crystalline faces. Phys Status Solidi B 2008, 245(7):1396–1403. 10.1002/pssb.200844024 Gautier G, Cayrel F, Capelle M, Billoué J, Song X, Michaud JF: Room light anodic etching of highly doped n-type 4H-SiC in high-concentration HF electrolytes: difference between C and Si crystalline faces. Nanoscale Res Lett 2012, 7: 367–369. 10.1186/1556-276X-7-367 Naderi N, Hashim MR, Saron KMA, Rouhi J: Enhanced optical performance of electrochemically etched porous silicon carbide. Semicond Sci Technol 2013, 28: 025011. 10.1088/0268-1242/28/2/025011 Naderi N, Hashim MR: A combination of electroless and electrochemical etching methods for enhancing the uniformity of porous silicon substrate for light detection application. Appl Surf Sci 2012, 258: 6436–6440. 10.1016/j.apsusc.2012.03.056 Wang LH, Shao HH, Hu XB, Xu XG: Hierarchical porous patterns of n-type 6H-SiC crystals via photo-electrochemical etching. J Mater Sci Technol 2013, 29(7):655–661. 10.1016/j.jmst.2013.03.017 Bourenane K, Keffous A, Kechouache M, Nezzal G, Boukezzata A, Kerdja T: Morphological and photoluminescence study of porous thin SiC layer grown onto silicon. Surf Interface Anal 2008, 40(3–4):763–768. This work was funded by the Special Prophase Project on the National Basic Research Program of China (2012CB326402), Innovation Program of Shanghai Municipal Education Commission (13ZZ108), and Shanghai Science and Technology Commission (13520502700). Department of Physics, Shanghai Normal University, 100 Guilin Road, Shanghai, 200234, China Jia-Hui Tan, Zhi-zhan Chen, Wu-Yue Lu, Yue Cheng, Hong He, Yi-Hong Liu, Yu-Jun Sun & Gao-Jie Zhao Jia-Hui Tan Zhi-zhan Chen Wu-Yue Lu Yi-Hong Liu Yu-Jun Sun Gao-Jie Zhao Correspondence to Zhi-zhan Chen. JHT wrote the manuscript and performed the porous SiC. HH participated in the study of the porous SiC fabrication. 
YC, YJS, and GJZ offered the critical parameters to make the Ohmic contact on the samples in the experiment. YHL performed RIE on the sample. ZZC read and approved the current manuscript. All authors read and approved the final manuscript. Tan, JH., Chen, Zz., Lu, WY. et al. Fabrication of uniform 4H-SiC mesopores by pulsed electrochemical etching. Nanoscale Res Lett 9, 570 (2014). https://doi.org/10.1186/1556-276X-9-570 Keywords: constant pulsed current; uniform mesopores
What Is the Market Clearing Price? | Outlier

This article is a quick guide to what the market clearing price is, how it works, and why it matters, with examples and FAQs.

- Understanding How Market Clearing Prices Work
- Market Equilibrium: When Does It Happen?
- 5 Factors That Affect the Market Clearing Price
- Examples of Market Clearing Prices

When sellers and buyers encounter each other in a market, a process starts in which each side of the market pursues its best interests. Sellers in the market expect to earn the biggest possible profits, i.e., to sell the most at the highest possible price. Buyers in the market look to maximize their utility and prefer to buy goods at the lowest possible price. So how do both sides agree on a specific price? And what is special about this price? If you have heard the term equilibrium price (the price at which supply equals demand), then you are already familiar with what a market clearing price is: the two terms can be used interchangeably. A market clearing price is a price at which the quantity supplied matches the quantity demanded. At this price, every seller who is willing to sell at or below the market-clearing price can do so, and every buyer who is willing to buy at or above the market-clearing price can do so as well. For many years, economists thought that market clearing was a natural phenomenon. To better understand how this phenomenon works, let's look at an example using a supply and demand diagram. Start by looking at points A and B, where the price is equal to $100. At this price, sellers expect to sell a quantity of 150, but consumers are only willing to buy a quantity of 50. As a result, there is excess supply (a surplus of goods) in the market: a total of 150 units will be produced, but only 50 units will be sold, leaving 100 units for which there are no buyers.
One of the most recent and famous examples of excess supply was in 2020, when Russia and Saudi Arabia both raised their oil production and the amount of oil offered in the world temporarily exceeded demand. When this happens, the overproduction of goods puts downward pressure on prices and causes the market price to fall. This is exactly what happened to oil prices in 2020. Notice in the diagram that as the price falls, there continues to be a surplus of goods in the market (albeit a smaller one) until the price reaches the market-clearing price. Remember, the market clearing price occurs at the point where supply equals demand. So in this diagram, the market clearing price is $75. Once the market reaches this point, the quantity supplied is exactly equal to the quantity demanded, so the price ceases to fall. Now, let’s have a look at points C and D, where the price is equal to $50. At this price, demand exceeds supply. Consumers demand 150 units of the good, but producers are only willing to sell 50 units. This time, there is excess demand (a shortage of goods) in the market. Excess demand puts upward pressure on prices. Buyers desperate to buy the good will bid up the price on the limited quantity supplied. The price in the market will continue to rise until it hits equilibrium. At the equilibrium, quantity supplied catches up to quantity demanded, so the price ceases to increase further. A good example of a market shortage was the market for facemasks shortly after the start of the COVID-19 pandemic. When news of the pandemic began to spread, people around the world rushed to buy facemasks, and demand for facemasks quickly overwhelmed supply. Because there were not enough facemasks to meet the sudden increase in demand, prices surged. As you can see from both of these examples, if prices in the market deviate from the equilibrium, there are forces that push the price back towards equilibrium. So far, we have discussed excess supply and excess demand. 
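The numbers in these examples (quantities of 150 and 50 at prices of $100 and $50, clearing at $75) are consistent with simple linear supply and demand curves. The curves below are a hypothetical reconstruction chosen to match those values; the article itself gives no equations:

```python
# Hypothetical linear curves fitted to the article's example:
# demand falls from 150 units at $50 to 50 units at $100,
# supply rises from 50 units at $50 to 150 units at $100.
def demand(p):
    return 250 - 2 * p

def supply(p):
    return 2 * p - 50

surplus_at_100 = supply(100) - demand(100)   # 100 unsold units at $100
shortage_at_50 = demand(50) - supply(50)     # 100 unmet units at $50

# Market clearing: solve 250 - 2P = 2P - 50  ->  P = 75, Q = 100.
p_clear = (250 + 50) / 4
q_clear = demand(p_clear)
```

At any price above $75 this sketch produces a surplus, and at any price below it a shortage, mirroring the pressure toward equilibrium described above.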
In both cases, quantity supplied and quantity demanded did not match, and as a result, there were forces that pushed the market back towards equilibrium. The forces that drive the price to increase (in the case of excess demand) or decrease (in the case of excess supply) are the law of supply and the law of demand. When a low price causes demand to exceed supply, the good becomes scarce, and suppliers can take advantage of the fact that consumers are willing to pay a higher price for a highly desired good. On the other hand, if the price is too high, consumers will not demand a large quantity, and sellers will have to sell at a discount, ending up selling a smaller quantity of goods. In either case, the incentives facing individual buyers and sellers work to pull the market back to equilibrium.

Do Markets Really Clear?

For many years, economists thought that market clearing happened naturally, without interference by outside actors or policymakers. They believed that market-clearing mechanisms were at work in all markets. During the Great Depression, however, economists started to recognize this might not be the case. John Maynard Keynes, an English economist of the last century, noticed this when he observed labor markets. During the Great Depression, many workers were unemployed and looking for jobs (an excess supply of workers), and yet the labor market did not seem to be moving back to an equilibrium. Keynes' observation opened up a discussion of how markets can be sticky and will not always move quickly back towards equilibrium.

5 Factors That Affect the Market Clearing Price

1. Liquidity of the Market

Several factors can affect the market clearing price. Perhaps the most important is how frequently transactions occur in the market. Economists call markets with many frequent transactions liquid markets. The markets for stocks and many other financial assets are examples of liquid markets.
On the other hand, when transactions occur infrequently, markets are slower to adjust and clear. Economists call these types of markets illiquid markets.

2. Sticky Prices

When prices adjust slowly in a market, we say that prices in the market are sticky. This means that despite changes in market conditions, the price in the market might remain unchanged for an extended period of time. This is true of the market for many types of cars: when a new line of cars is launched, prices are set for a year and will not change until the next line is launched. This is also what Keynes observed in the labor market. The "price" in labor markets is the wage paid to employees. You'll often hear economists say that "wages are sticky." This is because employers typically hire workers with wage contracts and salaries that are fixed for a particular period of time. Wages and salaries aren't usually adjusted day-to-day or even week-to-week, so they are slow to adjust to changes in supply and demand.

3. Shocks, Usually Caused by a Crisis

The COVID-19 pandemic is an example of how shocks can affect markets. The pandemic caused a swift change in the supply of and demand for many goods, such as toilet paper and ventilators, and in the demand for services that became inaccessible due to lockdowns and other preventive policies. For goods and services where demand suddenly increased, suppliers needed to adapt and ramp up production, but due to the sudden and unexpected nature of the shock, they were not always able to adapt quickly. Whether these shocks are temporary or permanent, they can produce shortages and surpluses that disrupt markets by knocking them out of equilibrium.

4. New Technologies or Competitors

When a new technology reduces production costs, suppliers can produce more goods at a lower cost and, therefore, sell at a lower price. In this case, the equilibrium will shift to a lower price and a greater quantity.
In a supply and demand diagram, this can be represented as a shift of the supply curve. The same is the case for the arrival of new competitors in the market: the presence of new firms increases supply and results in a shift of the equilibrium.

5. Government Intervention

Taxes, subsidies, price caps, regulations, even bans on specific companies: all of these affect the equilibrium and can affect how markets clear. For example, the 2019 US government ban on Huawei created a temporary shortage in the market for a specific range of smartphones.

Examples of Market Clearing Prices

Let's do an example of a market where the price is outside of the equilibrium. Suppose the demand for bicycles is given by the following equation:

P_{D} = \frac{1270-4Q_{D}}{7}

On the other side, suppose that the supply of bicycles in the same market is given by:

P_{S} = \frac{2Q_{S}-210}{5}

At the beginning, suppose that the price in the market is P=90. At this price level, consumers will demand:

Q_{D} = \frac{1270-7P}{4} = \frac{1270-7(90)}{4} = \frac{1270-630}{4} = \frac{640}{4} = 160

Producers will offer:

Q_{S} = \frac{5P+210}{2} = \frac{5(90)+210}{2} = \frac{450+210}{2} = \frac{660}{2} = 330

Quantity demanded is lower than quantity supplied, causing a surplus (excess supply) of Q_{S}-Q_{D} = 330-160 = 170 units of the good. Now suppose that a new price level of P=10 is set. Since the price is lower, producers will have an incentive to produce fewer goods, but consumers will demand a higher quantity. The new quantities demanded and supplied at this price level are:

Q_{D} = \frac{1270-7P}{4} = \frac{1270-7(10)}{4} = \frac{1270-70}{4} = \frac{1200}{4} = 300

Q_{S} = \frac{5P+210}{2} = \frac{5(10)+210}{2} = \frac{50+210}{2} = \frac{260}{2} = 130

Since quantity supplied is lower than quantity demanded, we have a shortage of Q_{D}-Q_{S} = 300-130 = 170 units. Finally, to find the equilibrium, we must find a price at which quantity supplied equals quantity demanded.
We can do this by equating both functions:

P_{S} = P_{D}

\frac{1270-4Q}{7} = \frac{2Q-210}{5}

6350-20Q = 14Q-1470

6350+1470 = 14Q+20Q

7820 = 34Q

Q = \frac{7820}{34} = 230

P = \frac{1270-4(230)}{7} = 50 = \frac{2(230)-210}{5}

At a price of P=50, consumers will demand Q=230, and producers will supply the same amount. We have found the market clearing price!

How do you find the market clearing price?

Finding the market clearing price consists of finding the price at which buyers and sellers are willing to exchange the same quantity. Let's do another example. Suppose that in the market for jackets, supply and demand are defined by the following functions:

Q_{D} = \frac{1260-5P}{7}

Q_{S} = P-60

To find the market clearing price, we need the price for which Q_{D} = Q_{S}:

\frac{1260-5P}{7} = P-60

1260-5P = 7P-420

1260+420 = 7P+5P

1680 = 12P

P = \frac{1680}{12} = 140

When the price is 140, quantity demanded equals quantity supplied, and the market equilibrium is (Q,P) = (80,140).

What happens when the price is above the equilibrium price?

As we have discussed, when the price is above the equilibrium price, there will be an excess supply of goods (a surplus). In this case, consumers are willing to buy fewer units of the good than suppliers would like to sell. Graphically, this is represented by points A and B. If a market experiences a surplus, the laws of supply and demand should drive the market price back down to the market clearing price.

What happens when the price is below the equilibrium price?

If the price is below the equilibrium, consumers will demand more goods than suppliers are willing to sell. This translates to an excess demand for goods (a shortage). Graphically, this is represented by points C and D. If a market experiences a shortage, the laws of supply and demand should drive the market price back up to the market clearing price.
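Both worked examples can be checked in a few lines of code. The sketch below finds the clearing price by bisection on excess demand instead of solving the equations by hand; the function name is illustrative.

```python
def clearing_price(qd, qs, lo=0.0, hi=10_000.0):
    """Bisection on excess demand qd(P) - qs(P); assumes demand is
    decreasing and supply is increasing in price on [lo, hi]."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if qd(mid) > qs(mid):   # shortage: the price must rise
            lo = mid
        else:                   # surplus (or balance): the price must fall
            hi = mid
    return lo

# Bicycles: P_D = (1270 - 4Q)/7  ->  Q_D = (1270 - 7P)/4
#           P_S = (2Q - 210)/5   ->  Q_S = (5P + 210)/2
bike_p = clearing_price(lambda p: (1270 - 7 * p) / 4,
                        lambda p: (5 * p + 210) / 2)

# Jackets: Q_D = (1260 - 5P)/7,  Q_S = P - 60
jacket_p = clearing_price(lambda p: (1260 - 5 * p) / 7,
                          lambda p: p - 60)
```

The bicycle market clears at P=50 (Q=230) and the jacket market at P=140 (Q=80), matching the algebra above.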
Consumer Behavior in Low Involvement Product Purchase: A Stochastic Model

Indian Institute of Management Kozhikode (IIMK), Kozhikode, India

Consumers spend less time and gather less information when making buying decisions for low involvement products. Consequently, they engage in little deliberation in their purchase decisions, primarily because low involvement products are often low priced and carry a low cost of failure. In many situations, particularly for low involvement products and frequently purchased consumer packaged goods, little conscious decision making takes place. In such situations a stochastic model, concentrating on the random nature of choice, becomes more appropriate than a deterministic approach. In this research, we develop a stochastic model for consumer buying decisions for low involvement products. We consider the agitations a buyer experiences during a purchase occasion. These agitations create internal forces that stimulate the consumer's mind. These forces are chaotic, which makes the resulting purchase decision random.

Keywords: Consumer Behavior, Low Involvement

Purchase decisions for low involvement products often involve very little thought, information gathering, or formal decision making. For example, buying a pack of chewing gum or chocolate while checking out of a retail store barely takes more than a few seconds, as these products are low involvement and consumers buy them on impulse. This is primarily because low involvement products are often low priced and carry a low cost of failure. The information available for such a purchase may not strongly guide a buyer's decision. Previous research has suggested multiple approaches to modelling consumer purchase behaviour for low involvement products.
In many situations, particularly for low involvement products and frequently purchased consumer packaged goods, little conscious decision making takes place, and a stochastic model concentrating on the random nature of choice becomes more appropriate than a deterministic approach. Another reason that stochastic choice models suit such goods is the large volume of brand-switching data available to market researchers [1]. We assume that the stochastic purchase behavior observed in consumers is due to some kind of agitation within the consumer before or during the purchase. Due to such agitation, the purchase decision is subject to several internal forces acting in different directions. These forces unbalance the consumer's mind, and the purchase decision becomes random. Since the forces are haphazard, the resultant force that influences the purchase decision is also haphazard. The smaller the involvement, the larger the resultant force, and consequently the more irregular the movements. This study contributes to the marketing literature by modelling consumers' choice behaviour for low involvement products, mapping the consumer's mind onto the movement of gaseous molecules. No study in the consumer behaviour literature has conceived of the movement of the consumer's mind in terms of stochastic agitation and the resulting decision making. Research on consumer consideration has been available in the marketing literature since 1963. Roberts and Lattin [2] provided a systematic catalogue of research on consumers' consideration sets. In addition, several studies have investigated how consumers consider different brands while making a buying decision. DeSarbo, Young and Rangaswamy [3] proposed stochastic multidimensional unfolding to represent the consideration set.
Several other researchers have used the Bernoulli model to capture heterogeneity in the stochasticity of consumer behaviour, while others have used Markov processes for the same purpose. Independent and dependent probability diffusion processes were investigated, and mathematical models for market share calculation through Markov chains were developed. Subsequent research investigated linear models to predict consumer behaviour and explored stochastic consumer behaviour models for the market penetration of different varieties. Many researchers have taken these models a step further by generalizing them through empirical validation. Mathematical models of customer retention have also been studied in the consumer behaviour literature. Gupta and Zeithaml [4] define customer retention as the probability of a customer being alive or buying repeatedly from a firm. Other researchers have also developed various models of retention [5]. However, the above studies have not investigated consumer behaviour in the context of random agitation in the consumer's mind at the place of purchase, nor modelled how such behaviour takes place. This research tries to bridge that gap in the consumer behaviour literature. The basic concept on which the kinetic theory of gases stands is the motion of molecules due to thermal agitation. Although kinetic theory explains many thermal and allied phenomena, direct experimental evidence of the random motion of molecules came only with the observations of Robert Brown, after whom Brownian motion is named. In 1900, Bachelier exhibited the Markovian nature of Brownian motion: the position of a particle at time (t + s) depends on its position at time t and not on its positions before time t. The Markov model of stochastic consumer choice behavior assumes that only the immediately preceding purchase affects the present purchase. This can be extended to the post-purchase behavior of the consumer ahead of the next purchase.
Hence this assumption mirrors Bachelier's demonstration of the Markovian nature of Brownian motion. This paper introduces and implements the mathematical apparatus of Brownian motion for consumers' purchase decisions (D_t) for low involvement products. We assume that the purchase decision process is a continuous-time stochastic process \{D_t : 0 \le t < T\}. It is a standard Brownian motion on (0, T) if it has the following properties:

1) D_0 = 0.

2) The increments of D_t are independent: for any finite set of times 0 \le t_1 < t_2 < t_3 < \cdots < t_n < T, the random variables D_{t_2}-D_{t_1}, D_{t_3}-D_{t_2}, D_{t_4}-D_{t_3}, \cdots, D_{t_n}-D_{t_{n-1}} are independent.

3) For any 0 \le c < t < T, the increment D_t - D_c has a Gaussian distribution with mean zero and variance (t - c).

4) For all \alpha in a set of probability one, D_t(\alpha) is a continuous function of t.

The main purpose of this article is to show that one can represent consumers' purchase behavior for low involvement, frequently purchased products as Brownian motion; in other words, we are to prove the existence of the process by confirming the four properties above. To support our hypothesis that the construction of Brownian motion holds, we need the multivariate Gaussian distribution. The critical fact about the multivariate Gaussian is that its joint density is fully determined by the mean vector and the variance-covariance matrix [6]. Hence we need tools to check that the agitations (increments) of our process are Gaussian as well as independent.
As per the properties of the multivariate Gaussian, if B is a d-dimensional random vector

B = \left[\begin{array}{c} B_1 \\ B_2 \\ \vdots \\ B_d \end{array}\right],

its mean vector is

\mu = \left[\begin{array}{c} \mu_1 \\ \mu_2 \\ \vdots \\ \mu_d \end{array}\right]

and its variance-covariance matrix is

\Sigma = \left[\begin{array}{cccc} \sigma_{11} & \sigma_{12} & \cdots & \sigma_{1d} \\ \sigma_{21} & \sigma_{22} & \cdots & \sigma_{2d} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{d1} & \sigma_{d2} & \cdots & \sigma_{dd} \end{array}\right], \qquad \sigma_{ij} = E\left[(B_i - \mu_i)(B_j - \mu_j)\right].

The joint density is

f(x) = \frac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}} \exp\left\{-\frac{1}{2}(x-\mu)^{\mathrm{T}} \Sigma^{-1} (x-\mu)\right\}, \qquad x \in \mathbb{R}^d.

The profile of Brownian motion in this setting can be described as follows: at each instant in time, the consumer randomly chooses a brand and then purchases it. This approach is both intuitive and rigorous [7]. We assume each purchase is a combination of several decisions taken at multiples of \Delta t. At each such instant, the consumer randomly chooses to buy or not to buy one brand and moves on to the next. Starting from a particular point, in every interval \Delta t the process moves a distance \Delta x. To model this randomness, we consider a sequence of independent, identically distributed random variables (Y_i, i \ge 1) such that

P(Y_i = \Delta x) = P(Y_i = -\Delta x) = 1/2.

By time t, the purchaser has made [t/\Delta t] moves (where [g] denotes the integer part of g), and the position is

U_t = Y_1 + Y_2 + Y_3 + \cdots + Y_{[t/\Delta t]}.

All this takes place on a very small scale at, or just before, the time of purchase.
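As a quick sanity check on the multivariate Gaussian density given above, the sketch below evaluates the formula directly for the two-dimensional case, with the determinant and inverse coded by hand; the function name is illustrative.

```python
import math

def mvn_density(x, mu, cov):
    """Bivariate normal density, computed directly from the formula
    f(x) = (2*pi)^(-d/2) * |Sigma|^(-1/2) * exp(-q/2),
    where q = (x - mu)^T Sigma^{-1} (x - mu) and d = 2."""
    d = 2
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    inv = [[cov[1][1] / det, -cov[0][1] / det],
           [-cov[1][0] / det, cov[0][0] / det]]
    dx = [x[0] - mu[0], x[1] - mu[1]]
    q = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
         + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return math.exp(-q / 2) / ((2 * math.pi) ** (d / 2) * math.sqrt(det))

# Standard bivariate normal at the origin: the peak equals 1 / (2*pi)
peak = mvn_density([0.0, 0.0], [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```

With \mu = 0 and \Sigma = I, the value at the origin is 1/(2\pi), confirming that the joint density depends only on the mean vector and covariance matrix.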
We let both \Delta t and \Delta x tend to zero in an appropriate way. Note that E[U_t^2] \approx (\Delta x)^2 \cdot (t/\Delta t). For this expression to have a limit, (\Delta x)^2/\Delta t must have a limit. The increment \Delta t will be very small, and \Delta x small as well, so that (\Delta x)^2 is very small. The most convenient choice is \Delta x = (\Delta t)^{1/2} with \Delta t = 1/n, where n is an integer. The precise formulation of this approach is as follows. On a probability space (\Omega, \Psi, P), let (X_i, i \in N) be a sequence of independent, identically distributed random variables with

P(X_i = 1) = P(X_i = -1) = 1/2.

From this sequence, we define the sequence (D_n, n \ge 0) by

D_0 = 0, \qquad D_n = \sum_{i=1}^{n} X_i.

We have E(D_n) = 0 and \mathrm{Var}(D_n) = n, and the sequence is a random walk [8]. We can illustrate it as a game of tossing a coin: the player gains $1 if it comes up heads and loses $1 if tails appears. Suppose the player has no initial wealth (D_0 = 0); his capital at time n (after n tosses) is D_n. If we record the results of N successive tosses, we can plot the resulting path. Note that the increment (D_m - D_n) for m \ge n is independent of (D_0, D_1, D_2, \cdots, D_n), and D_m - D_n has the same probability law as D_{m-n}, since the binomial distribution depends only on (m - n). We now follow a two-stage normalization. Let N be fixed. In the first stage we transform the time interval [0, N] into the interval [0, 1], and in the second stage we rescale the values taken by D_n. Hence we define a family of random variables indexed by real numbers of the form k/N, for k \in N:

U_{k/N} = \frac{1}{\sqrt{N}} D_k.

We move from U_{k/N} to U_{(k+1)/N} in a very small time interval 1/N by making a small displacement \frac{1}{\sqrt{N}} in one of the two directions.
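The random walk above is easy to simulate. The sketch below, a minimal illustration rather than anything from the paper, draws many independent walks and checks empirically that E(D_n) \approx 0 and Var(D_n) \approx n, the two moments used in the construction.

```python
import random

random.seed(42)

def walk(n):
    """One path of the random walk D_n = X_1 + ... + X_n, X_i = +/-1."""
    d = 0
    for _ in range(n):
        d += random.choice((1, -1))
    return d

n_steps = 100
n_paths = 20_000
finals = [walk(n_steps) for _ in range(n_paths)]

# Empirical moments of D_n across the simulated paths
mean = sum(finals) / n_paths
var = sum((d - mean) ** 2 for d in finals) / n_paths
```

With 20,000 paths, the sample mean lands near 0 and the sample variance near n_steps = 100, consistent with the scaling \Delta x = (\Delta t)^{1/2} that makes the limit a Brownian motion.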
We have E(U_{k/N}) = 0 and \mathrm{Var}(U_{k/N}) = k/N. The independence and stationarity properties of the random walk still hold. Hence U^N converges to a process B that has continuous paths (i.e., for almost all \omega, the mapping t \to B_t(\omega) is continuous) and which satisfies:

1) B_0 = 0;

2) B_{t+s} - B_t has the normal distribution N(0, s);

3) B_{t+s} - B_t is independent of the increments B_{t_{i+1}} - B_{t_i} for t_0 < t_1 < t_2 < \cdots < t_n = t.

Brownian motion is the only process with continuous paths that satisfies these properties. To show that the distribution of an increment depends only on s, we introduce the notation \Delta B(t) = B(t + \Delta t) - B(t), where B(t) = B_t and \Delta t > 0. Brownian motion then satisfies

E[\Delta B(t)] = 0, \qquad \mathrm{Var}[\Delta B(t)] = \Delta t,

and, using (2) and (3),

E_t[\Delta B(t)] = 0, \qquad E_t[(\Delta B(t))^2] = \Delta t,

where E_t is the conditional expectation with respect to \Psi_t = \sigma(B_s, s \le t). The equality E_t(\Delta B(t)) = 0 can be interpreted as follows: if the position of the Brownian motion at time t is known, then the average move between time t and t + \Delta t is zero. This property results from the independence and Gaussian nature of Brownian motion [9]. The above analysis suggests that for low involvement products, consumers select a product randomly from a set of options that yield similar utility. Observing the probability distribution function (pdf) and the various attributes of low involvement products, the consumer derives a steady utility, which is again a distribution function in the mathematical sense, and that distribution follows the path of a particle in Brownian motion [10]. Our model would help marketers decide that low involvement products should be placed in aisles, passageways, and around checkout counters, so that they create agitation in the customer's mind while walking through the retail store.

Adhikari, A. (2019) Consumer Behavior in Low Involvement Product Purchase: A Stochastic Model. Theoretical Economics Letters, 9, 424-430. https://doi.org/10.4236/tel.2019.92030

References

1. Kotler, P., Lilien, G.L. and Moorthy, K.S. (1992) Marketing Models. Prentice Hall.
2. Roberts, J.H. and Lattin, J.M. (1997) Consideration: Review of Research and Prospects for Future Insights. Journal of Marketing Research, 406-410. https://doi.org/10.1177/002224379703400309
3. DeSarbo, W.S., Young, M.R. and Rangaswamy, A. (1997) A Parametric Multidimensional Unfolding Procedure for Incomplete Nonmetric Preference/Choice Set Data in Marketing Research. Journal of Marketing Research, 499-516. https://doi.org/10.1177/002224379703400407
4. Gupta, S. and Zeithaml, V. (2006) Customer Metrics and Their Impact on Financial Performance. Marketing Science, 25, 718-739. https://doi.org/10.1287/mksc.1060.0221
5. Dwyer, F.R. (1997) Customer Lifetime Valuation to Support Marketing Decision Making. Journal of Interactive Marketing, 11, 6-13.
6. Etheridge, A. (2002) A Course in Financial Calculus. Cambridge University Press.
7. Lin, K.Y. and Sibdari, S.Y. (2009) Dynamic Price Competition with Discrete Customer Choices. European Journal of Operational Research, 197, 969-980. https://doi.org/10.1016/j.ejor.2007.12.040
8. Osborne, M.F.M. (1959) Brownian Motion in the Stock Market. Operations Research, 7, 145-173. https://doi.org/10.1287/opre.7.2.145
9. Tremblay, C.H., Tremblay, M.J. and Tremblay, V.J. (2011) A General Cournot-Bertrand Model with Homogeneous Goods. Theoretical Economics Letters, 1, 38. https://doi.org/10.4236/tel.2011.12009
10. Soliman, A. and Obi, J. (2017) Bank Capitalisation and Stock Market Growth: Theoretical Model and Empirical Evidence. Theoretical Economics Letters, 7, 1747-1760. https://doi.org/10.4236/tel.2017.76118
Haqiq, Abdelkrim; Lambadaris, I.; Mikou, N.; Orozco-Barbosa, L. Optimal QoS control of interacting service stations. RAIRO - Operations Research - Recherche Opérationnelle, Tome 36 (2002) no. 3, pp. 191-208. doi: 10.1051/ro:2003002. http://www.numdam.org/articles/10.1051/ro:2003002/

We consider a system of three queues and two types of packets. Each packet arriving at this system finds in front of it a controller who either sends it to the first queue or rejects it according to a QoS criterion. When the packet finishes its service in the first queue, it is probabilistically routed to one of two other parallel queues. The objective is to minimize a discounted QoS cost over an infinite horizon. The cost function is composed of a waiting cost per packet in each queue and a rejection cost in the first queue. Subsequently, we generalize this problem by considering a system of (m+1) queues and n types of packets. We show that an optimal policy is monotonic.

Keywords: queues, flow control, dynamic programming, policies, IP network
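The kind of control problem this abstract describes can be illustrated on a much smaller model. The sketch below is only a toy stand-in for the paper's three-queue system: it runs discounted-cost value iteration for admission control at a single uniformized queue (a holding cost per packet plus a one-time rejection cost) and checks that the resulting policy is monotone, i.e., once it rejects at some queue length, it rejects at all longer ones. All parameter values are illustrative assumptions.

```python
# Toy admission control: state n = queue length (truncated at N).
# Uniformized dynamics: with prob. lam an arrival occurs (admit or
# reject), with prob. mu a service completes. Stage cost: h*n holding,
# plus rejection cost r when an arrival is turned away.
lam, mu = 0.4, 0.6      # arrival / service probabilities (lam + mu = 1)
beta = 0.95             # discount factor
h, r = 1.0, 20.0        # holding cost per packet, rejection cost
N = 50                  # truncation level

V = [0.0] * (N + 1)
for _ in range(2000):   # value iteration until numerically converged
    W = [0.0] * (N + 1)
    for n in range(N + 1):
        admit = V[n + 1] if n < N else float("inf")  # forced reject at N
        arrival = min(admit, r + V[n])               # controller's choice
        service = V[n - 1] if n > 0 else V[0]
        W[n] = h * n + beta * (lam * arrival + mu * service)
    V = W

# 1 = admit, 0 = reject, at each queue length where a choice exists
policy = [1 if V[n + 1] <= r + V[n] else 0 for n in range(N)]
```

The convexity of the value function in this class of models is what makes the optimal policy a threshold (monotone) rule, which is the structural result the paper establishes for its richer network.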
School of Management, Chongqing University of Technology, Chongqing, China.

Abstract: In a two-echelon "Farmer-Supermarket Direct-Purchase" supply chain composed of a supermarket and a farmer, the dominant supermarket uses the Nash equilibrium solution and the Shapley value as fairness reference points. By constructing a Stackelberg game model, this paper analyzes the influence of the supermarket's fairness preference on the operation of the supply chain and performs a sensitivity analysis. The research indicates that the supermarket's fairness preference decreases the order price and the effort level of the farmer, increases the utility of the supermarket, but reduces the income of the farmer. Whether the Nash equilibrium solution or the Shapley value is used as the supermarket's fairness reference point, the supply chain cannot achieve coordination. However, the supply chain can be improved with the Shapley value as the fairness reference point. Thus, the farmer, the supermarket, and the supply chain all tend to prefer the Shapley value as the supermarket's fairness reference point.
Keywords: Fair Reference Point, The Shapley Value, The Nash Equilibrium Solution, Farmer-Supermarket Direct-Purchase

Model. Market demand is stochastic and multiplicative,

d = y(p, e_s)\cdot \epsilon, \qquad y(p, e_s) = \eta p^{-a} e_s^{b},

where \eta > 0 is a market scale parameter, a > 0 is the price elasticity, 0 < b < 1 is the elasticity of the farmer's effort e_s, \epsilon is a random factor distributed uniformly on (0, 1), and p > w > e_s > 0, with p the retail price and w the order price paid to the farmer. The supermarket orders

Q = \alpha y(p, e_s) = \alpha \eta p^{-a} e_s^{b}, \qquad \alpha \in (0, 1).

The coefficient \lambda_r > 0 measures the supermarket's fairness preference. Profits of the farmer, the supermarket, and the supply chain are denoted \pi_s, \pi_r, \pi_{sc}; the corresponding utilities are u_s, u_r, u_{sc}; \bar{\pi}_r^{Na} and \bar{\pi}_r^{SP} are the fairness reference points built from the Nash equilibrium solution and the Shapley value; superscripts b^*, Na^*, SP^* mark the optimal solutions of the benchmark, Nash, and Shapley cases.

The farmer's profit is

\pi_s = (w - e_s)\alpha \eta p^{-a} e_s^{b}.

The supermarket's profit is

\pi_r = pE\{\min[Q, d]\} - wQ = \int_0^{\alpha} (pd - wQ)\,d\epsilon + \int_{\alpha}^{1} (pQ - wQ)\,d\epsilon = (p - w)\alpha \eta p^{-a} e_s^{b} - \frac{\alpha^2 \eta p^{1-a} e_s^{b}}{2},

so the supply-chain profit is

\pi_{sc} = (p - e_s)\alpha \eta p^{-a} e_s^{b} - \frac{\alpha^2 \eta p^{1-a} e_s^{b}}{2}.

Centralized benchmark. The integrated chain solves \max_{e_s^c} \pi_{sc}. Since

\frac{\mathrm{d}^2 \pi_{sc}}{\mathrm{d} e_s^2} = -\frac{b\alpha\eta e_s^{b}\left[p(2-\alpha)(1-b) + 2e_s(b+1)\right]}{2 p^{a} e_s^{2}} < 0,

setting \mathrm{d}\pi_{sc}/\mathrm{d} e_s = 0 gives

e_s^{c} = \frac{bp(2-\alpha)}{2(b+1)}, \qquad \pi_{sc}^{c} = \frac{\eta\alpha(2-\alpha)p^{1-a}}{2(b+1)}\left(\frac{bp(2-\alpha)}{2(b+1)}\right)^{b}.

Decentralized benchmark (problem P_1). Without fairness concerns, the supermarket solves

\max_w \pi_r = (p-w)\alpha\eta p^{-a} e_s^{b} - \frac{\alpha^2 \eta p^{1-a} e_s^{b}}{2} \quad \text{s.t.} \quad e_s^{b}(w^{b}) \in \arg\max \pi_s, \quad e_s^{b} < w^{b},

which yields

e_s^{b*} = \frac{b^2 p(2-\alpha)}{2(b+1)^2}, \qquad w^{b*} = \frac{bp(2-\alpha)}{2b+2}.

At (w^{b*}, e_s^{b*}), the profit of the farmer is

\pi_s^{b*} = \frac{(2-\alpha)\eta\alpha b p^{1-a}}{2(b+1)^2}\left(\frac{b^2 p(2-\alpha)}{2(b+1)^2}\right)^{b},

the profit of the supermarket is

\pi_r^{b*} = \frac{(2-\alpha)\eta\alpha p^{1-a}}{2(b+1)}\left(\frac{b^2 p(2-\alpha)}{2(b+1)^2}\right)^{b},

and the profit of the supply chain is

\pi_{sc}^{b*} = \frac{(2-\alpha)(2b+1)\eta\alpha p^{1-a}}{2(b+1)^2}\left(\frac{b^2 p(2-\alpha)}{2(b+1)^2}\right)^{b}.

Nash fairness reference (problem P_2). With the Nash equilibrium solution as reference point,

\bar{\pi}_r^{Na} = \frac{1+\lambda_r}{2+\lambda_r}\pi_{sc}, \qquad u_s^{Na} = \pi_s, \qquad u_r^{Na} = \pi_r - \lambda_r\left(\bar{\pi}_r^{Na} - \pi_r\right), \qquad u_{sc}^{Na} = u_s^{Na} + u_r^{Na}.

The supermarket solves

\max_w u_r^{Na} = \pi_r - \lambda_r\left(\bar{\pi}_r^{Na} - \pi_r\right) \quad \text{s.t.} \quad e_s^{Na}(w^{Na}) \in \arg\max \pi_s, \quad e_s^{Na} < w^{Na},

which yields

e_s^{Na*} = \frac{b^2 p(2-\alpha)}{(2b+2+\lambda_r)(1+b)}, \qquad w^{Na*} = \frac{bp(2-\alpha)}{2b+2+\lambda_r}.

At (w^{Na*}, e_s^{Na*}), the resulting utilities are

u_s^{Na*} = \frac{(2-\alpha)\eta\alpha b p^{1-a}}{(2b+2+\lambda_r)(1+b)}\left(\frac{b^2 p(2-\alpha)}{(2b+2+\lambda_r)(1+b)}\right)^{b},

u_r^{Na*} = \frac{(2-\alpha)(1+\lambda_r)\eta\alpha p^{1-a}}{(2+\lambda_r)(1+b)}\left(\frac{b^2 p(2-\alpha)}{(2b+2+\lambda_r)(1+b)}\right)^{b},

u_{sc}^{Na*} = \frac{(\lambda_r^2 + 3b\lambda_r + 3\lambda_r + 4b + 2)(2-\alpha)\eta\alpha p^{1-a}}{(2b+2+\lambda_r)(2+\lambda_r)(1+b)}\left(\frac{b^2 p(2-\alpha)}{(2b+2+\lambda_r)(1+b)}\right)^{b}.

Comparative statics. The fairness preference lowers both the effort level and the order price:

e_s^{Na*} - e_s^{b*} = -\frac{b^2 p \lambda_r (2-\alpha)}{2(2b+2+\lambda_r)(1+b)^2} < 0, \qquad \frac{\mathrm{d} w^{Na*}}{\mathrm{d} \lambda_r} = -\frac{bp(2-\alpha)}{(2b+2+\lambda_r)^2} < 0.

For 0 < \lambda_r < \frac{-3b+1+\sqrt{9b^2+2b+1}}{2b} we have \frac{\mathrm{d} u_r^{Na*}}{\mathrm{d} \lambda_r} > 0, while for \lambda_r > \frac{-3b+1+\sqrt{9b^2+2b+1}}{2b} we have \frac{\mathrm{d} u_r^{Na*}}{\mathrm{d} \lambda_r} < 0. Moreover, w^{Na*} < w^{b*} and e_s^{Na*} < e_s^{b*} < e_s^{c}.

Shapley fairness reference (problem P_3). With the Shapley value as reference point,

\bar{\pi}_r^{SP} = \frac{\pi_{sc}}{2}, \qquad u_s^{SP} = \pi_s, \qquad u_r^{SP} = \pi_r - \lambda_r\left(\bar{\pi}_r^{SP} - \pi_r\right), \qquad u_{sc}^{SP} = u_s^{SP} + u_r^{SP}.

The supermarket solves

\max_w u_r^{SP} = \pi_r - \lambda_r\left(\bar{\pi}_r^{SP} - \pi_r\right) \quad \text{s.t.} \quad e_s^{SP}(w^{SP}) \in \arg\max \pi_s, \quad e_s^{SP} < w^{SP},

which yields

e_s^{SP*} = \frac{b^2 p(2+\lambda_r)(2-\alpha)}{2(2b+b\lambda_r+2\lambda_r+2)(1+b)}, \qquad w^{SP*} = \frac{bp(2+\lambda_r)(2-\alpha)}{(2b+4)\lambda_r+4b+4}.

At (w^{SP*}, e_s^{SP*}), the utility of the farmer is

u_s^{SP*} = \frac{(2+\lambda_r)(2-\alpha)\eta\alpha b p^{1-a}}{2(2b+b\lambda_r+2\lambda_r+2)(1+b)}\left(\frac{b^2 p(2+\lambda_r)(2-\alpha)}{2(2b+b\lambda_r+2\lambda_r+2)(1+b)}\right)^{b},

and the utility of the supermarket is

u_r^{SP*} = \frac{(2+\lambda_r)(2-\alpha)\eta\alpha p^{1-a}}{4+4b}\left(\frac{b^2 p(2+\lambda_r)(2-\alpha)}{2(2b+b\lambda_r+2\lambda_r+2)(1+b)}\right)^{b}.
}_{r}+2\right)\left(1+b\right)}\right)}^{b} , and the utility of supply chain is {u}_{sc}^{SP*}=\frac{\left(2+{\lambda }_{r}\right)\left(2-\alpha \right)\left(b{\lambda }_{r}+4b+2{\lambda }_{r}+2\right)\eta \alpha {p}^{1-a}}{4\left(b{\lambda }_{r}+4b+2{\lambda }_{r}+2\right)\left(1+b\right)}{\left(\frac{{b}^{2}p\left(2+{\lambda }_{r}\right)\left(2-\alpha \right)}{2\left(2b+b{\lambda }_{r}+2{\lambda }_{r}+2\right)\left(1+b\right)}\right)}^{b} {w}^{Na*}<{w}^{SP*}<{w}^{b*} {e}_{s}^{Na*}<{e}_{s}^{SP*}<{e}_{s}^{b*}<{e}_{s}^{c} {u}_{s}^{Na*}<{u}_{s}^{SP*}<{u}_{s}^{b*} {u}_{r}^{b*}<{u}_{r}^{Na*}<{u}_{r}^{SP*} {u}_{sc}^{Na*}<{\pi }_{sc}^{b*}<{u}_{sc}^{SP*}<{\pi }^{c} {w}^{Na*}<{w}^{SP*}<{w}^{b*} {e}_{s}^{Na*}<{e}_{s}^{SP*}<{e}_{s}^{b*}<{e}_{s}^{c} {u}_{s}^{Na*}<{u}_{s}^{SP*}<{u}_{s}^{b*} {u}_{r}^{b*}<{u}_{r}^{Na*}<{u}_{r}^{SP*} {u}_{s}^{Na*}<{u}_{s}^{SP*}<{u}_{s}^{b*} {u}_{r}^{b*}<{u}_{r}^{SP*} {\pi }_{sc}^{b*}<{u}_{sc}^{SP*} {u}_{r}^{Na*}<{u}_{r}^{SP*} {e}_{s}^{Na*}<{e}_{s}^{SP*} {u}_{s}^{Na*}<{u}_{s}^{SP*} {u}_{sc}^{Na*}<{\pi }_{sc}^{b*}<{u}_{sc}^{SP*} \alpha =0.5 a=1 b=1 p=10 \eta =10 {\lambda }_{r}\in \left[0,1\right] {\lambda }_{r} 0<{\lambda }_{r}<\sqrt{3}-1 \sqrt{3}-1\le {\lambda }_{r}<1 Cite this paper: Qin, Y. and Le, H. (2019) Optimal Decision of Different Fair Reference Points in Supply Chain under “Farmer-Supermarket Direct-Purchase” Mode. Open Access Library Journal, 6, 1-17. doi: 10.4236/oalib.1105457. [1] Hernández, R., Reardon, T. and Berdegué, J. (2010) Supermarkets, Wholesalers, and Tomato Growers in Guatemala. Agricultural Economics, 36, 281-290. [2] Jack, L., Florez-Lopez, R. and Ramon-Jeronimo, J.M. (2018) Accounting, Performance Measurement and Fairness in UK Fresh Produce Supply Networks. Accounting, Organizations and Society, 64, 17-30. [3] Yang, Y. (2017) Discussion on the Mode and Strategy of “Agricultural-Super Docking”. Journal of Commercial Economics, No. 15, 115-117.
[4] Cui, T.H., Raju, J.S. and Zhang, Z.J. (2007) Fairness and Channel Coordination. Management Science, 53, 1303-1314. https://doi.org/10.1287/mnsc.1060.0697 [5] Katok, E., Olsen, T. and Pavlov, V. (2014) Wholesale Pricing under Mild and Privately Known Concerns for Fairness. Production & Operations Management, 23, 285-302. https://doi.org/10.1111/j.1937-5956.2012.01388.x [6] Qin, Y., Wei, G. and Dong, J.X. (2017) The Signaling Game Model under Asymmetric Fairness-Concern Information. Cluster Computing, No. 8, 1-16. [7] Caliskan-Demirag, O., Chen, Y. and Li, J. (2010) Channel Coordination under Fairness Concerns and Nonlinear Demand. European Journal of Operational Research, 207, 1321-1326. https://doi.org/10.1016/j.ejor.2010.07.017 [8] Pu, X.J., Zhu, Q.Y. and Cao, W.B. (2014) Impact of Suppliers’ Fairness Preference on Price Equilibrium in the Retailer Dominated Supply Chain. Journal of Systems & Management, No. 6, 876-882. [9] Du, S., Nie, T., Chu, C., et al. (2014) Newsvendor Model for a Dyadic Supply Chain with Nash Bargaining Fairness Concerns. International Journal of Production Research, 52, 5070-5085. https://doi.org/10.1080/00207543.2014.895446 [10] Zhu, L.L., Shi, Y.C., Zhu, J.A., et al. (2015) Supply Chain Optimization Based on Shapley Fair Theory. Journal of University of Science and Technology of China, No. 6, 497-506. [11] Kurz, S., Maaser, N. and Napel, S. (2016) Fair Representation and a Linear Shapley Rule. Social Science Electronic Publishing, Chicago. [12] Pu, X.J., Zhu, Q.Y. and Lu, L. (2016) Reference Point Effect, Fairness Preference and the Relational Governance on the Supply Chain of “Leading Agricultural Enterprises + Farmers”. Journal of Industrial Engineering and Engineering Management, No. 2, 116-123. [13] Yao, G.X. and Pu, L.B. (2014) Research on Quality Improvement and Fairness Preference Based on “Agricultural-Super Docking” Model. Commercial Research, No. 5, 172-176. [14] Sun, Y., Liu, Z. and Yang, H. 
(2018) How Does Suppliers’ Fairness Affect the Relationship Quality of Agricultural Product Supply Chains? Journal of Food Quality, 2018, Article ID: 9313068. https://doi.org/10.1155/2018/9313068 [15] Feng, C., Yu, B., Wang, Y.T., et al. (2018) Fair Distribution Mechanism of Channel Profit under Exponential Demand in Agri-Food Supply Chain. Journal of Systems & Management, No. 3, 470-477. [16] Zhang, X. and Zhang, Q. (2017) Coordination of Fresh Agricultural Supply Chain Considering Fairness Concerns under Controlling the Loss by Freshness-Keeping. Chinese Journal of Systems Science, No. 3, 112-116. [17] Pu, X.J., Zhu, Q.Y. and Cao, W.B. (2014) Research on the Bilateral Effort of Supply Chains of Supplier Dominating in the Inspective of Fairness Preferences. Forecasting, No. 1, 56-60.
A distance relay is a type of protection relay most often used for transmission line protection. Distance relays measure the impedance from the installation point to the fault location and operate in response to changes in the ratio of measured voltage and current. The relay characteristic is represented in the complex Z plane, while the transmission line characteristic is represented as a straight line passing through the origin of the R-X plane, as shown in Figure 1. Figure 1. Mho relay characteristic One challenging situation for distance protection relays is when the power system is exposed to significant power swings. Power swings are oscillations in the active and reactive power flows on a transmission line, which can be triggered by large disturbances such as faults. The oscillations in apparent power and bus voltages are observed by the relay as an impedance swing in the R-X plane. If the impedance trajectory enters the relay zone and stays there for a sufficient period of time, the relay will issue a trip command. The higher the source impedance, the larger the circle in the R-X plane, allowing for a higher resistance tolerance. Since fault resistance alone already complicates the operation of distance relays, power swing detection is handled separately, in this model by applying mathematical morphology. The electrical part of the model is shown in Figure 2. On both sides of the schematic there are 3-phase grids with RL impedance. The parameters of the grids are V = 230 V and f = 60 Hz. The grids are connected by a 100 km transmission line. Two faults are located on the transmission line: a 3-phase fault in the middle and a 1-phase fault at the end of the line. Between the left-side grid and the transmission line there is a Distance Protection Relay which controls the contactor located next to it. Figure 2.
Typhoon HIL schematic model for a Distance Protection Relay The protection logic implemented in the Distance Protection Relay block includes a Closing Opening Difference Operator (CODO) algorithm and a Fault Detection measurement block, both of which provide inputs to the trip logic. This is shown in detail in Figure 3. Note: The CODO algorithm is a contribution from one of the winning models of the 10for10 Typhoon HIL Awards program of 2019. The featured model's author is Prof. Adriano Peres de Morais from the Federal University of Santa Maria (UFSM). Figure 3. Protection algorithm for a Distance Protection Relay The fault detection block is responsible for detecting a fault in the transmission line and determining whether the fault is inside zone 1, zone 2, or both. Fault detection measures the fault impedance from the phase-a voltage and current: {Z}_{measured}=\frac{{V}_{a}^{rms}}{{I}_{a}^{rms}} where {Z}_{measured} is the impedance observed by the relay, while {V}_{a}^{rms} and {I}_{a}^{rms} are the RMS values of the voltage and current measured by the relay, respectively. Each point in the complex plane is defined by R (x-axis) and X (y-axis) according to the following formulas: {R}_{measured}={Z}_{measured}\mathrm{cos}\left({\theta }_{V,I}\right) {X}_{measured}={Z}_{measured}\mathrm{sin}\left({\theta }_{V,I}\right) where {R}_{measured} and {X}_{measured} are the resistance and reactance observed by the relay, and {\theta }_{V,I} is the phase difference between voltage and current. The fault detection block provides fault signals for fault zone 1 and fault zone 2 depending on the measured values, the zone reach settings, and the transmission line characteristics. A preview of the distance protection zones can be accessed by clicking on the preview button in the Distance Protection Relay component, as shown in Figure 4. Figure 4.
Protection zones preview The Closing Opening Difference Operator (CODO) algorithm block contains C function blocks which calculate the fault filtering signal according to a model based on mathematical morphology (MM). MM is a nonlinear signal transformation tool for non-periodic transient signals. The mathematical calculations involved in MM include only addition, subtraction, maximum, and minimum operations, which makes it suitable for real-time application. MM comprises two basic operations, dilation and erosion. The basic MM operators are defined below: {y}_{d}\left(n\right)=\left(f\oplus g\right)\left(n\right)=\mathrm{max}\left\{f\left(n-m\right)+g\left(m\right)\right\}, \left(n-m\right)\in {D}_{f}, m\in {D}_{g} {y}_{e}\left(n\right)=\left(f\ominus g\right)\left(n\right)=\mathrm{min}\left\{f\left(n+m\right)-g\left(m\right)\right\}, \left(n+m\right)\in {D}_{f}, m\in {D}_{g} {y}_{o}\left(n\right)=\left(\left(f\ominus g\right)\oplus g\right)\left(n\right) {y}_{c}\left(n\right)=\left(\left(f\oplus g\right)\ominus g\right)\left(n\right) The CODO signal is obtained by combining equations (4), (5), (6), and (7). Its realisation in the model is shown in Figure 5. Finally, the trip logic block is responsible for calculating trip signals according to the fault detection signal, the CODO algorithm signal, and an external reset signal. Max. matrix memory utilization 26% Simulation step, electrical 2 µs Execution rate, signal processing 60 µs, 600 µs Figure 6. SCADA panel The purpose of this model is to show the response of the relay to a change in the impedance value. The impedance value depends on the frequency in the grid and on the presence of a fault in the transmission line. By changing this impedance, the CODO signal changes, and we can observe its trace on the graph.
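These definitions translate almost directly into code. The sketch below is illustrative Python (the model itself implements the algorithm in C function blocks), and it assumes the CODO signal is the difference between the closing and opening outputs computed with a flat (all-zero) structuring element — an assumption on our part, since the text does not spell out the final combination step:

```python
def dilation(f, g):
    """y_d(n) = max_m { f(n-m) + g(m) }, over indices valid in both domains."""
    n_f, n_g = len(f), len(g)
    return [max(f[n - m] + g[m] for m in range(n_g) if 0 <= n - m < n_f)
            for n in range(n_f)]

def erosion(f, g):
    """y_e(n) = min_m { f(n+m) - g(m) }, over indices valid in both domains."""
    n_f, n_g = len(f), len(g)
    return [min(f[n + m] - g[m] for m in range(n_g) if 0 <= n + m < n_f)
            for n in range(n_f)]

def opening(f, g):
    # erosion followed by dilation: removes narrow positive spikes
    return dilation(erosion(f, g), g)

def closing(f, g):
    # dilation followed by erosion: fills narrow negative dips
    return erosion(dilation(f, g), g)

def codo(f, g):
    """Closing-Opening Difference Operator (assumed: closing minus opening)."""
    return [c - o for c, o in zip(closing(f, g), opening(f, g))]
```

With a flat structuring element, a smooth signal (such as a slow impedance swing) yields a CODO output near zero, while an abrupt transient (a fault) produces a sharp nonzero pulse — which is exactly the property the trip logic exploits.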
The SCADA Panel consists of 6 main parts: Distance protection commands and measurements Capture/Scope Power Swing Control The one line diagram displays the state of model values – frequency in the grid, contactor status, and also the presence and location of the transmission line fault. Inside the Distance Protection Relay group, you can perform several actions: Enable/disable the relay Observe the trip status Check the presence of a fault Change the values of zone reaches Track the change in impedance. The left diagram shows a zoomed-in graph with the zones, the transmission line, and the observed fault impedance. The right diagram shows a zoomed-out view in which you can better see the impedance points during frequency swings. In the Faults section, you can choose which type of fault(s) you want to inject: 3-phase fault in the middle of the transmission line 1-phase fault at the end of the transmission line Both faults simultaneously The Extras group contains macros for grid initialization and for handling the one line diagram image. In the Capture/Scope widget, you can follow grid voltage values during a frequency sweep, capture the injection of a fault, or observe any other signal of interest. In the Power Swing Control group, you can do the following: Choose the type of frequency sweep (high frequency or low frequency) Choose the duration of a frequency sweep (2-10 s) Start the selected scenario Enable or disable the CODO algorithm Observe the frequencies in both grids Observe the status of the CODO signal The following three scenarios highlight the expected operational modes and behavior during certain conditions. It is important to note that in order to successfully repeat any of the following scenarios, you will have to do the following prior to starting the model: Start the simulation, if it is not already running Enable the relay, in case it is disabled Clear all faults Reset the relay Fault injection - To reproduce this scenario, inject either Fault 1 or Fault 2.
If Fault 1 is injected, you can observe how the distance relay instantly (with approximately 70 ms calculation delay) opens the contactor and separates the left grid from the fault. If Fault 2 is injected, the impedance measurement will be outside both zones, and the contactor will stay closed. Note – you can force the relay to react to this fault by extending the zone 2 reach to 150% or more. In this case, the relay will react after a time delay of 300 ms. Frequency sweep - In scenario 2, either enable or disable CODO (choosing any sweep type or duration) and start the frequency sweep. With CODO enabled, the frequency should change in the left grid, and the CODO signal should alternate between 0 and 1, filtering out false relay tripping. If the CODO algorithm is disabled, the relay will wrongly assume that there is a fault in the grid and will open the contactor. Note – sometimes, in the case of shorter sweep scenarios, the relay will not react even if CODO is disabled. This is because the measured impedance changes too fast for the relay to detect the fault state. Fault injection with frequency sweep - For this case it is best to activate the start trigger in Capture/Scope. After that, you have to start one of the frequency sweep scenarios (preferably a longer one) with CODO enabled and inject Fault 1 before the end of the scenario. From the results in Figure 7, we can conclude that the relay has properly reacted and detected that the fault is inside zone 1. This case illustrates that, thanks to the CODO algorithm, the relay is capable of distinguishing the frequency sweep from a real fault, even when both are present simultaneously. Figure 7. Results obtained in Distance Protection Relay scenario 3 You can evaluate this example using the automatic test from our example library.
This is located in the installation folder with the following path: ..examples\tests\106_distance_protection_relay\test_distance_protection_relay.py This application example is included in the free Virtual HIL Device license and can be simulated on your PC. Table 2 lists the file names and minimum hardware requirements needed to simulate the model. Model files (examples\models\distance protection relay): distance_protection relay.tse, distance_protection relay.cus. TyphoonTest IDE script (examples\tests\106_distance_protection_relay): test_distance_protection_relay.py The provided test automation script validates the distance relay tripping for different types of faults. It demonstrates the CODO algorithm filtering out false tripping caused by a frequency sweep. This series of tests is expected to result in some faults, as shown in the Suites section of Figure 8. Additionally, the distance protection relay's reaction to grid faults is checked as well, with fault_1 expected to trip the relay and fault_2 expected not to. The former case is shown by the graph on the right of Figure 8. Figure 8. Distance Protection Relay Grid Fault 1 reaction [1] Dusan Kostic
131 (one hundred [and] thirty-one) is the natural number following 130 and preceding 132. 131 is a Sophie Germain prime,[1] an irregular prime,[2] the second 3-digit palindromic prime, and also a permutable prime with 113 and 311. It can be expressed as the sum of three consecutive primes, 131 = 41 + 43 + 47. 131 is an Eisenstein prime with no imaginary part and real part of the form {\displaystyle 3n-1} . Because the next odd number, 133, is a semiprime, 131 is a Chen prime. 131 is an Ulam number.[3] 131 is a full reptend prime in base 10 (and also in base 2). The decimal expansion of 1/131 repeats the digits 007633587786259541984732824427480916030534351145038167938931 297709923664122137404580152671755725190839694656488549618320 6106870229 indefinitely. Convair C-131 Samaritan was an American military transport produced from 1954 to 1956 Strike Fighter Squadron (VFA-131) is a United States Navy F/A-18C Hornet fighter squadron stationed at Naval Air Station Oceana Tiger 131 is a German Tiger I heavy tank captured in Tunisia by the British 48th Royal Tank Regiment during World War II USNS Mission Santa Barbara (T-AO-131) was a Mission Buenaventura-class fleet oiler during World War II USS Bandera (APA-131) was a United States Navy Haskell-class attack transport ship during World War II USS Buchanan (DD-131) was a United States Navy Wickes-class destroyer USS General T. H. Bliss (AP-131) was a United States Navy General G. O. Squier-class transport ship during World War II USS Hammann (DE-131) was a United States Navy Edsall-class destroyer escort during World War II USS Melucta (AK-131) was a United States Navy Crater-class cargo ship during World War II USS Walter X.
Young (APD-131) was a ship of the United States Navy during World War II ZIL-131 is a 3.5-ton 6x6 army truck The Fiat 131 Mirafiori is a small/medium family car produced from 1974 to 1984 STS-131 was a NASA Contingency Logistic Flight (CLF) of the Space Shuttle Discovery which launched in April 2010 131 AH is a year in the Islamic calendar that corresponds to 748 – 749 CE. 131 Vala is an inner main belt asteroid Iodine-131, or radioiodine, is a radioisotope of iodine for medical and pharmaceutical use ACP-131 is the controlling publication for listing of Q codes and Z codes, as published by NATO Allied countries 131 is the medical emergency telephone number in Chile United States Citizenship and Immigration Services Form I-131 is used to apply for a travel document, reentry permit, refugee travel document or advance parole 131 is the ID3v1 tag equivalent to Indie music ^ "Sloane's A005384 : Sophie Germain primes". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-27. ^ "Sloane's A000928 : Irregular primes". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-27. ^ "Ulam numbers". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. 2016-04-18. Retrieved 2016-04-19.
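The full reptend claim above is easy to check computationally: a prime p is full reptend in base b exactly when b is a primitive root modulo p, i.e. when the multiplicative order of b modulo p equals p − 1. A short Python check for 131:

```python
def multiplicative_order(base, p):
    """Smallest k > 0 with base**k ≡ 1 (mod p), for gcd(base, p) = 1."""
    k, acc = 1, base % p
    while acc != 1:
        acc = acc * base % p
        k += 1
    return k

def repetend(p, base=10):
    """Digits of the repeating block of 1/p in the given base (p prime, p ∤ base)."""
    digits, rem = [], 1
    for _ in range(multiplicative_order(base, p)):
        rem *= base
        digits.append(rem // p)   # long-division digit
        rem %= p
    return digits
```

Running `multiplicative_order(10, 131)` and `multiplicative_order(2, 131)` both give 130, confirming full reptend status in bases 10 and 2, and `repetend(131)` reproduces the 130-digit block beginning 007633… quoted above.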
Second-Law Thermodynamic Comparison and Maximal Velocity Ratio Design of Shell-and-Tube Heat Exchangers With Continuous Helical Baffles | J. Heat Transfer | ASME Digital Collection Gui-Dong Chen, e-mail: chengd514@163.com e-mail: xujing_nanjing@163.com Yan-Peng Ji e-mail: jyp.2008@stu.xjtu.edu.cn Wang, Q., Chen, G., Xu, J., and Ji, Y. (July 27, 2010). "Second-Law Thermodynamic Comparison and Maximal Velocity Ratio Design of Shell-and-Tube Heat Exchangers With Continuous Helical Baffles." ASME. J. Heat Transfer. October 2010; 132(10): 101801. https://doi.org/10.1115/1.4001755 Shell-and-tube heat exchangers (STHXs) have been widely used in many industrial processes. In the present paper, the flow and heat transfer characteristics of a shell-and-tube heat exchanger with continuous helical baffles (CH-STHX) and one with segmental baffles (SG-STHX) were experimentally studied. In the experiments, these STHXs shared the same tube bundle and shell geometrical structures but differed in baffle arrangement and number of heat exchange tubes. Experimental results suggested that the CH-STHX can increase the heat transfer rate by 7–12% over the SG-STHX for the same mass flow rate, although its effective heat transfer area was 4% smaller. The heat transfer coefficient and pressure drop of the CH-STHX were also 43–53% and 64–72% higher, respectively, than those of the SG-STHX. Based on second-law thermodynamic comparisons, in which the quality of energy is evaluated by the entropy generation number and exergy losses, the CH-STHX decreased the entropy generation number and exergy losses by 30% and 68% on average compared with the SG-STHX for the same Reynolds number.
The analysis from nondimensional correlations for the Nusselt number and friction factor also revealed that if the maximal velocity ratio R>2.4, the heat transfer coefficient of the CH-STHX was higher than that of the SG-STHX, and the corresponding friction factor ratio remained constant at fo,CH/fo,SG=0.28 entropy, heat exchangers, heat transfer, pipe flow, continuous helical baffles, segmental baffles, entropy generation number, exergy losses, second-law thermodynamic analysis, shell-and-tube heat exchangers Entropy, Exergy, Flow (Dynamics), Heat exchangers, Heat transfer, Heat transfer coefficients, Pressure drop, Shells, Reynolds number, Friction, Design Most Frequently Used Heat Exchangers From Pioneering Research to Worldwide Applications Heat Transfer Augmentation in a Heat Exchanger Using a Baffle Different Strategies to Improve Industrial Heat Exchanger A New Design Method for Segmentally Baffled Heat Exchangers Tube and Shell Heat Exchanger With Baffle ,” U.S. Patent No. US5,832,991. Effect of Leakage on Pressure Drop and Local Heat Transfer in Shell-and-Tube Heat Exchangers for Staggered Tube Arrangement Pressure Drop on the Shell Side of Shell-and-Tube Heat Exchangers With Segmental Baffles ,” U.S. Patent No. US 6,827,138 B1. Nemcansky Performance Improvement of Tubular Heat Exchangers by Helical Baffles Helical Baffles Shell-and-Tube Heat Exchangers, Part 1: Experimental Verification Shell-side Heat Transfer and Pressure Drop of Shell-and-Tube Heat Exchangers With Overlap Helical Baffles Experimental Performance Comparison of Shell-Side Heat Transfer for Shell-and-Tube Heat Exchangers With Middle-Overlapped Helical Baffles and Segmental Baffles Numerical Simulation of Flow Performance in Shell Side of Shell-and-Tube Heat Exchanger With Discontinuous Helical Baffles Proceedings of the Second International Symposium on Thermal Science and Technology Continuous Helical Baffled Shell-and-Tube Heat Exchanger ,” China Patent No. CN: ZL200510043033.5.
Numerical Investigation on Combined Multiple Shell-Pass Shell-and-Tube Heat Exchangers With Continuous Helical Baffles Numerical Studies of a Novel Combined Multiple Shell-Pass Shell-and-Tube Heat Exchangers With Helical Baffles Numerical Studies of Combined Multiple Shell-Pass Shell-and-Tube Heat Exchangers With Helical Baffles Seventh International Symposium on Heat Transfer Numerical Studies on a Novel Shell-and-Tube Heat Exchanger With Combined Helical Baffles Seventh International Conference on Enhanced, Compact and Ultra-Compact Heat Exchangers: From Microscale Phenomena to Industrial Applications , Heredia, Costa Rica, Paper No. CHE 2009-17. The Concept of Irreversibility in Heat Exchanger Design: Counterflow Heat Exchanger for Gas-Gas Application Performance Evaluation of Convective Heat Transfer Enhancement Devices Using Exergy Analysis Entropy Generation in Turbulent Liquid Flow Through a Smooth Duct With Constant Wall Temperature On the Generation of Entropy in a Counterflow Heat Exchanger Performance Evaluation Criteria for Heat Exchangers Based on Second Law Analysis Second-Law Based Thermodynamic Analysis of a Novel Heat Exchanger Mean Temperature Difference in Design New Equations for Heat and Mass Transfer in Turbulent Pipe and Channel Flows Delaware Method for Shell Side Design Heat Exchangers-Thermal-Hydraulic Fundamentals and Design Heat Exchanger Sourcebook Sunarao Heat Transfer, A Practical Approach WCB McGraw-Hill A Correlating Equation for Forced Convection From Gases and Liquids to a Circular Cylinder in Cross Flow
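The second-law quantities the abstract compares (entropy generation, exergy loss) can be illustrated with textbook stream-entropy formulas; the sketch below is generic and is not the paper's correlation set or experimental procedure:

```python
import math

def stream_entropy_rate(m_dot, cp, t_in, t_out):
    """Entropy change rate of one stream, modeled as an incompressible
    fluid with constant cp: dS/dt = m_dot * cp * ln(T_out / T_in) [W/K]."""
    return m_dot * cp * math.log(t_out / t_in)

def entropy_generation(m_hot, cp_hot, th_in, th_out,
                       m_cold, cp_cold, tc_in, tc_out):
    """Entropy generation rate of an adiabatic two-stream heat exchanger:
    the sum of both streams' entropy change rates (>= 0 by the second law)."""
    return (stream_entropy_rate(m_hot, cp_hot, th_in, th_out)
            + stream_entropy_rate(m_cold, cp_cold, tc_in, tc_out))
```

For an energy-balanced water/water case (1 kg/s each side, cp ≈ 4186 J/kg·K, hot 360 K → 320 K, cold 300 K → 340 K) the hot-stream term is negative, the cold-stream term is positive, and their sum is a strictly positive generation rate; a lower value at the same duty is what the paper's "entropy generation number" comparison is capturing.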
Pythagorean addition - Wikipedia In mathematics, Pythagorean addition is a binary operation on the real numbers that computes the length of the hypotenuse of a right triangle, given its two sides. According to the Pythagorean theorem, for a triangle with sides {\displaystyle a} and {\displaystyle b} , this length can be calculated as {\displaystyle a\oplus b={\sqrt {a^{2}+b^{2}}},} where {\displaystyle \oplus } denotes the Pythagorean addition operation.[1] This operation can be used in the conversion of Cartesian coordinates to polar coordinates. It also provides a simple notation and terminology for some formulas when its summands are complicated; for example, the energy-momentum relation in physics becomes {\displaystyle E=mc^{2}\oplus pc.} It is implemented in many programming libraries as the hypot function, in a way designed to avoid errors arising due to limited-precision calculations performed on computers. In its applications to signal processing and propagation of measurement uncertainty, the same operation is also called addition in quadrature.[2] Pythagorean addition (and its implementation as the hypot function) is often used together with the atan2 function to convert from Cartesian coordinates {\displaystyle (x,y)} to polar coordinates {\displaystyle (r,\theta )} : {\displaystyle {\begin{aligned}r&=x\oplus y=\operatorname {hypot} (x,y)\\\theta &=\operatorname {atan2} (y,x).\\\end{aligned}}} If measurements {\displaystyle X,Y,Z,\dots } have independent errors {\displaystyle \Delta _{X},\Delta _{Y},\Delta _{Z},\dots } respectively, the quadrature method gives the overall error, {\displaystyle \varDelta _{o}={\sqrt {{\varDelta _{X}}^{2}+{\varDelta _{Y}}^{2}+{\varDelta _{Z}}^{2}+\cdots }}} whereas the upper limit of the overall error is {\displaystyle \varDelta _{u}=\varDelta _{X}+\varDelta _{Y}+\varDelta _{Z}+\cdots } if the errors were not independent.[5] This is equivalent to finding the magnitude of the resultant of adding orthogonal vectors, each with
magnitude equal to the uncertainty, using the Pythagorean theorem. In signal processing, addition in quadrature is used to find the overall noise from independent sources of noise. For example, if an image sensor gives six digital numbers of shot noise, three of dark current noise and two of Johnson–Nyquist noise under a specific condition, the overall noise is {\displaystyle \sigma =6\oplus 3\oplus 2={\sqrt {6^{2}+3^{2}+2^{2}}}=7} digital numbers,[6] showing the dominance of larger sources of noise. {\displaystyle \oplus } is associative and commutative,[7] and {\displaystyle {\sqrt {x_{1}^{2}+x_{2}^{2}+\cdots +x_{n}^{2}}}=x_{1}\oplus x_{2}\oplus \cdots \oplus x_{n}.} This is enough to form the real numbers under {\displaystyle \oplus } into a commutative semigroup. The real numbers under {\displaystyle \oplus } are not a group, because {\displaystyle \oplus } can never produce a negative number as its result, whereas each element of a group must be the result of multiplication of itself by the identity element. On the non-negative numbers, it is still not a group, because Pythagorean addition of one number by a second positive number can only increase the first number, so no positive number can have an inverse element. Instead, it forms a commutative monoid on the non-negative numbers, with zero as its identity. Hypot is a mathematical function defined to calculate the length of the hypotenuse of a right-angle triangle. It was designed to avoid errors arising due to limited-precision calculations performed on computers. Calculating the length of the hypotenuse of a triangle is possible using the square-root function on the sum of two squares, but hypot(x, y) avoids problems that occur when squaring very large or very small numbers.
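The image-sensor arithmetic above maps directly onto a stock hypot routine; for instance, Python's math.hypot has accepted an arbitrary number of arguments since Python 3.8:

```python
import math

# Independent noise sources, in digital numbers (the example in the text):
shot, dark, johnson = 6.0, 3.0, 2.0

# Addition in quadrature: sqrt(36 + 9 + 4) = sqrt(49) = 7
total = math.hypot(shot, dark, johnson)

# Worst-case bound if the errors were fully correlated instead of independent:
upper = shot + dark + johnson   # 11.0
```

The quadrature total (7) sits well below the correlated upper bound (11), and is dominated by the largest source, as the text notes.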
If calculated using the natural formula, {\displaystyle r={\sqrt {x^{2}+y^{2}}},} the squares of very large or small values of x and y may exceed the range of machine precision when calculated on a computer, leading to an inaccurate result caused by arithmetic underflow and/or arithmetic overflow. The hypot function was designed to calculate the result without causing this problem. If either input is infinite, the result is infinite. Because this is true for all possible values of the other input, the IEEE 754 floating-point standard requires that this remains true even when the other input is not a number (NaN).[8] Calculation order The difficulty with the naive implementation is that {\displaystyle x^{2}+y^{2}} may overflow or underflow, unless the intermediate result is computed with extended precision. A common implementation technique is to exchange the values, if necessary, so that {\displaystyle |x|\geq |y|} , and then to use the equivalent form {\displaystyle {\begin{aligned}r&={\sqrt {x^{2}+y^{2}}}\\&={\sqrt {x^{2}\left(1+\left({\tfrac {y}{x}}\right)^{2}\right)}}\\&=|x|{\sqrt {1+\left({\tfrac {y}{x}}\right)^{2}}}\left(=|x|+{\frac {y}{|x|}}{\frac {y}{1+{\sqrt {1+\left({\tfrac {y}{x}}\right)^{2}}}}}\right).\end{aligned}}} {\displaystyle y/x} cannot overflow, unless both {\displaystyle x} and {\displaystyle y} are zero. If {\displaystyle y/x} underflows, the final result is equal to {\displaystyle |x|} , which is correct within the precision of the calculation. The square root is computed of a value between 1 and 2. Finally, the multiplication by {\displaystyle |x|} cannot underflow, and overflows only when the result is too large to represent. This implementation has the downside that it requires an additional floating-point division, which can double the cost of the naive implementation, as multiplication and addition are typically far faster than division and square root.
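A minimal sketch of the exchange-and-divide technique just described (illustrative only; production libraries add the case analysis and accuracy refinements discussed below):

```python
import math

def naive_hypot(x, y):
    # x * x overflows to inf for |x| around 1e200 or more
    return math.sqrt(x * x + y * y)

def safe_hypot(x, y):
    """hypot via |x| * sqrt(1 + (y/x)^2) with |x| >= |y|, as in the text."""
    x, y = abs(x), abs(y)
    if x < y:
        x, y = y, x          # exchange so that |x| >= |y|
    if x == 0.0:
        return 0.0           # both inputs zero; avoids 0/0
    t = y / x                # in [0, 1], cannot overflow
    return x * math.sqrt(1.0 + t * t)
```

For inputs like (3e200, 4e200) the naive form returns infinity because the squares overflow, while the rearranged form returns the correct 5e200 at the cost of the extra division.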
More complex implementations avoid this by dividing the inputs into more cases: If {\displaystyle y} is much smaller than {\displaystyle x} , then {\displaystyle x\oplus y\approx |x|} , to within machine precision. If {\displaystyle x^{2}} overflows, multiply both {\displaystyle x} and {\displaystyle y} by a small scaling factor (e.g. 2^−64 for IEEE single precision), use the naive algorithm which will now not overflow, and multiply the result by the (large) inverse (e.g. 2^64). If {\displaystyle y^{2}} underflows, scale as above but reverse the scaling factors to scale up the intermediate values. Otherwise, the naive algorithm is safe to use. Additional techniques allow the result to be computed more accurately, e.g. to less than one ulp.[9] The function is present in many programming languages and libraries, including CSS,[10] C++11,[11] D,[12] Go,[13] JavaScript (since ES2015),[14] Julia,[15] Java (since version 1.5),[16] Kotlin,[17] MATLAB,[18] PHP,[19] Python,[20] Ruby,[21] Rust,[22] and Scala.[23] Alpha max plus beta min algorithm Metafont has Pythagorean addition and subtraction as built-in operations, under the names ++ and +-+ respectively. ^ Moler, Cleve; Morrison, Donald (1983). "Replacing square roots by Pythagorean sums". IBM Journal of Research and Development. 27 (6): 577–581. CiteSeerX 10.1.1.90.5651. doi:10.1147/rd.276.0577. ^ Johnson, David L. (2017). "12.2.3 Addition in Quadrature". Statistical Tools for the Comprehensive Practice of Industrial Hygiene and Environmental Health Sciences. John Wiley & Sons. p. 289. ISBN 9781119143017. ^ "SIN (3M): Trigonometric functions and their inverses". Unix Programmer's Manual: Reference Guide (4.3 Berkeley Software Distribution Virtual VAX-11 Version ed.). Department of Electrical Engineering and Computer Science, University of California, Berkeley. April 1986. ^ Beebe, Nelson H. F. (2017). The Mathematical-Function Computation Handbook: Programming Using the MathCW Portable Software Library. Springer. p. 70. ISBN 9783319641102. ^ D.B.
Schneider, Error Analysis in Measuring Systems, Proceedings of the 1962 Standards Laboratory Conference, page 94 ^ J.T. Bushberg et al, The Essential Physics of Medical Imaging, section 10.2.7, Wolters Kluwer Health ^ Falmagne, Jean-Claude (2015). "Deriving meaningful scientific laws from abstract, "gedanken" type, axioms: five examples". Aequationes Mathematicae. 89 (2): 393–435. doi:10.1007/s00010-015-0339-1. MR 3340218. S2CID 121424613. ^ Fog, Agner (2020-04-27). "Floating point exception tracking and NAN propagation" (PDF). p. 6. ^ Borges, Carlos F. (14 June 2019). "An Improved Algorithm for hypot(a,b)". arXiv:1904.09481 [math.NA]. ^ Cimpanu, Catalin. "CSS to get support for trigonometry functions". ZDNet. Retrieved 2019-11-01. ^ "hypot - C++ Reference". ^ "std.math - D Programming Language". ^ "math package - math - pkg.go.dev". ^ "Math.hypot() - JavaScript | MDN". ^ "Mathematics · The Julia Language". ^ "Math (Java 2 Platform SE 5.0)". ^ "hypot - Kotlin Programming Language". Kotlin. Retrieved 2018-03-19. ^ "Square root of sum of squares (Hypotenuse) - MATLAB hypot - MathWorks Benelux". ^ "PHP: hypot - Manual". ^ "math — Mathematical functions — Python 3.9.7 documentation". ^ "Module: Math (Ruby 3.0.2)". ^ "Scala Standard Library 2.13.6 - scala.math". Dubrulle, Augustin A. (1983). "A class of numerical methods for the computation of Pythagorean sums" (PDF). IBM Journal of Research and Development. 27 (6): 582–589. CiteSeerX 10.1.1.94.3443. doi:10.1147/rd.276.0582.
33 | Voting Power | Peter Murphy

31 March, 2022 - 13 min read

In which, I get to erase something that's been on my whiteboard todo list for over 2 years! Might as well have used a sharpie – dry-erase wasn't meant to lay dormant for that long. So I've been wanting to look into voting power indexes for a while and was excited at the thought of implementing them as part of a supporting util library for other game theory experiments (all of this is in the service of cake cutting), but we must walk before we can run. What are voting power indexes? At a high level, they're used to measure the distribution of influence of players in a weighted voting system. A practical example is the electoral college - where states have different amounts of votes proportionate to their seats in the House. California, Texas, and New York have a lot more weight than Wyoming. A more personal example might be, say, a bachelor party. Let's say, purely hypothetically, we have groomsmen Peter, Charlie, P_3, P_4, and P_5, as well as the bachelor: Jack. Suppose, again totally hypothetically, there are unequal contributions towards the Debaucherous Dowry for the bachelor party. This resembles a weighted voting system where weight is analogous to the financial investments of the groomsmen. Obviously the groom isn't paying anything, and he gets veto power/carte blanche, but he's chill, and will play along with the voting system. Agent Charlie is a student, so we don't hold his contributions or lack thereof against him, and he's the DD, so that brings him into the fold of voting power. Agent Peter is also super chill and laid back, definitely not the type of guy to write an algorithm to quantifiably prove that he should have more say in the matter as the best man – hypothetically. Agents P_3, P_4, and P_5 are arbitrary (sorry fellas, pay up if you want to be named, no free clout 😤😤 etc. etc.).
As reasonable voters, they agree that the threshold T for the weighted voting system should be 51% of the total Debaucherous Dowry, which is the cumulative sum of their respective contributions. Say T = \$100 \cdot 0.51 (Jack, I'm sorry). We then have the following system:

[51: (Jack: 35), (Peter: 20), (Chuck: 15), (P_3: 10), (P_4: 10), (P_5: 10)]

And if that nonsense doesn't make sense yet, hang tight, the notational definitions are imminent! So, how would we measure whose say actually matters? Who is influential? Disregarding the social realities of being a huge dick, veto power, or trying to form a quorum to overrule the groom, we have some tools at our disposal to investigate this question! I'm not going to wax poetic about the importance of the metrics of democracy (*proceeds to write a blog post about just that*), but the subjects of this post are two forms of quantifying the importance of particular players in weighted voting systems. There are numerous ways to measure voter importance or power, but two of the most prominent measuring sticks are the Shapley-Shubik and the Banzhaf methods. First, some definitions for a power index amongst a finite number of voters (dear God, please be finite):

A Coalition is a list of players [P_1, P_2, ..., P_n] (or, rather, their respective weights). Wikipedia has some prudential examples of how coalitions are key to parliamentary systems, check'em out if you care. A Sequential Coalition is just a coalition, but we pay extra attention to the order of the voters:

\begin{aligned} <P_1, P_2, ..., P_n> \end{aligned}

Idc enough about the formal game theory definitions to distinguish between Measures, Ballots, and Issues (similarly players, agents, ... whatever), but they're expressed as follows:

\begin{aligned} [ T: [P_1, P_2, ..., P_n ]] \end{aligned}

where T is the threshold for the measure to pass, and [P_1, P_2, ..., P_n] is the coalition of voters.
Concretely, the following notation describes a measure with a requisite threshold to pass of 8, with a coalition of 4 players with respective weights of 6, 4, 3, 2:

\begin{aligned} [8 : 6, 4, 3, 2] \end{aligned}

Note that the sequentiality is implicit under the index.

Shapley-Shubik

The Shapley-Shubik voting power index defines a mechanism for determining how likely it is for a voter to be pivotal in a ballot. It relies on Sequential Coalitions (whereas the Banzhaf index does not). A pivotal player is the player in a sequential coalition that changes a measure's outcome from losing to winning. Notably, there can only be one pivotal voter in a sequential coalition. Using the above example measure, we can illustrate this:

| Sequential Coalition | \sum w_i | Measure passes? |
| --- | --- | --- |
| <P_3> | 3 | ❌ |
| <P_3, P_2> | 3 + 4 = 7 | ❌ |
| <P_3, P_2, P_4> | 3 + 4 + 2 = 9 | ✅ yea boiiii |
| <P_3, P_2, P_4, P_1> | 3 + 4 + 2 + 6 = 15 | ✅ also yes, but irrelevant at this point |

The Shapley-Shubik "algorithm" is pretty straightforward:

1. List all sequential coalitions (this is expensive: for N voters in a coalition, we have N! sequences to evaluate - hence "dear God, please be finite," as in like 6 dudes max)
2. Compute the percentage of times each voter is pivotal

Consider the measure [6: 4, 3, 2]. Listing all the sequential coalitions (3! = 6, we gucci) and underlining the pivotal voters, we have:

<P_1, \underline{P_2}, P_3>
<P_1, \underline{P_3}, P_2>
<P_2, \underline{P_1}, P_3>
<P_2, P_3, \underline{P_1}>
<P_3, P_2, \underline{P_1}>
<P_3, \underline{P_1}, P_2>

Yielding the following distribution:

| Player | times pivotal |
| --- | --- |
| P_1 | 4/6 |
| P_2 | 1/6 |
| P_3 | 1/6 |

Another measure of voting power is the Banzhaf index, which determines which voters are critical. A critical voter is one whose vote is necessary for a measure to pass. E.g.
I am a critical voter if

\begin{aligned} \text{total coalition weight} - w_{me} < T \end{aligned}

Note the distinction between critical and pivotal: the Banzhaf index considers combinations instead of permutations (ordering doesn't matter, so we deal in regular ol' coalitions). E.g. (say "E.g." one more time, I dare you) in a measure [8: P_1, P_2, P_3], it would be possible for sufficiently weighty P_1 and P_2 to meet the threshold of 8 without the "dummy" voter P_3.

The algorithm for computing the Banzhaf index for a weighted voting system is as follows:

1. Compute all winning combinations within the coalition
2. Identify which voters in each combination are critical
3. Compute the percentage of times each voter is critical

Exempli Gratia! Consider [8: 6, 3, 2]. The winning coalitions are:

[\underline{P_1}, \underline{P_2}]
[\underline{P_1}, \underline{P_3}]
[\underline{P_1}, P_2, P_3]

In the third case, either P_2 or P_3 could leave the coalition and the remaining players could still meet quota, so neither is critical. If P_1 were to leave, the remaining players could not reach quota, so only P_1 is critical. While the Banzhaf index helps identify which players actually influence the outcome of the measure (i.e. some voters are irrelevant or "dummies"; their votes for or against the issue have no impact), it does not capture the sequential dimension of the Shapley-Shubik index. The Shapley-Shubik index is constrained by the assumption that all members of a coalition are present for the measure. The Banzhaf index is not, hence it works with partial coalitions (combinations). By definition, the Banzhaf index just counts the people necessary to pass the measure (it discounts all the dummy voters), while the Shapley-Shubik index measures how pivotal a person is in time (be it for that given vote, or when they ordered their coalition, what have you). There can only be one pivotal person, but there can be several critical people. So when should you use which?
I dunno dawg, I'm not your keeper. Use both, and I'll show you how! (In all seriousness, use Shapley-Shubik when your weighted ballot is not blind or simultaneous: Iowa will never be a pivotal agent - I will not be taking comments about the idiocy of that statement at this time.)

Ambitions and Infertility

After I hacked the first draft of this code (which is still gross, I know, spare me) I was stoked to try to get extravagant and have a much longer post where we also construct an interactive map of congressional voting districts labeled with their respective voting power etc. etc. But remember that lil caveat of the Shapley-Shubik algorithm and how it requires us to compute all permutations of the coalition - yea, permuting things is just no good for large N (and by large, I mean like more than 10). We were going to make a nice choropleth visualization with hexagons 'n shit, but alas, the constraints of flops. There are 435 electoral districts. 435! has 961 digits. (20% of which are 0, the rest are more or less uniformly distributed, which is curious and seemingly unnatural. But it makes sense, there's a ton of even numbers being multiplied with factors of 5.) The number of trailing zeros is given by:

\begin{aligned} \sum_{k=1}^\infty \Bigg\lfloor \frac{n}{5^k}\Bigg\rfloor \end{aligned}

The point is that my laptop almost caught on fire, and I may now be infertile. Docker was actually invented after Solomon Hykes tried to compute all the permutations of a list of length 11 on his laptop and it burnt his crotch right off! Okay, onto the nuts and bolts. Let's wrap this notion of a Measure in a class and just stuff it chock-full of the computations. Design patterns be damned!
We know we need a threshold T and a list of weights for each voter, and while we're at it, we might as well accept a list of labels for each voter as well (coulda been a dictionary, but you shoulda not mention it):

from itertools import permutations, combinations
from typing import List  # as a treat, I will typehint one (1) argument in the whole file

class Measure():
    def __init__(self, majority: int, weights: List[int], labels=None):
        """majority: amount required for coalition to pass
        weights: amount of votes for each voter P
        """
        self.majority = majority
        if labels:
            self.weights = {f"{l}": w for l, w in zip(labels, weights)}
        else:
            self.weights = {f"P{i+1}": w for i, w in enumerate(weights)}

We know we're going to need to compute all the permutations and combinations of our coalition, and hell, we should probably even store those as fields of the instance so we don't have to melt our gonads each time we care to examine a voting system, but I'm lazy + members of a coalition may come and go!

    def get_perms(self):
        """gets all (comprehensive) permutations of weights"""
        return list(permutations(self.weights))

    def get_combos(self):
        """gets all (partial) combinations of weights"""
        def flatten(lls):
            """helper to flatten the list of lists we get from `combinations`"""
            flattened = []
            for e in lls:
                for item in e:
                    flattened.append(item)
            return flattened

        combos = []
        for r in range(len(self.weights) + 1):
            combos.append(list(combinations(self.weights, r)))
        return flatten(combos)

    def sum_seq(self, seq):
        """given a tuple of the form ('P1', 'P2', 'P3') return the sum of the associated weights"""
        return sum(self.weights[p] for p in seq)

We (I'm being incredibly generous here by crediting you, but you're reading this which is more than half the battle) also threw in a helper to tally the cumulative weights of the coalition which is useful later on. First, for Shapley-Shubik, we need to count all the pivotal agents.
    def count_pivotal(self):
        """Shapley-Shubik"""
        res = {k: 0 for k in self.weights}
        # for each permutation
        perms = self.get_perms()
        for perm in perms:
            # start summing the weights of the sequenced coalition
            total = 0
            for w in perm:
                total += self.weights[w]
                # once a majority has been reached
                if total >= self.majority:
                    res[w] += 1
                    break  # stop counting the "dummy" voters
        n_perms = len(perms)
        if n_perms > 0:
            for k, v in res.items():
                res[k] = round(float(v) / n_perms, 4)
        return res

voila! I named this method poorly on purpose 😈. Banzhaf is only slightly more troubling:

    def count_critical(self):
        """Banzhaf"""
        # start by listing all coalitions, then eliminating non-winning coalitions
        combos = self.get_combos()
        winning_combos = [
            combo for combo in combos if self.sum_seq(combo) >= self.majority
        ]
        critical_counts = {k: 0 for k in self.weights}
        # for each passing measure
        for wp in winning_combos:  # why `wp`? beats me
            total = self.sum_seq(wp)
            # iterate over each voter
            for p in wp:
                # if their cast vote is necessary for the measure to pass, they are critical
                if total - self.weights[p] < self.majority:
                    critical_counts[p] += 1
        n_critical = sum(critical_counts[k] for k in critical_counts)
        for k, v in critical_counts.items():
            if n_critical > 0:
                critical_counts[k] = round(float(v) / n_critical, 4)
            else:
                critical_counts[k] = 0
        return critical_counts

And lastly, a perverse toString to describe our Measure:

    def __str__(self):
        dict_str = ""
        for k, v in self.weights.items():
            dict_str += f"({k}, {v}), "
        dict_str = dict_str[:-2]
        res = f"[{self.majority}: {dict_str}".replace("[", "").replace("]", "")
        res = f"[{res}]"
        res += f"\nShapley-Shubik:\t{self.count_pivotal()}"
        res += f"\nBanzhaf:\t\t{self.count_critical()}\n"
        return res

Remember, we use Python because it's easy, not because we want to be pure, consistent, or good programmers. Let's test it out on a known quantity like the first example from earlier:

\begin{aligned} [6: 4, 3, 2] \end{aligned}

m = Measure(6, [4, 3, 2])
print(m)
>>> [6: (P1, 4), (P2, 3), (P3, 2)]
Shapley-Shubik: {'P1': 0.6667, 'P2': 0.1667, 'P3': 0.1667}
Banzhaf: {'P1': 0.6, 'P2': 0.2, 'P3': 0.2}

Presto!
Again, I lament the absence of pretty visuals. Alas, even a meager distribution graph looks pretty unimpressive with only 10 agents (imagine a power law curve, yea that's pretty much it :p). But now, finally, we can get down to the bottom of our oh-so-pressing, purely hypothetical measure.

weights = [35, 20, 15, 10, 10, 10]
agents = ["Jack", "Peter", "Chuck", "P3", "P4", "P5"]
i_am_not_a_control_freak_i_swear = Measure(51, weights, agents)
print(i_am_not_a_control_freak_i_swear)
>>> [51: (Jack, 35), (Peter, 20), (Chuck, 15), (P3, 10), (P4, 10), (P5, 10)]
Shapley-Shubik: {
    'Jack': 0.4333,
    'Peter': 0.1833,
    'Chuck': 0.1333,
    'P3': 0.0833,
    'P4': 0.0833,
    'P5': 0.0833
}
Banzhaf: {
    'Jack': 0.4259,
    'Peter': 0.1667,
    'Chuck': 0.1296,
    'P3': 0.0926,
    'P4': 0.0926,
    'P5': 0.0926
}

In conclusion, get destroyed P_3, P_4, P_5. Jokes, I love you guys. Congrats, Jack.
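Postscript: the factorial blow-up lamented above is easy to check, as is the trailing-zeros formula. A quick sanity check in Python (Legendre's formula for the power of 5 dividing n!):

```python
import math

n = 435
f = math.factorial(n)

# number of digits in 435!
print(len(str(f)))  # 961

# trailing zeros via sum of floor(n / 5^k)
zeros = 0
p = 5
while p <= n:
    zeros += n // p
    p *= 5
print(zeros)  # 107

# cross-check against the literal digit string
assert str(f).endswith("0" * zeros) and not str(f).endswith("0" * (zeros + 1))
```

Only powers of 5 matter because factors of 2 are in vast oversupply; each trailing zero needs one of each.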
m n Matrix (or 2-dimensional Array), then it is assumed to contain m

with(SignalProcessing):
f1 := 12.0:
f2 := 24.0:
signal := Vector(2^10, i -> sin(f1*Pi*i/50) + 1.5*sin(f2*Pi*i/50), datatype = float[8]):
Periodogram(signal, samplerate = 100)
audiofile := cat(kernelopts(datadir), "/audio/maplesim.wav"):
Periodogram(audiofile, frequencyscale = "kHz")
audiofile2 := cat(kernelopts(datadir), "/audio/stereo.wav"):
Periodogram(audiofile2, compactplot)
Yuping Cao, Chuanzhi Bai, "Finite-Time Stability of Fractional-Order BAM Neural Networks with Distributed Delay", Abstract and Applied Analysis, vol. 2014, Article ID 634803, 8 pages, 2014. https://doi.org/10.1155/2014/634803

Based on the theory of fractional calculus, the generalized Gronwall inequality, and estimates of Mittag-Leffler functions, the finite-time stability of Caputo fractional-order BAM neural networks with distributed delay is investigated in this paper. An illustrative example is also given to demonstrate the effectiveness of the obtained result. Fractional calculus (integral and differential operations of noninteger order) was first introduced 300 years ago. Due to its lack of application background and its complexity, it did not attract much attention for a long time. In recent decades, fractional calculus has been applied to physics, applied mathematics, and engineering [1–6]. Since the fractional-order derivative is nonlocal and has weakly singular kernels, it provides an excellent instrument for the description of memory and hereditary properties of dynamical processes. Nowadays, the study of the complex dynamical behaviors of fractional-order systems has become a very hot research topic. We know that the next state of a system depends not only upon its current state but also upon its history. Since a model described by fractional-order equations possesses memory, it describes the states of neurons more precisely. Moreover, the superiority of Caputo's fractional derivative is that the initial conditions for fractional differential equations with Caputo derivatives take the same form as those for integer-order differentiation. Therefore, it is necessary and interesting to study fractional-order neural networks both in theory and in applications. Recently, fractional-order neural networks have been presented and designed to distinguish them from the classical integer-order models [7–10].
Currently, some excellent results about fractional-order neural networks have been obtained, such as Kaslik and Sivasundaram [11, 12], Zhang et al. [13], Delavari et al. [14], and Li et al. [15, 16]. On the other hand, time delay is one of the inevitable problems for the stability of dynamical systems in the real world [17–20]. But till now, there are few results on these problems for fractional-order delayed neural networks: Chen et al. [21] studied the uniform stability for a class of fractional-order neural networks with constant delay by an analytical approach; Wu et al. [22] investigated the finite-time stability of fractional-order neural networks with delay by the generalized Gronwall inequality and estimates of Mittag-Leffler functions; Alofi et al. [23] discussed the finite-time stability of Caputo fractional-order neural networks with distributed delay. The integer-order bidirectional associative memory (BAM) model, known as an extension of the unidirectional autoassociator of Hopfield [24], was first introduced by Kosko [25]. This neural network has been widely studied due to its promising potential for applications in pattern recognition and automatic control. In recent years, integer-order BAM neural networks have been extensively studied [26–29]. However, to the best of our knowledge, no effort has been made in the literature to study the finite-time stability of fractional-order BAM neural networks so far. Motivated by the above-mentioned works, we are devoted to establishing the finite-time stability of Caputo fractional-order BAM neural networks with distributed delay. In this paper, we apply the Laplace transform, the generalized Gronwall inequality, and estimates of Mittag-Leffler functions to establish a finite-time stability criterion for fractional-order distributed-delay BAM neural networks. This paper is organized as follows.
In Section 2, some definitions and lemmas of fractional differential and integral calculus are given and the fractional-order BAM neural networks are presented. A criterion for finite-time stability of fractional-order BAM neural networks with distributed delay is obtained in Section 3. Finally, the effectiveness and feasibility of the theoretical result is shown by an example in Section 4. For the convenience of the reader, we first briefly recall some definitions of fractional calculus; for more details, see [1, 2, 5], for example.

Definition 1. The Riemann-Liouville fractional integral of order \alpha > 0 of a function f is given by

I^{\alpha} f(t) = \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha - 1} f(s) \, ds,

provided that the right side is pointwise defined on [0, \infty), where \Gamma(\cdot) is the Gamma function.

Definition 2. The Caputo fractional derivative of order \alpha \in (0, 1) of a function f can be written as

{}^{C}D^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-s)^{-\alpha} f'(s) \, ds.

Definition 3. The Mittag-Leffler function in two parameters is defined as

E_{\alpha, \beta}(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(k\alpha + \beta)},

where \alpha > 0, \beta > 0, and z \in \mathbb{C}, where \mathbb{C} denotes the complex plane. In particular, for \alpha = \beta = 1, one has E_{1,1}(z) = e^{z}. The Laplace transform of the Mittag-Leffler function is

\mathcal{L}\left\{ t^{\beta - 1} E_{\alpha, \beta}(-\lambda t^{\alpha}) \right\} = \frac{s^{\alpha - \beta}}{s^{\alpha} + \lambda},

where t and s are, respectively, the variables in the time domain and Laplace domain and \mathcal{L} stands for the Laplace transform.

In this paper, we are interested in the finite-time stability of fractional-order BAM neural networks with distributed delay by the following state equations: or in the matrix-vector notation where , . The model (6) is made up of two neural fields and , where and are the activations of the th neuron in and the th neuron in , respectively; is the state vector of the network at time ; the functions are the activation functions of the neurons at time ; is a diagonal matrix; represents the rate with which the th unit will reset its potential to the resting state in isolation when disconnected from the network and external inputs; and are the feedback matrix; denotes the maximum possible transmission delay from neuron to another; and are the delayed feedback matrix; and are two external bias vectors.
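For intuition, the two-parameter Mittag-Leffler function of Definition 3 can be evaluated numerically by truncating its series. A minimal sketch (the truncation depth of 100 terms is an arbitrary choice, not taken from the paper), verified against the special case E_{1,1}(z) = e^z:

```python
import math

def mittag_leffler(alpha, beta, z, terms=100):
    """Truncated series for E_{alpha,beta}(z) = sum_k z^k / Gamma(k*alpha + beta).

    `terms` is an illustrative cutoff; convergence is fast for moderate |z|.
    """
    total = 0.0
    for k in range(terms):
        total += z ** k / math.gamma(k * alpha + beta)
    return total

# special cases: E_{1,1}(z) = exp(z) and E_{1,2}(z) = (exp(z) - 1)/z
print(abs(mittag_leffler(1.0, 1.0, 1.0) - math.e) < 1e-12)  # True
```

Note `math.gamma` raises OverflowError for arguments above roughly 171, which bounds how large `terms * alpha + beta` can be in this simple sketch.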
Let be the Banach space of all continuously differential functions over a time interval of length , mapping the interval into with the norm defined as follows: for every , The initial conditions associated with (6) are given by where . In order to obtain main result, we make the following assumptions.(H1)For , the functions and are continuous on .(H2)The neurons activation functions and are bounded.(H3)The neurons activation functions and are Lipschitz continuous; that is, there exist positive constants such that Since the Caputo’s fractional derivative of a constant is equal to zero, the equilibrium point of system (6) is a constant vector which satisfies the system By using the Schauder fixed point theorem and assumptions (H1)–(H3), it is easy to prove that the equilibrium points of system (6) exist. We can shift the equilibrium point of system (6) to the origin. Denoting then system (6) can be written as with the initial conditions where Similarly, by using the matrix-vector notation, system (15) can be expressed as with the initial condition where Define the functions as follows: where . From assumption (H3), we can obtain , . By (21), we have Thus, system (18) can be further written as the following form: where , . Definition 4. System (23) with the initial condition (19) is finite-time stable with respect to , , if and only if implies where is a positive real number and , , denotes the initial time of observation of the system, and denotes time interval . A technical result about norm upper-bounding function of the matrix function is given in [30] as follows. Lemma 5. If , then, for , one has Moreover, if is a diagonal stability matrix, then where ( ) is the largest eigenvalue of the diagonal stability matrix . Lemma 6 (see [31]). 
Let be nonnegative and local integrable on , and let be a nonnegative, nondecreasing continuous function defined on , , and let be a real constant, , with Then Moreover, if is a nondecreasing function on , then We first give a key lemma in the proof of our main result as follows. Lemma 7. Let be nonnegative and local integrable on , and let be nonnegative, nondecreasing and local integrable on , and let , be two positive constants, , with Then Proof. Substituting (32) into (31), we obtain Changing the order of integration in the above double integral, we obtain Let , ; then is a nonnegative, nondecreasing, and local integrable function and is a nonnegative, nondecreasing continuous function. Thus, by Lemma 6 (30), one has Similarly, we get For convenience, let where is the largest eigenvalue of the diagonal stability matrix and denotes the largest singular value of matrix . In the following, sufficient conditions for finite-time stability of fractional-order BAM neural networks with distributed delay are derived. Theorem 8. Let . If system (23) satisfies (H1)–(H3) with the initial condition (19), and where , then system (23) is finite-time stable with respect to , . Proof. By Laplace transform and inverse Laplace transform, system (23) is equivalent to From (40), (41), and Lemma 5, we obtain Let , and ; then Thus, we have by (42) and (44) that where denotes the largest singular value of matrix . Similarly, by (43) and (45), we get Hence, by (46) and (47), we have Set By simple computation, we have It follows from (48)–(50) and Lemma 7 that By (51), we obtain Thus, if condition (39) is satisfied and , then , ; that is, system (23) is finite-time stable. This completes the proof. In this section, we give an example to illustrate the effectiveness of our main result. Consider the following two-state Caputo fractional BAM type neural networks model with distributed delay with the initial condition where , , and , . 
It is easy to know that is an equilibrium point of system (53). Since , we may let . Take It is easy to check that From condition (41) of Theorem 8, we can get We can obtain that the estimated time of finite-time stability is . Hence, system (53) is finite-time stable with respect to . This work is supported by the Natural Science Foundation of Jiangsu Province (BK2011407) and the Natural Science Foundation of China (11271364 and 10771212). E. Soczkiewicz, “Application of fractional calculus in the theory of viscoelasticity,” Molecular and Quantum Acoustics, vol. 23, pp. 397–404, 2002. View at: Google Scholar V. V. Kulish and J. L. Lage, “Application of fractional calculus to fluid mechanics,” Journal of Fluids Engineering, vol. 124, no. 3, pp. 803–806, 2002. View at: Publisher Site | Google Scholar J. Sabatier, O. P. Agrawal, and J. Machado, Theoretical Developments and Applications, Advance in Fractional Calculus, Springer, Berlin, Germany, 2007. P. Arena, R. Caponetto, L. Fortuna, and D. Porto, “Bifurcation and chaos in noninteger order cellular neural networks,” International Journal of Bifurcation and Chaos in Applied Sciences and Engineering, vol. 8, no. 7, pp. 1527–1539, 1998. View at: Google Scholar | Zentralblatt MATH P. Arena, L. Fortuna, and D. Porto, “Chaotic behavior in noninteger-order cellular neural networks,” Physical Review E: Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics, vol. 61, no. 1, pp. 776–781, 2000. View at: Google Scholar M. P. Lazarević, “Finite time stability analysis of {\mathrm{PD}}^{\alpha } fractional control of robotic time-delay systems,” Mechanics Research Communications, vol. 33, no. 2, pp. 269–279, 2006. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet A. Boroomand and M. Menhaj, “Fractional-order Hopfield neural networks,” in Proceedings of the Natural Computation International Conference, pp. 883–890, 2010. View at: Google Scholar E. Kaslik and S. 
Sivasundaram, “Dynamics of fractional-order neural networks,” in Proceedings of the International Joint Conference on Neural Network (IJCNN '11), pp. 611–618, August 2011. View at: Publisher Site | Google Scholar R. Zhang, D. Qi, and Y. Wang, “Dynamics analysis of fractional order three-dimensional Hopfield neural network,” in Proceedings of the 6th International Conference on Natural Computation (ICNC '10), pp. 3037–3039, August 2010. View at: Publisher Site | Google Scholar H. Delavari, D. Baleanu, and J. Sadati, “Stability analysis of Caputo fractional-order nonlinear systems revisited,” Nonlinear Dynamics, vol. 67, no. 4, pp. 2433–2439, 2012. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet Y. Li, Y. Chen, and I. Podlubny, “Mittag-Leffler stability of fractional order nonlinear dynamic systems,” Automatica, vol. 45, no. 8, pp. 1965–1969, 2009. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet Y. Li, Y. Chen, and I. Podlubny, “Stability of fractional-order nonlinear dynamic systems: lyapunov direct method and generalized Mittag-Leffler stability,” Computers & Mathematics with Applications, vol. 59, no. 5, pp. 1810–1821, 2010. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet K. Gu, V. L. Kharitonov, and J. Chen, Stability of Time-delay Systems, Birkhäuser, Boston, Mass, USA, 2003. View at: Publisher Site | MathSciNet J. Tian, W. Xiong, and F. Xu, “Improved delay-partitioning method to stability analysis for neural networks with discrete and distributed time-varying delays,” Applied Mathematics and Computation, vol. 233, pp. 152–164, 2014. View at: Google Scholar J. Tian, Y. Li, J. Zhao, and S. Zhong, “Delay-dependent stochastic stability criteria for Markovian jumping neural networks with mode-dependent time-varying delays and partially known transition rates,” Applied Mathematics and Computation, vol. 218, no. 9, pp. 5769–5781, 2012. View at: Publisher Site | Google Scholar | MathSciNet L. 
Chen, Y. Chai, R. Wu, and T. Zhai, “Dynamic analysis of a class of fractional-order neural networks with delay,” Neurocomputing, vol. 111, pp. 190–194, 2013. View at: Publisher Site | Google Scholar R.-C. Wu, X.-D. Hei, and L.-P. Chen, “Finite-time stability of fractional-order neural networks with delay,” Communications in Theoretical Physics, vol. 60, no. 2, pp. 189–193, 2013. View at: Publisher Site | Google Scholar | MathSciNet A. Alofi, J. Cao, A. Elaiw, and A. Al-Mazrooei, “Delay-dependent stability criterion of caputo fractional neural networks with distributed delay,” Discrete Dynamics in Nature and Society, vol. 2014, Article ID 529358, 6 pages, 2014. View at: Publisher Site | Google Scholar | MathSciNet S. Arik and V. Tavsanoglu, “Global asymptotic stability analysis of bidirectional associative memory neural networks with constant time delays,” Neurocomputing, vol. 68, no. 1–4, pp. 161–176, 2005. View at: Publisher Site | Google Scholar S. Senan, S. Arik, and D. Liu, “New robust stability results for bidirectional associative memory neural networks with multiple time delays,” Applied Mathematics and Computation, vol. 218, no. 23, pp. 11472–11482, 2012. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet B. Liu, “Global exponential stability for BAM neural networks with time-varying delays in the leakage terms,” Nonlinear Analysis: Real World Applications, vol. 14, no. 1, pp. 559–566, 2013. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet R. Raja and S. M. Anthoni, “Global exponential stability of BAM neural networks with time-varying delays: the discrete-time case,” Communications in Nonlinear Science and Numerical Simulation, vol. 16, no. 2, pp. 613–622, 2011. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet M. de la Sen, “About robust stability of Caputo linear fractional dynamic systems with time delays through fixed point theory,” Fixed Point Theory and Applications, vol. 
2011, Article ID 867932, 19 pages, 2011.
Terminology | Solana Docs

Many terms are thrown around when discussing inflation and the related components (e.g. rewards/yield/interest); we try to define and clarify some commonly used concepts here:

Total Current Supply [SOL]
The total amount of tokens (locked or unlocked) that have been generated (via the genesis block or protocol inflation) minus any tokens that have been burnt (via transaction fees or other mechanisms) or slashed. At network launch, 500,000,000 SOL were instantiated in the genesis block. Since then the Total Current Supply has been reduced by the burning of transaction fees and a planned token reduction event. Solana's Total Current Supply can be found at https://explorer.solana.com/supply

Inflation Rate [%]
The Solana protocol will automatically create new tokens on a predetermined inflation schedule (discussed below). The Inflation Rate [%] is the annualized growth rate of the Total Current Supply at any point in time.

Inflation Schedule
A deterministic description of token issuance over time. The Solana Foundation is proposing a dis-inflationary Inflation Schedule, i.e. inflation starts at its highest value and the rate reduces over time until stabilizing at a predetermined long-term inflation rate (see discussion below). This schedule is completely and uniquely parameterized by three numbers:

Effective Inflation Rate [%]
The inflation rate actually observed on the Solana network after accounting for other factors that might decrease the Total Current Supply. Note that it is not possible for tokens to be created outside of what is described by the Inflation Schedule. While the Inflation Schedule determines how the protocol issues SOL, it neglects the concurrent elimination of tokens in the ecosystem due to various factors. The primary token burning mechanism is the burning of a portion of each transaction fee: 50% of each transaction fee is burned, with the remaining fee retained by the validator that processes the transaction.
Additional factors, such as loss of private keys and slashing events, should also be considered in a holistic analysis of the Effective Inflation Rate. For example, it's estimated that 10-20% of all BTC have been lost and are unrecoverable, and that networks may experience similar yearly losses at a rate of 1-2%.

Staking Yield [%]
The rate of return (aka interest) earned on SOL staked on the network. It is often quoted as an annualized rate (e.g. "the network staking yield is currently 10% per year"). Staking yield is of great interest to validators and token holders who wish to delegate their tokens to avoid token dilution due to inflation (the extent of which is discussed below). 100% of inflationary issuances are to be distributed to staked token-holders in proportion to their staked SOL and to validators who charge a commission on the rewards earned by their delegated SOL.

There may be future consideration for an additional split of inflation issuance with the introduction of Archivers into the economy. Archivers are network participants who provide a decentralized storage service and should also be incentivized with token distribution from inflation issuances for this service. Similarly, early designs specified a fixed percentage of inflationary issuance to be delivered to the Foundation treasury for operational expenses and future grants. However, inflation will be launching without any portion allocated to the Foundation.

Staking yield can be calculated from the Inflation Schedule along with the fraction of the Total Current Supply that is staked at any given time.
The explicit relationship is given by:

\begin{aligned} \text{Staking Yield} =~&\text{Inflation Rate}\times\text{Validator Uptime}~\times \\ &\left( 1 - \text{Validator Fee} \right) \times \left( \frac{1}{\%~\text{SOL Staked}} \right) \\ \text{where:}\\ \%~\text{SOL Staked} &= \frac{\text{Total SOL Staked}}{\text{Total Current Supply}} \end{aligned}

Token Dilution [%]
Dilution is defined here as the change in proportional representation of a set of tokens within a larger set due to the introduction of new tokens. In practical terms, we discuss the dilution of staked or un-staked tokens due to the introduction and distribution of inflation issuance across the network. As will be shown below, while dilution impacts every token holder, the relative dilution between staked and un-staked tokens should be the primary concern to un-staked token holders. Staking tokens, which will receive their proportional distribution of inflation issuance, should assuage any dilution concerns for staked token holders; i.e. dilution from 'inflation' is offset by the distribution of new tokens to staked token holders, nullifying the 'dilutive' effects of the inflation for that group.

Adjusted Staking Yield [%]
A complete appraisal of earning potential from staking tokens should take into account staked Token Dilution and its impact on the Staking Yield. For this, we define the Adjusted Staking Yield as the change in fractional token supply ownership of staked tokens due to the distribution of inflation issuance, i.e. the positive dilutive effects of inflation.
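The staking-yield relationship above can be sketched directly in code. The function and every input value below are illustrative assumptions for demonstration, not official Solana tooling:

```python
# Sketch of the Staking Yield formula from the section above.
# All numbers below are hypothetical, chosen only for illustration.

def staking_yield(inflation_rate, validator_uptime, validator_fee,
                  total_sol_staked, total_current_supply):
    """Annualized staking yield per the relationship quoted in the docs."""
    pct_sol_staked = total_sol_staked / total_current_supply
    return (inflation_rate * validator_uptime
            * (1.0 - validator_fee) / pct_sol_staked)

# Example: 8% inflation, 100% uptime, 10% commission, 50% of supply staked.
y = staking_yield(0.08, 1.0, 0.10, 250_000_000, 500_000_000)
print(f"{y:.2%}")  # 14.40%
```

Note how halving the staked fraction would double the yield: the same issuance is shared among fewer staked tokens.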
YDbDr - Wikipedia

(Figure: an image along with its Y, D_B and D_R components.)

YDbDr, sometimes written YDBDR, is the colour space used in the SECAM analog terrestrial colour television broadcasting standard (adopted in France and some countries of the former Eastern Bloc) and in PAL-N (adopted in Argentina, Paraguay and Uruguay).[1] It is very close to YUV (used in the PAL system) and its related colour spaces such as YIQ (used in the NTSC system), YPbPr and YCbCr. YDbDr is composed of three components: Y, D_B and D_R. Y is the luminance; D_B and D_R are the chrominance components, representing the blue and red colour differences respectively. The three component signals are created from an original RGB (red, green and blue) source. The weighted values of R, G and B are added together to produce a single Y signal, representing the overall brightness, or luminance, of that spot. The D_B signal is then created by subtracting Y from the blue signal of the original RGB and scaling; and D_R by subtracting Y from the red, and then scaling by a different factor. The following formulae approximate the conversion between the RGB colour space and YDbDr.
R, G, B, Y \in \left[0, 1\right], \qquad D_B, D_R \in \left[-1.333, 1.333\right]

From RGB to YDbDr:

\begin{aligned} Y &= 0.299R + 0.587G + 0.114B \\ D_B &= -0.450R - 0.883G + 1.333B \\ D_R &= -1.333R + 1.116G + 0.217B \end{aligned}

\begin{bmatrix} Y \\ D_B \\ D_R \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ -0.450 & -0.883 & 1.333 \\ -1.333 & 1.116 & 0.217 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}

From YDbDr to RGB:

\begin{aligned} R &= Y + 0.000092303716148 D_B - 0.525912630661865 D_R \\ G &= Y - 0.129132898890509 D_B + 0.267899328207599 D_R \\ B &= Y + 0.664679059978955 D_B - 0.000079202543533 D_R \end{aligned}

\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 1 & 0.000092303716148 & -0.525912630661865 \\ 1 & -0.129132898890509 & 0.267899328207599 \\ 1 & 0.664679059978955 & -0.000079202543533 \end{bmatrix} \begin{bmatrix} Y \\ D_B \\ D_R \end{bmatrix}

You may note that the Y component of YDbDr is the same as the Y component of YUV, and that D_B and D_R are related to the U and V components of the YUV colour space as follows:

\begin{aligned} D_B &= +3.059U \\ D_R &= -2.169V \end{aligned}

There is also a variety of the PAL broadcasting standard, PAL-N, that uses the YDbDr colour space.[2]

References:
[1] https://www.itu.int/dms_pubrec/itu-r/rec/bt/R-REC-BT.470-6-199811-S!!PDF-E.pdf [bare URL PDF]
[2] Shi, Yun Q. and Sun, Huifang, Image and Video Compression for Multimedia Engineering, CRC Press, 2000, ISBN 0-8493-3491-8.

See also: YUV (related colour system); PAL-N (Argentina, Paraguay and Uruguay).
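As a sanity check on the matrices above, here is a small sketch of both conversions. The function names are my own; the coefficients are copied verbatim from the article's formulas:

```python
# Hedged sketch of the RGB <-> YDbDr conversions quoted above (not an
# official implementation; coefficients copied from the article).

def rgb_to_ydbdr(r, g, b):
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    db = -0.450 * r - 0.883 * g + 1.333 * b
    dr = -1.333 * r + 1.116 * g + 0.217 * b
    return y, db, dr

def ydbdr_to_rgb(y, db, dr):
    r = y + 0.000092303716148 * db - 0.525912630661865 * dr
    g = y - 0.129132898890509 * db + 0.267899328207599 * dr
    b = y + 0.664679059978955 * db - 0.000079202543533 * dr
    return r, g, b

# Round-tripping a colour should recover it up to floating-point error.
rgb = (0.25, 0.5, 0.75)
print(ydbdr_to_rgb(*rgb_to_ydbdr(*rgb)))
```

The tiny off-diagonal entries in the inverse (around 1e-4) show that the second matrix is essentially the numerical inverse of the first, so the round trip closes to high precision.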
AIS - Monte-Carlo

Start with a trial density, say g_0(x) = t_\alpha (x; \mu_0,\Sigma_0). With weighted Monte Carlo samples, estimate the parameters (mean and covariance matrix) and construct a new trial density, say g_1(x) = t_\alpha (x; \mu_1,\Sigma_1). Repeat until a chosen measure of discrepancy between the trial distribution and the target distribution, such as the coefficient of variation of the importance weights, does not improve any more.

Exercise: implement an Adaptive Importance Sampling algorithm to evaluate the mean and variance of the density

\pi(\mathbf{x})\propto N(\mathbf{x};\mathbf{0}, 2I_4) + 2N(\mathbf{x}; 3\mathbf{e}, I_4) + 1.5\, N(\mathbf{x}; -3\mathbf{e}, D_4),

where \mathbf{e} = (1,1,1,1), I_4 = \mathrm{diag}(1,1,1,1) and D_4 = \mathrm{diag}(2,1,1,0.5).

A possible procedure is as follows: start with a trial density g_0 = t_{\nu}(0, \Sigma). Recursively, build g_k(\mathbf{x})=(1-\epsilon)g_{k-1}(\mathbf{x}) + \epsilon\, t_{\nu}(\mu, \Sigma), in which one chooses (\epsilon, \mu, \Sigma) to minimize the coefficient of variation of the importance weights.
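A minimal sketch of the adaptive loop described above, deliberately simplified: one dimension instead of four, a Gaussian trial density standing in for the multivariate t, and a two-component version of the target. Sample sizes, seeds, and component weights are illustrative assumptions:

```python
import math
import random

# Simplified 1-D sketch of the adaptive importance sampling loop above.
# A Gaussian trial density replaces the multivariate t, and the target is a
# two-component normal mixture; all constants are illustrative assumptions.

def target(x):
    """Unnormalized pi(x), proportional to N(x; 0, 2) + 2 N(x; 3, 1)."""
    return (math.exp(-x * x / 4) / math.sqrt(4 * math.pi)
            + 2 * math.exp(-(x - 3) ** 2 / 2) / math.sqrt(2 * math.pi))

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

random.seed(0)
mu, var = 0.0, 9.0                     # g_0: a deliberately wide trial density
for _ in range(20):
    xs = [random.gauss(mu, math.sqrt(var)) for _ in range(5000)]
    ws = [target(x) / normal_pdf(x, mu, var) for x in xs]
    total = sum(ws)
    # Weighted moment estimates define the next trial density g_{k+1}.
    mu = sum(w * x for w, x in zip(ws, xs)) / total
    var = sum(w * (x - mu) ** 2 for w, x in zip(ws, xs)) / total
    # Coefficient of variation of the weights: the stopping diagnostic.
    mean_w = total / len(ws)
    cv = math.sqrt(sum((w - mean_w) ** 2 for w in ws) / len(ws)) / mean_w

print(f"mean ~ {mu:.2f}, variance ~ {var:.2f}, weight CV = {cv:.2f}")
```

For this two-component target the true mean is 2 and the true variance is 10/3, so the fitted trial density should settle near those values while the weight CV stops improving.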
Multiplication and Division - Global Math Week

Even though the addition and subtraction of decimals is straightforward in a 1 \leftarrow 10 machine, performing multiplication and division on them can be awkward. The trouble is that the 1 \leftarrow 10 machine is based on whole dots, not parts of dots, and so it is tricky to make easy sense of pictures. But there are ways to play with dots-and-boxes nonetheless.

We can certainly do some basic multiplication. For example, a picture of dots and boxes shows that 2.615 \times 7 is 14 . 42 | 7 | 35. With explosions, this becomes 18.305. (Check this!)

We can also explain a rule that students are taught to memorize: if you multiply a decimal by 10, just shift the decimal point to the right. 31.42 \times 10 gives the answer 30 | 10 . 40 | 20. With explosions, this becomes 314.20, and the answer looks as though we just shifted the decimal point. (Most people leave off the final zero.) By the way, if you multiply a number a little larger than 31 by ten, you should get an answer a little larger than 310, so memorizing the direction of travel of a decimal point is unnecessary. Explain why multiplying a decimal by 100 has the same effect as shifting the decimal point two places. (Do you need to memorize the direction of the shift?)

We can also multiply decimals by decimals, but now the work is getting less fun. We need to follow the same approach as we did when we explored long multiplication in Island 3. Let's do an example. Consider 2.15 \times 0.3. Here's what 2.15 looks like, and 0.3 is three dots sitting one place to the right of where one dot would be. So to compute 2.15 \times 0.3 we need to replace each dot that appears in the picture of 2.15 with three dots one place to the right of where that dot should be. Doing so gives a picture showing the answer 0 . 6 | 3 | 15, which is 0.645 after an explosion. So multiplication with decimals is do-able, but awkward.
It is actually easier, in practice, to go back to basic arithmetic and think of multiplication by 0.3, say, as multiplication by 3 followed by division by 10, or to work with decimals as fractions. For example,

0.2 \times 0.4 = \dfrac {2}{10} \times \dfrac {4}{10} = \dfrac {8}{100} = 0.08

0.05 \times 0.006 = \dfrac {5}{100} \times \dfrac {6}{1000} = \dfrac {30}{100000} = \dfrac {3}{10000} = 0.0003

2.15 \times 0.3 = \left(2+ \dfrac {15}{100} \right) \times \dfrac {3}{10} = \dfrac {6}{10}+\dfrac {45}{1000} = 0.6+0.045 = 0.645

We also see the effect of multiplying a decimal by ten this way. For example,

10 \times 3.142 = 10 \times \left(3+\dfrac{1}{10}+\dfrac{4}{100}+\dfrac{2}{1000}\right)=30+1+\dfrac{4}{10}+\dfrac{2}{100}=31.42

Try 0.04 \times 0.5 and 1000 \times 0.0385. Now consider division, for example 0.08 \div 0.005. To do this problem I could try drawing a picture of 0.08 in a 1 \leftarrow 10 machine and attempt to make sense of finding groups of 0.005 in that picture, that is, groups of five dots three places to the right of where they should be. Although possible to do, it seems hard to keep straight in my head and it makes my brain hurt! Here's a piece of advice from a mathematician: avoid hard work! Mathematicians will, in fact, work very hard to avoid hard work! Since converting decimals to fractions makes the multiplication of fractions easier, let's do the same thing for division.

Example: Examine 0.08 \div 0.005. A fraction is a number that is an answer to a division problem and, conversely, we can think of a division problem as a fraction. For example, the quantity 0.08 \div 0.005 is really this "fraction": \dfrac {0.08}{0.005}. (It is okay to have non-whole numbers as numerators and denominators of fractions.) To make this fraction look friendlier, let's multiply the top and bottom by factors of ten:

\dfrac {0.08 \times 10 \times 10 \times 10}{0.005 \times 10 \times 10 \times 10} = \dfrac{80}{5}

The division problem \dfrac{80}{5} is much friendlier.
It has the answer 16. (Use a 1 \leftarrow 10 machine to compute it if you like!)

Dividing by 100 is the same as multiplying by \dfrac{1}{100}. The quantity \dfrac{8.5}{100}, for example, can be thought of as

\dfrac{1}{100} \times 8.5 = \dfrac{1}{100} \left(8+\dfrac{5}{10}\right)=\dfrac{8}{100} + \dfrac{5}{1000} = 0.085

Example: Examine 1.51 \div 0.07. If I was asked to compute 1.51 \div 0.07, I would personally do this problem instead:

\dfrac{1.51 \times 10 \times 10}{0.07 \times 10 \times 10}= \dfrac{151}{7}

I see the answer 21\dfrac{4}{7}. If asked to give the answer as a decimal, I would have to convert the fraction \dfrac{4}{7} into a decimal. And that is possible. The next lesson explains how!

Exercises: Explain why the answer to 0.9 \div 10 is 0.09, why 2.34 \div 1000 is 0.00234, and why 40.04 \times 0.01 is 0.4004. Also evaluate \dfrac {0.75} {25}.

Let's put it all together. Here are some unpleasant computations. Do you want to try evaluating them?

0.3 \times \left( 5.37 - 2.07 \right)

\dfrac {0.1 + \left( 1.01 - 0.1 \right)} {0.11 + 0.09}

\dfrac {\left( 0.002 + 0.2 \times 2.02 \right) \left( 2.2 - 0.22 \right)} {2.22 - 0.22}
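The convert-to-fractions technique used throughout this lesson can be checked mechanically with exact rational arithmetic, for instance with Python's `fractions` module (a modern aside, not part of the original dots-and-boxes lesson):

```python
from fractions import Fraction

# The "work with decimals as fractions" advice above, checked exactly.
# Fraction arithmetic never rounds, so these reproduce the worked examples.

assert Fraction(2, 10) * Fraction(4, 10) == Fraction(8, 100)             # 0.08
assert (2 + Fraction(15, 100)) * Fraction(3, 10) == Fraction(645, 1000)  # 0.645

# Division: multiplying top and bottom by tens until both are whole
# is exactly what Fraction does internally with decimal strings.
assert Fraction("0.08") / Fraction("0.005") == 16
print("all checks pass")
```

This is why the fraction route is reliable: unlike binary floating point, rationals represent every terminating decimal exactly.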
Introduction to twisted Alexander polynomials and related topics
Teruaki Kitano

This article is based on the lectures in the Winter Braids V (Pau, February 2015). We introduce some studies of twisted Alexander polynomials to non-experts through many concrete examples. In this article we follow the definition of the twisted Alexander polynomial by Wada, which can be defined for a finitely presented group with an epimorphism onto a free abelian group. The main tool is Fox's free calculus. In the last two sections we discuss some applications to the fiberedness of a knot and to the existence of epimorphisms between knot groups.

Teruaki Kitano. Introduction to twisted Alexander polynomials and related topics. Winter Braids Lecture Notes, Volume 2 (2015), Talk no. 4, 35 p. doi: 10.5802/wbln.10. https://wbln.centre-mersenne.org/articles/10.5802/wbln.10/
Continuous Functions: Definition, Examples, and Properties | Outlier

In this article, we'll discuss the definition of a continuous function, how to prove continuity, and the different properties of continuous functions. In addition, we'll review continuous graph examples to solidify your understanding of continuous and discontinuous functions.

A function is continuous everywhere if you can trace its curve on a graph without lifting your pencil. A function is discontinuous at a point if you cannot trace its curve without lifting your pencil at that point, meaning it has a hole, break, jump, or vertical asymptote there. For example, f(x) = 2\sin{(x)} is continuous everywhere: we can draw its curve without ever lifting our hand. By contrast, the function f(x) = \frac{1}{x-2} is discontinuous at x = 2: we can't draw its curve without lifting our pencil at x = 2.

In differential calculus, it's important to understand the concept of continuity because functions that are not continuous are not differentiable. Let's learn how to prove a function is continuous at a point. Here's the formal definition of continuity at a point: a function f is continuous at x = a if all three of the following hold: f(a) exists; \lim_{x\to a}f(x) exists; and f(a) = \lim_{x\to a}f(x). In order to show that a function is continuous at a point a, you must show that all three of the above conditions are true. To refresh your knowledge of evaluating limits, you can review How to Find Limits in Calculus and What Are Limits in Calculus.

For example, let's show that f(x) = x^2 - 3 is continuous at x = 1. First, f(1) = 1^2 - 3 = -2, so f(1) exists. Second, \lim_{x\to 1}f(x) = -2, so the limit exists. Third, f(1) = \lim_{x\to 1}f(x) = -2, so the two agree. Therefore f(x) = x^2 - 3 is continuous at x = 1. In fact, we could show that f(x) = x^2 - 3 is continuous for all real numbers.

Other functions might be continuous only over a specific interval of the real numbers. If a function is continuous on an open interval, that means that the function is continuous at every point inside the interval.
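The three-part definition above can be turned into a rough numerical test. Sampling points near a only approximates the limits, so this is a heuristic sketch (the function name and tolerances are my own), not a proof:

```python
# A rough numerical version of the three-part continuity check above.
# Sampling near `a` only approximates the one-sided limits, so this is a
# heuristic illustration, not a proof of continuity.

def looks_continuous_at(f, a, eps=1e-6, tol=1e-3):
    left = f(a - eps)    # approximate the limit from the left
    right = f(a + eps)   # approximate the limit from the right
    value = f(a)         # condition 1: f(a) exists
    # conditions 2 and 3: the one-sided limits agree and match f(a)
    return abs(left - right) < tol and abs(value - left) < tol

def unit_step(x):
    return 0.0 if x < 0 else 1.0

print(looks_continuous_at(lambda x: x**2 - 3, 1))  # True
print(looks_continuous_at(unit_step, 0.0))         # False (jump at 0)
```

The step function fails because its one-sided limits (0 and 1) disagree, exactly the jump-discontinuity condition discussed next.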
f(x) = \tan{(x)} has a discontinuity over the real numbers at x = \frac{\pi}{2}, since we must lift our pencil in order to trace its curve. However, we can say that f(x) = \tan{(x)} is continuous on the open interval (-\frac{\pi}{2}, \frac{\pi}{2}), since it is continuous at every point inside that specific interval. Indeed, f(x) = \tan{(x)} is continuous over its own domain, which is any real number excluding odd multiples of \frac{\pi}{2}.

A discontinuity is a hole, jump, break, or vertical asymptote on a function's curve. There are 3 types of discontinuities.

If a function has a jump discontinuity at some point a, then \lim_{x\to a}f(x) does not exist. Remember that in order for a limit to exist, its one-sided limits must exist and they must equal the same value. In other words, the limit as x approaches a from the left must equal the limit as x approaches a from the right. In functions with jump discontinuities, \lim_{x\to a^+}f(x) \neq \lim_{x\to a^-}f(x). For example, in the function above, \lim_{x\to 2^+}f(x) = 5 while \lim_{x\to 2^-}f(x) = 7. So \lim_{x\to 2}f(x) does not exist, and there is a discontinuity at x = 2.

If a function has a removable discontinuity at some point a, then \lim_{x\to a}f(x) \neq f(a). On a graph, this looks like a hole. In these discontinuities, the one-sided limits as x approaches a always equal each other. However, the function's value at x = a equals something different or might not exist at all. For example, in the function above, \lim_{x\to 2}f(x) = 4 while f(2) = 2. Since \lim_{x\to 2}f(x) \neq f(2), the function has a discontinuity at x = 2.

If a function has an infinite discontinuity at some point a, then the function has a vertical asymptote at x = a. If any one of the following statements is true, then f has an infinite discontinuity at x = a: \lim_{x\to a^+ }f(x) = +\infty, \lim_{x\to a^+ }f(x) = -\infty, \lim_{x\to a^- }f(x) = +\infty, or \lim_{x\to a^- }f(x) = -\infty. For example, in the function above, there is a vertical asymptote at x = -3 and at x = 0.
Thus, there is an infinite discontinuity at x = -3 and at x = 0.

If f and g are both continuous at x = c, then the following properties are true: the sum (f+g)(x) = f(x) + g(x) is continuous at x = c; the difference (f-g)(x) = f(x) - g(x) is continuous at x = c; the product (f \cdot g)(x) = f(x) \cdot g(x) is continuous at x = c; the quotient (\frac{f}{g})(x) = \frac{f(x)}{g(x)} is continuous at x = c, provided g(c) \neq 0; the constant multiple k \cdot f(x) is continuous at x = c for any constant k; and the composition (f \circ g)(x) = f(g(x)) is continuous at c if g is continuous at c and f is continuous at g(c).

The Extreme Value Theorem states that if a function is continuous on the closed interval [a,b], then the function must have both a maximum and a minimum on [a, b].

The Intermediate Value Theorem is an extremely useful theorem in math. It's often used to prove that different equations are solvable. It's especially useful for proving that a function has a root on a particular interval. The root of a function is the point at which a function equals zero and crosses the x-axis. The Intermediate Value Theorem states: suppose f is a continuous function defined on [a, b], and let s be a number such that f(a) < s < f(b). Then, there must exist some x between a and b such that f(x) = s. More simply, the Intermediate Value Theorem says that a continuous function must take on every value between f(a) and f(b) at least once on the interval [a, b].

For example, consider the graph of f(x) = 2x in the above graph. Let's examine the interval [a, b] with a = 1 and b = 3. Since f(a) = 2 and f(b) = 6, we'll choose the in-between value s = 4. Because f(x) = 2x is continuous on [1, 3], the Intermediate Value Theorem guarantees that there must exist some x in [1, 3] with f(x) = 4. By looking at the graph, we can see that x = 2 is the value for which f(x) = s = 4.

Any polynomial function is continuous for all real numbers. A polynomial function is a function consisting of variables and coefficients that involves only non-negative exponents of the variable. Polynomials use only addition, subtraction, and multiplication operations. For example, f(x) = 7x^3 + x^2 - 5 is a polynomial and so is continuous everywhere.

Every differentiable function is continuous.
However, be careful to remember that the converse is not necessarily true: a function can be continuous but not differentiable. For example, the absolute value function f(x) = |x| is continuous at x = 0 but not differentiable at x = 0.

Rational, root, trigonometric, exponential, and logarithmic functions are all continuous on their domains. The domain of a function is the set of values that a function can accept as inputs. Many real-life examples of continuous functions can be modeled using these function types. A rational function is a function that is written as the ratio of two polynomial functions. The domain of rational functions is all real numbers except those that make the denominator zero. So, the values where rational functions have vertical asymptotes or removable discontinuities are outside of their domain. \sin(x) and \cos(x) have domains that include all real numbers, so they are continuous for all real numbers. Other trigonometric functions such as \tan(x) have more selective domains; for example, the domain of \tan(x) = \frac{\sin(x)}{\cos(x)} is all real numbers except where \cos(x) = 0. This occurs at every odd multiple of \frac{\pi}{2}, and so these x-values are outside the domain of \tan(x). Exponential functions have the form f(x) = ab^x, where a \neq 0 and b is a positive real number with b \neq 1. The domain of exponential functions is all real numbers. Logarithmic functions are only defined for positive inputs. So, the domain of logarithmic functions can be determined by solving the inequality that sets the inside terms to be greater than 0.
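The Intermediate Value Theorem discussed above is exactly what justifies the bisection method for root-finding: a continuous f with f(a) and f(b) of opposite signs must take the value 0 somewhere in [a, b]. A minimal sketch (the helper name is my own, not from the article):

```python
# Bisection: the IVT guarantees a root of a continuous function on any
# interval where the endpoint values have opposite signs.

def bisect_root(f, a, b, tol=1e-10):
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "need a sign change for the IVT to apply"
    while b - a > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:   # sign change in [a, m]: keep the left half
            b = m
        else:                # otherwise the root lies in [m, b]
            a, fa = m, f(m)
    return (a + b) / 2

# f(x) = x^3 + x - 5 is a polynomial, hence continuous everywhere,
# and f(0) = -5 < 0 < 5 = f(2), so a root is guaranteed in [0, 2].
root = bisect_root(lambda x: x**3 + x - 5, 0, 2)
print(round(root, 6))
```

Each iteration halves the interval, so continuity plus a sign change buys a guaranteed, steadily improving root estimate.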
Determine whether each of the following sequences is arithmetic, geometric, or neither. Then find a rule for the sequence, if possible. 12.2, 13, 13.8, … See problem 3-46 for examples and hints. 90, 81, 72.9, … 1, 1, 1, … This could be either arithmetic or geometric. Why? 2, 4, 16, …
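As a quick self-check on problems like those above, the first few terms can be tested for a common difference or a common ratio. This is an illustrative sketch, not part of the problem set:

```python
def classify(terms):
    """Label the given terms as 'arithmetic', 'geometric', 'both', or
    'neither' by checking for a common difference and a common ratio.
    Rounding guards against floating-point noise in decimals like 12.2."""
    diffs = {round(b - a, 9) for a, b in zip(terms, terms[1:])}
    ratios = {round(b / a, 9) for a, b in zip(terms, terms[1:]) if a != 0}
    arithmetic = len(diffs) == 1
    geometric = 0 not in terms and len(ratios) == 1
    if arithmetic and geometric:
        return "both"
    if arithmetic:
        return "arithmetic"
    if geometric:
        return "geometric"
    return "neither"
```

For instance, `classify([1, 1, 1])` reports "both", matching the hint that 1, 1, 1, … could be either arithmetic (common difference 0) or geometric (common ratio 1).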
Heaviside vector - Wikiversity The Heaviside vector is the vector of energy flux density of the gravitational field, which is a part of the gravitational stress-energy tensor in the Lorentz-invariant theory of gravitation. The Heaviside vector {\displaystyle ~\mathbf {H} } can be determined by the cross product of two vectors: [1] {\displaystyle \mathbf {H} =-{\frac {c_{g}^{2}}{4\pi G}}[\mathbf {\Gamma } \times \mathbf {\Omega } ],} where {\displaystyle ~\mathbf {\Gamma } } is the vector of gravitational field strength or gravitational acceleration, {\displaystyle ~G} is the gravitational constant, {\displaystyle ~\mathbf {\Omega } } is the gravitational torsion field or torsion of the field, and {\displaystyle ~c_{g}} is the speed of gravity. The Heaviside vector magnitude is equal to the amount of gravitational energy transferred per unit time through a unit area normal to the energy flux. The minus sign in the definition of {\displaystyle ~\mathbf {H} } means that the energy is transferred in the direction opposite to the vector {\displaystyle [\mathbf {\Gamma } \times \mathbf {\Omega } ]} . The momentum density of gravitational field To determine the vector of momentum density {\displaystyle ~\mathbf {P_{g}} } of the gravitational field, we must divide the Heaviside vector by the square of the speed of gravitation propagation: {\displaystyle ~\mathbf {P_{g}} ={\frac {1}{c_{g}^{2}}}\mathbf {H} =-{\frac {1}{4\pi G}}[\mathbf {\Gamma } \times \mathbf {\Omega } ].} The quantity {\displaystyle ~\mathbf {P_{g}} c_{g}={\frac {1}{c_{g}}}\mathbf {H} =U^{0k}} is a part of the gravitational stress-energy tensor {\displaystyle ~U^{ik}} in the form of three timelike components, when the indices of the tensor are i = 0, k = 1,2,3. To determine the momentum of the gravitational field, we must integrate the vector {\displaystyle ~\mathbf {P_{g}} } over the moving space volume occupied by the field, taking into account the Lorentz contraction of this volume.
The Heaviside theorem From the law of conservation of energy and momentum of matter in a gravitational field in the Lorentz-invariant theory of gravitation, the Heaviside theorem follows: {\displaystyle \nabla \cdot \mathbf {H} =-{\frac {\partial {U^{00}}}{\partial {t}}}-\mathbf {J} \cdot \mathbf {\Gamma } ,} where {\displaystyle ~\mathbf {J} } is the mass current density. According to this theorem, the gravitational energy flowing into a certain volume in the form of the energy flux density {\displaystyle ~\mathbf {H} } is spent to increase the energy of the field {\displaystyle ~U^{00}} in this volume and to carry out the gravitational work as the product of the field strength {\displaystyle ~\mathbf {\Gamma } } and the mass current density {\displaystyle ~\mathbf {J} } . Plane waves Maxwell-like gravitational equations, in the form of which the equations of the Lorentz-invariant theory of gravitation are presented, allow us to determine the properties of plane gravitational waves from any point sources of the field. In a plane wave the vectors {\displaystyle ~\mathbf {\Gamma } } and {\displaystyle ~\mathbf {\Omega } } are perpendicular to each other and to the direction of the wave propagation, and the relation {\displaystyle ~\Gamma _{0}=c_{g}\Omega _{0}} holds for the amplitudes. If we assume that the wave propagates in one direction, for the field strengths it can be written: {\displaystyle ~\Gamma (\mathbf {r} ,t)=\Gamma _{0}\cos(\omega t-\mathbf {k} \cdot \mathbf {r} ),} {\displaystyle ~\Omega (\mathbf {r} ,t)=\Omega _{0}\cos(\omega t-\mathbf {k} \cdot \mathbf {r} ),} where {\displaystyle ~\omega } and {\displaystyle ~\mathbf {k} } are the angular frequency and the wave vector.
Then for the gravitational energy flux it will be: {\displaystyle H(\mathbf {r} ,t)=-{\frac {c_{g}^{2}}{4\pi G}}\Gamma _{0}\Omega _{0}\cos ^{2}(\omega t-\mathbf {k} \cdot \mathbf {r} )=-{\frac {c_{g}}{4\pi G}}\Gamma _{0}^{2}\cos ^{2}(\omega t-\mathbf {k} \cdot \mathbf {r} ).} The average value over time and space of the squared cosine is equal to ½, so: {\displaystyle \left\langle H(\mathbf {r} ,t)\right\rangle =-{\frac {c_{g}}{8\pi G}}\Gamma _{0}^{2}.} In practice, it should be noted that the pattern of waves in a gravitationally bound system of bodies has rather quadrupole than dipole character, since in case of emission we should take into account the contributions of all field sources. According to the superposition principle we must first sum up at each point of space all the existing fields {\displaystyle ~\mathbf {\Gamma } } {\displaystyle ~\mathbf {\Omega } } , find them as functions of coordinates and time, and only then calculate with the obtained total magnitudes the energy flux in the form of the Heaviside vector. Gravitational pressureEdit Suppose that there is a gravitational energy flux falling on some unit material area absorbing all the energy. The energy flux propagates at the speed {\displaystyle ~c_{g}} and transfers the momentum density of the field {\displaystyle ~\mathbf {P_{g}} ={\frac {1}{c_{g}^{2}}}\mathbf {H} .} Then the maximum possible gravitational pressure is: {\displaystyle ~p=\mid {\frac {\langle H\rangle }{c_{g}}}\mid ={\frac {\Gamma _{0}^{2}}{8\pi G}},} {\displaystyle \langle H\rangle } is the mean Heaviside vector and {\displaystyle ~\Gamma _{0}} is the amplitude of the gravitational field strength vector of incident plane gravitational wave. 
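For intuition about magnitudes, the time-averaged flux formula above can be evaluated numerically. The sketch below assumes the speed of gravity equals the speed of light, which is a common working assumption but is not stated in the text:

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c_g = 2.998e8    # speed of gravity; here assumed equal to the speed of light, m/s

def mean_flux_magnitude(gamma0):
    """Magnitude of the time-averaged Heaviside flux for a plane wave,
    |<H>| = c_g * Gamma_0^2 / (8 * pi * G),
    where gamma0 is the field-strength amplitude Gamma_0 in m/s^2."""
    return c_g * gamma0 ** 2 / (8 * math.pi * G)
```

Because the flux scales with the square of the amplitude, doubling Gamma_0 quadruples the transferred energy, just as for electromagnetic plane waves and the Poynting vector.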
The formula for the maximum pressure can be understood from the definition of pressure as the force {\displaystyle ~F} applied to the area {\displaystyle ~S} ; the definition of force through the field momentum {\displaystyle ~\Delta Q=Q} absorbed during the time {\displaystyle ~\Delta t} ; the distance {\displaystyle ~c_{g}\Delta t=\Delta x} ; the volume absorbing the field momentum, {\displaystyle ~\Delta V=\Delta xS} ; and the average density of gravitational momentum {\displaystyle ~\langle P_{g}\rangle ={\frac {Q}{\Delta V}}} : {\displaystyle p={\frac {F}{S}}={\frac {\Delta Q}{\Delta tS}}={\frac {Qc_{g}}{\Delta xS}}=\langle P_{g}\rangle c_{g}.} Since the gravitational energy flux passes through bodies with low absorption in them, to calculate the pressure it is necessary to take the difference between the incident and outgoing energy fluxes. The representation of the gravitational energy flux first appeared in the works by Oliver Heaviside. [2] Previously the Umov vector for the energy flux in substance (1874) and the Poynting vector for the electromagnetic energy flux (1884) had been determined. The Heaviside vector is in agreement with that used by Krumm and Bedford, [3] by Fedosin, [4] and by H. Behera and P. C. Naik. [5] ↑ P. Krumm and D. Bedford, Am. J. Phys. 55 (4), 362 (1987). ↑ Fedosin S.G. (1999). Fizika i filosofiia podobiia ot preonov do metagalaktik. Perm, 544 pages. ISBN 5-8131-0012-1. ↑ Harihar Behera and P. C. Naik. Gravitomagnetic Moments and Dynamics of Dirac (Spin 1/2) Fermions in Flat Space-Time Maxwellian Gravity. International Journal of Modern Physics A, Vol. 19, No. 25 (2004), P. 4207-4229.
Amanullakhan A. Pathan, Kavita R. Desai, Shailesh Vajapara, C. P. Bhasin* Department of Chemistry, Hemchandracharya North Gujarat University, Patan, India. Abstract: In the Mediterranean region, climate change will result by 2100 in a temperature increase that most likely will range from 2°C to 2.7°C, while annual precipitation will most likely reduce in the range of 3% to 10%. This paper uses hydrological modeling of precipitation and evapotranspiration to evaluate the challenge to aquifer natural recharge, considering Palestine as a case study. The study showed that the climate change impacts on aquifer recharge will vary according to the distributions of monthly precipitation and evapotranspiration in the recharge areas. The 2°C to 3°C increase in temperature could result in a reduction of 6% to 13% in aquifer annual recharge. Aquifer recharge was found to be sensitive to changes in precipitation, as a reduction of 3% to 10% in annual precipitation could result in a reduction in annual recharge ranging from 3% to 25%. It was observed that aquifers with recharge areas characterized by lower precipitation are more sensitive to precipitation reduction, and thus groundwater resources in these areas will be more negatively impacted by climate change. Thus, climate change will reduce water availability in drier areas, requiring adaptation measures through improved water management and rehabilitation of water infrastructure. Keywords: Nanoparticles, Lanthanum Oxide, X-Ray Diffraction, SEM, FTIR, TEM Lanthanum oxide exhibits good diamagnetic properties and has the largest band gap, Eg > 5 eV, in the rare earth oxide group. It has a very high dielectric constant, ε = 27 pF/m, with the lowest lattice energy [1] [2] . La2O3 has p-type semiconducting properties; therefore its resistivity decreases at high temperatures [3] .
Thus the use of this material in gated MOSFET devices will greatly reduce the leakage current density because of the larger band offset for electrons as compared to other high-κ materials [4] [5] . The synthesis of complex oxide formulations with fine, uniform crystallite size, chemical homogeneity, and high purity has been studied for the past few decades. At present, many techniques are available to synthesize complex oxides: the Pechini method [6] , solution combustion [7] , precipitation from aqueous solutions, hydrothermal synthesis [8] , sol-gel processing [9] , microwave hydrothermal synthesis [10] [11] , the reverse micelle method [12] , and solution combustion using different fuels and chelating agents such as propylene glycol and glutaric acid [13] . To satisfy the demand for higher integration density in microelectronics, the scaling of MOSFETs is becoming more and more aggressive, and a leading manufacturer of integrated circuits recently announced the introduction of hafnium- and lanthanum-based high-κ dielectrics in its next CMOS generation [6] . In this article we demonstrate the synthesis of La2O3 at 600˚C by the solution combustion method using acetamide as fuel at a fuel-to-oxidizer ratio Ψ = 1. It is a very easy method for the preparation of La2O3 nanoparticles. The synthesized La2O3 nanoparticles were characterized by various analytical techniques. Analytical grade lanthanum nitrate and acetamide were used as received from s.d. fine chemicals (India). All reactions were performed using double-distilled water. 2.2. Synthesis of La2O3 Nanoparticles by Using the Solution Combustion Method K. Bikshalu et al. have reported the synthesis of La2O3 nanoparticles by the Pechini method for future CMOS applications [14] , and A. Pathan et al. have reported the synthesis of La2O3 nanoparticles using glutaric acid and propylene glycol for future CMOS applications [13] . Here, we used acetamide as fuel.
In this method, lanthanum nitrate was mixed well with acetamide as fuel, and the solution was placed in a muffle furnace at 600˚C for 4 - 5 hrs. After the reaction, solid particles were obtained, cooled to room temperature, and submitted for the various analyses. All reagents used were mixed in double-distilled water. The experiment was carried out with the fuel ratio Ψ = 1. 4La(NO3)3 +2CH3CONH2 → 2La2O3(s) + 2NH3(g) + 2H2O + 4CO2(g) + 11NO2(g) + N2(g) Here we describe the amounts of precursor (fuel) materials to be taken for this synthesis (Figure 1). Figure 1. Schematic diagram of the solution combustion method for La2O3 nanoparticles. 3.1. X-Ray Diffractometer Analysis Figure 2 shows the XRD pattern of the La2O3 nanoparticles prepared by the solution combustion method. This result indicates that the structure of the La2O3 nanoparticles is in the pure cubic phase when synthesized at Ψ = 1. The broadened peaks reflect the nanometer-range particle dimensions. Peaks are observed at 24.32˚, 31.90˚, 36.38˚, 47.71˚ and 56.79˚, corresponding to the (h k l) values (1 0 0), (1 1 0), (1 1 1), (2 0 0) and (2 1 0), respectively. The lattice parameters were in good agreement with JCPDS card number 04-0856 [15] , having lattice parameters a = b = c = 3.6180 Å and α = β = γ = 90˚. The crystallite size is calculated by the Debye-Scherrer formula, D=\frac{K\lambda }{\beta \mathrm{cos}\theta } where D is the average crystallite size of the particle, K is the shape factor, λ is the wavelength of the radiation, β is the full width at half maximum (FWHM) of the peak, and θ is the Bragg angle. The average crystallite size of the samples synthesized by this method is 42 nm for Ψ = 1. Figure 2. XRD patterns of La2O3 particles synthesized by this method for Ψ = 1.
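The Debye-Scherrer calculation above can be sketched in a few lines. The wavelength (Cu K-alpha, 0.15406 nm) and shape factor K = 0.9 are illustrative assumptions; the paper does not state which values were used:

```python
import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)) in nm.
    two_theta_deg is the peak position 2-theta in degrees and
    fwhm_deg is the peak's full width at half maximum in degrees."""
    theta = math.radians(two_theta_deg / 2)   # Bragg angle theta, radians
    beta = math.radians(fwhm_deg)             # FWHM in radians
    return K * wavelength_nm / (beta * math.cos(theta))
```

For the strongest reflection near 2θ = 31.90˚, an FWHM of roughly 0.2˚ reproduces a crystallite size in the 40-nm range reported in the text; broader peaks give smaller sizes.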
Here, the strain and crystallite size of the sample are calculated from the Williamson-Hall equation: \beta \mathrm{cos}\theta =\frac{K\lambda }{t}+2\epsilon \mathrm{sin}\theta where β is the full width at half maximum (FWHM) of the corresponding XRD peaks, K is the Scherrer constant, t is the crystallite size, λ is the wavelength of the X-ray radiation, ε is the lattice strain and θ is the Bragg angle. In this procedure βcosθ is plotted against 2sinθ; a linear fit to this plot gives the crystallite size from the intercept Kλ/t and the strain ε from the slope. The average crystallite size was 42 nm and the strain was 0.0028 for nanoparticles synthesized by this method using Ψ = 1. The lattice parameters of the hexagonal phase were calculated from the formula \frac{1}{{d}^{2}}=\frac{4\left({h}^{2}+hk+{k}^{2}\right)}{3{a}^{2}}+\frac{{l}^{2}}{{c}^{2}} The measured values a = b = 0.3919 nm and c = 0.63196 nm agree with those obtained from the XRD pattern. FTIR analysis has been done in the wave number range from 500 cm−1 to 4000 cm−1. The samples were admixed with KBr, thoroughly mixed, and pelletized by pressing under sufficient pressure before FTIR analysis. The La2O3 nanoparticles were analysed with the BRUCKER (αT Model) FTIR spectrometer, as shown in Figure 3. The very weak absorption band at 3607 cm−1 is assigned to the O-H stretching vibration of water molecules, due to the presence of moisture in the sample. Very weak bending vibrations of water molecules appear at 1636 cm−1; medium to strong bands in the range of 1396 cm−1 to 1464 cm−1 are possibly due to C-C stretching vibrations of ions. The narrow absorption peak observed around 1066 cm−1 can be ascribed to C=O bonding. The medium to strong absorption band at 653 cm−1 is due to La-O stretching. Hence the existence of the above mentioned bands identifies the presence of La2O3. Figure 3. IR spectra of La2O3 Nanoparticles. 3.3.
Thermo Gravimetric and Differential Thermal Analysis The TGA analysis of the La2O3 nanoparticles synthesized by this method is presented in Figure 4. The temperature range is 50˚C to 1000˚C. The initial weight loss observed at 350˚C to 500˚C corresponds to the loss of carbonaceous compounds. The peak observed at 450˚C to 550˚C corresponds to the decomposition of covalently bonded organic material, mainly carbon, which was converted into CO2 at the time of synthesis. In the DSC curves of the La2O3 nanoparticles, the exothermic peak present between 640˚C and 810˚C can be attributed to desorption and decomposition of carbonaceous materials. As shown in Figure 4, the weight loss for the sample synthesized by this method is 14.006% at Ψ = 1. 3.4. Scanning Electron Microscopy and EDAX The grain size, shape and surface morphology were observed using SEM at different magnifications. The SEM images of the La2O3 nanoparticles prepared by this method at Ψ = 1 are shown in Figure 5. The EDAX spectrum of La2O3 shows peaks for the lanthanum and oxygen elements, indicating the formation of La2O3 nanoparticles. The peak positions of the elements are oxygen 0.52 keV and lanthanum 4.71 keV. The compositions in mass percentage of the elements are oxygen 35.15% and lanthanum 64.42%. The observed composition matches the theoretically calculated composition (Figure 6). The TEM analysis shows an agglomerated sample in the nano range. Figure 7 shows TEM micrographs of the sample synthesized by this method. From the TEM analysis, it was found that the sample particles are not well crystallized due to severe agglomeration, but the particles are well within the nanometer range, so we conclude that the obtained particles are nanoparticles (Figure 7). Figure 4. TGA/DSC curves of La2O3 Nanoparticles. Figure 5.
SEM images of La2O3 nanoparticles synthesized by this method using Ψ = 1. The images show that the particles are agglomerated and porous, and the pore size or porosity appears to increase as the fuel-to-oxidizer ratio increases. Figure 6. EDAX spectrum of La2O3 Nanoparticles. Figure 7. (a) and (b) TEM images of La2O3 nanoparticles synthesized by this method using Ψ = 1. La2O3 nanopowders have been successfully synthesized by the very low cost solution combustion method using acetamide as fuel at the F/O ratio Ψ = 1. The average crystallite size of the samples synthesized by this method is 42 nm for Ψ = 1. FTIR analysis shows good formation of La2O3 NPs from the La-O band at 653 cm−1, and TGA/DSC reveal the main weight loss of the material at 350˚C and the exothermic peak of La2O3 at 800˚C. Structural examination by SEM reveals a porous network of nanocrystalline La2O3. EDAX shows the purity and elemental percentages of the La2O3 nanoparticles. From the TEM characterization we infer that the sample obtained from the higher F/O ratio was phase pure and more crystalline in nature. Cite this paper: Pathan, A., Desai, K., Vajapara, S. and Bhasin, C. (2018) Conditional Optimization of Solution Combustion Synthesis for Pioneered La2O3 Nanostructures to Application as Future CMOS and NVMS Generations. Advances in Nanoparticles, 7, 28-35. doi: 10.4236/anp.2018.71003. [1] Zhang, N., Ran, Y., Zhou, L.B., Gao, G.H., Shi, R.R., Qiu, G.Z. and Liu, X.H. (2009) Lanthanide Hydroxide Nanorods and Their Thermal Decomposition to Lanthanide Oxide Nanorods. Materials Chemistry and Physics, 114, 160-167. [2] Cedric, B., Condorelli, G.G., Finocchiaro, S.T., Di Mauro, A., Atanasio, D., Fragala, I.L., Cattaneo, L. and Carella, S. (2006) MOCVD of Lanthanum Oxides from La(tmhd)3 and La(tmod)3 Precursors: A Thermal and Kinetic Investigation.
Chemical Vapor Deposition, 12, 46-53. [3] Kale, S.S., Jadhav, K.R., Patil, P.S., Gujar, T.P. and Lokhande, C.D. (2005) Characterizations of Spray-Deposited Lanthanum Oxide (La2O3) Thin Films. Materials Letters, 59, 3007-3009. [4] Wu, Y.H., Yang, M.Y., Chin, A., Chen, W.J. and Kwei, C.M. (2000) Electrical Characteristics of High Quality La2O3 Gate Dielectric with Equivalent Oxide Thickness of 5 Å. IEEE Electron Device Letters, 21, 341-343. [5] Hirotoshi, Y., Shimizu, T., Kurokawa, A., Ishii, K. and Suzuki, E. (2003) MOCVD of High-Dielectric-Constant Lanthanum Oxide Thin Films. Journal of the Electrochemical Society, 150, G429-G435. [6] Pechini, M.P. (1967) U.S. Patent No. 3,330,697. [7] Tsoutsou, D., Scarel, G., Debernardi, A., Capelli, S.C., Volkos, S.N., Lamagna, L., Schamm, S., Coulon, P.E. and Fanciulli, M. (2008) Infrared Spectroscopy and X-Ray Diffraction Studies on the Crystallographic Evolution of La2O3 Films upon Annealing. Microelectronic Engineering, 85, 2411-2413. [8] Xie, Y., Qian, Y., Li, J., Chen, Z. and Yang, L. (1995) Hydrothermal Preparation and Characterization of Ultrafine Powders of Ferrite Spinels MFe2O4 (M = Fe, Zn and Ni). Materials Science and Engineering: B, 34, L1-L3. [9] Kim, W.C., Kim, S.J., Lee, S.W. and Kim, C.S. (2001) Growth of Ultrafine NiZnCu Ferrite and Magnetic Properties by a Sol-Gel Method. Journal of Magnetism and Magnetic Materials, 226, 1418-1420. [10] Wang, H.-W. and Kung, S.-C. (2004) Crystallization of Nanosized Ni-Zn Ferrite Powders Prepared by Hydrothermal Method. Journal of Magnetism and Magnetic Materials, 270, 230-236. [11] Krishnaveni, T., Murthy, S.R., Gao, F., Lu, Q. and Komarneni, S. (2006) Microwave Hydrothermal Synthesis of Nanosize Ta2O5 Added Mg-Cu-Zn Ferrites. Journal of Materials Science, 41, 1471-1474. [12] http://www.intel.com/technology/silicon/45nmtechnology.htm [13] Pathan, A.A., Desai, K.R. and Bhasin, C.P.
(2017) Synthesis of La2O3 Nanoparticles Using Glutaric Acid and Propylene Glycol for Future CMOS Applications. International journal of Nanomaterials and Chemistry, 3, 21-25. [14] Bikshalu, K., Reddy, V.S.K., Reddy, P.C.S. and Rao, K.V. (2014) Synthesis of La2O3 Nanoparticles by Pechini Method for Future CMOS Applications. International Journal of Education and Applied Research, 4, 12-15. [15] Phases, Powder Diffraction File Inorganics (1988) Alphabetical Index, Inorganics phases. Swarthmore, Pennsylvania: JCPDS(04-0856). International Centre for Diffraction Data.
Irrational Numbers - Global Math Week We have seen that fractions can possess finitely long decimal expansions. \dfrac{1}{8} = 0.125 \dfrac{1}{2} = 0.5 And we have seen that fractions can possess infinitely long decimal expansions. \dfrac{1}{3} = 0.3333... \dfrac{6}{7} = 0.857142857142... All the examples of fractions with infinitely long decimal expansions we’ve seen so far fall into a repeating pattern. This is curious. We can even say that our finite examples eventually fall into a repeating pattern too, a repeating pattern of zeros after an initial start. \dfrac{1}{8} = 0.12500000... = 0.125\overline{0} \dfrac{1}{2} = 0.50000... = 0.5\overline{0} \dfrac{1}{3} = 0.\overline{3} \dfrac{6}{7} = 0.\overline{857142} Does every fraction have a decimal representation that eventually repeats? Let’s go through the division process again, slowly, first with a familiar example. Let’s compute the decimal expansion of \dfrac{1}{3} again in a 1 \leftarrow 10 machine. We think of \dfrac{1}{3} as the answer to the division problem 1 \div 3 , and so we need to find groups of three within a diagram of one dot. We unexplode the single dot to make ten dots in the tenths position. There we find three groups of three leaving a remainder of 1 in that box. Now we can unexplode that single dot in the tenths box and write ten dots in the hundredths box. There we find three more groups of three, again leaving a single dot behind. And so on. We are caught in a cycle of having the same remainder of one dot from cell to cell, meaning that the same pattern repeats. Thus we conclude \dfrac{1}{3} = 0.333... . The key point is that the same remainder of a single dot kept appearing. Here’s a more complicated example. Let’s compute the decimal expansion of \dfrac{4}{7} in a 1 \leftarrow 10 machine. That is, let’s compute 4 \div 7 . We start by unexploding the four dots to give 40 dots in the tenths cell. There we find 5 groups of seven, leaving five dots over.
Now unexplode those five dots to make 50 dots in the hundredths position. There we find 7 groups of seven, leaving one dot over. Unexplode this single dot. This yields 1 group of seven leaving three remaining. Unexplode these three dots. This gives 4 groups of seven with two remaining. Unexplode the two dots. This gives 2 groups of seven with six remaining. Unexplode the six dots. This gives 8 groups of seven with four remaining. But this is the predicament we started with: four dots in a box! So now we are going to repeat the pattern and produce a cycle in the decimal representation. We have \dfrac {4}{7} = 0.571428 571428 571428... Stepping back from the specifics of this problem, it is clear now that one must be forced into a repeating pattern. In dividing a quantity by seven, there are only seven possible remainders for the number of dots in a cell – 0, 1, 2, 3, 4, 5, 6 – and there is no option but to eventually repeat a remainder and so enter a cycle. In the same way, the decimal expansion of \dfrac{18}{37} must also cycle. In doing the division, there are only thirty-seven possible remainders for dots in a cell ( 0, 1, 2, 3, 4, 5, ..., 36 ). As we complete the division computation, we must eventually repeat a remainder and again fall into a cycle. We have just established a very interesting fact. ALL FRACTIONS HAVE A REPEATING DECIMAL REPRESENTATION. (A repeating pattern of zeros is possible. In fact, as a check, conduct the division procedure for the fraction \dfrac{1}{8} . Make sure to understand where the cycle of repeated remainders commences.) This now opens up a curious idea: A quantity given by a decimal expansion that does not repeat cannot be a fraction! For example, the quantity 0.10 1100 111000 11110000 1111100000... is designed not to repeat (though there is a pattern to this decimal expansion) and so represents a number that is not a fraction.
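The remainder-tracking argument above translates directly into a short program: perform the long division, and the first repeated remainder marks where the cycle begins. This is an illustrative sketch of that idea:

```python
def decimal_expansion(numerator, denominator):
    """Long division of numerator/denominator (with 0 < numerator < denominator),
    returning (non-repeating prefix, repeating cycle) of the decimal digits.
    A terminating expansion is reported with an empty cycle."""
    digits = []
    seen = {}                 # remainder -> index of the digit it produced
    r = numerator
    while r != 0 and r not in seen:
        seen[r] = len(digits)
        r *= 10               # "unexplode": shift one cell to the right
        digits.append(str(r // denominator))
        r %= denominator      # the leftover dots in the current cell
    if r == 0:
        return "".join(digits), ""
    start = seen[r]           # the predicament we saw before: cycle starts here
    return "".join(digits[:start]), "".join(digits[start:])
```

For example, `decimal_expansion(4, 7)` returns `("", "571428")`, and `decimal_expansion(1, 8)` returns `("125", "")`, matching the hand computations in the text.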
A number that is equivalent to the ratio of two whole numbers (that is, a fraction) is called a rational number. A number that cannot be represented this way is called an irrational number. It looks like we have just proved that irrational numbers exist! Not all numbers are fractions. In fact, we can now invent all sorts of numbers that can’t be fractions! 0.102030405060708090100110120130140150... 0.3030030003000030000030000003... Of course, we have all heard that numbers like \sqrt {2} and \pi are irrational numbers. It is not at all obvious why they are and how you would go about proving that they are. (In fact, it took mathematicians about 2000 years to finally establish that \pi is an irrational number. Swiss mathematician Johann Lambert finally proved it so in 1761.) But if you are willing to believe that these numbers are irrational, then you can say for sure that their decimal expansions possess no repeating patterns!
Viscous damper in mechanical translational systems - MATLAB - MathWorks Deutschland Viscous damper in mechanical translational systems Mechanical Translational Elements The Translational Damper block represents an ideal mechanical translational viscous damper, described with the following equations: F=Dv v={v}_{R}-{v}_{C} F Force transmitted through the damper v Relative velocity vR,vC Absolute velocities of terminals R and C, respectively The block positive direction is from port R to port C. This means that the force is positive if it acts in the direction from R to C. Damping coefficient, defined by viscous friction. The default value is 100 N/(m/s). Mechanical translational conserving port associated with the damper rod. Mechanical translational conserving port associated with the damper case. Translational Friction | Translational Hard Stop | Translational Spring
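The block's defining equations F = Dv with v = v_R - v_C reduce to a one-line computation. The following Python sketch (an illustration, not MathWorks code) shows the sign convention for the positive direction from port R to port C:

```python
def damper_force(v_R, v_C, D=100.0):
    """Force transmitted through an ideal viscous damper, F = D * (v_R - v_C),
    where v_R and v_C are the absolute velocities of ports R and C (m/s).
    D defaults to the block's default damping coefficient, 100 N/(m/s)."""
    return D * (v_R - v_C)
```

When port R moves faster than port C the relative velocity is positive, so the transmitted force is positive, i.e. it acts in the direction from R to C, exactly as the block description states.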
What is Partial Derivative? Definition, Rules, and Examples | Outlier This article is an overview of partial derivatives. Learn the definition of partial derivatives, how to do partial differentiation, and practice with some examples. How to Do Partial Derivatives We use partial differentiation to differentiate a function of two or more variables. For example, f(x, y) = xy + x^2y . If we want to find the partial derivative of a two-variable function with respect to x , we treat y as a constant and use the notation \frac{\partial{f}}{\partial{x}} . If we want to find the partial derivative of a two-variable function with respect to y , we treat x as a constant and use the notation \frac{\partial{f}}{\partial{y}} . You can think of \partial as the partial derivative symbol, sometimes called “del.” When you see this symbol, it shows that we’re taking a partial derivative. This notation should look familiar; it’s just like the derivative of a function in Leibniz’s notation, expressed \frac{dy}{dx} , except \partial replaces the letter “d” with a stylized curly d. In calculus, derivatives measure the rate of change of a function with respect to a change in its input variable x . Since the input of a multivariable function is more than one variable, we call \frac{\partial{f}}{\partial{x}} and \frac{\partial{f}}{\partial{y}} partial derivatives because they only reveal the rate of change of the function when one variable changes, instead of both. The partial derivative allows us to understand the behavior of a multivariable function when ​​we let just one of its variables change, while the rest stay constant. How do partial derivatives work? To find a partial derivative, we find the derivative of a function of two or more variables by treating one of the variables as a constant. To find \frac{\partial{f}}{\partial{x}} , treat y as a constant and differentiate the function normally. To find \frac{\partial{f}}{\partial{y}} , treat x as a constant and differentiate the function normally. After we designate one variable as a constant, we can use the derivative rules that are already familiar to us to differentiate the function.
Partial Derivative Rules Derivative rules help us differentiate more complicated functions by breaking them into pieces. Here are some of the most common derivative rules to know: \frac{d}{dx}c = 0 \frac{d}{dx}x^n = nx^{n-1} \frac{d}{dx}f(g(x)) = f’(g(x))g’(x) \frac{d}{dx}f(x) \cdot g(x) = f’(x) \cdot g(x) + f(x)\cdot g’(x) \frac{d}{dx}\frac{f(x)}{g(x)} = \frac{g(x)f’(x)-f(x)g’(x)}{(g(x))^2} \frac{d}{dx}(f(x) \pm g(x)) = f’(x) \pm g’(x) \frac{d}{dx}(\sin{(x)}) = \cos{(x)} \frac{d}{dx}(\cos{(x)}) = -\sin{(x)} \frac{d}{dx}(\tan{(x)}) = \sec ^2 (x) \frac{d}{dx} (\ln{x}) = \frac{1}{x} \frac{d}{dx}(e^x) = e^x You can view more about these rules in an explanation by one of our instructors, Dr. Tim Chartier. For example, let’s take another look at the function f(x, y) = xy + x^2y and find \frac{\partial{f}}{\partial{x}} , the partial derivative with respect to x . The first thing to do is treat y as a constant. What does it mean to treat y as a constant? A constant is a fixed, unchanging value. Examples of constants are 1, 3.5, 17, and 100,000. To treat y as a constant, we imagine that y is any of these infinite constant values. We can do this because of the constant rule, which states that the derivative of any constant is 0. To make it easier to imagine y as a constant, we can replace y with c or k , which are two letters that are commonly used to represent constant values. Using this trick and replacing y with k gives f(x, y) = kx + kx^2 . Now, we can find the partial derivative \frac{\partial{f}}{\partial{x}} using the derivative rules. Remember to change k back to y when you have your final answer. \frac{\partial{(xy + x^2y)}}{\partial{x}} = k + 2kx \frac{\partial{(xy + x^2y)}}{\partial{x}} = y + 2yx We can do the same to find \frac{\partial{f}}{\partial{y}} by treating x as a constant. Remember that the square of any constant is simply another constant.
Replacing x with k gives f(x, y) = ky + k^2y . Now we find \frac{\partial{f}}{\partial{y}} , remembering to change k back to x in the final answer. \frac{\partial{(xy + x^2y)}}{\partial{y}} = k + k^2 \frac{\partial{(xy + x^2y)}}{\partial{y}} = x + x^2 Let’s take a look at some more partial derivative examples. Find the partial derivatives of f(r, h) = \pi r^2h . This function represents the volume of a cylinder. When we find the partial derivative \frac{\partial{(\pi r^2h)}}{\partial{r}} , we find the rate of change of the cylinder’s volume as only the radius changes. When we find the partial derivative \frac{\partial{(\pi r^2h)}}{\partial{h}} , we find the rate of change of the cylinder’s volume as only the height changes. So, \frac{\partial{(\pi r^2h)}}{\partial{r}} = 2\pi rh \frac{\partial{(\pi r^2h)}}{\partial{h}} = \pi r^2 Find the partial derivatives of f(x, y, z) = xy^3 - zx + z . How do partial derivatives work in more than two variables? Just the same! For a function with three variables, we change only one variable and treat the other two as constants. So, \frac{\partial{(xy^3 - zx + z)}}{\partial{x}} = y^3 - z \frac{\partial{(xy^3 - zx + z)}}{\partial{y}} = 3xy^2 \frac{\partial{(xy^3 - zx + z)}}{\partial{z}} = 1 - x Find the partial derivatives of f(x, y) = x^2\sin{(y)} - y^2\cos{(x)} . We can use the trigonometry derivative rules for this problem. Remember that \sin{(y)} acts as a constant when we calculate \frac{\partial{f}}{\partial{x}} , and \cos{(x)} acts as a constant when we calculate \frac{\partial{f}}{\partial{y}} . \frac{\partial{(x^2\sin{(y)} - y^2\cos{(x)})}}{\partial{x}} = 2x\sin{(y)} + y^2\sin{(x)} \frac{\partial{(x^2\sin{(y)} - y^2\cos{(x)})}}{\partial{y}} = x^2\cos{(y)} - 2y\cos{(x)}
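The hand computations above can be checked numerically: holding all arguments but one fixed and nudging the remaining one is exactly what a partial derivative measures. This is an illustrative sketch using a central-difference estimate, not part of the article:

```python
def partial(f, point, i, h=1e-6):
    """Central-difference estimate of the partial derivative of f with
    respect to its i-th argument, holding the other arguments fixed."""
    lo, hi = list(point), list(point)
    lo[i] -= h
    hi[i] += h
    return (f(*hi) - f(*lo)) / (2 * h)

# The running example f(x, y) = xy + x^2 y.
f = lambda x, y: x * y + x ** 2 * y

# At (2, 3): df/dx = y + 2xy = 15 and df/dy = x + x^2 = 6.
dfdx = partial(f, (2.0, 3.0), 0)   # close to 15
dfdy = partial(f, (2.0, 3.0), 1)   # close to 6
```

Evaluating the symbolic answers y + 2yx and x + x^2 at (2, 3) gives 15 and 6, which the finite-difference estimates reproduce to several decimal places.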
LinearDiscriminantAnalysis: Linear discriminant analysis for dimensionality reduction - mlxtend

Example 1 - LDA on Iris
Example 2 - Plotting the Between-Class Variance Explained Ratio

Implementation of Linear Discriminant Analysis for dimensionality reduction

from mlxtend.feature_extraction import LinearDiscriminantAnalysis

Linear Discriminant Analysis (LDA) is most commonly used as a dimensionality reduction technique in the pre-processing step for pattern-classification and machine learning applications. The goal is to project a dataset onto a lower-dimensional space with good class-separability in order to avoid overfitting ("curse of dimensionality") and also to reduce computational costs. Ronald A. Fisher formulated the Linear Discriminant in 1936 (The Use of Multiple Measurements in Taxonomic Problems), and it also has some practical uses as a classifier. The original Linear Discriminant was described for a 2-class problem, and it was later generalized as "multi-class Linear Discriminant Analysis" or "Multiple Discriminant Analysis" by C. R. Rao in 1948 (The utilization of multiple measurements in problems of biological classification).

The general LDA approach is very similar to Principal Component Analysis, but in addition to finding the component axes that maximize the variance of our data (PCA), we are additionally interested in the axes that maximize the separation between multiple classes (LDA). So, in a nutshell, the goal of an LDA is often to project a feature space (a dataset of n-dimensional samples) onto a smaller subspace k (where k \leq n-1) while maintaining the class-discriminatory information. In general, dimensionality reduction does not only help to reduce computational costs for a given classification task, but it can also be helpful to avoid overfitting by minimizing the error in parameter estimation ("curse of dimensionality").

Summarizing the LDA approach in 5 steps

Listed below are the 5 general steps for performing a linear discriminant analysis.
1. Compute the d-dimensional mean vectors for the different classes from the dataset.
2. Compute the scatter matrices (between-class and within-class scatter matrix).
3. Compute the eigenvectors (\mathbf{e_1}, \; \mathbf{e_2}, \; ..., \; \mathbf{e_d}) and corresponding eigenvalues (\lambda_1, \; \lambda_2, \; ..., \; \lambda_d) for the scatter matrices.
4. Sort the eigenvectors by decreasing eigenvalues and choose the k eigenvectors with the largest eigenvalues to form a d \times k dimensional matrix \mathbf{W} (where every column represents an eigenvector).
5. Use this d \times k eigenvector matrix to transform the samples onto the new subspace. This can be summarized by the mathematical equation \mathbf{Y} = \mathbf{X} \times \mathbf{W} (where \mathbf{X} is the n \times d-dimensional matrix representing the n samples, and \mathbf{Y} are the transformed n \times k-dimensional samples in the new subspace).

Fisher, Ronald A. "The use of multiple measurements in taxonomic problems." Annals of Eugenics 7.2 (1936): 179-188.
Rao, C. Radhakrishna. "The utilization of multiple measurements in problems of biological classification." Journal of the Royal Statistical Society, Series B (Methodological) 10.2 (1948): 159-203.

Code excerpts (truncated) from the examples:

X = standardize(X)
with plt.style.context('seaborn-whitegrid'):
    for lab, col in zip((0, 1, 2),
        plt.scatter(X_lda[y == lab, 0], X_lda[y == lab, 1],
plt.xlabel('Linear Discriminant 1')
plt.ylabel('Linear Discriminant 2')

lda = LinearDiscriminantAnalysis(n_discriminants=None)
tot = sum(lda.e_vals_)
var_exp = [(i / tot)*100 for i in sorted(lda.e_vals_, reverse=True)]
plt.xticks(range(4))
ax.set_xticklabels(np.arange(1, X.shape[1] + 1))
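The 5 steps above can be sketched directly in NumPy. This is not mlxtend's implementation; the tiny 2-class synthetic dataset and all variable names below are illustrative assumptions.

```python
# Minimal NumPy sketch of the 5 LDA steps on a toy 2-class dataset.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(3, 1, (20, 3))])
y = np.array([0] * 20 + [1] * 20)

d = X.shape[1]
overall_mean = X.mean(axis=0)

# Step 1: d-dimensional mean vectors per class
means = {c: X[y == c].mean(axis=0) for c in np.unique(y)}

# Step 2: within-class (S_W) and between-class (S_B) scatter matrices
S_W = np.zeros((d, d))
S_B = np.zeros((d, d))
for c, m in means.items():
    Xc = X[y == c] - m
    S_W += Xc.T @ Xc
    n_c = (y == c).sum()
    diff = (m - overall_mean).reshape(-1, 1)
    S_B += n_c * (diff @ diff.T)

# Step 3: eigenvectors and eigenvalues of S_W^{-1} S_B
e_vals, e_vecs = np.linalg.eig(np.linalg.inv(S_W) @ S_B)

# Step 4: sort eigenvectors by decreasing eigenvalue, keep k = 1 column
order = np.argsort(e_vals.real)[::-1]
W = e_vecs.real[:, order[:1]]          # d x k matrix

# Step 5: project the samples onto the new subspace
Y = X @ W                               # n x k
print(Y.shape)
```

With two classes only one eigenvalue is nonzero (the rank of S_B is 1), so a single discriminant axis already captures all class-separating variance here.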
paired_ttest_kfold_cv: K-fold cross-validated paired t test - mlxtend

paired_ttest_kfold_cv: K-fold cross-validated paired t test

Example 1 - K-fold cross-validated paired t test

K-fold paired t test procedure to compare the performance of two models

from mlxtend.evaluate import paired_ttest_kfold_cv

The K-fold cross-validated paired t-test procedure is a common method for comparing the performance of two models (classifiers or regressors), and it addresses some of the drawbacks of the resampled t-test procedure; however, this method still has the problem that the training sets overlap, and it is not recommended for use in practice [1]; techniques such as the paired_ttest_5x2cv should be used instead.

To explain how this method works, let's consider two estimators (e.g., classifiers) A and B. Further, we have a labeled dataset D. In the common hold-out method, we typically split the dataset into 2 parts: a training and a test set. In the k-fold cross-validated paired t-test procedure, we split the dataset into k parts of equal size, and each of these parts is then used for testing while the remaining k-1 parts (joined together) are used for training a classifier or regressor (i.e., the standard k-fold cross-validation procedure). In each k-fold cross-validation iteration, we then compute the difference in performance between A and B, so that we obtain k difference measures.
Now, by making the assumption that these k differences were independently drawn and follow an approximately normal distribution, we can compute the following t statistic with k-1 degrees of freedom according to Student's t test, under the null hypothesis that the models A and B have equal performance:

t = \frac{\overline{p} \sqrt{k}}{\sqrt{\sum^k_{i=1}(p^{(i)} - \overline{p})^2 / (k-1)}}.

Here, p^{(i)} is the difference between the model performances in the i-th iteration, p^{(i)} = p^{(i)}_A - p^{(i)}_B, and \overline{p} represents the average difference between the classifier performances, \overline{p} = \frac{1}{k} \sum^k_{i=1} p^{(i)}.

Once we have computed the t statistic, we can compute the p value and compare it to our chosen significance level, e.g., \alpha=0.05. If the p value is smaller than \alpha, we reject the null hypothesis and accept that there is a significant difference between the two models.

The problem with this method, and the reason why it is not recommended for use in practice, is that it violates the assumptions of Student's t test [1]:

- the differences between the model performances (p^{(i)} = p^{(i)}_A - p^{(i)}_B) are not normally distributed because p^{(i)}_A and p^{(i)}_B are not independent
- the p^{(i)}'s themselves are not independent because the training sets overlap

[1] Dietterich TG (1998) Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms. Neural Comput 10:1895–1923.

Assume we want to compare two classification algorithms, logistic regression and a decision tree algorithm:

clf2 = DecisionTreeClassifier(random_state=1)
train_test_split(X, y, test_size=0.25,
score1 = clf1.fit(X_train, y_train).score(X_test, y_test)
print('Logistic regression accuracy: %.2f%%' % (score1*100))
print('Decision tree accuracy: %.2f%%' % (score2*100))

Logistic regression accuracy: 97.37%
Decision tree accuracy: 94.74%

Note that these accuracy values are not used in the paired t-test procedure, as new test/train splits are generated during the resampling procedure; the values above just serve the purpose of intuition.
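The t statistic described above can be computed in a few lines of plain Python given the per-fold performance differences. The list of differences below is made-up data for illustration; a p value would then be obtained from the t distribution with k-1 degrees of freedom.

```python
# Sketch of the k-fold paired t statistic, computed from a list of
# per-fold performance differences p_i = p_A_i - p_B_i.
import math

def kfold_paired_t(diffs):
    """t statistic with k-1 degrees of freedom for per-fold differences."""
    k = len(diffs)
    p_bar = sum(diffs) / k
    # unbiased sample variance of the differences
    var = sum((p - p_bar) ** 2 for p in diffs) / (k - 1)
    return p_bar * math.sqrt(k) / math.sqrt(var)

# per-fold accuracy differences between two hypothetical models (made up)
diffs = [0.02, 0.03, 0.01, 0.04, 0.02, 0.03, 0.02, 0.01, 0.03, 0.02]
t = kfold_paired_t(diffs)
print(round(t, 3))
```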
Now, let's assume a significance threshold of \alpha=0.05 for rejecting the null hypothesis that both algorithms perform equally well on the dataset, and conduct the k-fold cross-validated t-test:

t, p = paired_ttest_kfold_cv(estimator1=clf1,
print('t statistic: %.3f' % t)
print('p value: %.3f' % p)

t statistic: -1.861

Since p > \alpha, we cannot reject the null hypothesis and may conclude that the performance of the two algorithms is not significantly different.

While it is generally not recommended to apply statistical tests multiple times without correction for multiple hypothesis testing, let us take a look at an example where the decision tree algorithm is limited to producing a very simple decision boundary that would result in a relatively bad performance:

clf2 = DecisionTreeClassifier(random_state=1, max_depth=1)

t statistic: 13.491

Assuming that we conducted this test also with a significance level of \alpha=0.05, we can reject the null hypothesis that both models perform equally well on this dataset, since the p value (p < 0.001) is smaller than \alpha.

paired_ttest_kfold_cv(estimator1, estimator2, X, y, cv=10, scoring=None, shuffle=False, random_seed=None)

Implements the k-fold paired t test procedure to compare the performance of two models.

estimator1 : scikit-learn classifier or regressor

cv : int (default: 10)
Number of splits and iterations for the cross-validation procedure

scoring : str, callable, or None (default: None)
If None (default), uses 'accuracy' for sklearn classifiers and 'r2' for sklearn regressors. If str, uses a sklearn scoring metric string identifier, for example {accuracy, f1, precision, recall, roc_auc} for classifiers, {'mean_absolute_error', 'mean_squared_error'/'neg_mean_squared_error', 'median_absolute_error', 'r2'} for regressors.
If a callable object or function is provided, it has to conform to sklearn's signature scorer(estimator, X, y); see http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html for more information.

shuffle : bool (default: False)
Whether to shuffle the dataset for generating the k-fold splits.

random_seed : int or None (default: None)
Random seed for shuffling the dataset for generating the k-fold splits. Ignored if shuffle=False.

Returns a two-tailed p-value. If the chosen significance level is larger than the p-value, we reject the null hypothesis and accept that there are significant differences between the two compared models.

For usage examples, please see http://rasbt.github.io/mlxtend/user_guide/evaluate/paired_ttest_kfold_cv/
Rank features for classification using minimum redundancy maximum relevance (MRMR) algorithm - MATLAB fscmrmr

The mutual information between two variables X and Z is

I(X,Z) = \sum_{i,j} P(X=x_i, Z=z_j) \log \frac{P(X=x_i, Z=z_j)}{P(X=x_i)\,P(Z=z_j)}.

The relevance V_S and redundancy W_S of a feature set S with respect to the response y are

V_S = \frac{1}{|S|} \sum_{x \in S} I(x,y),

W_S = \frac{1}{|S|^2} \sum_{x,z \in S} I(x,z).

The mutual information quotient (MIQ) value of a feature x is

\text{MIQ}_x = \frac{V_x}{W_x}, \qquad V_x = I(x,y), \qquad W_x = \frac{1}{|S|} \sum_{z \in S} I(x,z).

The algorithm first selects the feature with the largest relevance, \max_{x \in \Omega} V_x; it then selects, among the candidate features, those with nonzero relevance and zero redundancy, \max_{x \in S^c,\; W_x = 0} V_x; and thereafter it repeatedly selects the feature that maximizes the MIQ value over the candidate set S^c:

\max_{x \in S^c} \text{MIQ}_x = \max_{x \in S^c} \frac{I(x,y)}{\frac{1}{|S|} \sum_{z \in S} I(x,z)}.
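As a rough illustration (not MATLAB's fscmrmr implementation), the greedy MIQ-based selection described by these formulas can be sketched in Python with a plug-in discrete mutual-information estimate; the dataset and all names below are made up.

```python
# Sketch of MRMR feature ranking with the MIQ criterion on discrete data.
import numpy as np

def mutual_information(a, b):
    """Plug-in estimate of I(A;B) for two discrete 1-D arrays, in nats."""
    mi = 0.0
    for va in np.unique(a):
        for vb in np.unique(b):
            p_ab = np.mean((a == va) & (b == vb))
            p_a, p_b = np.mean(a == va), np.mean(b == vb)
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

def mrmr_rank(X, y):
    """Greedy MRMR ordering: max relevance first, then max MIQ = V_x / W_x."""
    n_features = X.shape[1]
    relevance = [mutual_information(X[:, j], y) for j in range(n_features)]
    selected = [int(np.argmax(relevance))]        # start with max relevance
    while len(selected) < n_features:
        best, best_miq = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            # redundancy W_x: mean MI with the already-selected features
            w = np.mean([mutual_information(X[:, j], X[:, s]) for s in selected])
            miq = relevance[j] / w if w > 0 else np.inf
            if miq > best_miq:
                best, best_miq = j, miq
        selected.append(best)
    return selected

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)
X = np.column_stack([y,                                        # perfectly relevant
                     rng.integers(0, 2, 200),                  # pure noise
                     np.where(rng.random(200) < 0.2, 1 - y, y)])  # noisy copy of y
order = mrmr_rank(X, y)
print(order)
```

The perfectly relevant feature (column 0) has the largest mutual information with y and is always ranked first; the ordering of the rest trades relevance against redundancy with what has already been picked.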
What is the reaction between H2O2 and MnO2?

Manganese dioxide catalyzes the decomposition of hydrogen peroxide. Hydrogen peroxide, H2O2, decomposes naturally at a very slow rate to form oxygen gas and water. When manganese dioxide, MnO2, is added to a solution of hydrogen peroxide, the rate of the reaction increases significantly. Manganese dioxide acts as a catalyst for the decomposition of hydrogen peroxide, meaning that it is not consumed in the reaction. The manganese dioxide lowers the activation energy of the reaction from approximately 75 kJ/mol to a little under 60 kJ/mol. This allows more molecules of hydrogen peroxide to undergo decomposition in a shorter period of time. The balanced chemical equation for this reaction looks like this:

2H2O2(aq) --MnO2(s)--> 2H2O(l) + O2(g)

MnO2 + 2H2O2 --> MnO2 + O2 + 2H2O
2H2O2 --> O2 + 2H2O

Manganese dioxide is written above the arrow (you'll sometimes see it written under the arrow) because it is not being consumed in the reaction. The MnO2 is not a reactant; it is a catalyst. The net reaction is 2H2O2 -> 2H2O + O2.

No reaction takes place between H2O2 and MnO2, but MnO2 is used in the preparation of oxygen in laboratories. Hydrogen peroxide (H2O2) decomposes very slowly, but in the presence of manganese dioxide (MnO2) the reaction speeds up. Thus, manganese dioxide acts as a catalyst. A small quantity of manganese dioxide is placed in a flat-bottom flask. Hydrogen peroxide is added drop by drop with the help of a thistle funnel. Oxygen gas bubbles out into the water over the trough. The water level in the jar keeps decreasing, and the space above the water is occupied by oxygen.
Cover the jar with a greased lid and remove it from the water. This method is called downward displacement of water. Reaction: 2H2O2 = 2H2O + O2

Mark Fischer is correct. The reaction is 2H2O2 + 2MnO2 = 2H2O + 2MnO2 + O2. ΔG (at 20 °C) = -232.8 kJ (the reaction is spontaneous and exothermic). Monica Lopez is incorrect: there is no such compound as MnO3. Manganese oxides MnO2, Mn2O3, Mn3O4, yes. But MnO3: no such thing. Sorry, Monica.

It's interesting to see the catalytic decomposition of H2O2 by MnO2 being discussed, since the reaction Mn(+7) + H2O2 → O2 + OH- + MnO2 is used to determine the strength of H2O2 solutions. If MnO2 were able to react and not just catalyze, the titration would not work, and since I've done the titration, I had to look up the possible conflict. It turns out the titration works because it's done in acidic conditions, and MnO2 is not much of a catalyst in acid. Here, an article examines the catalytic activity at pH 3–14 and finds that the crystal structure is important as well: Studies on MnO2—III. The kinetics and the mechanism for the catalytic decomposition of H2O2 over different crystalline modifications of MnO2.

So there is no reaction between H2O2 and MnO2 per se, as others have noted, and at pH < 3 there is probably little catalytic activity. Keeping the permanganate titration acidic is thus seen to be a vital step to avoid the error that would occur if the pH were allowed to rise while MnO2 was present. Hope this is as useful for the OP as it was for me.

Pour some hydrogen peroxide into a cylinder and add a small spatula scoop of MnO2. Bubbles of O2 will form immediately (the reaction also produces heat).

H2O2 + MnO2 = H2O + MnO3

If it's not a radical (which means every atom is bonded to the number of atoms you'd expect), H2O4 would take the shape of a chain of 4 oxygen atoms with a hydrogen atom at each end. Its systematic name would be hydrogen tetroxide, which means hydrogen with four oxygens.
You could also call it hydroxyperoxide, which means that it's two OH groups with two oxygen atoms in between. This molecule has been synthesized at very low temperatures and is quite unstable. I think you probably meant this as a joke, though; if it's for drinking, well, RIP in advance.

Breathing difficulty can occur if large concentrations are swallowed. Drinking a small dilution of hydrogen peroxide (3%) isn't harmful, but, as mentioned above, it can cause irritation of the eyes and skin. Drinking the 35% solution can cause severe burns in the gastrointestinal tract, even causing vomiting and death. So, if you want to drink hydrogen peroxide, drink it at your own risk.

I know about two easy ways to prepare manganese dioxide in the laboratory:

Reduction of potassium permanganate:

2MnO4^- + 3H2C2O4 + 2H^+ → 6CO2 + 2MnO2 + 4H2O

Instead of oxalic acid, we can use other reducing agents as well.

Oxidation of manganese salts:

Mn^2+ + 2Ag^+ + 4OH^- → MnO2 + 2Ag + 2H2O

Other relatively weak oxidizing agents could also be used, but not too strong, because then the product could be permanganate.

The Old Thermodynamist says: Manganese dioxide does not react with hydrogen peroxide; it acts as a catalyst in the decomposition of hydrogen peroxide into water + oxygen.

Manganese dioxide + hydrogen peroxide = manganese dioxide + water + oxygen
MnO2 + H2O2 = MnO2 + H2O + 0.5O2(g)
Change in Free Energy: ΔG(20 °C) = -116.1 kJ (negative, so the reaction runs)
Change in Enthalpy: ΔH(20 °C) = -98.0 kJ (negative, so the reaction is exothermic)

Guy Clentsmith's answer is correct as far as it goes. However, the real question is whether the reaction occurs at all.
If the desired reaction generates a positive voltage, it will occur spontaneously. That question can be answered by looking up the table of the electrochemical series at, e.g., https://sites.chem.colostate.edu/diverdi/all_courses/CRC%20reference%20data/electrochemical%20series.pdf

The two pertinent half reactions in that table are shown below. By adding them together to get the desired reaction, one finds that the driving force of the desired reaction is +1.176 volts, and so that reaction takes place. The half reactions and math are shown below (the ^ denotes a superscript):

H2O2 + 2H^+ + 2e^- --> 2H2O, +1.776 V
MnO4^-2 + 2H2O + 2e^- --> MnO2 + 4OH^-, +0.60 V

Write the reverse of the first reaction and add it to the second, getting:

MnO4^-2 + 4H2O + 2e^- --> MnO2 + 4OH^- + H2O2 + 2H^+ + 2e^-, 0.60 - 1.776 = -1.176 V

Simplify by combining 2H^+ + 2OH^- → 2H2O and dropping out identical amounts on each side, getting:

MnO4^-2 + 2H2O --> MnO2 + 2OH^- + H2O2, -1.176 V

so the driving force of the desired reverse reaction is +1.176 V, meaning the desired reaction will occur spontaneously.
Erratum to: 'n-tuplet fixed point theorems for contractive type mappings in partially ordered metric spaces' | Journal of Inequalities and Applications | Full Text

Erratum to: 'n-tuplet fixed point theorems for contractive type mappings in partially ordered metric spaces'

Müzeyyen Ertürk & Vatan Karakaya

(1) Page 3, line 21: The statement '(ii) \lim_{r\to t+}\varphi(r)<t for r>0' should be corrected as '(ii) \lim_{r\to t+}\varphi(r)<t for t>0'.

(2) Page 4, line 3: The statement '... Condition 1 is satisfied.' should be rewritten as '... Condition 1 is satisfied and g is continuous.'

The statement '... and using (2.20)' should be corrected as '... and using (2.19)'.

The statement '\le {\delta}_{j(k)+1}+{\delta}_{l(k)+1}+{t}_{k}+n\cdot\varphi\left(\frac{{t}_{k}}{n}\right)' should be corrected as '\le {\delta}_{j(k)+1}+{\delta}_{l(k)+1}+n\cdot\varphi\left(\frac{{t}_{k}}{n}\right)'.

The statement 'From (2.10) and by ...' should be corrected as 'From (2.8) and by ...'.

(6) Page 10, line 26: '... now the assumption (b) holds.' should be corrected as '... now the assumption (ii) holds.'

(7) Page 11, line 20 (line 2 in Corollary 2) and Page 17, line 20 (line 2 in Corollary 4): The statement 'and there exist \varphi \in \Phi such that F' should be deleted.

The statement '\varphi\left(F({x}^{1},{x}^{2},\dots,{x}^{n}),F({y}^{1},{y}^{2},\dots,{y}^{n})\right)' should be corrected as 'd\left(F({x}^{1},{x}^{2},\dots,{x}^{n}),F({y}^{1},{y}^{2},\dots,{y}^{n})\right)'.

The statement '... F has the mixed g-monotone' should be corrected as '... F has the mixed monotone'. That is, 'g-' should be deleted.
(10) Page 17, line 3: The statement '\le \varphi\left(\frac{d(g({x}^{1}),g({y}^{1}))+d(g({x}^{2}),g({y}^{2}))+\cdots+d(g({x}^{n}),g({y}^{n}))}{n}\right)' should be corrected as '\le \varphi\left(\frac{d({x}^{1},{y}^{1})+d({x}^{2},{y}^{2})+\cdots+d({x}^{n},{y}^{n})}{n}\right)'.

(11) Page 17, line 23: The statement '\le \frac{m}{n}\left[d(g({x}^{1}),g({y}^{1}))+d(g({x}^{2}),g({y}^{2}))+\cdots+d(g({x}^{n}),g({y}^{n}))\right]' should be corrected as '\le \frac{m}{n}\left[d({x}^{1},{y}^{1})+d({x}^{2},{y}^{2})+\cdots+d({x}^{n},{y}^{n})\right]'.

(12) Page 4, line 1: '{x}_{1},{x}_{2},{x}_{3},\dots,{x}_{n}\in X' must be '{x}_{1},{x}_{2},{x}_{3},\dots,{x}_{n},{y}_{1},{y}_{2},{y}_{3},\dots,{y}_{n}\in X'.

(13) 'd\left(g({x}_{k}^{n}),g({x}_{k+2}^{n})\right)' must be 'd\left(g({x}_{k+1}^{n}),g({x}_{k+2}^{n})\right)'.

(14) Page 8, line 16: '\le {\delta}_{j(k)+1}+{\delta}_{l(k)+1}+d\left(g({x}_{j(k)+1}^{1}),g({x}_{l(k)+1}^{1})\right)+d\left(g({x}_{j(k)+1}^{2}),g({x}_{l(k)+1}^{2})\right)' must be '\le {\delta}_{j(k)}+{\delta}_{l(k)}+d\left(g({x}_{j(k)+1}^{1}),g({x}_{l(k)+1}^{1})\right)+d\left(g({x}_{j(k)+1}^{2}),g({x}_{l(k)+1}^{2})\right)'.

'... with (2.26)-(2.29)' must be '... with (2.26)-(2.28)'.

'g({x}_{k}^{1})\ge {x}^{1}, g({x}_{k}^{2})\le {x}^{2},\dots, g({x}_{k}^{n})\le {x}^{n} (if n is odd)' must be 'g({x}_{k}^{1})\le {x}^{1}, g({x}_{k}^{2})\ge {x}^{2},\dots, g({x}_{k}^{n})\le {x}^{n} (if n is odd)'.
Ertürk M, Karakaya V: n-tuplet fixed point theorems for contractive type mappings in partially ordered metric spaces. J. Inequal. Appl. 2013, Article ID 196 (2013).

Department of Mathematics, Yildiz Technical University, Davutpasa Campus, Esenler, Istanbul, Turkey
Müzeyyen Ertürk

Department of Mathematical Engineering, Yildiz Technical University, Davutpasa Campus, Esenler, Istanbul, Turkey

Correspondence to Müzeyyen Ertürk.

The authors prepared the article together.

Ertürk, M., Karakaya, V. Erratum to: 'n-tuplet fixed point theorems for contractive type mappings in partially ordered metric spaces'. J Inequal Appl 2013, 368 (2013). https://doi.org/10.1186/1029-242X-2013-368
37. An organ pipe P1 closed at one end vibrating in its first overtone and another pipe P2 open at both ends vibrating in its third overtone are in resonance. Find the ratio of their lengths.

First overtone of closed organ pipe P1 (equal to the 3rd harmonic):

f_1 = \frac{3v}{4L_1}

Here, v is the speed of sound and L_1 is the length of pipe P_1.

Third overtone of open organ pipe P2 (equal to the 4th harmonic):

f_2 = \frac{4v}{2L_2} = \frac{2v}{L_2}

Here, L_2 is the length of pipe P_2.

Equate f_1 and f_2 to find the ratio of lengths (condition of resonance):

\frac{3v}{4L_1} = \frac{2v}{L_2}
\frac{3}{4L_1} = \frac{2}{L_2}
8L_1 = 3L_2
\frac{L_1}{L_2} = \frac{3}{8}
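The derived ratio can be checked numerically: with L1/L2 = 3/8 the two overtone frequencies coincide for any speed of sound. The values of v and L2 below are arbitrary illustrative choices.

```python
# Numerical check of the ratio derived above: with L1/L2 = 3/8, the first
# overtone of the closed pipe matches the third overtone of the open pipe.
v = 340.0          # speed of sound, m/s (assumed value)
L2 = 0.8           # length of the open pipe, m (arbitrary)
L1 = 3 * L2 / 8    # ratio derived above

f1 = 3 * v / (4 * L1)   # first overtone (3rd harmonic) of closed pipe P1
f2 = 2 * v / L2         # third overtone (4th harmonic) of open pipe P2

print(f1, f2)
```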
Kerbin is the home planet of the Kerbals, the location of the Space Center and other facilities, and the main focus of Kerbal Space Program. It is also the Earth analog for the game but, unlike Earth, it has two moons instead of one, named the Mun and Minmus. Kerbin is the third planet in orbit around the star Kerbol. It is the third largest celestial body that orbits Kerbol, following Jool and Eve. Jool's moon Tylo has the same radius as Kerbin, though it may be classified as larger, as the highest point on Tylo is about 5 km higher than the highest point on Kerbin. However, Tylo has only 80% of Kerbin's mass.

Reaching a stable orbit around Kerbin is one of the first milestones a player might achieve in the game. With the introduction of version 1.0.3, attaining low Kerbin orbit requires a Δv of approximately 3400 m/s (vacuum), though the exact amount depends on the efficiency of the ascent profile and the aerodynamics of the launch vehicle and payload. The only rocky planet that requires a higher Δv to attain orbit is Eve. Many interplanetary missions expend over half of their Δv in reaching Kerbin orbit.

The velocity required to escape a body from a given altitude is always exactly the square root of two times the velocity of a circular orbit around the body at that height: \sqrt{2} \cdot v.

Reaching a stable orbit around Kerbin is one of the first things budding space programs strive for. It is said that "those who can get their ship into orbit are halfway to anywhere."

[Images: A Kerbal at Kerbin's highest peak (old); A Kerbal at Kerbin's highest peak (1.12.2)]

Kerbin has a roughly equal distribution of liquid surface water and solid land, with polar ice caps and scattered deserts. Some of its mountains exceed 6 kilometers in height, with the tallest peak being 6767.4 m in altitude at the coordinates 46°21'32" E 61°35'53" N.
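The escape-velocity relation quoted above (v_esc = √2 × v_circ) can be illustrated with a short calculation. The gravitational parameter μ = 3.5316e12 m³/s² and 600 km radius used below are assumed values taken from commonly cited Kerbin specs.

```python
# Circular-orbit and escape velocity at 70 km above Kerbin.
import math

MU_KERBIN = 3.5316e12      # m^3/s^2 (assumed)
R_KERBIN = 600_000         # m (assumed)

altitude = 70_000          # low Kerbin orbit, m
r = R_KERBIN + altitude

v_circ = math.sqrt(MU_KERBIN / r)        # circular orbital velocity
v_esc = math.sqrt(2 * MU_KERBIN / r)     # escape velocity at the same height

print(round(v_circ, 1), round(v_esc, 1))
```

Whatever the body and altitude, the two formulas differ only by the factor √2, which is exactly the relation stated in the text.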
The lowest point is almost 1.4 km deep and about 313° south-west of the Kerbal Space Center.

[Figure: Terrain model centered on Kerbin's most pronounced craters]

Unlike other bodies in its system, Kerbin has few visible craters, because its environment would erode craters from the few meteors that avoid the gravity or surface of its large moon and survive entry. Nevertheless, some geological formations indicate that bodies have violently collided with Kerbin: a planetary feature appears to be an impact crater, while a secondary rupture lies on the other side of the planet (made by the intense longitudinal, or P-wave, earthquakes that ensue). Both are in excess of 100 km in diameter, and the main crater lies along the far-western coastline. The uplift is easily visible as a series of islands, and the feature has a central peak that pokes up through the water (also known as a rebound peak). The other, and smaller of the two, is near the prime meridian in the northern hemisphere and is more easily missed, but its uplift rims are visible, and it also has a central rebound peak.

After the Mun, Kerbin is the celestial body with the second-highest number of biomes. Science experiments can be performed in all biomes, though Kerbin's low multipliers result in less impressive results than on more distant worlds. Kerbin's biomes show a loose correlation with Earth's biomes and geographic features. Uniquely, Kerbin has 33 location biomes at the KSC, comprising each building and its props, the crawlerway, the flag, and the KSC itself; these give a jumpstart to gathering Science points in Career mode. Version 1.2 added distinct Northern and Southern Ice Shelves.

[Figure: Kerbin biome map as of 1.2]

KSC location biomes include:
- Administration Grounds
- AstronautComplex Grounds
- LaunchPad Water Tower
- LaunchPad Flag Pole
- LaunchPad Round Tank
- LaunchPad Tanks
- MissionControl Grounds
- VAB South Complex (only exists when the VAB is Level 2)

[Figure: Temperature and pressure of Kerbin's atmosphere as a function of altitude]
Drag force is proportional to atmospheric density.

[Figure: Measures of Kerbin's atmospheric pressure]

Kerbin has a thick, warm atmosphere with a mass of approximately 4.7×10^16 kilograms, a sea-level pressure of 101.325 kilopascals (1 atmosphere), and a depth of 70,000 meters. The atmosphere contains oxygen and can support combustion. Kerbin is one of the two celestial bodies (the other being Laythe) with a breathable atmosphere. The average molecular weight of Kerbin air is 28.9644 g/mol, and its adiabatic index is 1.40. This suggests that Kerbin likely has an Earth-like nitrogen-oxygen atmosphere. The air-fuel ratio of jet engines operating in Kerbin's atmosphere suggests that the percentage of oxygen is similar to that of Earth's atmosphere (about 21%).

Like all other atmospheres in the game, Kerbin's atmosphere fades exponentially as altitude increases. The scale height varies with altitude, which is a change from pre-1.0 versions of the game. The pressure-altitude profile is globally constant and independent of temperature. The following table gives the atmospheric pressure and density at various altitudes above sea level. The temperature-altitude profile is not globally constant, therefore neither is the density-altitude profile; however, the variance is slight.

Kerbin's atmosphere can be divided into three major layers, comparable to Earth's troposphere, stratosphere and mesosphere. In the lower and upper layers, temperature decreases as altitude increases, while the middle layer spans a region of increasing temperature. The boundary between the lower and middle layers occurs at an altitude of about 16 km at low latitudes, and about 9 km at high latitudes. The boundary between the middle and upper layers occurs at an altitude of about 38 km. Air temperatures vary with latitude and time of day. At the equator, sea-level temperatures vary between a nighttime low of 32 °C and a daytime high of 41 °C.
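The exponential falloff described above follows the barometric formula p = p0 · exp(−z/H). In the game H itself varies with altitude, so the constant scale height H = 5600 m below is a simplifying assumption for illustration only.

```python
# Sketch of exponential atmospheric pressure falloff (constant scale height
# is an assumed simplification; in-game the scale height varies with altitude).
import math

P0 = 101.325e3   # sea-level pressure, Pa (from the text)
H = 5600.0       # assumed constant scale height, m

def pressure(z):
    """Approximate atmospheric pressure (Pa) at altitude z meters."""
    return P0 * math.exp(-z / H)

for z in (0, 10_000, 30_000, 70_000):
    print(z, round(pressure(z), 3))
```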
At the poles, the temperature varies between -35 °C and -30 °C. The globally averaged sea-level temperature is approximately 13.5 °C. Since Kerbin has no axial tilt, there are no seasonal temperature variations.

The atmosphere of Kerbin is patterned after Earth's U.S. Standard Atmosphere (USSA), though with the vertical height scale reduced by 20%. Kerbin's "base" temperature and atmospheric pressure can be very closely approximated using the equations of the USSA, where Kerbin's geometric altitude, z, is converted to Earth's geopotential altitude, h (both in kilometers), using the equation:

h = \frac{7963.75 \cdot z}{6371 + 1.25 \cdot z}

The base temperature is the temperature less latitudinal and diurnal adjustments; it is roughly equal to the global mean temperature.

The thickness of Kerbin's atmosphere makes it well suited for aerobraking from a high-speed interplanetary intercept. The periapsis altitude required for a successful aerocapture depends on the spacecraft's drag characteristics, its approach velocity, and the desired apoapsis of the resulting orbit. For most conditions, a periapsis altitude of about 30 km should result in an aerocapture. Parachutes perform well in Kerbin's dense air, allowing landings on both land and water to be accomplished without the aid of propulsion. Because of the presence of oxygen, jet engines can operate in Kerbin's atmosphere, and together with its thickness, Kerbin's atmosphere is ideally suited for aircraft flight.

[Images: A modified Orbiter 1A prepares to dock with a space station; A Sputnik-derived satellite]

A synchronous orbit is achieved with a semi-major axis of 3 463.33 km. Kerbisynchronous Equatorial Orbit (KEO) (or geostationary) has a circularly uniform altitude of 2 863.33 km and a speed of 1 009.81 m/s. From a 70 km low equatorial orbit, the periapsis maneuver requires 676.5 m/s and the apoapsis maneuver requires 434.9 m/s.
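The geometric-to-geopotential conversion quoted above can be implemented directly; z and h are in kilometers, consistent with the Earth-radius constant 6371 in the formula.

```python
# Kerbin geometric altitude -> Earth geopotential altitude, per the formula
# h = 7963.75*z / (6371 + 1.25*z), with z and h in kilometers.
def geopotential_altitude(z_km):
    """Convert Kerbin geometric altitude to Earth geopotential altitude."""
    return 7963.75 * z_km / (6371 + 1.25 * z_km)

for z in (0, 10, 40, 70):
    print(z, round(geopotential_altitude(z), 2))
```

At the 70 km top of Kerbin's atmosphere this gives roughly 86 km, which matches the 20% height-scale reduction relative to the top of the USSA profile mentioned in the text.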
A synchronous Tundra orbit with an eccentricity of 0.2864 and an inclination of 63 degrees is achieved at 3799.7 km by 1937.7 km. Inclination correlates with eccentricity: more highly inclined orbits need to be more eccentric, while an equatorial orbit may be circular, essentially KEO. A semi-synchronous orbit with an orbital period of half of Kerbin's rotation period (2 h 59 m 34.7 s, or 10774.7 seconds) is achieved at an altitude of 1 581.76 km with an orbital velocity of 1 272.28 m/s. A semi-synchronous Molniya orbit with an eccentricity of 0.742 [1] and an inclination of 63 degrees cannot be achieved, because the periapsis would be 36 km below the ground. The highest eccentricity of a semi-synchronous orbit with a periapsis of 70 km is 0.693, with an apoapsis of 3100.36 km.

Delta-V Requirements

From the lowest possible stable orbit around Kerbin (70 km), the nominal amount of delta-v needed to reach other destinations is:
- Geostationary Orbit: ~1120 m/s
- Dres: ~1540 m/s

[Images: Kerbin, Mun and Minmus; Kerbin and the Mun, barely visible from ~500,550,000 m; An unmanned probe on an escape trajectory, flying past Kerbin, Mun and Minmus; Kerbin's South Pole; Kerbin and Minmus seen from the surface of the Mun; Kerbin, Mun, and Minmus in a line, seen from a Minmus landing craft; A crashed UFO on Kerbin's northern ice cap]

Version history:
- KSC's grass now changes according to the currently set terrain shader quality.
- Kerbin actually spun up to have a 6 h synodic day.
- Updated biomes. Minmus was added. Much more varied and taller terrain was added. Prior to this, some mountain ranges exceeded 600 m in height, but the tallest point was at an altitude of approximately 900 meters.
- Mun added.

Kerbin's continents are derived from libnoise [3], a coherent noise-generating library, though they have been increasingly modified over time. Before 0.90, Kerbin was one of the few bodies with multiple biomes. Following the 0.90 update, all celestial bodies have biomes.
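The synchronous-orbit figures quoted above follow from Kepler's third law, a = (μT²/4π²)^(1/3). Kerbin's gravitational parameter, radius, and sidereal rotation period below are assumed values from commonly cited game specs.

```python
# Deriving the Kerbisynchronous orbit from Kepler's third law.
import math

MU = 3.5316e12          # m^3/s^2 (assumed)
RADIUS = 600_000        # m (assumed)
T_ROT = 21_549.425      # sidereal rotation period, s (assumed)

def synchronous_sma(mu, period):
    """Semi-major axis of an orbit with the given orbital period."""
    return (mu * period**2 / (4 * math.pi**2)) ** (1 / 3)

a = synchronous_sma(MU, T_ROT)          # semi-major axis, ~3463.33 km
keo_altitude = a - RADIUS               # altitude above the surface, ~2863.33 km
print(round(a / 1000, 2), round(keo_altitude / 1000, 2))
```

The same function with half the rotation period reproduces the semi-synchronous altitude quoted in the text.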
The biomes on Kerbin as of 0.90. The quote from the Kerbal Astronomical Society is true: as in real life, in Kerbal Space Program roughly half the delta-v of a one-way interplanetary mission is expended getting into orbit.
Qian, Cunhua; Pan, Yu; Nakagawa, Toshio This paper considers two backup schemes for a database system: the database is updated according to a nonhomogeneous Poisson process, and the amount of updated files accumulates additively. To ensure the safety of the data, full backups are performed at time NT or when the total amount of updated files exceeds a threshold level K; between them, cumulative backups, a form of incremental backup, are made at periodic times iT (i = 1, 2, ..., N-1). Using the theory of cumulative processes, the expected cost is obtained, and the optimal number {N}^{*} of cumulative backups and the optimal level {K}^{*} of updated files which minimize it are discussed analytically. Examples show that the optimal number and level can be computed numerically when the two costs of the backup schemes are given. Keywords: database, full backup, cumulative backup, cumulative process, expected cost Qian, Cunhua; Pan, Yu; Nakagawa, Toshio. Optimal policies for a database system with two backup schemes. RAIRO - Operations Research - Recherche Opérationnelle, Tome 36 (2002) no. 3, pp. 227-235. doi : 10.1051/ro:2003004. http://www.numdam.org/articles/10.1051/ro:2003004/
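The backup policy described in the abstract is straightforward to simulate. The sketch below is illustrative only: it uses a homogeneous Poisson update process and exponential file sizes for simplicity (the paper treats a nonhomogeneous process and derives the expected cost analytically), and all parameter values are invented:

```python
import random

def simulate_cycle(T=1.0, N=5, K=50.0, rate=10.0, mean_size=1.0, seed=1):
    """Simulate one full-backup cycle: updates arrive as a Poisson process,
    each adding a random amount of updated files; cumulative backups occur
    at times i*T (i = 1, ..., N-1); a full backup triggers at time N*T or
    as soon as the accumulated amount exceeds the threshold K."""
    rng = random.Random(seed)
    t, total = 0.0, 0.0
    while True:
        t += rng.expovariate(rate)                 # next update epoch
        cumulative_backups = min(int(t / T), N - 1)
        if t >= N * T:
            return "time", N * T, cumulative_backups   # full backup at N*T
        total += rng.expovariate(1.0 / mean_size)  # amount of updated files
        if total > K:
            return "level", t, cumulative_backups      # full backup at level K

trigger, when, incs = simulate_cycle()
```

Averaging the cost of the incremental and full backups over many such simulated cycles would approximate the expected cost that the paper minimizes over N and K.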
Use the Distributive Property to simplify the following expressions. The Distributive Property involves multiplying the number outside the parentheses by each individual value inside the parentheses and then combining the terms. 4(x+2) = 4x + 8; -5(9+x) = -45 - 5x; 7(x-3) = 7x - 21.
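The pattern a(bx + c) = (ab)x + ac can be checked mechanically. A small sketch (the function name is ours) that returns the expanded coefficients:

```python
def distribute(a, b, c):
    """Expand a*(b*x + c) into coefficients ((a*b), (a*c)) of x and 1."""
    return a * b, a * c

# the three exercises above, each written as a*(b*x + c):
print(distribute(4, 1, 2))    # 4(x+2)  -> (4, 8),    i.e. 4x + 8
print(distribute(-5, 1, 9))   # -5(9+x) -> (-5, -45), i.e. -45 - 5x
print(distribute(7, 1, -3))   # 7(x-3)  -> (7, -21),  i.e. 7x - 21
```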
Create inverted-L antenna over rectangular ground plane - MATLAB - MathWorks 한국 Create and View Inverted-L Antenna Radiation Pattern of Inverted-L Antenna Create inverted-L antenna over rectangular ground plane The invertedL object is an inverted-L antenna mounted over a rectangular ground plane. w = 2d = 4a, where d = diameter of the equivalent cylinder and a = radius of the equivalent cylinder. For a given cylinder radius, use the cylinder2strip utility function to calculate the equivalent width. The default inverted-L antenna is center-fed. The feed point coincides with the origin. The origin is located on the xy-plane. l = invertedL l = invertedL(Name,Value) l = invertedL creates an inverted-L antenna mounted over a rectangular ground plane. By default, the dimensions are chosen for an operating frequency of 1.7 GHz. l = invertedL(Name,Value) creates an inverted-L antenna with additional properties specified by one or more name-value pair arguments. Name is the property name and Value is the corresponding value. You can specify several name-value pair arguments in any order as Name1, Value1, ..., NameN, ValueN. Properties not specified retain their default values. Height — Height of inverted element along z-axis Height of the inverted element along the z-axis, specified as a scalar in meters. Length — Stub length along x-axis Stub length along the x-axis, specified as a scalar in meters. Ground plane length along the x-axis, specified as a scalar in meters. Setting 'GroundPlaneLength' to Inf uses the infinite ground plane technique for antenna analysis. Example: l.Load = lumpedElement('Impedance',75) Create and view an inverted-L antenna that has 30 mm length over a ground plane of dimensions 200 mm x 200 mm. il = invertedL('Length',30e-3, 'GroundPlaneLength',200e-3,... 'GroundPlaneWidth',200e-3); show(il) Plot the radiation pattern of the inverted-L antenna at a frequency of 1.7 GHz. pattern(il,1.7e9) invertedF | pifa | patchMicrostrip | cylinder2strip
Consider the points P\left(2,-3,4\right), Q\left(1,2,-3\right), R\left(5,4,7\right), and S\left(6,-5,-1\right). Figure 1.5.8(a) displays the points P, Q, R, S, the plane containing Q, R, and S, and the vectors \mathbf{A}=\mathbf{R}-\mathbf{Q}=\left[\begin{array}{c}5\\ 4\\ 7\end{array}\right]-\left[\begin{array}{c}1\\ 2\\ -3\end{array}\right]=\left[\begin{array}{c}4\\ 2\\ 10\end{array}\right], \mathbf{B}=\mathbf{S}-\mathbf{Q}=\left[\begin{array}{c}6\\ -5\\ -1\end{array}\right]-\left[\begin{array}{c}1\\ 2\\ -3\end{array}\right]=\left[\begin{array}{c}5\\ -7\\ 2\end{array}\right], \mathbf{C}=\mathbf{P}-\mathbf{Q}=\left[\begin{array}{c}2\\ -3\\ 4\end{array}\right]-\left[\begin{array}{c}1\\ 2\\ -3\end{array}\right]=\left[\begin{array}{c}1\\ -5\\ 7\end{array}\right], where P, Q, R, and S are respectively the position vectors to points P, Q, R, and S. (Figure 1.5.8(a): Points P, Q, R, S, and vectors A, B, C.) The distance from point P to the plane containing points Q, R, and S is given by \frac{\left|\left[\mathbf{ABC}\right]\right|}{∥\mathbf{A}×\mathbf{B}∥}. The triple scalar product is \left[\mathbf{ABC}\right]=\left|\begin{array}{ccc}4& 2& 10\\ 5& -7& 2\\ 1& -5& 7\end{array}\right|=-402, and the cross product is \mathbf{A}×\mathbf{B}=\left|\begin{array}{ccc}\mathbf{i}& \mathbf{j}& \mathbf{k}\\ 4& 2& 10\\ 5& -7& 2\end{array}\right|=\left[\begin{array}{c}74\\ 42\\ -38\end{array}\right], so that ∥\mathbf{A}×\mathbf{B}∥=\sqrt{{74}^{2}+{42}^{2}+{\left(-38\right)}^{2}}=\sqrt{8684}=2\sqrt{2171}. Hence the requisite distance is \frac{\left|-402\right|}{2\sqrt{2171}}=\frac{201}{\sqrt{2171}}≐4.31. Define the position vectors P, Q, R, and S. Enter P as per Table 1.1.1. 〈2,-3,4〉 \stackrel{\text{assign to a name}}{\to } \textcolor[rgb]{0,0,1}{P} Enter Q as per Table 1.1.1. 〈1,2,-3〉 \stackrel{\text{assign to a name}}{\to } \textcolor[rgb]{0,0,1}{Q} Enter R as per Table 1.1.1. 〈5,4,7〉 \stackrel{\text{assign to a name}}{\to } \textcolor[rgb]{0,0,1}{R} Enter S as per Table 1.1.1. 
〈6,-5,-1〉 \stackrel{\text{assign to a name}}{\to } \textcolor[rgb]{0,0,1}{S} By subtraction, obtain the vectors A, B, and C \mathbf{A}=\mathbf{R}-\mathbf{Q} \stackrel{\text{assign}}{\to } \mathbf{B}=\mathbf{S}-\mathbf{Q} \stackrel{\text{assign}}{\to } \mathbf{C}=\mathbf{P}-\mathbf{Q} \stackrel{\text{assign}}{\to } Apply the appropriate distance formula from Table 1.5.1 Common Symbols palette: Dot- and cross-product operators \mathbf{A}·\left(\mathbf{B}×\mathbf{C}\right) for the triple scalar product \left[\mathbf{ABC}\right] Keyboard vertical strokes for absolute values and norms. |\frac{\mathbf{A}·\left(\mathbf{B}×\mathbf{C}\right)}{∥\mathbf{A}×\mathbf{B}∥}| \frac{\textcolor[rgb]{0,0,1}{201}\textcolor[rgb]{0,0,1}{⁢}\sqrt{\textcolor[rgb]{0,0,1}{2171}}}{\textcolor[rgb]{0,0,1}{2171}} \stackrel{\text{at 5 digits}}{\to } \textcolor[rgb]{0,0,1}{4.3139} \mathrm{with}\left(\mathrm{Student}:-\mathrm{MultivariateCalculus}\right): Define the position vectors P, Q, R, and S. \mathbf{P},\mathbf{Q},\mathbf{R},\mathbf{S}≔〈2,-3,4〉,〈1,2,-3〉,〈5,4,7〉,〈6,-5,-1〉: Define the vectors A, B, C by subtraction. \mathbf{A},\mathbf{B},\mathbf{C}≔\mathbf{R}-\mathbf{Q},\mathbf{S}-\mathbf{Q},\mathbf{P}-\mathbf{Q}: Apply the abs, BoxProduct, Norm, and CrossProduct commands. \mathrm{abs}\left(\mathrm{BoxProduct}\left(\mathbf{A},\mathbf{B},\mathbf{C}\right)\right)/\mathrm{Norm}\left(\mathrm{CrossProduct}\left(\mathbf{A},\mathbf{B}\right)\right) \frac{\textcolor[rgb]{0,0,1}{201}\textcolor[rgb]{0,0,1}{⁢}\sqrt{\textcolor[rgb]{0,0,1}{2171}}}{\textcolor[rgb]{0,0,1}{2171}} Solution from first principles Apply the abs command (for absolute value). Apply the DotProduct, CrossProduct, and Norm commands. Press the Enter key. Apply the evalf command to obtain a floating-point (decimal) approximation. 
d≔\frac{\mathrm{abs}\left(\mathrm{DotProduct}\left(\mathbf{A},\mathrm{CrossProduct}\left(\mathbf{B},\mathbf{C}\right)\right)\right)}{\mathrm{Norm}\left(\mathrm{CrossProduct}\left(\mathbf{A},\mathbf{B}\right)\right)} \frac{\textcolor[rgb]{0,0,1}{201}}{\textcolor[rgb]{0,0,1}{2171}}\textcolor[rgb]{0,0,1}{⁢}\sqrt{\textcolor[rgb]{0,0,1}{2171}} \mathrm{evalf}\left(d\right) \textcolor[rgb]{0,0,1}{4.313860984}
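The same computation can be reproduced outside Maple. A minimal pure-Python sketch of the distance calculation, using the points and formula from the worked example above:

```python
def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

P, Q, R, S = [2, -3, 4], [1, 2, -3], [5, 4, 7], [6, -5, -1]
A, B, C = sub(R, Q), sub(S, Q), sub(P, Q)

# distance = |A . (B x C)| / ||A x B||
d = abs(dot(A, cross(B, C))) / dot(cross(A, B), cross(A, B)) ** 0.5
print(round(d, 4))  # 4.3139
```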
Let f\left(x\right)={x}^{2} and g\left(x\right)=8-{\left(\frac{x}{4}\right)}^{2}, with x\ge 0. At x=1, the line tangent to the graph of y=f\left(x\right) at the point \left(1,f\left(1\right)\right) is y=\frac{\mathrm{df}}{\mathrm{dx}}\Big|_{x=1}\left(x-1\right)+f\left(1\right)=2 x-1. The intersection of this tangent line with the graph of g is obtained by solving the equations y=2 x-1 and y=g\left(x\right) for the two points \left(-36,-73\right) and \left(4,7\right). Because of the restriction x\ge 0, only the second solution is accepted. local p1,p2,p3,T; T:=RootedVector(root=[1,1],<1,2>); p1:=PlotVector(T); p2:=plot([x^2,8-(x/4)^2,2*x-1],x=0..5,y=0..8,color=[red,blue,green]); p3:=display(p1,p2,scaling=constrained); Graphs of f, g, \mathbf{T}, and the tangent line (green). The vector from \left(1,f\left(1\right)\right) to \left(4,7\right) is \mathbf{V}=\left[\begin{array}{c}4\\ 7\end{array}\right]-\left[\begin{array}{c}1\\ 1\end{array}\right]=\left[\begin{array}{c}3\\ 6\end{array}\right]=3\left[\begin{array}{c}1\\ 2\end{array}\right]. If \mathbf{R}=\left[\begin{array}{c}x\\ {x}^{2}\end{array}\right] is the radius-vector form of the curve defined by the graph of f, then \mathbf{R}\prime \left(1\right)=\left[\begin{array}{c}1\\ 2\end{array}\right]. Clearly, this tangent vector has the same direction as \mathbf{V}, since these two vectors are proportional. Figure 2.3.9(a) shows the graph of f in red, the graph of g in blue, and the tangent line in green, along with the vector \mathbf{R}\prime \left(1\right). f\left(x\right)={x}^{2} \stackrel{\text{assign as function}}{\to } \textcolor[rgb]{0,0,1}{f} g\left(x\right)=8-{\left(x/4\right)}^{2} \stackrel{\text{assign as function}}{\to } \textcolor[rgb]{0,0,1}{g} Implement the point-slope form of the tangent line and press the Enter key. 
y=f\prime \left(1\right) \left(x-1\right)+f\left(1\right) \textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{⁢}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{1} Write the sequence of equations to be solved and press the Enter key. y=2 x-1,y=g\left(x\right) \textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{⁢}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{8}\textcolor[rgb]{0,0,1}{-}\frac{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{16}}\textcolor[rgb]{0,0,1}{⁢}{\textcolor[rgb]{0,0,1}{x}}^{\textcolor[rgb]{0,0,1}{2}} \stackrel{\text{solve}}{\to } \left\{\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{36}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{73}\right\}\textcolor[rgb]{0,0,1}{,}\left\{\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{7}\right\} \mathbf{V}=\dots \mathbf{V}=\left[\begin{array}{c}4\\ 7\end{array}\right]-\left[\begin{array}{c}1\\ 1\end{array}\right] \stackrel{\text{assign}}{\to } \mathbf{V} \mathbf{V} \left[\begin{array}{r}\textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{6}\end{array}\right] \mathbf{R}=〈x,f\left(x\right)〉 \stackrel{\text{assign}}{\to } Context Panel: Assign to a Name≻dR1 \genfrac{}{}{0}{}{\frac{ⅆ}{ⅆ x} \mathbf{R}}{\phantom{x=a}}|\genfrac{}{}{0}{}{\phantom{\mathrm{f\left(x\right)}}}{x=1} \left[\begin{array}{r}\textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{2}\end{array}\right] \stackrel{\text{assign to a name}}{\to } \textcolor[rgb]{0,0,1}{\mathrm{dR1}} Write V and 3 dR1, and press the Enter key. 
Since these two vectors are obviously equal, V and dR1 are proportional. \mathbf{V},3 \mathbf{dR1} \left[\begin{array}{r}\textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{6}\end{array}\right]\textcolor[rgb]{0,0,1}{,}\left[\begin{array}{r}\textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{6}\end{array}\right] Write dR1. Context Panel: Plots: Arrow from point x=1,y=1 \mathbf{dR1} \left[\begin{array}{r}\textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{2}\end{array}\right] \stackrel{\text{plot arrow}}{\to } Write the sequence of three functions shown to the right. 0≤x≤5 Copy and paste the tangent vector. f\left(x\right),g\left(x\right),2 x-1 {\textcolor[rgb]{0,0,1}{x}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{8}\textcolor[rgb]{0,0,1}{-}\frac{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{16}}\textcolor[rgb]{0,0,1}{⁢}{\textcolor[rgb]{0,0,1}{x}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{⁢}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{1} \to f f≔x→{x}^{2}: g g≔x→8-{\left(x/4\right)}^{2}: Obtain, at x=1 , the equation of the line tangent to the graph of y=\mathrm{D}\left(f\right)\left(1\right)\cdot \left(x-1\right)+1 \textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{⁢}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{1} Obtain the intersection of the graph of g and the line tangent to the graph of \mathrm{solve}\left(\left\{y=2 x-1,y=g\left(x\right)\right\}\right) 
\left\{\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{36}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{73}\right\}\textcolor[rgb]{0,0,1}{,}\left\{\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{7}\right\} Obtain the vector from the point \left(1,f\left(1\right)\right) \left(4,7\right) , the point of intersection of the line tangent to the graph of g \mathbf{V}≔〈4,7〉-〈1,f\left(1\right)〉 \left[\begin{array}{r}\textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{6}\end{array}\right] 〈x,f\left(x\right)〉 for the position-vector representation of the graph of . Then use the map command to apply the diff command to each component of the vector. Finally, use the eval command to evaluate the tangent vector at x=1 \mathbf{dR}≔\mathrm{eval}\left(\mathrm{map}\left(\mathrm{diff},〈x,f\left(x\right)〉,x\right),x=1\right) \left[\begin{array}{r}\textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{2}\end{array}\right] Use the Equal command from the LinearAlgebra package to show that \mathbf{V}=3 \mathbf{R}\prime \mathrm{LinearAlgebra}:-\mathrm{Equal}\left(3\cdot \mathbf{dR},\mathbf{V}\right) \textcolor[rgb]{0,0,1}{\mathrm{true}} {p}_{1}≔\mathrm{plot}\left(\left[f\left(x\right),g\left(x\right),2 x-1\right],x=0..5,y=0..8,\mathrm{color}=\left[\mathrm{red},\mathrm{blue},\mathrm{green}\right]\right):\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathbf{T}≔\mathrm{VectorCalculus}:-\mathrm{RootedVector}\left(\mathrm{root}=\left[1,f\left(1\right)\right],\mathbf{dR}\right):\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}{p}_{2}≔\mathrm{VectorCalculus}:-\mathrm{PlotVector}\left(\mathbf{T}\right):\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{plots}:-\mathrm{display}\left({p}_{1},{p}_{2},\mathrm{scaling}=\mathrm{constrained}\right)
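The tangent-line intersection from this example can also be verified numerically. The quadratic below comes from equating y = 2x - 1 with g(x) = 8 - (x/4)^2 and clearing denominators (a sketch, not part of the Maple worksheet):

```python
import math

# 2x - 1 = 8 - x^2/16  <=>  x^2 + 32x - 144 = 0
a, b, c = 1, 32, -144
disc = math.sqrt(b * b - 4 * a * c)            # sqrt(1600) = 40
roots = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])
points = [(x, 2 * x - 1) for x in roots]
print(points)   # [(-36.0, -73.0), (4.0, 7.0)]; the restriction x >= 0 keeps only (4, 7)
```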
Markov Chain - Monte-Carlo Essential properties of Markov chains In the setup of MCMC algorithms, we construct Markov chains from a transition kernel K, a conditional probability density such that X_{n+1}\sim K(X_n,X_{n+1}). The chains encountered in MCMC settings enjoy a very strong stability property, namely a stationary probability distribution: a distribution \pi such that if X_n\sim\pi, then X_{n+1}\sim \pi, provided the kernel K allows for free moves all over the state space. This freedom is called irreducibility in the theory of Markov chains and is formalized as the existence, for every set A with \pi(A)>0, of some n\in\mathbb{N} such that P(X_n\in A\mid X_0)>0. This property also ensures that most of the chains involved in MCMC algorithms are recurrent (that is, that the average number of visits to an arbitrary set A is infinite), or even Harris recurrent (that is, such that the probability of an infinite number of returns to A is one). Harris recurrence ensures that the chain has the same limiting behavior for every starting value, instead of almost every starting value. The stationary distribution is also a limiting distribution, in the sense that X_{n+1} converges to \pi under the total variation norm, notwithstanding the initial value X_0. Stronger forms of convergence are also encountered in MCMC settings, like geometric and uniform convergence. If the marginals are proper, for convergence we only need the chain to be aperiodic; a sufficient condition is that K(x_n,\cdot)>0 (or f(\cdot\mid x_n)>0) in a neighborhood of x_n. If the marginals are not proper, or if they do not exist, then the chain is not positive recurrent; it is either null recurrent or transient, and both cases are bad. The detailed balance condition is not necessary for f to be a stationary measure associated with the transition kernel K, but it provides a sufficient condition that is often easy to check and that can be used for most MCMC algorithms.
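As an illustration of a kernel built to satisfy detailed balance, here is a minimal random-walk Metropolis sketch; the target, proposal scale, and run length are illustrative assumptions, not taken from the text:

```python
import math
import random

def metropolis(log_pi, x0, n, scale, seed=0):
    """Random-walk Metropolis: the accept/reject step enforces detailed
    balance with respect to pi, so pi is a stationary distribution."""
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n):
        y = x + rng.gauss(0.0, scale)
        if math.log(rng.random()) < log_pi(y) - log_pi(x):
            x = y              # accept the proposed move
        chain.append(x)        # on rejection the chain stays put
    return chain

# target pi = N(0, 1); the chain's long-run mean and variance approach 0 and 1
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 20000, scale=2.0)
```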
Ergodicity: independence of initial conditions. Geometrically h-ergodic: decreasing at least at a geometric speed. Uniform ergodicity: stronger than geometric ergodicity, in the sense that the rate of geometric convergence must be uniform over the whole space. Irreducibility + aperiodicity = ergodicity? The finite chain is indeed irreducible, since it is possible to connect the states x and y in \vert x-y\vert steps with probability \prod_{i=x\land y}^{x\vee y-1}\Big(\frac{M-i}{M}\Big)^2\,. The Bernoulli-Laplace chain is aperiodic, and even strongly aperiodic, since the diagonal terms satisfy P_{xx}>0 for all x\in \{0,\ldots,K\}. Given the quasi-diagonal shape of the transition matrix, it is possible to directly determine the invariant distribution, \pi=(\pi_0,\ldots,\pi_K). From the equation \pi P = \pi, \begin{aligned} \pi_0 &= P_{00}\pi_0 + P_{10}\pi_1\\ \pi_1 &= P_{01}\pi_0 + P_{11}\pi_1 + P_{21}\pi_2\\ \cdots &=\cdots\\ \pi_K &= P_{(K-1)K}\pi_{K-1} + P_{KK}\pi_K\,, \end{aligned} so that \pi_k=\binom{K}{k}^2\pi_0\,,\qquad k=0,\ldots,K\,, and, through normalization, \pi_k=\frac{\binom{K}{k}^2}{\binom{2K}{K}}\,, by using the Chu-Vandermonde identity \binom{m+n}{r}=\sum_{k=0}^r\binom{m}{k}\binom{n}{r-k} with m=n=r=K. It turns out that the hypergeometric distribution H(2K,K,1/2) is the invariant distribution for the Bernoulli-Laplace model. A simple illustration of Markov chains on a continuous state space is the AR(1) model: X_n = \theta X_{n-1}+\varepsilon_n\,,\ \theta\in \mathrm{I\!R}\,, with \varepsilon_n\sim N(0,\sigma^2); if the \varepsilon_n 's are independent, X_n is indeed independent of X_{n-2},X_{n-3},\ldots conditionally on X_{n-1}. The Markovian properties of an AR(q) model can be derived from the vector (X_n,\ldots,X_{n-q+1}), while an ARMA(p, q) model doesn't fit directly in the Markovian framework.
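Back to the Bernoulli-Laplace model: the normalization via the Chu-Vandermonde identity can be checked numerically, since with m = n = r = K the squared binomials sum to \binom{2K}{K} (a quick sketch; K = 5 is an arbitrary choice):

```python
from math import comb

K = 5
# invariant distribution pi_k = C(K,k)^2 / C(2K,K), the hypergeometric H(2K, K, 1/2)
pi = [comb(K, k) ** 2 / comb(2 * K, K) for k in range(K + 1)]
total = sum(pi)   # equals 1 up to floating-point rounding, by Chu-Vandermonde
```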
Since X_n\mid x_{n-1}\sim N(\theta x_{n-1},\sigma^2), consider the lower bound of the transition kernel (assuming \theta > 0): \begin{aligned} K(x_{n-1},x_n) &= \frac{1}{\sqrt{2\pi}\sigma}\exp\Big\{-\frac{1}{2\sigma^2}(x_n-\theta x_{n-1})^2\Big\}\\ &\ge \frac{1}{\sqrt{2\pi}\sigma}\exp\Big\{-\frac{1}{2\sigma^2}\max\{(x_n-\theta \underline w)^2, (x_n-\theta \bar w)^2\}\Big\}\\ &\ge \frac{1}{\sqrt{2\pi}\sigma}\exp\Big\{-\frac{1}{2\sigma^2}\Big[ \max\{-2\theta x_n\underline w,-2\theta x_n\bar w\}+x_n^2 + \theta^2\underline w^2\land \bar w^2 \Big]\Big\}\\ &=\begin{cases} \frac{1}{\sqrt{2\pi}\sigma}\exp\Big\{-\frac{1}{2\sigma^2}\Big[ -2\theta x_n\underline w+x_n^2 + \theta^2\underline w^2\land \bar w^2 \Big]\Big\}& \text{if }x_n>0\\ \frac{1}{\sqrt{2\pi}\sigma}\exp\Big\{-\frac{1}{2\sigma^2}\Big[ -2\theta x_n\bar w+x_n^2 + \theta^2\underline w^2\land \bar w^2 \Big]\Big\}& \text{if }x_n<0 \end{cases}\,, \end{aligned} for x_{n-1}\in[\underline w, \bar w]. Therefore C = [\underline w, \bar w] is a small set, since the measure \nu_1 given by \frac{\exp\{(-x^2+2\theta x\underline w)/2\sigma^2\}I_{x>0} + \exp\{(-x^2+2\theta x\bar w)/2\sigma^2\}I_{x<0} }{\sqrt{2\pi}\sigma\{[1-\Phi(-\theta\underline w/\sigma)]\exp(\theta^2\underline w^2/2\sigma^2)+\Phi(-\theta\bar w/\sigma)\exp(\theta^2\bar w^2/2\sigma^2)\}}\,, satisfies K^1(x,A)\ge \nu_1(A),\qquad \forall x\in C, \forall A\in {\cal B(X)}\,. Given that the transition kernel corresponds to the N(\theta x_{n-1},\sigma^2) distribution, a normal distribution N(\mu,\tau^2) is stationary for the AR(1) chain only if \mu=\theta\mu\qquad\text{and}\qquad \tau^2=\tau^2\theta^2+\sigma^2\,. Thus \mu=0 and \tau^2=\sigma^2/(1-\theta^2), which can only occur for \vert \theta\vert < 1, and N(0,\sigma^2/(1-\theta^2)) is indeed the unique stationary distribution of the AR(1) chain. If \phi doesn't have a constant term, i.e., if P(X_1=0)=0, then the chain S_t is necessarily transient, since it is increasing. 
If P(X_1=0)>0, the probability of a return to 0 at time t is \rho(t)=P(S_t=0)=g_t(0), which thus satisfies the recurrence equation \rho_t=\phi(\rho_{t-1}). There exists a limit \rho different from 1 such that \rho=\phi(\rho) iff \phi'(1)>1, namely iff E[X]>1. The chain is thus transient when the average number of siblings per individual is larger than 1. If there exists a restarting mechanism at 0, i.e., S_{t+1}\mid S_t=0\sim\phi, it is easily shown that when \phi'(1)>1, the number of returns to 0 follows a geometric distribution with parameter \rho. When \phi'(1)\le 1, the chain is recurrent. Simulation of Exp-Abs-xy
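Returning to the AR(1) example above, the stationary law N(0, \sigma^2/(1-\theta^2)) is easy to confirm by simulation (a sketch; \theta, \sigma, and the run length are illustrative choices):

```python
import random

def ar1_path(theta, sigma, n, seed=0):
    """Simulate X_n = theta * X_{n-1} + eps_n with eps_n ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    x, path = 0.0, []
    for _ in range(n):
        x = theta * x + rng.gauss(0.0, sigma)
        path.append(x)
    return path

theta, sigma = 0.5, 1.0
path = ar1_path(theta, sigma, 100000)
var = sum(v * v for v in path) / len(path)
# var should be close to sigma^2 / (1 - theta^2) = 4/3
```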
System of linear equations - Simple English Wikipedia, the free encyclopedia A linear system in three variables determines a collection of planes (one plane for each equation). The intersection point is the solution. Mathematicians show the relationship between different factors in the form of equations. "Linear equations" mean the variable appears only once in each equation, without being raised to a power. A "system" of linear equations means that all of the equations are true at the same time. So, the person solving the system of equations is looking for the values of each variable that will make all of the equations true at the same time. If no such values can satisfy all of the equations in the system, then the equations are called "inconsistent." For example, the system {\displaystyle {\begin{alignedat}{7}3x&&\;+\;&&2y&&\;-\;&&z&&\;=\;&&1&\\2x&&\;-\;&&2y&&\;+\;&&4z&&\;=\;&&-2&\\-x&&\;+\;&&{\tfrac {1}{2}}y&&\;-\;&&z&&\;=\;&&0&\end{alignedat}}} involves the three variables {\displaystyle x}, {\displaystyle y}, and {\displaystyle z}. A "solution" to a linear system is a choice of numbers for each variable such that every equation is true at the same time. The following is a solution to the system above: {\displaystyle {\begin{alignedat}{2}x&=&1\\y&=&-2\\z&=&-2\end{alignedat}}} It works because it makes all three equations true:[1] {\displaystyle {\begin{alignedat}{7}3(1)&&\;+\;&&2(-2)&&\;-\;&&(-2)&&\;=\;&&1&\\2(1)&&\;-\;&&2(-2)&&\;+\;&&4(-2)&&\;=\;&&-2&\\-(1)&&\;+\;&&{\tfrac {1}{2}}(-2)&&\;-\;&&(-2)&&\;=\;&&0&\end{alignedat}}} In mathematics, the theory of linear systems is a branch of linear algebra, a subject which is fundamental to modern mathematics. Computer algorithms for finding the solutions are an important part of numerical linear algebra, and such methods play a prominent role in engineering, physics, chemistry, computer science, and economics. 
A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model, computer model, or computer simulation of a relatively complex system. For complex systems, there are many equations and many variables, not just two or three. In many cases, the number of equations and variables in the system are the same. In some cases, there are more variables than equations, and the solution will be a range of different values rather than one exact solution. Simple example[change | change source] {\displaystyle {\begin{alignedat}{5}2x&&\;+\;&&3y&&\;=\;&&6&\\4x&&\;+\;&&9y&&\;=\;&&15&.\end{alignedat}}} One method for solving such a system is as follows. First, solve the top equation for {\displaystyle x} in terms of {\displaystyle y}: {\displaystyle x=3-{\frac {3}{2}}y.} Substituting this expression into the bottom equation gives {\displaystyle 4\left(3-{\frac {3}{2}}y\right)+9y=15.} This results in a single equation involving only the variable {\displaystyle y}. Solving gives {\displaystyle y=1}, and substituting this back into the equation for {\displaystyle x} yields {\displaystyle x=3/2}. Checking: {\displaystyle {\begin{alignedat}{5}2\left({\frac {3}{2}}\right)&&\;+\;&&3(1)&&\;=\;&&6&\\4\left({\frac {3}{2}}\right)&&\;+\;&&9(1)&&\;=\;&&15&.\end{alignedat}}} This method generalizes to systems with additional variables. Using Matrices[change | change source] Very often, all the coefficients are written in the form of a matrix A, which is called a coefficient matrix. {\displaystyle A={\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{bmatrix}}} In much the same way, the variables and right-hand sides can be written in the form of vectors: {\displaystyle x={\begin{bmatrix}x_{1}\\x_{2}\\\vdots \\x_{n}\end{bmatrix}};\qquad b={\begin{bmatrix}b_{1}\\b_{2}\\\vdots \\b_{m}\end{bmatrix}}} The system then reads {\displaystyle A\cdot x=b}. Mathematically, the vector x defined above is an n-by-1 matrix (a column vector). 
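The substitution steps in the simple example above can be transcribed directly (a sketch mirroring the worked example):

```python
# 2x + 3y = 6 and 4x + 9y = 15
# solve the first equation for x:       x = 3 - (3/2) y
# substitute into the second equation:  4 * (3 - 1.5 y) + 9 y = 15  ->  12 + 3 y = 15
y = (15 - 12) / 3
x = 3 - 1.5 * y
print(x, y)   # 1.5 1.0
```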
The system of equations can then be solved using the multiplication operation defined on matrices. A, x and b are all part of the same algebraic field. Solving a system of linear equations[change | change source] There are three cases when looking for solutions to a system of linear equations: there is no solution, there is exactly one solution, or there are many solutions. In the last case, the exact number depends on the properties of the field; in many cases there will be an infinite number of solutions. There are two categories of methods for solving a system of linear equations. Iterative methods approach a solution over many steps, while direct methods reach it in a fixed, finite number of operations. An example of a direct method is to solve the system for one variable; this variable can then be eliminated and replaced by an expression that only uses other variables, or a number. Doing this for all variables of the equation will lead to a solution of the system, if it exists. Another method is to transform two equations so that one of the sides of the equations is the same in both cases; it is then possible to write another equation, which replaces the two equations and reduces the number of equations by one. Examples of iterative methods are: relaxation, including the Gauss-Seidel and Jacobi methods, and Krylov subspace methods. There are examples, such as geodesy, where there are many more measurements than unknowns. Such a system is almost always overdetermined and has no exact solution. Each measurement is usually inaccurate and includes some amount of error. Since the measurements are not exact, it is not possible to obtain an exact solution to the system of linear equations; methods such as least squares can be used to compute a solution that best fits the overdetermined system. This least squares solution can often be used as a stand-in for an exact solution. Solving a system of linear equations has a complexity of at most O(n^3). At least n^2 operations are needed to solve a general system of n linear equations. 
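For instance, the Jacobi method mentioned above repeatedly solves equation i for variable i using the previous iterate's values. A minimal sketch on an invented, diagonally dominant 2x2 system (diagonal dominance guarantees convergence):

```python
def jacobi(A, b, iterations=50):
    """Jacobi iteration: x_i <- (b_i - sum_{j != i} A[i][j] * x_j) / A[i][i]."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# 4x + y = 9 and 2x + 5y = 16; exact solution x = 29/18, y = 23/9
solution = jacobi([[4.0, 1.0], [2.0, 5.0]], [9.0, 16.0])
```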
The best algorithm known to date was developed by Don Coppersmith and Shmuel Winograd and dates from 1990. It has a complexity of O(n^2.376).[2] Unfortunately, it is of little practical use. Using computers to solve systems of linear equations happens every day. For example, it is used in weather forecasting models. Hot dog factories use it to make small changes in the recipe as food ingredient prices change. College cafeterias use it to figure out how much food to cook, based on past experience, when the cafeteria gives students the choice between multiple entrees. ↑ Gene Golub, Charles Van Loan: Matrix Computations, Johns Hopkins University Press, 3rd edition, 1996; ISBN 978-0-8018-5414-9 Textbooks[change | change source] Axler, Sheldon Jay (1997), Linear Algebra Done Right (2nd ed.), Springer-Verlag, ISBN 0387982590 Lay, David C. (August 22, 2005), Linear Algebra and Its Applications (3rd ed.), Addison Wesley, ISBN 978-0321287137 Meyer, Carl D. (February 15, 2001), Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), ISBN 978-0898714548 Strang, Gilbert (2005), Linear Algebra and Its Applications
The Traditional Subtraction Algorithm - Global Math Week How does this dots-and-boxes approach compare with the standard algorithm for subtraction? Consider again 512 - 347. The standard algorithm has you start at the right, first looking at "2 - 7," which you can't do. (Well, you can do it; it is -5, but you are not to write that in this algorithm.) So you "borrow one." That is, you take a dot from the tens column and unexplode it to make ten ones. That leaves zero dots in the tens column. We should write ten ones to go with the two in the ones column, but we are a bit clever here and just write 12, meaning 10 + 2. (That is, we put a 1 in front of the 2 to make it look like twelve.) Then we say "twelve take away seven is five" and write that answer. The rightmost column is complete. Shift now to the middle column. We see "zero take away four," which can't be done. So perform another unexplosion, that is, another "borrow," to see 10 - 4 in that column. We write the answer 6. We then move to the last remaining column, where we have 4 - 3 = 1. Here are a few questions for you to try, if you like. Compute each of the following two ways: the dots-and-boxes way (fixing the answer for society to read) and then with the traditional algorithm. The answers should be the same. Thinking question along the way: as you fix up your answers for society, does it seem easier to unexplode from left to right, or from right to left? Additional question: do you think you could become just as speedy the dots-and-boxes way as you currently are with the traditional approach? All correct approaches to mathematics are correct, and it is just a matter of style as to which approach you like best for subtraction. The traditional algorithm has you work from right to left and do all the unexplosions as you go along. The dots-and-boxes approach has you "just do it!" and conduct all the unexplosions at the end. Both methods are fine and correct. Piles and Holes
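The right-to-left borrowing procedure described above can be written out directly (a sketch; it assumes both numbers have the same digit count and a nonnegative result):

```python
def subtract_with_borrows(a_digits, b_digits):
    """Right-to-left subtraction with borrowing, as in the traditional
    algorithm (digits given most-significant first)."""
    result, borrow = [], 0
    for a, b in zip(reversed(a_digits), reversed(b_digits)):
        d = a - borrow - b
        borrow = 0
        if d < 0:
            d += 10        # "borrow one": unexplode a dot from the next column
            borrow = 1
        result.append(d)
    return list(reversed(result))

print(subtract_with_borrows([5, 1, 2], [3, 4, 7]))  # [1, 6, 5], i.e. 512 - 347 = 165
```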
Calculate Outlier Formula: A Step-By-Step Guide | Outlier This article is an overview of the outlier formula and how to calculate it step by step. It’s also packed with examples and FAQs to help you understand it. What Are Q1, Q3, and IQR? Examples of Outlier Formula Calculate Outliers Using Statistical Software FAQs About the Outlier Formula The outlier formula — also known as the 1.5 IQR rule — is a rule of thumb used for identifying outliers. Outliers are extreme values that lie far from the other values in your data set. The outlier formula designates outliers based on an upper and lower boundary (you can think of these as cutoff points). Any value that is 1.5 x IQR greater than the third quartile is designated as an outlier and any value that is 1.5 x IQR less than the first quartile is also designated as an outlier. How to identify outliers using the outlier formula: Anything above Q3 + 1.5 x IQR is an outlier Anything below Q1 - 1.5 x IQR is an outlier To use the outlier formula, you need to know what quartiles (Q1, Q2, and Q3) and the interquartile range (IQR) are. Quartiles (Q1, Q2, Q3) divide a data set into four groups, each containing about 25% (or a quarter) of the data points. There are three quartiles: Q1, Q2, and Q3. Q1 (also known as the first quartile or lower quartile) is the 25th percentile of the data. Q2 (the second quartile) is the 50th percentile or median of the data. Q3 (the third or upper quartile) is the 75th percentile of the data. The Interquartile Range (IQR) is the distance between the first and third quartile. Subtract the first quartile from the third quartile to find the interquartile range. Now that you know what quartiles and the interquartile range are, let’s go through a step-by-step example of using the outlier equation. We’ll use a sample data set containing just 10 data points for this example. Sample Data (n=10) Arrange the data in order from smallest to largest. 
To find Q1, multiply 25/100 by the total number of data points (n). This will give you a locator value, L. If L is a whole number, take the average of the Lth value of the data set and the (L +1)^{th} value. The average will be the first quartile. If L is not a whole number, round L up to the nearest whole number and find the corresponding value in the data set. That will be the first quartile. L = (25/100)(n) = (0.25)(10) = 2.5 2.5 is not a whole number, so round up to the nearest whole number to get 3. The 3rd value in the data set is 22. Q1 = 22 To find Q3, use the same method used to find Q1, except this time, multiply 75/100 by n to get the locator value, L. L = (75/100)(n) = (0.75)(10) = 7.5 7.5 is not a whole number, so round up to the nearest whole number to get 8. The 8th value in the data set is 35. Q3 = 35 Find the interquartile range, IQR. Remember, the interquartile range is the difference between Q3 and Q1. IQR = Q3 - Q1 = 35 - 22 = 13 Find the upper boundary. Upper boundary = Q3 + 1.5 IQR = 35 + (1.5)(13) = 54.5 Find the lower boundary. Lower boundary = Q1 - 1.5 IQR = 22 - (1.5)(13) = 2.5 Identify the outliers. The outliers are any data points that lie above the upper boundary or below the lower boundary. In this case, the outliers are 2 and 59. Here are three more examples. See if you can identify outliers using the outlier formula. The data below shows a high school basketball player's points per game in 10 consecutive games. Use the outlier formula and the given data to identify potential outliers. The data below shows the number of daily visitors to a museum. Use the given data and outlier formula to identify potential outliers. The data below shows the annual rainfall in a tropical rainforest. For ease, the data are already arranged from least to greatest. Use the given data and outlier formula to identify potential outliers. Annual Rainfall (in.) Solution for Example 1 Outliers: 51. 
Q1 = 22, Q3 = 33, IQR = 11, lower boundary = 5.5, upper boundary = 49.5 Outliers: 503. Q1 = 675, Q3 = 736, IQR = 61, lower boundary = 583.5, upper boundary = 827.5. Note that there are only 8 data points (n=8). When calculating Q1 and Q3, the locator value L is a whole number. To find Q1, you need to take the average of the 2nd and 3rd values of the data set. To find Q3, you need to take the average of the 6th and 7th values. There are no outliers in this data set. Q1 = 220, Q3 = 320, IQR = 100, lower boundary = 70, upper boundary = 470 While it's important to know what the outlier formula is and how to find outliers by hand, more often than not, you will use statistical software to identify outliers. Follow these steps to use the outlier formula in Excel, Google Sheets, Desmos, or R. Note that there are several accepted ways to calculate quartiles. Some of the software below uses different approaches to calculating quartiles than what we used in the examples above. Don't worry. The difference in the calculations won't be enough to alter your results significantly. 1. In Excel or Google Sheets You can use the outlier formula in Excel or Google Sheets using the following steps. To find the first quartile use the formula =QUARTILE(Data Range, 1) For example, if your data is in cells A2 through A11, you would type =QUARTILE(A2:A11, 1) To find the third quartile use the formula =QUARTILE(Range, 3) Subtract Q1 from Q3 to find the IQR Calculate the upper boundary: Q3 + (1.5)(IQR) Calculate the lower boundary: Q1 - (1.5)(IQR) 2. In Desmos Create a table and input your data in the x1 column. Use the function stats(x1) to find Q1 and Q3 for your data. Subtract Q1 from Q3 to get the interquartile range. 3. In R Save your data using the assign operator, <-, and the combine function c(). Give the data a name like mydata. For example, say your data consists of the following values (15, 21, 25, 29, 32, 33, 40, 41, 49, 72). 
Type: mydata <- c(15, 21, 25, 29, 32, 33, 40, 41, 49, 72) Use the summary function to find Q1 and Q3. Type: summary(mydata) Use the IQR function to find the interquartile range. Type: IQR(mydata) For practice, try using one or more of these programs to find the outliers from the examples we covered in the previous section. Here are some frequently asked questions about the outlier formula. There isn't a hard-and-fast rule about when you should (or shouldn't) remove outliers from your data. Outliers can occur for different reasons. Sometimes, outliers result from an error that occurred during the data collection process. If it's obvious that an outlier results from a data collection error, it's safe to remove it. You might also choose to re-measure the data point if you can. If you're not sure if an outlier results from an error, your first instinct shouldn't be to remove it. The outlier may provide some important insights about your data, and if you remove it, those insights will be lost. A better solution would be to adjust your method of analysis and to think carefully about why the outlier exists. You might also choose to run your analysis with and without the outlier and present both sets of results for the sake of transparency. Can there be a negative outlier? Yes. If your data contains negative values, outliers can be negative numbers. How does removing the outlier affect the mean? The mean of the data set is sensitive to outliers, so removing an outlier can dramatically change the value of the mean. If you remove a positive outlier, the mean will decrease. If you remove a negative outlier, the mean will increase. How does removing outliers affect the median? The median of the data set is resistant to outliers, so removing an outlier shouldn't dramatically change the value of the median. After removing an outlier, the value of the median can change slightly, but the new median shouldn't be too far from its original value. 
Can normal distributions have outliers? Yes. Values that lie in a normal distribution’s extreme right and left tails can be considered outliers. You can use Z-scores to identify outliers in a normal distribution. If you apply the outlier formula, any value in a normal distribution with a Z-score above 2.68 or below -2.68 should be considered an outlier. For more on normal distribution, Duke University's Dr. Olanrewaju Michael Akande gives an overview. Can a data set have more than one outlier? Yes. It’s possible to have more than one outlier in your data. Is the outlier formula the only method of identifying outliers? No. The outlier formula is a commonly used and straightforward method, but there are other ways to identify outliers. Statisticians will often plot their data on graphs such as box plots and scatterplots to identify outliers. They may also use regression, hypothesis testing, and Z-scores to identify outliers.
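The quartile and boundary steps from the worked example can also be sketched in Python. This is an illustrative sketch using the article's locator-value method; the sample data below is hypothetical, but chosen to be consistent with the article's values (Q1 = 22, Q3 = 35, outliers 2 and 59):

```python
import math

def quartile(sorted_data, q):
    """Locator-value method: L = q*n; if L is whole, average the Lth and
    (L+1)th values, otherwise round L up and take that value."""
    n = len(sorted_data)
    L = q * n
    if L == int(L):
        L = int(L)
        return (sorted_data[L - 1] + sorted_data[L]) / 2
    return sorted_data[math.ceil(L) - 1]

def outlier_bounds(data):
    """Return the (lower, upper) boundaries of the 1.5 IQR rule."""
    data = sorted(data)
    q1 = quartile(data, 0.25)
    q3 = quartile(data, 0.75)
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Hypothetical data consistent with the worked example (n = 10)
data = [2, 15, 22, 27, 29, 31, 33, 35, 41, 59]
lo, hi = outlier_bounds(data)
print(lo, hi)                                  # 2.5 54.5
print([x for x in data if x < lo or x > hi])   # [2, 59]
```

Remember that statistical software may use a slightly different quartile convention, so its boundaries can differ a little from this hand calculation.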
BootstrapOutOfBag: A scikit-learn compatible version of the out-of-bag bootstrap - mlxtend Example 1 -- Evaluating the predictive performance of a model An implementation of the out-of-bag bootstrap to evaluate supervised learning algorithms. from mlxtend.evaluate import BootstrapOutOfBag The BootstrapOutOfBag class mimics the behavior of scikit-learn's cross-validation classes, e.g., KFold: oob = BootstrapOutOfBag(n_splits=3) for train, test in oob.split(np.array([1, 2, 3, 4, 5])): Consequently, we can use BootstrapOutOfBag objects via the cross_val_score method: print(cross_val_score(lr, X, y)) print(cross_val_score(lr, X, y, cv=BootstrapOutOfBag(n_splits=3, random_seed=456))) In practice, it is recommended to run at least 200 iterations, though: print('Mean accuracy: %.1f%%' % np.mean(100*cross_val_score( lr, X, y, cv=BootstrapOutOfBag(n_splits=200, random_seed=456)))) Mean accuracy: 94.8% Using the bootstrap, we can use the percentile method to compute the confidence bounds of the performance estimate. We pick our lower and upper confidence bounds as follows: ACC_{lower}: the \alpha_1th percentile of the ACC_{boot} distribution; ACC_{upper}: the \alpha_2th percentile of the ACC_{boot} distribution; where \alpha_1 = \alpha, \alpha_2 = 1-\alpha, and \alpha sets the degree of confidence for computing the 100 \times (1-2 \times \alpha) percent confidence interval. For instance, to compute a 95% confidence interval, we pick \alpha=0.025 to obtain the 2.5th and 97.5th percentiles of the b bootstrap samples distribution as the lower and upper confidence bounds. accuracies = cross_val_score(lr, X, y, cv=BootstrapOutOfBag(n_splits=1000, random_seed=456)) mean = np.mean(accuracies) lower = np.percentile(accuracies, 2.5) upper = np.percentile(accuracies, 97.5) ax.vlines(mean, [0], 40, lw=2.5, linestyle='-', label='mean') ax.vlines(lower, [0], 15, lw=2.5, linestyle='-.', label='CI95 percentile') ax.vlines(upper, [0], 15, lw=2.5, linestyle='-.') ax.hist(accuracies, bins=11, color='#0080ff', edgecolor="none",
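To make the idea concrete, here is a rough pure-NumPy sketch of the out-of-bag splitting logic and the percentile bounds (this is an illustration of the concept, not mlxtend's implementation, and the accuracy values are made up):

```python
import numpy as np

def oob_splits(n_samples, n_splits, seed=456):
    """Out-of-bag bootstrap splits: each round draws a bootstrap sample
    (with replacement) for training; the indices never drawn form the
    test set for that round."""
    rng = np.random.RandomState(seed)
    for _ in range(n_splits):
        train = rng.randint(0, n_samples, size=n_samples)
        test = np.setdiff1d(np.arange(n_samples), train)
        yield train, test

splits = list(oob_splits(n_samples=10, n_splits=5))

# Percentile confidence bounds from a vector of bootstrap accuracies
# (toy values for illustration):
accuracies = np.array([0.92, 0.95, 0.94, 0.96, 0.93, 0.97, 0.91, 0.95])
lower, upper = np.percentile(accuracies, [2.5, 97.5])
print(lower, upper)
```

Each bootstrap training sample contains repeated indices, and on average about 36.8% of the original points are left out and end up in the test set.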
A problem on $0-1$ matrices de Valk, V. A problem on $0-1$ matrices. Compositio Mathematica, Tome 71 (1989) no. 2, pp. 139-179. http://www.numdam.org/item/CM_1989__71_2_139_0/ [A.G.] J. Aaronson and D. Gilat, On the structure of stationary one dependent processes, School of Mathematical Sciences, Tel Aviv Univ., Israel, (1987). [A.G.K.V.] J. Aaronson, D. Gilat, M.S. Keane and V. De Valk, An algebraic construction of a class of one-dependent processes, Annals of Probability 17 (1989) 128-143. | MR 972778 | Zbl 0681.60038 [F.] L. Finke, Two maximization problems, a paper submitted to Oregon State Univ. in partial fulfillment of the requirements for the degree of Master of Arts, 1982. [G.K.V.] A. Gandolfi, M.S. Keane and V. De Valk, Extremal two-correlations of two-valued stationary one-dependent processes, accepted by Prob. Theory and Related Fields (1988). | Zbl 0638.60056 [H.L.P.] G.H. Hardy, J.E. Littlewood and G. Pólya, Inequalities, Cambridge Univ. Press (1934). | Zbl 0010.10703 [Ka.] M. Katz, Rearrangements of (0,1) matrices, Israel Journ. of Mathematics, 9 (1971) 53-72. | MR 271131 | Zbl 0215.33405 [Kh.] A. Khintchine, Über eine Ungleichung, Mat. Sb. 39 (1932) 35-39. | JFM 58.0094.09 | Zbl 0006.15803 [Lo.] G.G. Lorentz, A problem of plane measure, Amer. Journ. Math. 71 (1949) 417-426. | MR 28925 | Zbl 0032.19701 [Lu.] W.A.J. Luxemburg, On an inequality of A. Khintchine for zero-one matrices, Journ. of Combinatorial Theory 12 (1972) 289-296. | MR 321762 | Zbl 0241.05017 [V.] V. De Valk, The maximal and minimal 2-correlation of a class of 1-dependent 0-1 valued processes, Israel Journ. of Math. 62 (1988) 181-205. | MR 947821 | Zbl 0661.60028
This is the homepage for CDS 110b, Introduction to Control Theory for Winter 2006. __NOTOC__ * [[Main Page|Richard M. Murray]], murray@cds.caltech.edu * '''Homework: 50%''' <br> Homework sets will be handed out weekly and will generally be due one week later at 5 pm to the box outside of 109 Steele. ''Late homework will not be accepted without '''prior''' permission from the instructor.''<br> * B. Friedland, ''Control System Design: An Introduction to State-Space Methods'', Dover, 2004. Available in the Caltech bookstore. * K. J. {{Astrom}} and R. M. Murray, [http://www.cds.caltech.edu/~murray/books/AM05/wiki ''Design and Analysis of Feedback Systems''], Preprint, 2006. Available online. * J. Doyle, B. Francis, A. Tannenbaum, [http://www.control.utoronto.ca/people/profs/francis/dft.html ''Feedback Control Theory''], Macmillan, 1992. Available online. * F. L. Lewis and V. L. Syrmos, ''Optimal Control'', Second Edition, Wiley-IEEE, 1995. ([http://books.google.com/books?ie=UTF-8&hl=en&vid=ISBN0471033782&id=jkD37elP6NIC Google Books]) * G. F. Franklin, J. D. Powell, and A. Emami-Naeni, ''Feedback Control of Dynamic Systems'', Addison-Wesley, 2002. * N. E. Leonard and W. S. Levine, ''Using Matlab to Analyze and Design Control Systems'', Benjamin/Cummings, 1992. * A. D. Lewis, [http://penelope.mast.queensu.ca/math332/notes.shtml A Mathematical Approach to Classical Control], 2003. 
<td> Week <td> Date <td> Topic <td> Reading <td> Homework <td>4 Jan (W) <td> [[CDS 110b: Course Overview]] <td> {{am05|Ch_1_-_Introduction|Ch 1}}, [http://www.cds.caltech.edu/~murray/books/AM05/wiki/index.php/Chapter_11_-_Robust_Performance Section 11.1]<td rowspan=4> <td> 6 Feb (M) <td> Optimization-Based Control <td><td rowspan=2> <tr><td>8 Feb (W) <td> Optimal Control <td> <td> 13 Feb (M) <td> Linear Quadratic Regulators <td> Friedland, Ch 9 <td rowspan=2> <tr><td> 15 Feb (W) <td> Receding Horizon Control <td> <td> 20 Feb (M) <td> No class (Institute holiday) <td> <td rowspan=2> <tr><td> 22 Feb (W) <td> Observability and Estimators<td> {{am05|Chapter_6_-_Output_Feedback|Ch 6}} <td> 27 Feb (M) <td> Introduction to Random Processes <td> Friedland, Ch 10 <td rowspan=2> <tr><td> 1 Mar (W) <td> Linear Quadratic Estimators (LQE) <td> Friedland, Ch 11 <td> 6 Mar (M) <td> Kalman Filtering<td> Friedland, Ch 11 <td rowspan=3> Final exam (due 17 Mar) <tr><td> 8 Mar (W) <td> Extended Kalman Filters <td> * '''Midterm report: 20%'''<br> By midterm, all students should implement and test an LQR controller on the experimental system. A report describing the control design and experimental results is due no later than 5 pm on the last day of the midterm examination period. The report should include a description of the (nonlinear) model for the system, an analysis and design of a control law based on the linearization of that model, and a comparison between simulation and experimental results on the system. For 2005-06, students will implement a lateral control law that controls the position of the vehicle and tracks a reference trajectory on flat pavement.
Price Floors, Explained: A Microeconomics Tool With Macro Impact | Outlier Binding and Non-Binding Price Floors What Can Happen as a Result of a Price Floor? Real-World Examples of Price Floors Another Form of Price Control: Price Ceilings A price floor is a regulation that prevents buying and selling a good or service below a specified price. Price floors are often implemented with one or more of the following goals in mind: To push the price of a good or service above the market price. To reduce the demand for goods or services thought to be harmful. To encourage the production of goods or services in a government-assisted industry. To promote the welfare of low-wage workers (this is the case for minimum wage laws, which are price floors set on the wages employers can pay their workers). In a competitive market, the price of a good or service is determined by the intersection of supply and demand. This is called the market price, p^*. When a price floor is in place, market participants are prevented from buying or selling below a given price, p_f. If the price floor is set above the market price (p_f > p^*), buyers and sellers will adjust their behavior in response to the higher price. Buyers will demand fewer units of the good and as a result, fewer units of the good will be sold. Meanwhile, suppliers will want to produce greater quantities of the good. This can lead to overproduction or a surplus of goods in the market. A price floor that is set above the equilibrium price is called a binding price floor. For a price floor to have an effect, it must be binding. A binding price floor makes it illegal to buy and sell at the equilibrium price or any other price that falls below the price floor. A price floor that is set below the equilibrium price is called a non-binding price floor. A non-binding price floor has no effect in a competitive market, because the equilibrium price already exceeds the price floor. 
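A small numeric sketch may help. Assuming made-up linear demand and supply curves (not from the article), a binding floor reduces the quantity actually traded and leaves unsold surplus:

```python
# Hypothetical linear curves: Qd(p) = 100 - p, Qs(p) = p.
# They intersect at the market price p* = 50, where Qd = Qs = 50.
def demand(p):
    return 100 - p

def supply(p):
    return p

p_star = 50    # equilibrium: demand(50) == supply(50)
p_floor = 60   # binding, since p_floor > p_star

# At the floor price, buyers demand less than sellers want to supply,
# so only the demanded quantity actually trades.
traded = min(demand(p_floor), supply(p_floor))      # 40 units
surplus_units = supply(p_floor) - demand(p_floor)   # 20 unsold units
print(traded, surplus_units)  # 40 20
```

With a non-binding floor (say p_floor = 40 here), the constraint never binds and trade stays at the equilibrium quantity of 50.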
In the non-binding case, market participants will continue to buy and sell at the equilibrium price and quantity. Excess supply. As we have already seen, a binding price floor raises the price of a good above the equilibrium price. This leads to a reduction in demand and an increase in supply. Quantity supplied will exceed the quantity demanded, which leads to a surplus of goods in the market. Deadweight loss. A binding price floor also results in a deadweight loss caused by a reduction in goods sold. A subset of buyers who would have made purchases in the competitive market will no longer benefit from doing so. Likewise, some sellers who would have made additional sales in a competitive market lose that benefit. An overall reduction in consumer surplus. When a price floor is implemented, consumers are harmed more than suppliers. Consumers who remain in the market are charged a higher price, while consumers who exit the market lose the entire benefit of purchasing the good. In other words, total consumer surplus falls because of deadweight loss and because a portion of the consumer surplus is reallocated to the producers. Changes in producer surplus. Price floors have a mixed effect on producers. The reduction in the number of goods sold is a loss for some producers. This is reflected in the deadweight loss. On the other hand, the higher price charged for the goods that are sold is a benefit. This benefit is reflected in the portion of surplus that is reallocated from the consumers to the producers. The emergence of black markets. Price floors can sometimes lead to black markets. Black markets are unauthorized or illegal means of buying and selling goods. Sellers and buyers who wish to circumvent the price floor may try to buy and sell in this manner. A Price Floor on Tobacco In 2018, New York City increased its price floor on cigarettes from $10.50 per pack to $13 per pack. 
A few other counties and cities in the United States also have price floors on the sale of cigarettes and other tobacco products. Price floors on products such as tobacco and alcohol are aimed at reducing demand for products considered harmful to consumers. Price Floors on Agricultural Products As economies started to industrialize and urbanize in the 20th century, many governments started implementing price floors to support rural populations and their shrinking but vital agricultural industries. Price floors on agricultural products are designed to keep production levels and prices high. This incentivizes producers to continue farming when the free market might otherwise incentivize them to turn to other occupations. It also protects farmers against unpredictable fluctuations in their yield. Because price floors create a surplus of goods, when governments implement agricultural price floors, they typically intervene in the market by offering to buy the surplus directly from producers. A Price Floor on Wages (Minimum Wage Laws) Minimum wage laws are a special type of price floor. You can think of a minimum wage as a price floor set on the price of labor. In this case, employers are on the demand side of the market and employees are on the supply side of the market. The price floor regulates the minimum wage that can be paid by employers to workers. The outcomes of implementing (or raising) minimum wages are a matter of considerable debate. If you believe that the market for low-wage labor is competitive, then a price floor on wages would create unemployment due to a reduction in the demand for labor and an increase in the supply. Low-wage workers who remain employed under a minimum wage would benefit from a higher wage, but many other workers might lose their jobs and struggle to find work. 
For a long time, economists cautioned against minimum wage hikes believing that the resulting loss of jobs would be far worse than any benefits to workers who remained employed. More recently, however, the position of economists has changed. Today, many economists believe that the market for low-wage labor is not competitive and that employers exercise a fair amount of market power when they set wages. If this is the case, the effects of a minimum wage hike are far more ambiguous. A small increase in the minimum wage could, in fact, increase employment. A large increase could still result in a loss of jobs. Price floors are just one form of price control. The opposite of a price floor is a price ceiling. Price floors and price ceilings are both intended to move prices away from the market equilibrium, but they are designed to do so in opposite directions. While a price floor imposes a minimum price on the purchase and sale of a good, a price ceiling does the exact opposite. It imposes a maximum price. For a price ceiling to be binding, it must be below the equilibrium price rather than above it. Price ceilings are typically implemented to keep prices low for the benefit of consumers. These regulations increase demand and reduce supply resulting in a shortage of goods, and they tend to benefit the demand side of the market more than the supply side. It’s easy to confuse price floors and price ceilings, so be sure to double-check your understanding of these price controls when you encounter them. Allocating scarce resources is one of the fundamental problems in both business and economics. In this article, we’ll look at the production possibilities frontier, a tool for understanding the optimal outputs when producing different goods using the same resources.
store[=name]
storeall[=name]
ezcriteria=[quantity, size ratio]
tosolve=n
stl='time'
itl='time'
ranking=[[ind weights 1, dep weights 1], [ind weights 2, dep weights 2], ...]
casesplit=[<>, `=`, <>]
mindim=... mindim='n' mindim
pivselect=choice
nopiv=[var1, ...]
faclimit=n
Typically the pivot chosen is the coefficient of the leading indeterminate in the equation. In the event that the leading indeterminate is itself a factor of the equation, and this same leading indeterminate factor occurs in
factoring=desc
desc=none
desc=nolead
desc=all
checkempty
grobonly
grob_rank=[[1, deg, none], [1, ilex]]
pH and the pH Scale - Course Hero The pH scale measures the hydrogen ion concentration in a solution, and identifies a solution as acidic, neutral, or basic. As pure water dissociates (breaks up into H+ ions and OH– ions), the relative concentrations of hydrogen ion (H+) and hydroxide ion (OH–) remain equal. However, if additional H+ ions or OH– ions are added to the solution, it can cause an imbalance in the relative concentrations of the two ions. The measure of the concentration of H+ (or H3O+) ions in solution is called the pH. The pOH of a substance is a measure of the concentration of OH– ions in solution. The pH scale ranges from zero to 14, with values under 7 considered acidic and values over 7 considered basic. Acidic solutions have a higher concentration of H+ (or H3O+) ions, relative to the concentration of OH– ions. An acid is any substance that lowers pH by increasing the levels of hydrogen ions. Basic solutions have a lower concentration of H+ (or H3O+) ions, relative to the concentration of OH– ions. A base is any substance that increases pH by removing hydrogen ions from a solution. The relationship between H+ ion concentration and pH is logarithmic (base 10). The pH value itself is the value of the exponent, x, when the concentration is expressed as 1\times10^{-x}\;{\rm M}. For example, a solution with a hydrogen ion concentration of 1\times10^{-5}\;{\rm M} has a pH of 5. The total of the exponents must equal 14, so in a substance with a pH of 5, the pOH must be 9. When the two values are compared, 1\times10^{-5}\;{\rm M} versus 1\times10^{-9}\;{\rm M}, the concentration of H+ is greater than that of OH– by four orders of magnitude. \rm{pH}=-\log\lbrack{\rm H}^+\rbrack The order of magnitude of pH determines the acidity or alkalinity of a solution, which can also determine its impact on living cells. Pure water, for example, has a pH of 7. Milk has a pH of about 6.6. While the difference in pH is only 0.4, because the pH scale is logarithmic, milk is noticeably more acidic than pure water. 
Milk is considered a weak acid. Solutions that are important to life, such as rainwater, seawater, blood, and tree sap all tend to stay near the neutral range with a few exceptions, such as the acidic human stomach. Drifting too far in pH in either direction can affect protein function and quickly lead to trouble for an organism. 
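The logarithmic relationship between concentration and pH can be sketched in a few lines of Python:

```python
import math

def pH(h_concentration):
    """pH = -log10([H+]); pOH = 14 - pH (at 25 degrees C)."""
    return -math.log10(h_concentration)

print(pH(1e-5))        # acidic: pH below 7
print(14 - pH(1e-5))   # the corresponding pOH
```

Because the scale is base-10 logarithmic, each whole pH step corresponds to a tenfold change in hydrogen ion concentration.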
Piles and Holes - Global Math Week I said that I don't believe in subtraction. To me, subtraction is just the addition of the opposite. Here's what led me to this belief. It comes from another untrue story. As a young child I used to regularly play in a sandbox. And there I discovered the positive counting numbers as piles of sand: one pile, two piles, and so on. And I also discovered the addition of positive numbers simply by lining up piles. For example, I saw that two plus three equals five simply by lining up piles like this. I had hours of fun counting and lining up piles to explore addition. But then one day I had an astounding flash of insight! Instead of making piles of sand, I realized I could also make holes. And I saw right away that a hole is the opposite of a pile: place a pile and a hole together and they cancel each other out. Whoa! Later in school I was taught to call a hole "-1", and two holes "-2," and so on, and was told to do this thing called "subtraction." But I never really believed in subtraction. Although my colleagues would read 5 - 2, say, as "five take away two," I was thinking of five piles and the addition of two holes. A picture shows that the answer is three piles. Yes. This gives the same answer as my peers, of course: all correct thinking is correct! But I knew I had an advantage. For example, my colleagues would say that 7 - 10 has no answer. But I could see it did. \begin{aligned} 7 - 10 &= \text{seven piles and ten holes}\\ &= \text{three holes}\\ &= -3 \end{aligned} (By the way, I will happily write 7 - 10 as 7 + (-10). This makes the thinking more obvious.)
Antidots - Global Math Week In our dots-and-boxes machines we've been working with dots, which we draw as solid dots. We now need the notion of the opposite of a dot. Like matter and antimatter, which annihilate when brought together, a dot and an antidot should also annihilate – POOF! – when brought together to leave nothing behind. Let's draw antidots as hollow circles. If a dot represents 1, then an antidot represents -1, and bringing one of each together leaves zero. Even without a machine we can conduct basic arithmetic with dots and antidots. Can you make sense of these pictures? By the way, some students prefer to call the opposite of a dot a tod. Can you guess what made them think of that name?
Exact Solutions of Coupled Sine-Gordon Equations Using the Simplest Equation Method Yun-Mei Zhao, "Exact Solutions of Coupled Sine-Gordon Equations Using the Simplest Equation Method", Journal of Applied Mathematics, vol. 2014, Article ID 534346, 5 pages, 2014. https://doi.org/10.1155/2014/534346 Yun-Mei Zhao 1 The simplest equation method has been used for finding the exact solutions of coupled sine-Gordon equations. Such equations have some useful applications in physics and biology, so finding their exact solutions is of great importance. Recently, the coupled sine-Gordon equations have been introduced by Khusnutdinova and Pelinovsky [1]. The coupled sine-Gordon equations generalize the Frenkel-Kontorova dislocation model [2, 3]. System (1) with was also proposed to describe the open states in DNA model [4]. Very recently, system (1) was studied by many researchers using various methods. It was studied by Salas, using a special rational exponential ansatz [5]. Zhao et al. obtained some new solutions including Jacobi elliptic function solutions, hyperbolic function solutions, and trigonometric function solutions by the Jacobi elliptic function expansion method [6], the hyperbolic auxiliary function method [7], and the symbolic computation method [8]. In the past four decades, the study of nonlinear evolution equations (NLEEs) modelling physical phenomena has become an important research topic. Seeking exact solutions of NLEEs has long been one of the central themes of perpetual interest in mathematics and physics. With the development of symbolic computation packages like Maple and Mathematica, many powerful methods for finding exact solutions have been proposed, such as the homogeneous balance method [9, 10], the auxiliary equation method [11, 12], the Exp-function method [13, 14], the Darboux transformation [15, 16], the tanh-function method [17], and the -expansion method [18, 19]. 
In this paper, we will apply the simplest equation method [24] to obtain some new and more general explicit exact solutions of the coupled sine-Gordon equations. 2. The Simplest Equation Method In this section, we will give the detailed description of the simplest equation method. Step 1. Suppose that we have a nonlinear partial differential equation (PDE) for in the form where is a polynomial in its arguments. Step 2. By taking , , we look for traveling wave solutions of (2) and transform it to the ordinary differential equation (ODE) Step 3. Suppose that the solution of (3) can be expressed as a finite series in the form where satisfies the Bernoulli or Riccati equation, is a positive integer that can be determined by the balancing procedure [21], and are parameters to be determined. For the Riccati equation where , , and are constants, we will use the solutions where . Step 4. Substituting (4) into (3) with (5) (or (7)), the left-hand side of (3) is converted into a polynomial in , and equating each coefficient of the polynomial to zero yields a set of algebraic equations for , , . Solving the algebraic equations by symbolic computation, we can determine those parameters explicitly. Step 5. Assuming that the constants , , can be obtained in Step 4 and substituting the results into (4), we obtain the exact traveling wave solutions for (2). Equation (9) admits the following exact solutions: when , and when . 3. Exact Solutions of the Coupled Sine-Gordon Equations In this section, we solve the coupled sine-Gordon equations by the simplest equation method. In order to solve (1), we introduce a new unknown function by the formula so that . 
According to (1), we have Let then Substituting (14)-(15) into (13), we obtain the following coupled system of nonlinear differential equations: According to the first equation of (16), we have Substituting (17) into the second equation of (16), we obtain a single nonlinear second-order differential equation in the unknown : As we can see, it suffices to find analytic solutions of (18). Observe that if is a solution of (18), then is also a solution.

3.1. Solutions of (18) Using the Bernoulli Equation as the Simplest Equation

The balancing procedure yields . Thus, the solution of (18) is of the form Substituting (19) into (18), making use of the Bernoulli equation (5), and then equating the coefficients of the functions to zero, we obtain an algebraic system of equations in terms of , , and . Solving this system of algebraic equations with the aid of Maple, one possible set of values of , , and is Therefore, using solutions (6) of (5) and ansatz (19), we obtain the following exact solution of (18): Substituting (21) into (17) with (12), the exact traveling wave solution of (1) can be written as where , , and , , , , are arbitrary parameters. Now, to obtain some special cases of the above solutions, we set , , , and let "" take "+"; then we have where , . The equations in (24) are the same as those in of [7]. If we set , , , we have where , . Substituting (22) into (17) with (12), the exact traveling wave solution of (1) can be written as where , . Substituting (19) along with (9) into (18) and setting all the coefficients of the powers to zero, we obtain a system of nonlinear algebraic equations; solving it, we obtain Therefore, using solutions (10) and (11) of (9) and ansatz (19), we obtain the following exact solution of (18): Then the exact solution of (1) can be written as where , , and , , , are arbitrary parameters. Now, to obtain some special cases of the above solutions, we set , ; then we have where , .
3.2. Solutions of (18) Using the Riccati Equation as the Simplest Equation

Substituting (31) into (18), making use of the Riccati equation (7), and then equating the coefficients of the functions to zero, we obtain an algebraic system of equations in terms of , , , and . Solving this system of algebraic equations with the aid of Maple, one possible set of values of , , , and is

Remark 2. Compared with [7], the exact solutions of this paper are more general: when , , and in (23), the solutions reduce to those in (36) of [7]; when , , and in (23), the solutions reduce to those in (32) of [7]; when and in (29), the solutions reduce to those in (31) of [7]. There are many such examples, so it is easy to see that the study of [7] is a special case of this paper. Hence the exact solutions of this paper are more general, and all of them are new solutions not reported in the relevant literature.

4. Conclusions

In this paper, we obtained some exact solutions of the coupled sine-Gordon equations by using the simplest equation method, with the Bernoulli equation and the Riccati equation as the simplest equations. The solutions obtained may be significant and important for the explanation of some practical physical problems, and the method may also be applied to other nonlinear partial differential equations. We have also verified that the solutions we found are indeed solutions of the original nonlinear evolution equations.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (11161020 and 11361023), the Natural Science Foundation of Yunnan Province (2011FZ193 and 2013FZ117), and the Natural Science Foundation of the Education Committee of Yunnan Province (2012Y452 and 2013C079).

References

K. R. Khusnutdinova and D. E. Pelinovsky, "On the exchange of energy in coupled Klein-Gordon equations," Wave Motion, vol. 38, no. 1, pp. 1–10, 2003.
T. A. Kontorova and Y. I. Frenkel, "On the theory of plastic deformation and twinning I, II," Zhurnal Eksperimental'noi i Teoreticheskoi Fiziki, vol. 8, pp. 89–95, 1938.
O. M. Braun and Y. S. Kivshar, "Nonlinear dynamics of the Frenkel-Kontorova model," Physics Reports, vol. 306, no. 1-2, pp. 1–108, 1998.
S. Yomosa, "Soliton excitations in deoxyribonucleic acid (DNA) double helices," Physical Review A, vol. 27, no. 4, pp. 2120–2125, 1983.
A. H. Salas, "Exact solutions of coupled sine-Gordon equations," Nonlinear Analysis: Real World Applications, vol. 11, no. 5, pp. 3930–3935, 2010.
Y.-M. Zhao, H. Liu, and Y.-J. Yang, "Applications of the Jacobi elliptic function expansion method for obtaining travelling wave solutions of coupled sine-Gordon equations," Mathematical Sciences Research Journal, vol. 15, no. 3, pp. 80–90, 2011.
Y.-M. Zhao, H. Liu, and Y.-J. Yang, "Exact solutions for the coupled sine-Gordon equations by a new hyperbolic auxiliary function method," Applied Mathematical Sciences, vol. 5, no. 33-36, pp. 1621–1629, 2011.
Y.-M. Zhao, W. Li, and Y.-J. Yang, "New exact solutions of coupled sine-Gordon equations using symbolic computation," Mathematical Sciences Research Journal, vol. 14, no. 4, pp. 79–86, 2010.
M. Wang, "Exact solutions for a compound KdV-Burgers equation," Physics Letters A, vol. 213, no. 5-6, pp. 279–287, 1996.
S. Zhang and T. Xia, "A generalized new auxiliary equation method and its applications to nonlinear partial differential equations," Physics Letters A, vol. 363, no. 5-6, pp. 356–360, 2007.
X.-H. Wu and J.-H. He, "Solitary solutions, periodic solutions and compacton-like solutions using the Exp-function method," Computers & Mathematics with Applications, vol. 54, no. 7-8, pp. 966–986, 2007.
X.-H. Wu and J.-H. He, "EXP-function method and its application to nonlinear equations," Chaos, Solitons & Fractals, vol. 38, no. 3, pp. 903–910, 2008.
S. B. Leble and N. V. Ustinov, "Darboux transforms, deep reductions and solitons," Journal of Physics A, vol. 26, no. 19, pp. 5007–5016, 1993.
H.-C. Hu, X.-Y. Tang, S.-Y. Lou, and Q.-P. Liu, "Variable separation solutions obtained from Darboux transformations for the asymmetric Nizhnik-Novikov-Veselov system," Chaos, Solitons & Fractals, vol. 22, no. 2, pp. 327–334, 2004.
M. L. Wang, X. Z. Li, and J. L. Zhang, "The (G'/G)-expansion method and travelling wave solutions of nonlinear evolution equations in mathematical physics," Physics Letters A, vol. 372, no. 4, pp. 417–423, 2008.
S. M. Guo and Y. B. Zhou, "The extended (G'/G)-expansion method and its applications," Applied Mathematics and Computation, vol. 215, no. 9, pp. 3214–3221, 2010.
N. K. Vitanov and Z. I. Dimitrova, "Application of the method of simplest equation for obtaining exact traveling-wave solutions for two classes of model PDEs from ecology and population dynamics," Communications in Nonlinear Science and Numerical Simulation, vol. 15, no. 10, pp. 2836–2845, 2010.
N. K. Vitanov, Z. I. Dimitrova, and H. Kantz, "Modified method of simplest equation and its application to nonlinear PDEs," Applied Mathematics and Computation, vol. 216, no. 9, pp. 2587–2595, 2010.
C. M. Khalique, "Exact explicit solutions and conservation laws for a coupled Zakharov-Kuznetsov system," Mathematical Problems in Engineering, vol. 2013, Article ID 461327, 5 pages, 2013.

Copyright © 2014 Yun-Mei Zhao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Costing Methods in Manufacturing - Course Hero

Variable Costing Defined

Variable costing is a method in which all fluctuating (variable) expenses from manufacturing are treated as inventory costs; it is useful for various types of analysis and decision-making, but it does not take fixed manufacturing overhead costs into account.

Each organization must keep track of what it is spending on the goods and services it creates and sells. Accountants and managers usually look at each unit of a product or service that a company produces, and each unit that is manufactured comes with its own costs. Manufacturing costs are the sum of all the expenses a company has when it turns raw materials into its finished products, and they are either fixed or variable. A fixed cost is an expense that is related to operating a company for a specific period of time and that stays the same despite any change in the company's activity level; examples include rent and/or mortgage, property taxes, insurance, interest expenses, utilities, and depreciation. In contrast, a variable cost is an expense that increases or decreases with the level of production, so it can require more precise planning; examples include direct materials, direct labor, production supplies, and freight.

A company's variable costs in the manufacturing process include direct materials, direct labor, and variable overhead. Direct materials are items that workers put together to create a finished product and that are easily identifiable as part of the product. Direct labor is the hours spent producing a product or providing a service that can easily be traced to the product or service. Variable overhead refers to the indirect costs incurred by manufacturing a product: costs that fluctuate based on volume.
An example of variable overhead is an electric utility bill, because the amount of electricity used varies with how many shifts a factory is running. The distinction between variable and fixed costs is important for proper planning and budgeting.

Variable costing, also known as direct costing, is a method in which the company treats all fluctuating expenses from manufacturing as part of inventory costs. Companies often use variable costing to analyze performance and make decisions. Because variable costing treats fixed manufacturing overhead as a lump-sum period expense rather than spreading it across units, it also shows a company clearly how much it pays in fixed costs. However, variable costing may not be the best method for a company to use when determining pricing, because it does not consider all costs associated with manufacturing the product. The variable costing method is useful for planning but is not sufficient to meet the external reporting requirements mandated by generally accepted accounting principles (GAAP), the set of rules and standards that companies must follow when they report financial information. Nonetheless, variable costing is helpful when a company is comparing products of the same product line.

An inventory cost is an expense that a company incurs for carrying a stock of goods, such as warehousing and depreciation (the gradual loss of an asset's value, such as a machine that wears out). With variable costing, fixed manufacturing costs are not included in inventory costs; instead, the company treats them as period costs because they are incurred for a given period. Variable costs that are unrelated to manufacturing are likewise treated as period costs and are recorded as expenses when they are incurred. The variable costing method treats all variable costs from the manufacturing process as part of total inventory costs.

Manufacturers use three types of costing methods: variable costing, absorption costing, and throughput costing.
Besides variable costing, there are two other methods used in manufacturing to assign costs: absorption costing and throughput costing. These methods differ in how they are used for reporting and in how they treat inventory and period costs.

Absorption costing, also called full costing, is an inventory costing method in which the accountant treats both variable manufacturing costs and fixed manufacturing costs as inventory costs. In this method, the inventory in effect "absorbs" all costs associated with the manufacturing process. Non-manufacturing costs, however, remain operating expenses rather than production costs.

Consider a company called Doll Toys, Inc. that makes high-end dolls. When Doll Toys, Inc. produces 1,000 units, its variable cost per unit is $10 in direct materials, $20 in direct labor, and $5 in variable overhead, and the company has fixed costs of $10,000. Under the absorption costing method, the company determines its product cost with the following formula:

\begin{aligned}\text{Absorption Cost per Unit}&=\text{Direct Materials per Unit}+\text{Direct Labor per Unit}+\text{Variable Overhead per Unit}+\frac{\text{Fixed Costs}}{\text{Number of Units}}\\&=\$10+\$20+\$5+\frac{\$10{,}000}{1{,}000}\\&=\$35+\$10\\&=\$45\end{aligned}

Absorption costing helps companies determine where they can save costs and increase profits. Additionally, absorption costing is the method companies must use when they calculate their taxes and publish financial statements; GAAP requires companies to use absorption costing in published financial statements. Because almost all companies use GAAP, investors, government officials, and members of the public can compare the financial performance of different companies and therefore make wiser decisions.
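The formula above is easy to script. The following sketch (the function name is hypothetical, not from the text) reproduces the Doll Toys, Inc. figure:

```python
def absorption_cost_per_unit(direct_materials, direct_labor,
                             variable_overhead, fixed_costs, units):
    """Absorption (full) cost per unit: variable cost per unit
    plus fixed manufacturing costs spread over the units produced."""
    variable_per_unit = direct_materials + direct_labor + variable_overhead
    return variable_per_unit + fixed_costs / units

# Doll Toys, Inc.: $10 DM + $20 DL + $5 VOH, $10,000 fixed over 1,000 units
cost = absorption_cost_per_unit(10, 20, 5, 10_000, 1_000)  # 45.0
```

Note how producing more units (say 2,000) lowers the per-unit figure, because the same fixed costs are spread more thinly.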
Therefore, even if managers believe that a different method of inventory costing is more helpful as they make decisions, the accountants who work with them have to use absorption costing when it comes to financial reporting.

The least popular inventory costing method is throughput costing, which is primarily used to approximate capacity. With throughput costing, only direct materials are considered inventory costs, and all other manufacturing costs are expensed as period costs when they are incurred. Throughput costing helps companies figure out how much capacity they have to produce goods or services. Some companies use throughput costing because it encourages managers to keep their inventory stores lean. When companies have fewer items of inventory in stock, they can use the money that they would have spent on that extra inventory for other, more urgent needs. Less inventory means fewer items to store, keep warm or cool, insure, and protect from damage, spoilage, or shrinkage (theft of inventory). Also, inventory items that are stored for long periods may become outdated when technology or trends change; if that happens, the company has to consider whether to sell the items at a discount, throw them away, or continue to use them even if doing so may affect profitability. If managers are able to reduce excess inventory, they may even be able to reduce the amount of warehouse space the company has, thereby saving on rent. For these reasons, it makes sense for managers to consider throughput costing, especially for businesses that spend significant amounts on inventory. Throughput costing also works well for businesses where most of the production process is completed by robots or other machines.

The three methods compare as follows:

Variable costing. Used for external reporting? No; it does not comply with GAAP. Used for internal reporting? Yes; it shows company performance and provides helpful data when making decisions and planning. Inventory costs: direct materials, direct labor, and variable overhead. Period costs: fixed manufacturing overhead and all selling, general, and administrative expenses.

Absorption costing. Used for external reporting? Yes; it is the only one of the three used in preparing financial statements for external reporting purposes in compliance with GAAP. Used for internal reporting? Yes; it shows where companies can save costs and increase their profitability. Inventory costs: direct materials, direct labor, variable overhead, and fixed manufacturing overhead. Period costs: all selling, general, and administrative expenses.

Throughput costing. Used for external reporting? No; it does not comply with GAAP. Used for internal reporting? Yes; it can help companies make capacity decisions during the fast-paced production process. Inventory costs: direct materials only. Period costs: all other manufacturing costs and all selling, general, and administrative expenses.

There are three main costing methods that companies use for different objectives: variable costing, absorption costing, and throughput costing. Each of these breaks down expenses in a different way.
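The difference between the methods can be summarized in a short sketch (hypothetical function, reusing the Doll Toys, Inc. figures from the text) showing how the per-unit inventory cost varies with the method chosen; whatever is excluded from inventory cost is treated as a period expense instead:

```python
def inventory_cost_per_unit(method, dm, dl, voh, fixed_costs, units):
    """Per-unit inventory cost under the three costing methods."""
    if method == "throughput":       # direct materials only
        return dm
    if method == "variable":         # all variable manufacturing costs
        return dm + dl + voh
    if method == "absorption":       # variable costs + allocated fixed costs
        return dm + dl + voh + fixed_costs / units
    raise ValueError(f"unknown method: {method}")

# Doll Toys, Inc.: $10 DM, $20 DL, $5 VOH, $10,000 fixed, 1,000 units
for m in ("throughput", "variable", "absorption"):
    print(m, inventory_cost_per_unit(m, 10, 20, 5, 10_000, 1_000))
# throughput 10
# variable 35
# absorption 45.0
```

The spread ($10 vs. $35 vs. $45) makes concrete why the choice of method matters for reported inventory values and income.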
Fuel saving device - Wikipedia

Fuel-saving devices are sold on the aftermarket with claims that they may improve fuel economy or exhaust emissions, or optimize ignition, air flow, or fuel flow of automobiles in some way. An early example of such a device sold with difficult-to-justify claims is the 200 mpg‑US (1.2 L/100 km) carburetor designed by Canadian inventor Charles Nelson Pogue.

The US EPA is required by Section 511 of the Motor Vehicle Information and Cost Savings Act to test many of these devices and to provide public reports on their efficacy; the agency finds most devices do not improve fuel economy to any measurable degree, unlike forced induction, water injection, intercooling, and other long-proven fuel economy technologies.[1] Tests by Popular Mechanics magazine also found that unproven types of devices yield no measurable improvements in fuel consumption or power, and in some cases actually decrease both power and fuel economy.[2] Other organizations generally considered reputable, such as the American Automobile Association and Consumer Reports, have performed studies with the same result.[3][4]

One reason that ineffective fuel-saving gadgets are popular is the difficulty of accurately measuring small changes in the fuel economy of a vehicle, given the high variance in fuel consumption under normal driving conditions. Due to selective perception and confirmation bias, the buyer of a device can perceive an improvement where none actually exists, and the observer-expectancy effect can lead a user to subconsciously alter driving habits. These biases can work for or against the device tested, depending on the biases of the individual.
For these reasons, regulatory bodies have developed standardized drive cycles for consistent, accurate testing of vehicle fuel consumption.[5] Where fuel economy does improve after the fitment of a device, it is usually due to the tune-up procedure conducted as part of the installation.[6] In older systems with distributor ignitions, device manufacturers would specify timing advance beyond that recommended by the vehicle manufacturer, which by itself could boost fuel economy while potentially increasing emissions of some combustion products, at the risk of possible engine damage.[5]

Types of devices

Accessory drive modifications

Modifying the accessory drive system can increase fuel economy and performance to some extent.[7] Underdrive pulleys modify the amount of engine power that can be drawn by accessory devices. Such alterations to the drive systems for alternators or air conditioning compressors (rather than the power steering pump, for example) can be detrimental to vehicle usability (e.g., by not keeping the battery fully charged), but will not impair safety.[8]

Fuel and oil additives

Compounds sold for addition to the vehicle's fuel may include tin, magnesium, and platinum. The claimed purpose of these is generally to improve the energy density of the fuel.[citation needed] Additives for the engine oil, sometimes marketed as "engine treatments", contain Teflon, zinc, or chlorine compounds.[9][10][11][12][13][14]

Magnets attached to a vehicle's fuel line have been claimed to improve fuel economy by aligning fuel molecules, but because motor fuels are non-polar, no such alignment or other magnetic effect on the fuel is possible.
When tested, typical magnet devices have shown no effect on vehicle performance or economy.[2]

Vapor devices

Some devices claim to improve efficiency by changing the way that liquid fuel is converted to vapor. These include fuel heaters and devices to increase or decrease turbulence in the intake manifold. They do not work on standard vehicles because the principle is already applied in the design of the engine.[15] However, this method is integral to making vegetable-oil conversions and similar heavy-oil engines run at all.[16]

Air bleed devices

Devices have been marketed which bleed a small amount of air into the fuel line before the carburetor. These may improve fuel economy because the engine runs slightly lean as a consequence. However, running leaner than the manufacturer intended can cause overheating, piston damage, loss of maximum power, and poor emissions (e.g., higher NOx due to higher combustion temperatures, or, if misfiring occurs, greater hydrocarbon emissions).

Electronic devices

Some electronic devices are marketed as fuel savers. The Fuel Doctor FD-47, for example, plugs into the vehicle's cigarette lighter and displays several LEDs. It is claimed to increase vehicle fuel economy by up to 25% through "power conditioning of the vehicle's electrical systems",[17] but Consumer Reports detected no difference in economy or power in tests on ten separate vehicles, finding that the device did nothing but light up.[18] Car and Driver magazine found that the device contains nothing but "a simple circuit board for the LED lights",[19] and independent disassembly and circuit analysis reached the same conclusion.[20] The maker disputed claims that the device has no effect[21] and proposed changes to the Consumer Reports testing procedure, which, when implemented, made no difference to the results.[22] Another device described as 'electronic' is the 'Electronic Engine Ionizer Fuel Saver'.
Testing of this device resulted in a loss of power and an engine compartment fire.[2]

There are also genuinely effective 'emissions-control defeat devices' that operate by allowing a vehicle's engine to run outside government-imposed tailpipe emissions parameters, since these standards force factory engines to operate outside their most efficient range. Either engine control units are reprogrammed to operate more efficiently,[23] or sensors that influence the ECU's operation are modified or 'simulated' to make it operate in a more efficient manner. Oxygen sensor simulators allow fuel-economy-reducing catalytic converters to be removed.[24] Such devices are often sold for "off-road use only".[24]

Thermodynamic efficiency

The reason most devices cannot produce the claimed improvements is rooted in thermodynamics. The theoretical efficiency of a petrol engine is given by[25]

\eta = 1 - \frac{1}{r_v^{\gamma - 1}}

where η is the efficiency, r_v is the compression ratio, and γ is the ratio of the specific heats of the cylinder gases. Assuming an ideal engine with no friction, perfect insulation, perfect combustion, a compression ratio of 10:1, and a γ of 1.27 (for gasoline-air combustion), the theoretical efficiency of the engine would be 46%. For example, if an automobile typically gets 20 miles per US gallon (12 L/100 km) with a 20% efficient engine that has a 10:1 compression ratio, a carburetor claiming 100 mpg‑US (2.4 L/100 km) would have to increase the efficiency by a factor of 5, to 100%. This is clearly beyond what is theoretically or practically possible. A similar claim of 300 mpg‑US (0.78 L/100 km) for any vehicle would require an engine (in this particular case) that is 300% efficient, which violates the first law of thermodynamics.
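The efficiency bound is straightforward to evaluate; a minimal sketch of the ideal Otto-cycle formula with the article's numbers:

```python
def otto_efficiency(r, gamma):
    """Ideal Otto-cycle thermal efficiency: eta = 1 - 1 / r**(gamma - 1)."""
    return 1.0 - 1.0 / r ** (gamma - 1.0)

# r = 10:1 and gamma = 1.27 (gasoline-air) give roughly 46%:
eta = otto_efficiency(10.0, 1.27)

# A typical ~20%-efficient, 20 mpg car would need 5x the efficiency,
# i.e. 100%, to reach 100 mpg -- beyond even this ideal-cycle bound.
required = 5 * 0.20
```

Since `required` (1.0) exceeds the ideal-cycle limit, the 100 mpg carburetor claim fails on thermodynamic grounds alone, before any practical losses are counted.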
Extremely efficient vehicle designs capable of achieving 100+ mpg‑US (2.4 L/100 km), such as the Volkswagen 1-litre car, do not have substantially greater engine efficiency; instead, they focus on better aerodynamics, reduced vehicle weight, and recovering energy that would otherwise be dissipated as heat during braking.

There is a debunked[26] urban legend about an inventor who creates a 100 mpg‑US (2.4 L/100 km) or even 200 mpg‑US (1.2 L/100 km) carburetor but who, after demonstrating it for the major vehicle manufacturers, mysteriously disappears; in some versions of the story, he is claimed to have been killed by the government. This fiction is thought to have started after the Canadian Charles Nelson Pogue filed in 1930 for such a device,[27] followed by others.[28][29]

The popular U.S. television show MythBusters investigated several fuel-saving devices using gasoline- and diesel-powered fuel-injected cars under controlled circumstances.[30] Fuel line magnets, which supposedly align the fuel molecules so they burn better, were tested and found to make no difference in fuel consumption. The debunked[31] notion that adding acetone to gasoline improves efficiency by making the gasoline burn more completely without damaging the plastic parts of the fuel system was also tested; although there was no apparent damage to the fuel system, the vehicle's fuel economy actually worsened. The show confirmed that a car with a carbureted gasoline engine can run on hydrogen gas alone, although the high cost of hydrogen gas as well as storage difficulties currently prohibit widespread adoption. They also tested a device that supposedly produces sufficient hydrogen to power a car by electrolysis (running an electric current through water to split its molecules into hydrogen and oxygen); although some hydrogen was produced, the amount was minuscule compared with the quantity necessary to run a car for even a few seconds.
The show also tested a carburetor that, according to its manufacturer, could improve fuel efficiency to 300 miles per US gallon (0.78 L/100 km); the device actually made the car less fuel efficient. They also determined that a diesel-powered car can run on used cooking oil, though they did not check whether it damaged the engine. The show noted that of 104 fuel efficiency devices tested by the EPA, only seven showed any improvement in efficiency, and even then the improvement was never more than six percent. The show also noted that if any of the devices they tested had actually worked to the extent claimed, the episode would have been one of the most legendary hours of television.

References

EPA Gas Saving and Emission Reduction Devices Evaluation.
Mike Allen, "Looking For A Miracle: We Test Automotive 'Fuel Savers'", Popular Mechanics, 25 August 2005.
"Things that Don't Work: A Look at Gas-Saving Gadgets", AAA AUTOgram (30), May–June 1999.
"Gas-saving devices tested", Consumer Reports, July 2010.
Jim Dunne, "Those 'gas-saving' gadgets... do they or don't they?", Popular Science, August 1974, pp. 67–68.
"At last - EPA tests reveal the truth about those gas-saving devices", Popular Science, March 1980, pp. 117–119, 182.
Better Business Bureau list of ineffective retrofit devices and glossary of terms.
"Dozen Tech Tips", AutoSpeed.
FTC lawsuit: ZMax oil additive.
FTC lawsuit: DuraLube oil additives.
FTC lawsuit: STP engine treatment.
FTC lawsuit: Slick-50 engine treatment.
FTC lawsuit: ProLong engine treatment.
FTC lawsuit: MotorUP oil additive.
"Fuel saving gadgets - a professional engineer's view".
"Straight vegetable oil as diesel fuel: Journey to Forever".
"Fuel Doctor USA's FD-47 Available Now at Best Buy", Reuters, 30 March 2010.
"Fuel Doctor FD-47 fails the Consumer Reports mpg test", Consumer Reports, 7 December 2010.
Michael Austin, "Fuel-Saving Devices Debunked: Dynamic Ionizer, Fuel Doctor FD-47, Hot InaZma Eco, Moletech Fuel Saver, Fuel Boss Magnetic Fuel Saver", Car and Driver, May 2011.
How the Fuel Doctor Works.
"Fuel Doctor Challenges Consumer Reports" (press release), Fuel Doctor USA, 10 December 2010.
"New Fuel Doctor tests: Still no MPG magic", Consumer Reports, 26 May 2011.
"Federal Settlement Targets Illegal Emission Control 'Defeat Devices' Sold for Autos", PRNewswire, 10 July 2007.
"Improving IC Engine Efficiency", University of Washington.
Snopes.com: "Nobody's Fuel".
"Episode 53: Exploding Trousers, Great Gas Conspiracy", Unofficial MythBusters episode guides, 28 May 2006.
Snopes.com: "Acetone Deaf".

External links

"Fuel saving - a professional engineer's view", fuelsaving.info, including a case study on "Vaporate (TM)".
FCIC warning on fuel-saving devices.
Jensen's inequality for monetary utility functions | Journal of Inequalities and Applications | Full Text

Jensen's inequality for monetary utility functions

Long Jiang

In this paper, we prove that Jensen's inequality holds true for all monetary utility functions with respect to certain convex or concave functions, by studying the properties of monetary utility functions, convex functions, and concave functions.

Monetary utility functions have recently attracted much attention in the mathematical finance community; see e.g. [1–4]. According to [2], a monetary utility function U can be identified with a convex risk measure ρ by the formula U(ξ) = -ρ(ξ); the convex risk measure, introduced in [5, 6], has been a popular notion, in particular since the Basel II accord. Sometimes we want not only to measure the utility of an uncertain random variable but also to estimate the utility of a function of it (see e.g. 2.2), and Jensen's inequality is a useful tool for this problem. It is well known that Jensen's inequality holds true for the classical expectation, which, viewed as an operator, can be seen as a particular monetary utility function. But with respect to some convex or concave functions, it does not hold true for all monetary utility functions, as stated in our Example 2.1. This suggests a natural question: with respect to which kinds of convex or concave functions does it hold true? In this paper, we study this question and give a sufficient and reasonable condition under which Jensen's inequality holds for all monetary utility functions.

Let (\Omega, \mathcal{F}, \mathbb{P}) be a probability space. Denote by \mathbb{L}^\infty the collection of all real-valued, essentially bounded, \mathcal{F}-measurable random variables on (\Omega, \mathcal{F}, \mathbb{P}). Definition 1.1 and Remark 1.1 below are cited from [2].
Definition 1.1. A functional U : \mathbb{L}^\infty \to \mathbb{R} is called a monetary utility function if it is concave, non-decreasing with respect to the order of \mathbb{L}^\infty, satisfies the normalization condition U(0) = 0, and has the cash-invariance property U(X + b) = U(X) + b for every X \in \mathbb{L}^\infty and b \in \mathbb{R}.

Remark 1.1. The normalization U(0) = 0 does not restrict the generality, as it may be obtained by adding a constant to U.

2 Jensen's inequality for monetary utility functions

In this section, we present our main results and give two examples. To prove the results, we need the following proposition on monetary utility functions.

Proposition 2.1. Let U : \mathbb{L}^\infty \to \mathbb{R} be a monetary utility function. Then for any k \in \mathbb{R} and X \in \mathbb{L}^\infty,

(i) U(kX) \ge k U(X), if 0 \le k \le 1;   (2.1)
(ii) U(kX) \le k U(X), if k \le 0 or k \ge 1.   (2.2)

Proof. (i) If 0 \le k \le 1, by the concavity of U we have U(kX + (1-k)Y) \ge k U(X) + (1-k) U(Y). Taking Y = 0 and using U(0) = 0, we obtain U(kX) \ge k U(X) + (1-k) U(0) = k U(X), which is (2.1).

(ii) If k \ge 1, then 0 < 1/k \le 1, so by (2.1), U((1/k) \cdot kX) \ge (1/k) U(kX), and hence

U(kX) \le k U(X).   (2.3)

If -1 \le k \le 0, then 0 \le -k \le 1, and by (2.1) we have U((-k)X) \ge (-k) U(X).
Moreover, 0=U\left(0\right)=U\left(\frac{1}{2}kX+\frac{1}{2}\left(-kX\right)\right)\ge \frac{1}{2}U\left(kX\right)+\frac{1}{2}U\left(-kX\right), so that \begin{array}{c}U\left(-kX\right)\le -U\left(kX\right),\end{array} and therefore \begin{array}{c}-kU\left(X\right)\le U\left(-kX\right)\le -U\left(kX\right),\end{array} i.e. U\left(kX\right)\le kU\left(X\right). If k ≤ -1, then -k ≥ 1. By (2.3) we have U\left(kX\right)=U\left(\left(-k\right)\left(-X\right)\right)\phantom{\rule{2.77695pt}{0ex}}\le \phantom{\rule{2.77695pt}{0ex}}\left(-k\right)U\left(-X\right). Combining the above inequality with (2.4), which is actually true for all k in ℝ, we have U\left(kX\right)\le \left(-k\right)U\left(-X\right)\le \left(-k\right)\left(-U\left(X\right)\right)=kU\left(X\right). The proof of Proposition 2.1 is complete. Now let us introduce the main results of this paper, i.e. Jensen's inequality for monetary utility functions. Theorem 2.1. Let φ be a convex function on ℝ. Suppose that for any x\in ℝ , {\phi }_{+}^{\prime }\left(x\right)\ge 0 and {\phi }_{-}^{\prime }\left(x\right)\le 1 . Then for any X\in {\mathbb{L}}^{\mathrm{\infty }} and any monetary utility function U:{\mathbb{L}}^{\mathrm{\infty }}↦ℝ , we have φ(U(X)) ≤ U(φ(X)). Proof. As stated in Definition 1.1, for every X\in {\mathbb{L}}^{\mathrm{\infty }} , U(X) is finite, i.e. U\left(X\right)\in ℝ , so {\phi }_{+}^{\prime }\left(U\left(X\right)\right)\ge 0\phantom{\rule{1em}{0ex}}\mathsf{\text{and}}\phantom{\rule{1em}{0ex}}{\phi }_{-}^{\prime }\left(U\left(X\right)\right)\le 1. Choosing \alpha \in \left[{\phi }_{-}^{\prime }\left(U\left(X\right)\right),\phantom{\rule{2.77695pt}{0ex}}{\phi }_{+}^{\prime }\left(U\left(X\right)\right)\right]\cap \left[0,1\right], based on the subgradient inequality in [7], we have \phi \left(x\right)\ge \alpha \left(x-U\left(X\right)\right)+\phi \left(U\left(X\right)\right),\phantom{\rule{2.77695pt}{0ex}}\forall x\in ℝ. By the arbitrariness of x, we have \phi \left(X\right)\ge \alpha \left(X-U\left(X\right)\right)+\phi \left(U\left(X\right)\right). 
Considering the monotonicity and cash-invariance of U together with (2.1) in Proposition 2.1, we have \begin{array}{ll}\hfill U\left(\phi \left(X\right)\right)& \ge U\left(\alpha \left(X-U\left(X\right)\right)+\phi \left(U\left(X\right)\right)\right)\phantom{\rule{2em}{0ex}}\\ =U\left(\alpha X\right)-\alpha U\left(X\right)+\phi \left(U\left(X\right)\right)\phantom{\rule{2em}{0ex}}\\ \ge \alpha U\left(X\right)-\alpha U\left(X\right)+\phi \left(U\left(X\right)\right),\phantom{\rule{2em}{0ex}}\end{array} and hence \phi \left(U\left(X\right)\right)\le U\left(\phi \left(X\right)\right).\phantom{\rule{2.77695pt}{0ex}} It is also possible to obtain Jensen's inequality for monetary utility functions with respect to certain concave functions (Theorem 2.2) and prove it by the subgradient inequality in [7] and (2.2) in Proposition 2.1. As the proof is very similar to that of Theorem 2.1, we omit it and just state the result. Theorem 2.2. Let ψ be a concave function on ℝ. Suppose that for any x\in ℝ , {\psi }_{+}^{\prime }\left(x\right)\le 0 or {\psi }_{-}^{\prime }\left(x\right)\ge 1 . Then for any X\in {\mathbb{L}}^{\mathrm{\infty }} and any monetary utility function U:{\mathbb{L}}^{\mathrm{\infty }}↦ℝ , we have ψ(U(X)) ≥ U(ψ(X)). Let us now illustrate the reasonableness of the conditions on the convex and concave functions in Theorems 2.1 and 2.2 through an example. In fact, Jensen's inequality is usually not true for all monetary utility functions even when the related convex or concave function is linear. Example 2.1. Let φ(x) = kx + a (k < 0 or k > 1) and ψ(x) = hx + b (0 < h < 1); obviously φ (respectively ψ) is a convex (respectively concave) function on ℝ that does not satisfy the condition in Theorem 2.1 (respectively Theorem 2.2). We consider a particular monetary utility function U, the entropic utility function in [4], which is defined as U\left(X\right):=-\mathsf{\text{ln}}\mathbb{E}\left[\mathsf{\text{exp}}\left(-X\right)\right]. 
We choose X\in {\mathbb{L}}^{\mathrm{\infty }} with ℙ\left\{X=0\right\}=ℙ\left\{X=1\right\}=0.5 . It is easy to check that U(kX) < kU(X) and U(hX) > hU(X). Then we have \begin{array}{c}\phi \left(U\left(X\right)\right)=kU\left(X\right)+a>U\left(kX\right)+a=U\left(kX+a\right)=U\left(\phi \left(X\right)\right),\\ \psi \left(U\left(X\right)\right)=hU\left(X\right)+b<U\left(hX\right)+b=U\left(hX+b\right)=U\left(\psi \left(X\right)\right).\end{array} Thus Jensen's inequality does not hold, so the conditions in Theorems 2.1 and 2.2 are reasonable. At the end of this paper, let us discuss an application of Jensen's inequality for monetary utility functions. Example 2.2. We still consider the entropic utility function. Sometimes, we want to estimate the entropic utility of X+ or -X- using the entropic utility of the future outcome X. For this kind of problem, Jensen's inequality is a useful tool. Let φ(x) = x+ and ψ(x) = -x-; then φ is a convex function satisfying the condition in Theorem 2.1 and ψ is a concave function satisfying the condition in Theorem 2.2. By Theorems 2.1 and 2.2 we have \begin{array}{c}-\mathsf{\text{ln}}\mathbb{E}\left[\mathsf{\text{exp}}\left(-{X}^{+}\right)\right]\ge {\left(-\mathsf{\text{ln}}\mathbb{E}\left[\mathsf{\text{exp}}\left(-X\right)\right]\right)}^{+},\\ -\mathsf{\text{ln}}\mathbb{E}\left[\mathsf{\text{exp}}\left(-\left(-{X}^{-}\right)\right)\right]\le -{\left(-\mathsf{\text{ln}}\mathbb{E}\left[\mathsf{\text{exp}}\left(-X\right)\right]\right)}^{-}.\end{array} Delbaen F, Peng S, Rosazza Gianin E: Representation of the penalty term of dynamic concave utilities. Finance and Stochastics 2010, 14(3):449–472. 10.1007/s00780-009-0119-7 Jouini E, Schachermayer W, Touzi N: Optimal risk sharing for law invariant monetary utility functions. Mathematical Finance 2008, 18(2):269–292. 10.1111/j.1467-9965.2007.00332.x Filipović D, Kupper M: Equilibrium Prices for Monetary Utility Functions. 
International Journal of Theoretical and Applied Finance (IJTAF) 2008, 11(3):325–343. 10.1142/S0219024908004828 Acciaio B: Optimal risk sharing with non-monotone monetary functionals. Finance and Stochastics 2007, 11:267–289. 10.1007/s00780-007-0036-6 Föllmer H, Schied A: Convex measures of risk and trading constraints. Finance and Stochastics 2002, 6(4):429–447. 10.1007/s007800200072 Frittelli M, Rosazza Gianin E: Putting order in risk measures. Journal of Banking & Finance 2002, 26(7):1473–1486. [http://www.geocities.ws/smhurtado/RiskMeasures.pdf] Rockafellar RT: Convex Analysis. Princeton: Princeton University Press; 1970. The authors would like to thank the referees for their detailed comments and valuable suggestions. This research was supported by the National Natural Science Foundation of China (No. 10971220), the FANEDD (No. 200919), and the National Undergraduate Innovation Experiment Project of China (No. 091029030). College of Sciences, China University of Mining & Technology, Xuzhou, Jiangsu, 221116, People's Republic of China Jing Liu & Long Jiang JL studied the properties of the functions, carried out the proof of the results, and drafted the manuscript. LJ conceived of the study, designed the research method, and helped to draft the manuscript. All authors read and approved the final manuscript. Liu, J., Jiang, L. Jensen's inequality for monetary utility functions. J Inequal Appl 2012, 128 (2012). https://doi.org/10.1186/1029-242X-2012-128 Keywords: monetary utility functions
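The counterexample in Example 2.1 is easy to verify numerically. The sketch below is not part of the paper; the values k = 2 and h = 0.5 are arbitrary choices within the stated ranges (k > 1 and 0 < h < 1), and the two-point distribution ℙ{X=0} = ℙ{X=1} = 0.5 is the one from the example. It checks that the entropic utility U(X) = -ln E[exp(-X)] satisfies U(kX) < kU(X) and U(hX) > hU(X), so the linear functions φ and ψ indeed violate Jensen's inequality.

```python
import math

def entropic_utility(outcomes, probs):
    """Entropic utility U(X) = -ln E[exp(-X)] of a discrete random variable X."""
    return -math.log(sum(p * math.exp(-x) for x, p in zip(outcomes, probs)))

# X takes values 0 and 1, each with probability 0.5, as in Example 2.1
xs, ps = [0.0, 1.0], [0.5, 0.5]
k, h = 2.0, 0.5  # illustrative: k > 1 violates Theorem 2.1's condition, 0 < h < 1 violates Theorem 2.2's

U_X = entropic_utility(xs, ps)
U_kX = entropic_utility([k * x for x in xs], ps)
U_hX = entropic_utility([h * x for x in xs], ps)

# Strict inequalities claimed in Example 2.1
print(U_kX < k * U_X)  # U(kX) < kU(X)
print(U_hX > h * U_X)  # U(hX) > hU(X)
```

The same check succeeds for any k > 1 or k < 0 and any 0 < h < 1, which is what Example 2.1 asserts for this distribution.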
Global Attractors in for Nonclassical Diffusion Equations Qiao-zhen Ma, Yong-feng Liu, Fang-hong Zhang, "Global Attractors in for Nonclassical Diffusion Equations", Discrete Dynamics in Nature and Society, vol. 2012, Article ID 672762, 16 pages, 2012. https://doi.org/10.1155/2012/672762 Qiao-zhen Ma,1 Yong-feng Liu,1 and Fang-hong Zhang1 1College of Mathematics and Information Science, Northwest Normal University, Gansu, Lanzhou 730070, China We study the existence of global attractors for nonclassical diffusion equations in . The nonlinearity satisfies arbitrary-order polynomial growth conditions. In this paper, we investigate the long-time behavior of the solutions of the following nonclassical diffusion equations: with the initial data where , and the nonlinearity satisfies () , and , () , and , and () , where , , , , , and are all positive constants. Moreover, without loss of generality, we also assume . In 1980, Aifantis [1–3] pointed out that the classical reaction-diffusion equation does not capture every aspect of the reaction-diffusion problem: it neglects the viscidity, elasticity, and pressure of the medium in the process of solid diffusion, and so forth. Furthermore, Aifantis found that the energy constitutive equation describing the diffusion process differs according to the properties of the diffusing solid. For example, the energy constitutive equation differs depending on whether or not the conductive medium has pressure and viscoelasticity. Through some concrete examples, he constructed a mathematical model that accounts for the viscidity, elasticity, and pressure of the medium, namely the following nonclassical diffusion equation: This equation is a special form of the nonclassical diffusion equation used in fluid mechanics, solid mechanics, and heat conduction theory (see [1–4]). Recently, Aifantis presented a new model for this problem and scrutinized the concrete process of constructing it; the reader may refer to [5] for details. 
The long-time behavior of (1.1) on a bounded domain has been extensively studied by several authors in [6–13] and the references therein. In [12] the existence of a global attractor for the autonomous case was shown provided that the nonlinearity is critical and . Furthermore, for the non-autonomous case, the existence of a uniform attractor and of exponential attractors was established when the time-dependent forcing term is only translation bounded rather than translation compact, namely, . A similar problem was discussed in [13] by virtue of the standard method based on the so-called squeezing property. To the best of our knowledge, the dynamics of (1.1) on an unbounded domain has not been considered before. As we know, to prove the existence of global attractors, the key point is to obtain the compactness of the semigroup in some sense. For bounded domains, the compactness is obtained by a priori estimates and the compactness of Sobolev embeddings. This method does not apply to unbounded domains, since the embeddings are no longer compact. To overcome the difficulty of the noncompact embedding, in [14], using the idea of Ball [15], the author proved that the solutions are uniformly small for large space and time variables and then showed that weak asymptotic compactness is equivalent to strong asymptotic compactness in certain circumstances. In [16], the authors provided new a priori estimates for the existence of global attractors in unbounded domains and then applied this approach to a nonlinear reaction-diffusion equation with a nonlinearity of arbitrary polynomial growth order and with distribution derivatives in the inhomogeneous term. More recently, in [17] the authors established the existence of global attractors for reaction-diffusion equations in , by using the methods presented in [18]. 
Our purpose in this paper is to study the existence of global attractors of (1.1) on the unbounded domain , borrowing the ideas of [17, 18]. Our main result is Theorem 4.6. This paper is organized as follows. In Section 2, we recall some basic definitions and related theorems that will be used later. In Section 3, we prove the existence of weak solutions for nonclassical diffusion equations in . The main result is stated and proved in Section 4. In this section, we recall some notation and abstract results. Definition 2.1 (see [18]). Let be a metric space, and let be bounded subsets of . The Kuratowski measure of noncompactness of is defined by Definition 2.2 (see [18]). Let be a Banach space, and let be a family of operators on . We say that is a continuous semigroup ( semigroup) (or norm-to-weak continuous semigroup) on if satisfies (i) (the identity), (ii) for all , (iii) if in (or , if in ). Definition 2.3 (see [18]). A semigroup (or norm-to-weak continuous semigroup) in a complete metric space is called -limit compact if for every bounded subset of and for every , there is a such that Condition C (see [18]). For any bounded set of a Banach space , there exist a and a finite-dimensional subspace of such that is bounded and where is a bounded projector. Lemma 2.4 (see [18]). Let be a Banach space, and let be a semigroup (or norm-to-weak continuous semigroup) in . (1) If Condition C holds, then is -limit compact. (2) Let be a uniformly convex Banach space. Then is -limit compact if and only if Condition C holds. Lemma 2.5 (see [18]). Let be a Banach space, and let be a semigroup (or norm-to-weak continuous semigroup) in . (1) If Condition C holds, then is -limit compact. (2) Let be a uniformly convex Banach space. Then is -limit compact if and only if Condition C holds. Theorem 2.6 (see [18]). Let be a Banach space. 
Then the semigroup (or norm-to-weak continuous semigroup) has a global attractor in if and only if (1) there is a bounded absorbing set , and (2) is -limit compact. Lemma 2.7 (see [19]). Let be an absolutely continuous positive function on which satisfies, for some , the differential inequality for almost every , where and are functions on such that for some and , and for some . Then for some and Lemma 2.8 (see [20]). Let be Banach spaces, with reflexive. Suppose that is a sequence that is uniformly bounded in , and is uniformly bounded in , for some . Then there is a subsequence that converges strongly in . 3. Unique Weak Solution Theorem 3.1. Assume , , and are satisfied. Then for any and , there is a unique solution u of (1.1)-(1.2) such that Moreover, the solution depends continuously on the initial data. Proof. We decompose our proof into three steps for clarity. Step 1. For any , we consider the existence of the weak solution for the following problem in : Choose a smooth function with Since is a bounded domain, the existence and uniqueness of solutions can be obtained by the standard Faedo-Galerkin method; see [6, 8, 11, 16]. We have the unique weak solution Step 2. According to Step 1, we denote ; then satisfies For the mathematical setting of the problem, we denote , , , . Multiplying (3.5) by in , and using , and , we have By the Poincaré inequality, for some , we conclude that Hence, it follows that We get the following estimate: Similarly to (3.9), using , , and , we get where is independent of . and yield Choose such that ; then . Noting that , we have , and hence the embedding . According to (3.12) and (3.13), we get where is independent of . Thanks to (3.14), is bounded in , and is bounded in . For , where is independent of . We can conclude that is bounded in . Since , Therefore, there exists , such that , , , and are continuously embedded in . 
According to (3.5) and (3.14)–(3.16), we obtain So has a subsequence (still denoted by ) converging weak* to in and ; has a subsequence (still denoted by ) converging weak* to . Let outside of ; then we can extend to . As noted in [6, 20], is dense in the dual space of , , , and , so we can choose for all as a test function such that Since for all , there exists a bounded domain such that , . It follows that is uniformly bounded in , and . Since , according to Lemma 2.8, there is a subsequence (still denoted by ) that converges strongly to in . Using the continuity of and , we have In line with (3.18) and (3.19), letting , we get, for all : Thus, is a weak solution of (3.2) and satisfies Step 3 (uniqueness and continuous dependence). Let , be in , and set ; then satisfies Taking the inner product of (3.22) with , and using , , and , we obtain By the Gronwall Lemma, we get This gives uniqueness and continuous dependence on the initial conditions. Thanks to Theorem 3.1, letting , is a semigroup. 4. Global Attractor in Lemma 4.1. Assume , , and are satisfied. Then there is a positive constant such that for any bounded subset , there exists such that Proof. Multiplying (1.1) by in , and using , and , we have By virtue of the Poincaré inequality, for some , there holds Furthermore, By the Gronwall Lemma, we get This completes the proof. According to Lemma 4.1, we know that is a compact absorbing set for the semigroup of operators generated by (1.1)-(1.2), , , and . Lemma 4.2. Assume , , and hold. Then for any and , there are some and such that whenever and . Proof. Choose a smooth function with where , , and there is a constant such that . Multiplying (1.1) by and integrating over , we obtain Next we treat the terms on the right-hand side of (4.9) one by one: According to the condition and the bounded absorbing set in for , it follows that where is independent of . 
For any given , let Hence, combining (4.10) with (4.11), when , we conclude that Using and , this yields Since , there exist , such that Then From the assumption , there exist , such that Thus, combining (4.9), (4.13), (4.16), and (4.17), we finally obtain Furthermore, there holds According to Lemma 2.7, we obtain Thus, we get provided and ; this completes the proof. Lemma 4.3. Assume , , and hold. Then there is a positive constant such that for any bounded subset , there exists such that Proof. Multiplying (1.1) by in , we find Using , , and , we have the following estimates: Together with (4.6) and (4.19)–(4.21), by the Poincaré inequality, for some , this yields By the Gronwall Lemma, we get This completes the proof. Remark 4.4. There is a constant such that for any bounded subset , when , there holds Lemma 4.5. Assume , , and are satisfied. Then the semigroup associated with the initial value problem (1.1)-(1.2) is -limit compact. Proof. Denote , and split as where is a smooth function: with , and there is a positive constant such that . Then From Lemma 4.1, we know that as . For any given , we can choose large enough; by Remark 4.4, we can assume So we conclude that For any bounded set , can be split as Then, in line with the properties of the measure of noncompactness, it follows that On the other hand, From Lemma 4.3, we get Recall that On account of Remark 4.4, it follows that Therefore, we have That is, is -limit compact in . Theorem 4.6. Assume , , and hold. Then the semigroup associated with the initial value problem (1.1)-(1.2) has a global attractor in . The authors would like to thank the referee for the careful reading of the paper and for his or her many vital comments and suggestions. This work was partly supported by the NSFC (11061030, 11101334) and the NSF of Gansu Province (1107RJZA223), and in part by the Fundamental Research Funds for the Gansu Universities. K. Kuttler and E. C. 
Aifantis, “Existence and uniqueness in nonclassical diffusion,” Quarterly of Applied Mathematics, vol. 45, no. 3, pp. 549–560, 1987. K. Kuttler and E. Aifantis, “Quasilinear evolution equations in nonclassical diffusion,” SIAM Journal on Mathematical Analysis, vol. 19, no. 1, pp. 110–120, 1988. J. L. Lions and E. Magenes, Non-homogeneous Boundary Value Problems and Applications, Springer, Berlin, Germany, 1972. V. K. Kalantarov, “Attractors for some nonlinear problems of mathematical physics,” Zapiski Nauchnykh Seminarov Leningradskogo Otdeleniya Matematicheskogo Instituta imeni V. A. Steklova Akademii Nauk SSSR (LOMI), vol. 152, pp. 50–54, 1986. Y.-l. Xiao, “Attractors for a nonclassical diffusion equation,” Acta Mathematicae Applicatae Sinica, vol. 18, no. 2, pp. 273–276, 2002. C. Y. Sun, S. Y. Wang, and C. K. Zhong, “Global attractors for a nonclassical diffusion equation,” Acta Mathematica Sinica, vol. 23, no. 7, pp. 1271–1280, 2007. Q.-Z. Ma and C.-K. Zhong, “Global attractors of strong solutions to nonclassical diffusion equations,” Journal of Lanzhou University, vol. 40, no. 5, pp. 7–9, 2004. S. Y. Wang, D. S. Li, and C. K. Zhong, “On the dynamics of a class of nonclassical parabolic equations,” Journal of Mathematical Analysis and Applications, vol. 317, no. 2, pp. 565–582, 2006. R. Temam, Infinite-Dimensional Dynamical Systems in Mechanics and Physics, Springer, New York, NY, USA, 1997. C. Y. Sun and M. H. Yang, “Dynamics of the nonclassical diffusion equations,” Asymptotic Analysis, vol. 59, no. 1-2, pp. 51–81, 2008. Y.-F. Liu and Q. Ma, “Exponential attractors for a nonclassical diffusion equation,” Electronic Journal of Differential Equations, vol. 2009, pp. 1–7, 2009. B. Y. Wang, “Attractors for reaction-diffusion equations in unbounded domains,” Physica D, vol. 128, no. 1, pp. 41–52, 1999. J. M. Ball, “Global attractors for damped semilinear wave equations,” Discrete and Continuous Dynamical Systems A, vol. 10, no. 1-2, pp. 31–52, 2004. C.-Y. Sun and C.-K. Zhong, “Attractors for the semilinear reaction-diffusion equation with distribution derivatives in unbounded domains,” Nonlinear Analysis, vol. 63, no. 1, pp. 49–65, 2005. Y. Zhang, C. Zhong, and S. Wang, “Attractors in {L}^{2}\left({ℝ}^{N}\right) for a class of reaction-diffusion equations,” Nonlinear Analysis, vol. 71, no. 5-6, pp. 1901–1908, 2009. Q. F. Ma, S. H. Wang, and C. K. Zhong, “Necessary and sufficient conditions for the existence of global attractors for semigroups and applications,” Indiana University Mathematics Journal, vol. 51, no. 6, pp. 1541–1559, 2002. V. Pata and M. Squassina, “On the strongly damped wave equation,” Communications in Mathematical Physics, vol. 253, no. 3, pp. 511–533, 2005. J. C. Robinson, Infinite-Dimensional Dynamical Systems: An Introduction to Dissipative Parabolic PDEs and the Theory of Global Attractors, Cambridge University Press, Cambridge, UK, 2001. Copyright © 2012 Qiao-zhen Ma et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
A Step-by-Step Guide on How to Calculate Standard Deviation | Outlier Standard Deviation: An Introduction Standard deviation ( \sigma or s) is a parameter or statistic that measures the spread of data relative to its mean. It is always positive or zero. A large standard deviation indicates that the data are widely spread out around the mean, while a smaller standard deviation indicates that the data are more tightly clustered around the mean. A standard deviation of zero occurs only when all values in the data are equal, and hence there is no variation in the data. As a quick example, let’s look at two sets of data represented in the graphs below. The chart on the left shows the distribution of weights for adult males in America (aged 20 and above). The mean weight is 197 pounds, and the standard deviation is 42 pounds. The weights vary widely depending on factors such as age, height, fitness, and diet. In contrast, the graph on the right shows the distribution of weights for participants in the 2021 Tour de France (the Tour de France is a world-famous annual cycling event for professional male cyclists). Here, the mean weight is 151 pounds, and the standard deviation is 15 pounds. The difference in the mean and standard deviation between the two groups makes sense if you think about it. Cyclists tend to be lean (a lot thinner than the average American male), and they tend to be of similar stature to one another. There is less variation in the weights of professional cyclists than there is in the weights of the entire adult male population in the United States. The smaller standard deviation (15 pounds rather than 42 pounds) reflects this. The standard deviation for a population is often denoted by the Greek letter sigma, \sigma . When calculated for a sample, the standard deviation is often denoted by the lower-case letter s. You may also see standard deviation abbreviated as SD or STDEV. 
The standard deviation tells us, on average, how far away a single data point is from the mean, but any particular observation in your data may be more or less than one standard deviation away from the mean. We can describe the distance between any point in a data set and the data set’s mean in terms of standard deviation units. In the case of the Tour de France cyclists, the mean was 151 lbs, and the standard deviation was 15 lbs. Any particular cyclist’s weight, however, can be described in standard deviation units as follows: A cyclist weighing 166 lbs has a weight that is one standard deviation (1 \sigma ) above the mean (166 = 151 + 15) A cyclist weighing 121 lbs has a weight that is 2 \sigma below the mean (121 = 151 - (2)(15)) A cyclist weighing 173 lbs has a weight that is 1.47 \sigma above the mean (173 = 151 + (1.47)(15)) A cyclist weighing 109 lbs has a weight that is 2.8 \sigma below the mean (109 = 151 - (2.8)(15)) Standard deviation is a useful measure in that it tells you right away whether data are widely dispersed or tightly clustered around the mean. As you have just seen, the standard deviation also provides an easy way to describe the distance between any particular point in the data and the mean of the data. For most data sets, the majority of observations will fall within one standard deviation of the mean. As a rule of thumb, observations that lie more than two standard deviations away from the mean are considered "far" from the mean. Knowing this allows statisticians to understand their data better. For example, if someone told you that German cyclist Max Walscheid weighs 202 lbs, you might not have a good sense of how his weight compares to other cyclists unless you know a lot about cycling. However, knowing that his weight is 3.4 \sigma above the mean gives you an immediate understanding that Walscheid’s weight is far above the average weight of his competitors. He is an outlier. 
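The standard-deviation-unit calculations above amount to computing z-scores, (x - mean) / sd. A minimal Python sketch, using the Tour de France mean and standard deviation quoted in the text (the function name `z_score` is our own):

```python
mean, sd = 151, 15  # Tour de France cyclist weights from the text (lbs)

def z_score(x, mean, sd):
    """Distance of x from the mean, expressed in standard deviation units."""
    return (x - mean) / sd

# The five cyclist weights discussed in the text
for weight in (166, 121, 173, 109, 202):
    print(f"{weight} lbs -> {z_score(weight, mean, sd):+.2f} sigma")
```

Running this reproduces the values above: +1.00, -2.00, +1.47, -2.80, and +3.40 sigma for Walscheid.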
Standard deviations are particularly useful when it comes to describing data that are normally distributed. A normal distribution, if you are not already familiar, is a bell-shaped distribution that is unimodal and symmetric about the mean. Because the normal distribution is symmetric about the mean, we can make precise statements about the proportion of observations that lie within certain segments of the distribution. We do this using the empirical rule. The empirical rule (or the 68-95-99.7 rule) states that: 68% of all observations in a normal distribution lie within one standard deviation of the mean ( \mu ± \sigma ), 95% of all observations lie within two standard deviations of the mean ( \mu ± 2 \sigma ), and 99.7% of all observations lie within three standard deviations of the mean ( \mu ± 3 \sigma ). In other words, most observations in a normal distribution lie within one standard deviation of the mean, and hardly any of the observations lie beyond three standard deviations of the mean. Observations that are further than 3 \sigma from the mean make up just 0.3% of the data (100% - 99.7%). The empirical rule can be applied even when data are only approximately normal. Because a normal distribution can approximate so many different types of data, the empirical rule comes in quite handy! Using the empirical rule, we know that 68% of observations in a normal distribution lie within 1 \sigma of the mean, 95% of observations are within 2 \sigma of the mean, and 99.7% of observations are within 3 \sigma of the mean. Because a normal distribution is symmetric about the mean, we can further divide the areas under a normal distribution to find probabilities for smaller segments of the distribution (see the figure below). For example, we can further state that 34% of observations are between the mean and one standard deviation above the mean, or that just 2.35% of data are between \mu + 2 \sigma and \mu + 3 \sigma . The formulas for calculating standard deviation are below. 
\sigma = \sqrt{\frac{\sum(x_{i}-\mu)^{2}}{N}} s = \sqrt{\frac{\sum(x_{i}-\bar{x})^{2}}{n-1}} The formula on the left is the formula for the population standard deviation, \sigma , and the formula on the right is for the sample standard deviation, s. Notice that the calculations are essentially the same. The only real difference is in the denominator. For the population parameter, \sigma , you divide by the population size, N, and for the sample statistic, s, you divide by the sample size minus one (n-1). These formulas may look messy, but if you look closely, ignoring for a moment the square root sign, you’ll notice that what you are essentially calculating is an average of the squared distances between each data point and the mean of the data ( (x_i-\mu)^2 in the case of the population and (x_i-\bar{x})^2 in the case of the sample). The \Sigma symbol, if you are not familiar with it, is a summation sign. It means “take the sum of,” so you are summing up all of the squared distances between the data points and the mean and dividing by the total number of data points N (or the total number of data points minus one in the case of the sample standard deviation). We'll explain why we first square and then take the square root of these distances later in this article. A chart comparing Population Standard Deviation with Sample Standard Deviation Steps for Calculating Standard Deviation Below is a step-by-step example of how to calculate a standard deviation. To keep things simple, we’ll use a sample with just 10 data points (n=10). X = {2, 8, 10, 13, 17, 17, 19, 21, 23, 30} The steps for calculating the standard deviation are listed below, and they are also shown in the following table. Step 1. Calculate the mean of your data, \bar{x} Step 2. Find the squared distances between each data point and the mean. Step 3. Sum up all of the squared distances from Step 2, \Sigma(x_i-\bar{x})^2 Step 4. Divide the sum from Step 3 by the sample size, n, minus 1. Remember! 
If you are calculating a standard deviation for a population, this step is a bit different. You should divide by the population size, N, rather than n-1. We use n-1 only when calculating a sample standard deviation, in order to get a closer approximation of the population standard deviation. Step 5. Final Step. Take the square root of the value you found in Step 4. Voila! The sample standard deviation s = 8.069. A table that lists the steps for calculating standard deviation Calculating standard deviation using statistical software While it’s important to understand how to calculate standard deviations by hand, statisticians rarely ever do so in practice. Calculating standard deviations by hand can take a lot of time and lead to many errors, especially when dealing with large data sets. Fortunately, it’s incredibly easy to calculate standard deviations using statistical software. Below are some examples of the software and commands you can use. In Excel or Google Sheets, use the formula =STDEV(). The list of your data should be included inside the parentheses. Let’s say your data has ten values in cells A1 through A10; the formula would then be =STDEV(A1:A10). In Desmos, the command for standard deviation is also stdev(). In R and Stata, the command is sd(). For each of these commands, you should include a list of your data or the name of your variable inside the parentheses. For practice, try calculating the standard deviation from the example above using one of these options. See if you get the same answer. Standard Deviations and Variance One last thing to note about standard deviations: If you are familiar with variance, you may have noticed that the standard deviation is just the square root of the variance. If you’re not familiar with variance, variance is another measure of spread that is calculated using the following formulas: \sigma^2 = \frac{\sum(x_{i}-\mu)^{2}}{N} and s^2 = \frac{\sum(x_{i}-\bar{x})^{2}}{n-1} . Looking at these equations, you may be asking yourself why it’s necessary to have both measures. 
Why do we calculate standard deviations when we already have variance? The answer to this question lies in the name. The standard deviation is a standardized measure of spread. When calculating the variance, we square the distance between each data point and the mean, (x_i-\mu)^2 . We do this so that the distances for values below the mean don't cancel out distances for the values above the mean. The only problem with doing this is that the squared distances are in units that can't be directly interpreted. Think back to the example we started with: the weight of Tour de France participants. The unit of the data is pounds (lbs). If we square the distance between each cyclist's weight and the mean weight, we can find the variance using the formula presented above, but the variance is in units that are pounds squared. This makes the variance hard to interpret. When we calculate the standard deviation, we take the square root of the variance to get our measure of spread back into interpretable units, back into pounds rather than pounds squared!
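The five calculation steps above, and the square-root relationship between standard deviation and variance, can be sketched in Python using the ten-point sample from the worked example:

```python
import math

# the ten-point sample from the worked example
data = [2, 8, 10, 13, 17, 17, 19, 21, 23, 30]

# Step 1: the sample mean
x_bar = sum(data) / len(data)                    # 16.0

# Step 2: squared distances between each data point and the mean
sq_devs = [(x - x_bar) ** 2 for x in data]

# Steps 3-4: sum the squared distances and divide by n - 1 (the sample variance)
variance = sum(sq_devs) / (len(data) - 1)

# Step 5: the sample standard deviation is the square root of the variance
s = math.sqrt(variance)

print(round(s, 3))  # 8.069, matching the worked example
```

For a population standard deviation, the only change is dividing by len(data) instead of len(data) - 1.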
A Complete Guide to Understanding Second Order Differential Equations | Outlier In this article, we'll discuss the definition of second order differential equations and the difference between homogeneous and nonhomogeneous differential equations. Then, we'll learn how to solve second order differential equations. Understanding Second Order Differential Equations What is the Difference Between First Order and Second Order Differential Equations? How to Solve Homogeneous Linear Second Order Differential Equations What is a second order differential equation? A differential equation is an equation that involves an unknown function and its derivatives. The general equation for a linear second order differential equation is: P(x)y'' + Q(x)y' + R(x)y = G(x) Here, y'' indicates the second derivative of y with respect to x, y' indicates the first derivative of y with respect to x, and P(x), Q(x), and R(x) are each functions of x. We can call this equation linear because y and its derivatives appear only to the first power, each with a function of x alone as its coefficient. Altogether, these terms are equal to G(x), another function of x. It can be helpful to verbalize exactly what we're solving for. When we solve the differential equation P(x)y'' + Q(x)y' + R(x)y = G(x), we're solving for a function or set of functions y that satisfy the following condition: the product of some function P(x) and the second derivative y'', plus the product of some function Q(x) and the first derivative y', plus the product of R(x) and y, is equal to another function G(x). You might also see the differential equation P(x)y'' + Q(x)y' + R(x)y = G(x) written using Leibniz's notation, like this: P(x)\frac{d^2y}{dx^2} + Q(x)\frac{dy}{dx} + R(x)y = G(x) We will assume that the functions P(x), Q(x), and R(x) are constants. 
Then our equation looks like this, where a, b, and c are constants: ay'' + by' + cy = G(x) A differential equation is an equation that involves an unknown function and its derivatives. What is the order of a differential equation? The order of a differential equation is determined by the highest order derivative contained in the equation. For example: In \frac{dy}{dx} + 2xy = x , the highest derivative is \frac{dy}{dx} . So, this equation is a first order differential equation. In y'' + 5y' + 6y = 0 , the highest derivative is y'' . So, this equation is a second order differential equation. In y^{(3)} + 3xy' - 5y = e^x , the highest derivative is y^{(3)} . So, this equation is a third order differential equation. How to Solve Homogeneous Linear Second Order Differential Equations? Next, we'll learn how to find the general solution of homogeneous linear second order differential equations. To do this, we'll first compare the difference between homogeneous and nonhomogeneous differential equations. Take a look again at our general equation for a linear second order differential equation with constant coefficients: ay'' + by' + cy = G(x) If G(x) \neq 0 for some x, then the equation is called nonhomogeneous. For example, y'' + y' - 2y = \sin{(x)} is a nonhomogeneous differential equation because \sin{(x)} \neq 0 for some x. Once more, here's our general equation for a linear second order differential equation with constant coefficients: ay'' + by' + cy = G(x) If G(x) = 0 for all x and a \neq 0, then the equation is called homogeneous. Now, our equation looks like this: ay'' + by' + cy = 0 We want to find the function (or set of functions) y that satisfies this equation. To solve for this function, it might be helpful to ask yourself this question: for what function is the product of a constant and the function's second derivative, plus the product of another constant and the function's first derivative, plus the product of a third constant and the function, equal to 0? For each equation of this type, we can write its corresponding characteristic equation. 
This equation, also known as the auxiliary equation of ay'' + by' + cy = 0, is: ar^2 + br + c = 0 Finding the roots of this equation helps us to find the general solution of the differential equation. The roots of an equation are the x-intercepts or "zeros." We can often find the roots of an equation by factoring. When factoring doesn't work, we can use the quadratic formula to find the roots r_1 and r_2: r=\frac{-b\pm\sqrt{b^2-4ac}}{2a} The quantity b^2-4ac is called the discriminant. Finding the roots r_1 and r_2 of the auxiliary equation, and determining the sign of the discriminant, provides us with the general solution to the differential equation. Using the table above, here are 3 simple steps to finding the general solution for a homogeneous linear second order differential equation: Write the auxiliary equation ar^2 + br + c = 0. Find the roots of the auxiliary equation by factoring or using the quadratic formula. Based on the sign of the discriminant, use the table above to plug the roots into the general solution formula. Let's try to understand why the exponential function plays a large role in determining the general solution. Consider the function f(x) = e^{rx} for some constant r. Its derivative f'(x) = re^{rx} is simply a constant multiple of f(x) itself. Its second derivative is f''(x) = r^2e^{rx} , which is again a constant multiple of the function f(x). To see how this plays out in finding the general solution, let's take a simple example of a homogeneous linear differential equation, y'' - y = 0 . We can rewrite this equation as y'' = y . This equation is asking us, "what function is equal to its second derivative?" y = e^x satisfies that question, as does Ce^x or Ce^{-x} for any constant C. We can verify this by finding the auxiliary equation using a = 1, b = 0, and c = -1: r^2 - 1 = 0 . By factoring this equation as (r-1)(r+1) = 0, we find that the roots are r_1 = 1 and r_2 = -1 . 
Then, the general solution to y'' - y = 0 is y = C_1e^x + C_2e^{-x} So, the exponential function plays a big role in answering the question, "what is the general solution of a second order differential equation?" Example 1 - Distinct Real Roots Solve y'' + 3y' - 10y = 0 The auxiliary equation for y'' + 3y' - 10y = 0 is r^2 + 3r - 10 = 0 . Now, we can factor this equation to find: (r+5)(r-2) = 0 r_1 = -5, r_2 = 2 We have two distinct real roots, and our discriminant b^2 - 4ac > 0 . Using the table, we can now plug our roots into the general solution formula y = c_1e^{r_1x} + c_2e^{r_2x} So, our general solution is: y = c_1e^{-5x} + c_2e^{2x} Example 2 - Repeated Roots Solve y'' - 6y' + 9y = 0 The auxiliary equation for y'' - 6y' + 9y = 0 is r^2 - 6r + 9 = 0 Now, we can factor this equation to find: (r-3)(r-3) = 0 (r-3)^2 = 0 r = 3 Here r = 3 is a repeated root, and our discriminant b^2 - 4ac = 0 . Using the table, we can now plug our root into the general solution formula y = c_1e^{rx} + c_2xe^{rx} So, our general solution is: y = c_1e^{3x} + c_2xe^{3x} Example 3 - Complex Roots Solve y'' - 4y' + 13y = 0 The auxiliary equation for y'' - 4y' + 13y = 0 is r^2 - 4r + 13 = 0 . Now, using the quadratic formula, let's find the roots of this equation using a = 1, b = -4, and c = 13: r=\frac{-b\pm\sqrt{b^2-4ac}}{2a} r=\frac{4\pm\sqrt{4^2-4(1)(13)}}{2(1)} r=\frac{4\pm\sqrt{16-52}}{2} r=\frac{4\pm\sqrt{-36}}{2} At this point, notice that our discriminant is b^2-4ac = -36 < 0 . This means that we will have two complex roots. As we continue, remember that i = \sqrt{-1} : r=\frac{4\pm\sqrt{36}\cdot \sqrt{-1}}{2} r=\frac{4\pm 6i}{2} r=2\pm 3i So r_1 = 2 + 3i and r_2 = 2 - 3i . Now, using our table, this means that \alpha = 2 and \beta = 3 . We can now plug \alpha = 2 and \beta = 3 into the general solution formula y = e^{\alpha x}(c_1\cos{(\beta x)} + c_2\sin{(\beta x)}) So, our general solution is: y = e^{2x}(c_1\cos{(3x)} + c_2\sin{(3x)})
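As an illustrative sketch (not from the article), the three-step recipe can be automated: compute the roots of the auxiliary equation with the quadratic formula, check the sign of the discriminant, and report the matching general-solution form. The function name general_solution is our own invention.

```python
import cmath

def general_solution(a, b, c):
    """Describe the general solution of ay'' + by' + cy = 0
    by classifying the roots of the auxiliary equation ar^2 + br + c = 0."""
    disc = b * b - 4 * a * c
    r1 = (-b + cmath.sqrt(disc)) / (2 * a)
    r2 = (-b - cmath.sqrt(disc)) / (2 * a)
    if disc > 0:   # two distinct real roots
        return f"y = c1*e^({r1.real:g}x) + c2*e^({r2.real:g}x)"
    if disc == 0:  # one repeated real root
        return f"y = c1*e^({r1.real:g}x) + c2*x*e^({r1.real:g}x)"
    # complex conjugate roots alpha +/- beta*i
    alpha, beta = r1.real, abs(r1.imag)
    return f"y = e^({alpha:g}x)*(c1*cos({beta:g}x) + c2*sin({beta:g}x))"

print(general_solution(1, 3, -10))  # Example 1: roots 2 and -5
print(general_solution(1, -6, 9))   # Example 2: repeated root 3
print(general_solution(1, -4, 13))  # Example 3: roots 2 +/- 3i
```

Since the labeling of the arbitrary constants c_1 and c_2 does not matter, the Example 1 output lists the roots in the opposite order from the article; the two forms are equivalent.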
How To Calculate Variance In 4 Simple Steps | Outlier This article explains what variance means, how to calculate it, how to use the formula, and the main differences between variance and standard deviation. When to Use the Sample or Population Variance How to Calculate Variance in 4 Steps Calculating Variance in Excel, Google Sheets, R and Desmos Variance is a parameter or a statistic that measures how spread out data is relative to its mean. We calculate variance as the average of the squared deviations from the mean. Measures of spread like variance are important in statistics because they give you additional information about your data. You can't get this information by only using measures of center such as the mean, median, or mode. What Variance Can Indicate A low variance tells you your data is clustered closely around the mean and does not vary much. A high variance tells you your data is dispersed widely around the mean and varies quite a bit. Imagine there are two statistics classes taught by two different professors. Both professors have agreed to aim for a class average of 85—a B letter grade. Yet, in one class, the variance of grades is very low. In the other class, the variance of grades is very high. Which class would you prefer to be in? Most of the grades will be close to a B in the class with low variance, while there will be more students with very low or very high grades in the second class. When we calculate variance for a population, the symbol sigma-squared \sigma^2 represents it. When we calculate variance for a sample, the symbol s^2 denotes it. Variance is a parameter or a statistic that measures how spread out data is relative to its mean. To calculate variance, take the average of the squared deviations—also called the squared differences—from the mean. Remember, a deviation from the mean is the difference between a particular data point and the mean. 
In statistics, a population refers to the entire set of objects or events being studied. A sample is a subset of the population. As an example, imagine you're studying national elections in the United States. Your population of interest consists of every single eligible voter across the 50 states, but because you can't collect data for the entire population, you draw random samples (subsets) of voters. Sometimes statisticians have data for an entire population, but most of the time, they only have sample data from which they draw statistical inferences about the population. When you are working with population data and calculating variance, use the population variance formula given above. When you are working with sample data and want to calculate variance, use the sample variance formula given above. Here is an example of how to calculate variance in 4 easy steps. Say you have the following sample data on the heights (in inches) of 10 NBA players randomly selected from the 2021-22 season: 78, 84, 77, 80, 76, 83, 76, 82, 73, 75. 1. Find the Sample Mean Find the sample mean \bar{x} of your data. To find the variance, you need to first know what the arithmetic mean of your data is. To find the mean, add together all the values in the data set and divide by the sample size n . Since we have 10 people in this data set, the sample size is n=10 \bar{x} =\frac{78 +84+77+80+76+83+76+82+73+75}{10}= 78.4 \text{ inches} 2. Find the Squared Deviation from the Mean Find the squared deviation from the mean, (x_i-\bar{x})^2 , for every data point in your data set. To find the squared deviations from the mean, subtract the sample mean from each player's height and square the result. This is shown in the third column of the table: (78 - 78.4)^2 = 0.16 (84 - 78.4)^2 = 31.36 (77 - 78.4)^2 = 1.96 (80 - 78.4)^2 = 2.56 (76 - 78.4)^2 = 5.76 (83 - 78.4)^2 = 21.16 (76 - 78.4)^2 = 5.76 (82 - 78.4)^2 = 12.96 (73 - 78.4)^2 = 29.16 (75 - 78.4)^2 = 11.56 3. 
Sum Up All the Squared Deviations Sum up all the squared deviations you found in Step 2. If we sum up the squared deviations from the previous step, we get the sum of the squared deviations from the mean: \sum (x_i-\bar{x})^2 =0.16+31.36+1.96+2.56+5.76+21.16+5.76+12.96+29.16+11.56 =122.4 4. Find the Variance by Dividing the Sum of Squared Deviations Find the variance by dividing the sum of squared deviations by the sample size minus one (n-1). Because we are working with sample data, we divide the sum of squared deviations by n-1. If we were instead trying to find the variance of a population, we would divide by the population size N. s^2= \frac{\sum (x_i-\bar{x})^2}{n-1}=\frac{122.4}{9}= 13.6 \text{ inches-squared} There's your answer! The variance for this data set is 13.6 inches squared. If you look at the example above, you'll notice that variance is measured in units that are very difficult to interpret. In the example, we wanted to calculate the variance of ten NBA players' heights. We measure the heights in inches, but we measure the variance in inches-squared! We end up with units-squared when we measure variance because we use squared deviations in our calculation. There's a good reason for doing this. To measure the average variation (or average deviation), we want to make sure that negative deviations for data points that lie below the mean don't cancel out the positive deviations for data points that lie above the mean. The downside of doing this is that we end up with squared units in our result. To avoid the difficulties of interpreting variance, you will often use a related measure of spread called standard deviation. Standard deviation is just the square root of variance. By taking the square root of the variance, we get our measure back into the interpretable units of the data. In our NBA example, the variance was 13.6 inches squared, so the sample standard deviation would be \sqrt{13.6} \approx 3.69 inches. 
Because we measure standard deviations in the same units as the data, interpretation is much easier. A standard deviation of 3.69 inches tells us that an NBA player randomly selected from our sample will tend to have a height that is about 3.69 inches above or below the average height of 78.4 inches. Standard deviation is just the square root of variance. While it's important to know how to calculate variance by hand, you are more likely to use programs such as Excel, R, and Desmos to do the calculation for you! In Microsoft Excel or Google Sheets, use the formula =VAR() to calculate variance. Your data should be included inside the parentheses, so if you have ten data points in cells A1 through A10, the formula would be =VAR(A1:A10). In Desmos and R, the command for variance is var(). You can type your data right between the parentheses, so if your data consists of the set of numbers {5, 7, 10, 15, 20} you would type var(5, 7, 10, 15, 20). If your data is stored as a variable, you can use the var() command with the name of the variable inside the parentheses instead of a list of the data points. For practice, try calculating the variance from our NBA example using one or all of these programs. See if you get the correct answer of 13.6!
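The four steps, and the software shortcut, can be sketched in Python; the standard library's statistics.variance() and statistics.stdev() use the same n - 1 sample formulas described above:

```python
import math
import statistics

heights = [78, 84, 77, 80, 76, 83, 76, 82, 73, 75]

# Steps 1-4 by hand
x_bar = sum(heights) / len(heights)            # Step 1: sample mean, 78.4 inches
sq_devs = [(x - x_bar) ** 2 for x in heights]  # Step 2: squared deviations
total = sum(sq_devs)                           # Step 3: 122.4
s_squared = total / (len(heights) - 1)         # Step 4: 13.6 inches squared

# the same answer from the standard library
assert math.isclose(s_squared, statistics.variance(heights))

# standard deviation: square root of the variance, back in inches
print(round(s_squared, 1), round(statistics.stdev(heights), 2))
```

The printed values match the article's results of 13.6 inches squared and about 3.69 inches.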
Thermal Conductivity of Metal-Oxide Nanofluids: Particle Size Dependence and Effect of Laser Irradiation | J. Heat Transfer | ASME Digital Collection Pohang, 790-784 Korea Sun Rock Choi Dongsik Kim e-mail: dskim87@postech.ac.kr A correction has been published: Erratum: “Thermal Conductivity of Metal-Oxide Nanofluids: Particle Size Dependence and Effect of Laser Irradiation” and “The Role of the Viscous Dissipation in Heated Microchannels” [Journal of Heat Transfer, 2007, 129(3)] Kim, S. H., Choi, S. R., and Kim, D. (May 25, 2006). "Thermal Conductivity of Metal-Oxide Nanofluids: Particle Size Dependence and Effect of Laser Irradiation." ASME. J. Heat Transfer. March 2007; 129(3): 298–307. https://doi.org/10.1115/1.2427071 The thermal conductivity of water- and ethylene glycol-based nanofluids containing alumina, zinc-oxide, and titanium-dioxide nanoparticles is measured using the transient hot-wire method. Measurements are performed by varying the particle size and volume fraction, providing a set of consistent experimental data over a wide range of colloidal conditions. Emphasis is placed on the effect of the suspended particle size on the effective thermal conductivity. Also, the effect of laser-pulse irradiation, i.e., the particle size change by laser ablation, is examined for ZnO nanofluids. The results show that the thermal-conductivity enhancement ratio relative to the base fluid increases linearly with decreasing particle size, but no existing empirical or theoretical correlation can explain the behavior. It is also demonstrated that high-power laser irradiation can lead to substantial enhancement in the effective thermal conductivity, although only a small fraction of the particles is fragmented. 
Keywords: nanofluid, thermal conductivity, particle size, laser ablation, alumina, zinc compounds, titanium compounds, nanoparticles, II-VI semiconductors, suspensions
Alpha Return - Course Hero Introduction to Finance/Risk and Return of Financial Markets/Alpha Return Alpha returns are associated with the normal fluctuations of the securities market. According to modern portfolio theory, investors can develop an optimum risk-return profile using a diversified portfolio. A portfolio is a blend of investments held by an investor that will match their risk-return profile. A risk-return profile represents the investor's tolerance for risk. For example, a risk-averse investor may choose a portfolio with the least risk and greatest return. The theory holds that return is dependent on risk. Understanding securities markets, as well as making an in-depth analysis of them, is fundamental to developing an optimal portfolio. Two major components of return—alpha return and beta return—are identified using a predictive formula. An alpha return is an excess return of an investment over a benchmark or index, such as the S&P 500. Alpha return is a measure of nonsystemic risk. Nonsystemic risk is risk that is particular to one company or asset and does not affect the entire economy. For example, if Big Systems Inc. stock has grown by 22 percent over a quarter and the overall market growth for stocks sold in the United States is 18 percent, then Big Systems Inc. is outperforming the market by 4 percent. Beta return is a measure of the volatility of the return of an investment, creating an opportunity for a quick return or loss. The alpha return is one of five risk ratios used to determine the risk-return profile of portfolios. An alpha return is given as a number that is translated into a percentage over or under the index. Investors see the alpha as the account manager's contribution to the portfolio. 
An alpha of zero would mean the portfolio is tracking the index exactly and the manager's expertise did not contribute to the fund, such as in this example: Money Company LLC is a financial management firm that Mary Nelson uses to manage her portfolio, which grew by 27 percent this year. The S&P 500 also grew by 27 percent. Money Company earned the same amount for Mary's portfolio as the portfolio would have earned if she had randomly invested without any management. The most common alpha index against which to compare is the S&P 500 because it offers a broad scope of large-cap companies, and components of the index are updated on a quarterly basis. Large-cap companies, or large-capitalization companies, are companies with a large capitalization value. The alpha is calculated using a formula. \text R={\text R}_\text f+{\beta}({\text R}_\text m-{\text R}_\text f)+{\alpha} \begin{aligned}\text R&=\text{Overall Portfolio Return}\\{\text R}_{\text f}&=\text{Risk-Free Return}\\\text{Beta Return}\;(\beta)&=\text{Market Volatility}\\{\text R}_{\text m}&=\text{Market Rate of Return}\\\alpha&=\text{Alpha Return}\end{aligned} The overall portfolio return is the return on investment over a given amount of time, expressed as a percentage of the investment's present value. The risk-free return is the rate of return of a hypothetical investment that has no chance of loss. The volatility of the market is the rate at which prices increase or decrease over a given time. The market rate of return is the standard interest accepted for that type of asset. The alpha return is the excess return of an investment over a benchmark or index, such as the S&P 500. When there is unexpected news that may affect a securities market, the market as a whole will respond accordingly. How well it reacts is its market efficiency. 
In a perfectly efficient market, called a strong-form efficient market, all information is known to the investors, and an announcement would result in an immediate change to the price with no further adjustments. In a perfectly efficient market, the alpha would be zero because all managers would have the same information and the same ability to change portfolio directions. In a weak-form efficient market, prices reflect all past market information but not necessarily new information. When a market does not accurately and quickly respond to information, resulting in long-term trends, it is considered inefficient.
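Rearranging the formula above to solve for alpha gives \alpha = R - [R_f + \beta(R_m - R_f)]. Here is a minimal sketch in Python; the numbers in the first call are illustrative and not from the text, and the function name alpha_return is our own.

```python
def alpha_return(portfolio_return, risk_free, beta, market_return):
    """Solve R = Rf + beta * (Rm - Rf) + alpha for alpha."""
    expected = risk_free + beta * (market_return - risk_free)
    return portfolio_return - expected

# hypothetical fund: 12% return, 2% risk-free rate, beta of 1.1, 9% market return
print(alpha_return(0.12, 0.02, 1.1, 0.09))

# Mary's portfolio from the text: a 27% return against a 27% index; with
# beta = 1, the manager adds nothing, so alpha is (numerically) zero
assert abs(alpha_return(0.27, 0.02, 1.0, 0.27)) < 1e-9
```

A positive result, like the hypothetical fund's roughly 2.3 percentage points, is the excess return attributed to the manager; a negative result means the portfolio underperformed its risk-adjusted benchmark.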
s \frac{d}{\mathrm{ds}} \left[\mathbf{R}″\mathbf{R}‴\mathbf{R}⁗\right]={\mathrm{κ}}^{5}\frac{d}{\mathrm{ds}}\left(\frac{\mathrm{τ}}{\mathrm{κ}}\right) \mathbf{R}\prime \left(s\right)=\mathbf{T} , it then follows that \mathbf{R}″=\mathbf{T}\prime =\mathrm{κ} \mathbf{N} (via the first equation on the right in Table 2.7.1). So, \mathbf{R}‴ =\left(\mathbf{R}″\right)\prime =\left(\mathrm{κ} \mathbf{N}\right)\prime =\mathrm{κ} \mathbf{N}\prime +\mathrm{κ}\prime \mathbf{N} =\mathrm{κ} \left(-\mathrm{κ} \mathbf{T}+\mathrm{τ} \mathbf{B}\right)+\mathrm{κ}\prime \mathbf{N} = -{\mathrm{κ}}^{2}\mathbf{T}+\mathrm{κ}\prime \mathbf{N}+\mathrm{κ} \mathrm{τ} \mathbf{B} ={a}_{3}\mathbf{T}+{b}_{3}\mathbf{N}+{c}_{3}\mathbf{B} Note the use, in the fourth line, of the third equation on the right in Table 2.7.1. The last line simplifies the notation in anticipation of the additional calculations needed to evaluate the box product. However, it is first necessary to obtain \mathbf{R}⁗ \mathbf{R}⁗ =\left(\mathbf{R}‴\right)\prime =\left(-{\mathrm{\kappa }}^{2}\mathbf{T}+\mathrm{κ}\prime \mathbf{N}+\mathrm{κ} \mathrm{τ} \mathbf{B}\right)\prime =\left(-{\mathrm{κ}}^{2}\mathbf{T}\right)\prime +\left(\mathrm{κ}\prime \mathbf{N}\right)\prime +\left(\mathrm{κ} \mathrm{τ} \mathbf{B}\right)\prime =\left(-2 \mathrm{κ} \mathrm{κ}\prime \mathbf{T}-{\mathrm{κ}}^{2}\mathbf{T}\prime \right)+\left(\mathrm{κ}″\mathbf{N}+\mathrm{κ}\prime \mathbf{N}\prime \right)+\left(\mathrm{κ}\prime \mathrm{τ} \mathbf{B}+\mathrm{κ} \mathrm{τ}\prime \mathbf{B}+\mathrm{κ} \mathrm{τ} \mathbf{B}\prime \right) =\left(-2 \mathrm{κ} \mathrm{κ}\prime \mathbf{T}-{\mathrm{κ}}^{2}\left(\mathrm{κ} \mathbf{N}\right)\right)+\left(\mathrm{κ}″\mathbf{N}+\mathrm{κ}\prime \left(-\mathrm{κ} \mathbf{T}+\mathrm{τ} \mathbf{B}\right)\right)+\left(\mathrm{κ}\prime \mathrm{τ} \mathbf{B}+\mathrm{κ} \mathrm{τ}\prime \mathbf{B}+\mathrm{κ} \mathrm{τ} \left(-\mathrm{τ} \mathbf{N}\right)\right) =\left(-2 \mathrm{κ} \mathrm{κ}\prime -\mathrm{κ} 
\mathrm{κ}\prime \right)\mathbf{T}+\left(-{\mathrm{κ}}^{3}+\mathrm{κ}″-\mathrm{κ} {\mathrm{τ}}^{2}\right)\mathbf{N}+\left(\mathrm{κ}\prime \mathrm{τ}+\mathrm{κ}\prime \mathrm{τ}+\mathrm{κ} \mathrm{τ}\prime \right)\mathbf{B} = -3 \mathrm{κ} \mathrm{κ}\prime \mathbf{T}+\left(\mathrm{κ}″-{\mathrm{κ}}^{3}-\mathrm{κ} {\mathrm{τ}}^{2}\right)\mathbf{N}+\left(2 \mathrm{κ}\prime \mathrm{τ}+\mathrm{κ} \mathrm{τ}\prime \right)\mathbf{B} ={a}_{4}\mathbf{T}+{b}_{4}\mathbf{N}+{c}_{4}\mathbf{B} By invoking the additive property of determinants, \left[\mathbf{x}\mathbf{ }\mathbf{y} \left(\mathbf{z}+\mathbf{w}\right)\right]=\left[\mathbf{x}\mathbf{ }\mathbf{y}\mathbf{ }\mathbf{z}\right]+\left[\mathbf{x}\mathbf{ }\mathbf{y}\mathbf{ }\mathbf{w}\right] , and by writing \mathbf{R}″=\mathrm{κ} \mathbf{N} {b}_{2}\mathbf{N} , the box product becomes \left[\mathbf{R}″\mathbf{R}‴\mathbf{R}⁗\right]= {b}_{2}\left({a}_{4}{c}_{3}-{a}_{3}{c}_{4}\right)\left[\mathbf{NBT}\right] {b}_{2}\left({a}_{4}{c}_{3}-{a}_{3}{c}_{4}\right)\left[\mathbf{BTN}\right] {b}_{2}\left({a}_{4}{c}_{3}-{a}_{3}{c}_{4}\right) where the minus signs introduced by row interchanges in the determinant cancel, and \left[\mathbf{BTN}\right]=\mathbf{B}·\left(\mathbf{T}×\mathbf{N}\right)=\mathbf{B}·\mathbf{B}=1 {b}_{2}=\mathrm{\kappa },{a}_{3}=-{\mathrm{\kappa }}^{2},{a}_{4}=-3 \mathrm{\kappa } \mathrm{\kappa }\prime ,{c}_{3}=\mathrm{\kappa } \mathrm{\tau },{c}_{4}=2 \mathrm{\kappa }\prime \mathrm{\tau }+\mathrm{\kappa } \mathrm{\tau }\prime {b}_{2}\left({a}_{4}{c}_{3}-{a}_{3}{c}_{4}\right) {b}_{2}\left({a}_{4}{c}_{3}-{a}_{3}{c}_{4}\right) ={\mathrm{\kappa }}^{4}⁢\mathrm{\tau }\prime -{\mathrm{\kappa }}^{3}⁢\mathrm{\kappa }\prime ⁢\mathrm{τ} ={\mathrm{\kappa }}^{3}\left(\mathrm{\kappa } \mathrm{\tau }\prime -\mathrm{\kappa }\prime \mathrm{\tau }\right) ={\mathrm{\kappa }}^{5}\left(\frac{\mathrm{\kappa } \mathrm{\tau }\prime -\mathrm{\kappa }\prime \mathrm{\tau }}{{\mathrm{\kappa }}^{2}}\right) ={\mathrm{κ}}^{5} 
\frac{d}{\mathrm{ds}}\left(\frac{\mathrm{τ}}{\mathrm{κ}}\right) The only place in Maple where symbolic calculations can be made with vectors is in the Physics:-Vectors package. Unfortunately, linear input for this package works best, although the output reflects notation greatly desired in the realm of physics. In the following calculations, the basis vectors \left\{\mathbf{i},\mathbf{j},\mathbf{k}\right\} take the place of the vectors T, N, and B, respectively. Within the package, a left-underscore for the letters i, j, or k indicates a unit basis vector ; a right-underscore, a symbolic vector. \mathrm{with}\left(\mathrm{Physics}:-\mathrm{Vectors}\right): Use the Setup command to implement the notational benefits of the package. \mathrm{Setup}\left(\mathrm{mathematicalnotation}=\mathrm{true}\right): Let R2 represent \mathbf{R}″ , where the right- underscore indicates that this is a vector in the package. R2_ := b[2]*_j; \stackrel{\textcolor[rgb]{0,0,1}{\to }}{\textcolor[rgb]{0,0,1}{\mathrm{R2}}}\textcolor[rgb]{0,0,1}{≔}{\textcolor[rgb]{0,0,1}{b}}_{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{⁢}\stackrel{\textcolor[rgb]{0,0,1}{∧}}{\textcolor[rgb]{0,0,1}{j}} \mathbf{R}‴ R3_ := a[3]*_i+b[3]*_j+c[3]*_k; \stackrel{\textcolor[rgb]{0,0,1}{\to }}{\textcolor[rgb]{0,0,1}{\mathrm{R3}}}\textcolor[rgb]{0,0,1}{≔}{\textcolor[rgb]{0,0,1}{a}}_{\textcolor[rgb]{0,0,1}{3}}\textcolor[rgb]{0,0,1}{⁢}\stackrel{\textcolor[rgb]{0,0,1}{∧}}{\textcolor[rgb]{0,0,1}{i}}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{b}}_{\textcolor[rgb]{0,0,1}{3}}\textcolor[rgb]{0,0,1}{⁢}\stackrel{\textcolor[rgb]{0,0,1}{∧}}{\textcolor[rgb]{0,0,1}{j}}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{c}}_{\textcolor[rgb]{0,0,1}{3}}\textcolor[rgb]{0,0,1}{⁢}\stackrel{\textcolor[rgb]{0,0,1}{∧}}{\textcolor[rgb]{0,0,1}{k}} \mathbf{R}⁗ \stackrel{\textcolor[rgb]{0,0,1}{\to 
}}{\textcolor[rgb]{0,0,1}{\mathrm{R4}}}\textcolor[rgb]{0,0,1}{≔}{\textcolor[rgb]{0,0,1}{a}}_{\textcolor[rgb]{0,0,1}{4}}\textcolor[rgb]{0,0,1}{⁢}\stackrel{\textcolor[rgb]{0,0,1}{∧}}{\textcolor[rgb]{0,0,1}{i}}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{b}}_{\textcolor[rgb]{0,0,1}{4}}\textcolor[rgb]{0,0,1}{⁢}\stackrel{\textcolor[rgb]{0,0,1}{∧}}{\textcolor[rgb]{0,0,1}{j}}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{c}}_{\textcolor[rgb]{0,0,1}{4}}\textcolor[rgb]{0,0,1}{⁢}\stackrel{\textcolor[rgb]{0,0,1}{∧}}{\textcolor[rgb]{0,0,1}{k}} Use the period for the dot product and &x for the cross product. Apply the expand command. The result is the value of the box product. q := expand(R2_ . (R3_ &x R4_)); \textcolor[rgb]{0,0,1}{q}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{-}{\textcolor[rgb]{0,0,1}{a}}_{\textcolor[rgb]{0,0,1}{3}}\textcolor[rgb]{0,0,1}{⁢}{\textcolor[rgb]{0,0,1}{b}}_{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{⁢}{\textcolor[rgb]{0,0,1}{c}}_{\textcolor[rgb]{0,0,1}{4}}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{a}}_{\textcolor[rgb]{0,0,1}{4}}\textcolor[rgb]{0,0,1}{⁢}{\textcolor[rgb]{0,0,1}{b}}_{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{⁢}{\textcolor[rgb]{0,0,1}{c}}_{\textcolor[rgb]{0,0,1}{3}} Apply the simplify command to the result of the substitutions enacted by the eval command. 
Both \mathrm{κ}\prime \mathrm{τ}\prime must be set as Atomic Identifiers to prevent Maple from actually taking a derivative, which by default, is with respect to x \mathrm{simplify}\left(\mathrm{eval}\left(q,\left[{b}_{2}=\mathrm{κ},{a}_{3}=-{\mathrm{κ}}^{2},{a}_{4}=-3 \mathrm{κ} \mathrm{κ}\prime ,{c}_{3}=\mathrm{κ} \mathrm{τ},{c}_{4}=2 \mathrm{κ}\prime \mathrm{τ}+\mathrm{κ} \mathrm{τ}\prime \right]\right)\right) {\textcolor[rgb]{0,0,1}{\mathrm{\kappa }}}^{\textcolor[rgb]{0,0,1}{4}}\textcolor[rgb]{0,0,1}{⁢}\textcolor[rgb]{0,0,1}{\mathrm{\tau }}\textcolor[rgb]{0,0,1}{\prime }\textcolor[rgb]{0,0,1}{-}{\textcolor[rgb]{0,0,1}{\mathrm{\kappa }}}^{\textcolor[rgb]{0,0,1}{3}}\textcolor[rgb]{0,0,1}{⁢}\textcolor[rgb]{0,0,1}{\mathrm{\kappa }}\textcolor[rgb]{0,0,1}{\prime }\textcolor[rgb]{0,0,1}{⁢}\textcolor[rgb]{0,0,1}{\mathrm{\tau }} Write this final result as {\mathrm{κ}}^{3}\left(\mathrm{κ} \mathrm{τ}\prime -\mathrm{κ}\prime \mathrm{τ}\right)={\mathrm{κ}}^{5}\left(\frac{\mathrm{κ} \mathrm{τ}\prime -\mathrm{κ}\prime \mathrm{τ}}{{\mathrm{κ}}^{2}}\right)={\mathrm{κ}}^{5} \frac{d}{\mathrm{ds}}\left(\frac{\mathrm{τ}}{\mathrm{κ}}\right) Slightly more insight into the details of these calculations can be obtained if, instead of using the basis vectors \left\{\mathbf{i},\mathbf{j},\mathbf{k}\right\} , the derivatives of R are written in terms of T, N, and B. r2_ := b[2]*N_: r3_ := a[3]*T_+b[3]*N_+c[3]*B_: expand(r2_ . 
(r3_ &x r4_)); {\textcolor[rgb]{0,0,1}{b}}_{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{⁢}{\textcolor[rgb]{0,0,1}{c}}_{\textcolor[rgb]{0,0,1}{3}}\textcolor[rgb]{0,0,1}{⁢}{\textcolor[rgb]{0,0,1}{a}}_{\textcolor[rgb]{0,0,1}{4}}\textcolor[rgb]{0,0,1}{⁢}\left(\stackrel{\textcolor[rgb]{0,0,1}{\to }}{\textcolor[rgb]{0,0,1}{N}}\textcolor[rgb]{0,0,1}{·}\left(\stackrel{\textcolor[rgb]{0,0,1}{\to }}{\textcolor[rgb]{0,0,1}{B}}\textcolor[rgb]{0,0,1}{×}\stackrel{\textcolor[rgb]{0,0,1}{\to }}{\textcolor[rgb]{0,0,1}{T}}\right)\right)\textcolor[rgb]{0,0,1}{-}{\textcolor[rgb]{0,0,1}{b}}_{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{⁢}{\textcolor[rgb]{0,0,1}{a}}_{\textcolor[rgb]{0,0,1}{3}}\textcolor[rgb]{0,0,1}{⁢}{\textcolor[rgb]{0,0,1}{c}}_{\textcolor[rgb]{0,0,1}{4}}\textcolor[rgb]{0,0,1}{⁢}\left(\stackrel{\textcolor[rgb]{0,0,1}{\to }}{\textcolor[rgb]{0,0,1}{N}}\textcolor[rgb]{0,0,1}{·}\left(\stackrel{\textcolor[rgb]{0,0,1}{\to }}{\textcolor[rgb]{0,0,1}{B}}\textcolor[rgb]{0,0,1}{×}\stackrel{\textcolor[rgb]{0,0,1}{\to }}{\textcolor[rgb]{0,0,1}{T}}\right)\right) The simplification of the box product to a new product involving just the vectors of the TNB-frame, and not linear combinations of these vectors, is the reward for this extra effort.
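The substitution Maple performs can also be spot-checked numerically. Below is a sketch in Python (not part of the original worksheet) that plugs arbitrary values for \kappa , \kappa\prime , \tau , \tau\prime into the coefficients and confirms that {b}_{2}\left({a}_{4}{c}_{3}-{a}_{3}{c}_{4}\right) equals {\kappa}^{4}\tau\prime - {\kappa}^{3}\kappa\prime\tau :

```python
import math

# arbitrary test values for kappa, kappa', tau, tau'
k, kp, t, tp = 1.7, -0.4, 2.3, 0.9

# coefficients read off from the expansions of R'', R''', R''''
b2 = k                      # R''   = b2 N
a3 = -k ** 2                # R'''  = a3 T + b3 N + c3 B
c3 = k * t
a4 = -3 * k * kp            # R'''' = a4 T + b4 N + c4 B
c4 = 2 * kp * t + k * tp

box = b2 * (a4 * c3 - a3 * c4)          # the simplified box product
target = k ** 4 * tp - k ** 3 * kp * t  # kappa^4 tau' - kappa^3 kappa' tau

assert math.isclose(box, target)
print(box)
```

Because the identity holds symbolically, the check passes for any choice of values (with \kappa \neq 0 ), which makes it a quick sanity test of the hand derivation.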
Flame Extinction Limits of H2‐CO Fuel Blends | J. Eng. Gas Turbines Power | ASME Digital Collection Combustion and Propulsion Research Laboratory, El Paso, TX 79968-0521 e-mail: ahsan@utep.edu Mahesh Subramanya e-mail: msubrama@bechtel.com Subramanyam R. Gollahalli Combustion and Flame Dynamics Laboratory, Norman, OK 73019-1052 Choudhuri, A. R., Subramanya, M., and Gollahalli, S. R. (March 28, 2008). "Flame Extinction Limits of H2‐CO Fuel Blends." ASME. J. Eng. Gas Turbines Power. May 2008; 130(3): 031501. https://doi.org/10.1115/1.2835059 The flame extinction limits of syngas (H2‐CO) flames were measured using a twin-flame counterflow burner. Plots of extinction limits (%f: volumetric percent of fuel in air) versus global stretch rate were generated at different fuel blend compositions and were extrapolated to determine the flame extinction limit corresponding to the experimentally unattainable zero-stretch condition. The zero-stretch extinction limit of H2‐CO mixtures decreases with increasing H2 concentration in the mixture. The average difference between the measured flame extinction limits and Le Chatelier’s calculation is around 7% of the mean value. The measured OH chemiluminescence data indicate that, regardless of blend composition, the OH radical concentration falls to a critical value prior to flame extinction. The laminar flame velocity measured close to extinction indicates that, regardless of fuel composition, the premixed flame of a hydrogen fuel blend extinguishes when the mixture’s laminar flame velocity falls below a critical value. combustion, flames, fuel, gas mixtures, laminar flow ,” No. AGTSR 98-01, South Carolina Energy Research and Development Center, Clemson, SC. 
Support of Advanced Fossil Resource Conversion and Utilization Research by Historically Black Colleges and Universities and Other Minority Institutions ,” DE-PS26-05NT42317, National Energy Technology Laboratory, Department of Energy, Pittsburgh, PA. Alternative Fuels for Gas Turbine Plants—An Engineering Procurement, and Construction Contractor’s Perspective Laminar Flame Speeds of Hydrocarbon+Air Mixtures with Hydrogen Addition Combustion Characteristics of Hydrogen-Hydrocarbon Hybrid Fuel Laminar Burning Velocities of n-Butane∕Air Mixture Enriched With Hydrogen An Experimental Study on Extinction and Stability of Stretch Premixed Flames Nineteenth Symposium (International) on Combustion Experimental and Numerical Determination of Laminar Flame Speeds: Mixtures of C2—Hydrocarbons with Oxygen and Nitrogen Twenty-Third Symposium (International) on Combustion Experimental Study of Methane-Air Premixed Flame Extinction at Small Stretch Rates in Microgravity Combustion of Low Calorific Value Gases: Problems and Prospects Premixed-Gas Flame Microgravity Combustion. Fire in Free Fall Flammability of Mixed Gases ,” Report of Investigations 8709, Bureau of Mines, U.S. Department of the Interior. Bui-Pham O’Shaughnessaey Rich Flammability Limits in CH3OH∕CO∕Diluent Mixtures A Numerical Investigation on Elliptic Coaxial Jets ,” MS thesis, University of Texas at El Paso, Texas. Flame Extinction Limits in CH2F2∕Air Mixtures Laminar Flame Speed and Extinction Strain Rates of Mixtures of Carbon Monoxide With Hydrogen, Methane, and Air Experimental Investigation on the Flame Extinction Limit of Fuel Blends S. A. Ng. W. B. 
Chemiluminescent Emission Measurement of A Diffusion Flame Jet in a Loudspeaker Induced Standing Wave On the Flammability Limit and Heat Loss in Flames With Detailed Chemistry Application of Continuation Methods to Plane Premixed Laminar Flames The Effects of Additives on the Burning Velocities of Flames and Their Possible Prediction by a Mixing Rule Prediction of Burning Velocities of Saturated Carbon Monoxide-Air Flames by Application of Mixing Rules Prediction of Burning Velocities of Carbon Monoxide-Hydrogen-Air Flames Effects of Hydrocarbon Substitution on Atmospheric Hydrogen-Air Flame Propagation
What Is Amplitude Modulation(AM) | Their Advantages And Disadvantages - The Engineering Street Amplitude Modulation(AM): What Amplitude Modulation Is and the Benefits of Amplitude Modulation, With Their Advantages and Disadvantages, and the Different Types of Amplitude Modulation Before this topic, we learn about modulation. What is modulation? Modulation is the process in which a baseband signal is superimposed on a carrier wave so that the message travels at the carrier's high frequency. (Baseband signal means message signal.) Modulation is used to make long-distance communication possible. Benefits of modulation technique: Reduction in the height of the antenna. Avoids mixing of signals. Increases the range of communication. Multiplexing is possible. Improves quality of reception. Types of modulation technique: Amplitude modulation (AM) Frequency modulation (FM) Phase modulation (PM) Here we learn about amplitude modulation. Topics we see in this tutorial: Introduction to AM Mathematical expression of AM Bandwidth of AM Applications of AM Advantages & disadvantages. Introduction to AM: Amplitude modulation is the modulation in which the amplitude of the carrier signal is varied in accordance with the amplitude of the modulating signal. In the figure above, the first waveform shows the modulating wave, which is the message signal. The next one is the carrier wave, a high-frequency signal that contains no information. The last one is the resultant modulated wave, i.e., the amplitude-modulated wave. It can be observed that the positive and negative peaks of the carrier wave are joined by an imaginary line (the envelope), which follows the shape of the message. Mathematical expression of AM: \[ m\left(t\right)={A}_{\mathrm{m}}\mathrm{cos}\left(2\pi {f}_{m}t\right) \] \[ c\left(t\right)={A}_{\mathrm{c}}\mathrm{cos}\left(2\pi {f}_{c}t\right) \] Am and Ac are the amplitudes of the modulating signal and the carrier signal. 
fm and fc are the frequencies of the modulating signal and the carrier signal, respectively. s(t) = [Ac + Am cos(2πfmt)] cos(2πfct) (Eq. 1) The ratio of the amplitude of the modulating signal to the amplitude of the carrier wave is called the modulation index (M). It is also called the modulation depth. s(t) = Ac[1 + (Am/Ac)cos(2πfmt)]cos(2πfct) s(t) = Ac[1 + M cos(2πfmt)]cos(2πfct) (Eq. 2) Where M is the modulation index, equal to the ratio of Am to Ac. Mathematically, we can write it as M = Am/Ac (Eq. 3) Hence, we can calculate the value of the modulation index using the above formula. Now, let us see one more formula for the modulation index, obtained from Equation 1. We can use this formula to calculate the modulation index when the maximum and minimum amplitudes of the modulated wave are known. Let Amax and Amin be the maximum and minimum amplitudes of the modulated wave. We get the maximum amplitude of the modulated wave when cos(2πfmt) is 1: Amax = Ac + Am (Eq. 4) We get the minimum amplitude of the modulated wave when cos(2πfmt) is -1: Amin = Ac − Am (Eq. 5) Adding Eqs. 4 and 5: Amax + Amin = Ac + Am + Ac − Am = 2Ac, so Ac = (Amax + Amin)/2 (Eq. 6) Subtracting Eq. 5 from Eq. 4: Amax − Amin = Ac + Am − (Ac − Am) = 2Am, so Am = (Amax − Amin)/2 (Eq. 7) Dividing Eq. 7 by Eq. 6: Am/Ac = [(Amax − Amin)/2] / [(Amax + Amin)/2], hence M = (Amax − Amin)/(Amax + Amin) (Eq. 8) The modulation index is often expressed as a percentage, called the percentage of modulation. Bandwidth (BW): Bandwidth is the difference between the highest and lowest frequencies of the signal. BW = fmax − fmin The amplitude-modulated wave has three frequency components: the carrier frequency fc, the upper sideband frequency fc + fm, and the lower sideband frequency fc − fm. Thus fmax = fc + fm and fmin = fc − fm. Substituting fmax and fmin into the bandwidth formula: BW = fc + fm − (fc − fm) = 2fm Applications of AM: It is widely used in broadcast transmission. The use of AM in aerospace is widespread; hence it is mostly used in air-band radio. Advantages of AM: It can be demodulated with few components. AM is easy to implement, i.e., the circuit construction is very easy. 
AM receivers are cheap; no special components are required. Disadvantages of AM: It is not efficient in terms of bandwidth. It is not efficient in terms of power usage. It is affected by noise.
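The formulas above are easy to illustrate numerically. The NumPy sketch below (the amplitudes and frequencies are arbitrary example values, not taken from this article) builds an AM wave from Eq. 2 and recovers the modulation index from the envelope extremes via Eq. 8:

```python
import numpy as np

Am, Ac = 0.5, 1.0        # example modulating and carrier amplitudes
fm, fc = 1e3, 100e3      # example modulating and carrier frequencies (Hz)

M = Am / Ac              # modulation index, Eq. 3

t = np.linspace(0, 2e-3, 20000)
envelope = Ac * (1 + M * np.cos(2 * np.pi * fm * t))   # slowly varying amplitude
s = envelope * np.cos(2 * np.pi * fc * t)              # AM wave, Eq. 2

# Amax and Amin are the extremes of the envelope (Eqs. 4 and 5)
A_max, A_min = envelope.max(), envelope.min()
M_est = (A_max - A_min) / (A_max + A_min)              # Eq. 8, recovers M

BW = 2 * fm              # bandwidth of the AM wave: 2 kHz for these values
```

Note that Amax and Amin in Eqs. 4-8 refer to the envelope, not to the full carrier oscillation, which also swings negative.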
Lefschetz Fibrations and real Lefschetz fibrations Nermin Salepci1 1 Institut Camille Jordan, Université Lyon I, 43, Boulevard du 11 Novembre 1918 69622 Villeurbanne Cedex, France This note is based on the lectures that I gave during the winter school Winter Braids IV, School on algebraic and topological aspects of braid groups, held in Dijon on 10 - 13 February 2014. The aim of the series of three lectures was to give an overview of geometrical and topological properties of 4-dimensional Lefschetz fibrations. Along the way, I briefly introduce real Lefschetz fibrations, fibrations which have a certain symmetry, and present some of their interesting features. This note will be yet another survey article on Lefschetz fibrations. There are excellent lecture notes, survey papers, and book chapters on Lefschetz fibrations; you can, for example, look at [3], [11], [14], [20] among many others. In this note I intend to spend as much time on real Lefschetz fibrations as on Lefschetz fibrations, in order not to repeat what has already been done perfectly. Nermin Salepci. Lefschetz Fibrations and real Lefschetz fibrations. Winter Braids Lecture Notes, Volume 1 (2014), Talk no. 4, 19 p. doi : 10.5802/wbln.5. https://wbln.centre-mersenne.org/articles/10.5802/wbln.5/ [1] E. Artin Theory of braids, Ann. of Math. (2), Volume 48 (1947), pp. 101-126 | Article | MR: 19087 | Zbl: 0030.17703 [2] Denis Auroux A stable classification of Lefschetz fibrations, Geom. Topol., Volume 9 (2005), p. 
203-217 (electronic) | Article | MR: 2115673 | Zbl: 1084.57024 [3] Denis Auroux; Fabrizio Catanese; Marco Manetti; Paul Seidel; Bernd Siebert; Ivan Smith; Gang Tian Symplectic 4-manifolds and algebraic surfaces, Lecture Notes in Mathematics, 1938, Springer-Verlag, Berlin; Fondazione C.I.M.E., Florence, 2008, xiv+345 pages (Lectures from the C.I.M.E. Summer School held in Cetraro, September 2–10, 2003, Edited by Catanese and Tian) | Article | MR: 2463711 [4] Ana Cannas da Silva Lectures on symplectic geometry, Lecture Notes in Mathematics, 1764, Springer-Verlag, Berlin, 2001, xii+217 pages | Article | MR: 1853077 | Zbl: 1016.53001 [5] Kenneth Nicholas Chakiris The monodromy of genus two pencils, ProQuest LLC, Ann Arbor, MI, 1983, 133 pages Thesis (Ph.D.)–Columbia University | MR: 2633010 [6] Gaston Darboux Sur le problème de Pfaff, Bull. Sci. Math., Volume 6 (1882), pp. 1-35 | Numdam [7] Alex Degtyarev; Nermin Salepci Products of pairs of Dehn twists and maximal real Lefschetz fibrations, Nagoya Math. J., Volume 210 (2013), pp. 83-132 | Article | MR: 3079276 | Zbl: 1304.14074 [8] M. Dehn Die Gruppe der Abbildungsklassen, Acta Math., Volume 69 (1938) no. 1, pp. 135-206 (Das arithmetische Feld auf Flächen) | Article | MR: 1555438 | Zbl: 0019.25301 [9] S. K. Donaldson Lefschetz pencils on symplectic manifolds, J. Differential Geom., Volume 53 (1999) no. 2, pp. 205-236 http://projecteuclid.org/euclid.jdg/1214425535 | Article | MR: 1802722 | Zbl: 1040.53094 [10] Clifford J. Earle; James Eells A fibre bundle description of Teichmüller theory, J. Differential Geometry, Volume 3 (1969), pp. 19-43 | Zbl: 0185.32901 [11] Terry Fuller Lefschetz fibrations of 4-dimensional manifolds, Cubo Mat. Educ., Volume 5 (2003) no. 3, pp. 275-294 | MR: 2065735 | Zbl: 1162.57303 [12] Damien Gayet Hypersurfaces symplectiques réelles et pinceaux de Lefschetz réels, J. Symplectic Geom., Volume 6 (2008) no. 3, pp. 
247-266 http://projecteuclid.org/euclid.jsg/1224595247 | Article | MR: 2448826 | Zbl: 1170.53069 [13] Robert E. Gompf Toward a topological characterization of symplectic manifolds, J. Symplectic Geom., Volume 2 (2004) no. 2, pp. 177-206 http://projecteuclid.org/euclid.jsg/1094072003 | Article | MR: 2108373 | Zbl: 1084.53072 [14] Robert E. Gompf; András I. Stipsicz 4 -manifolds and Kirby calculus, Graduate Studies in Mathematics, 20, American Mathematical Society, Providence, RI, 1999, xvi+558 pages | MR: 1707327 | Zbl: 0933.57020 [15] A. Kas On the handlebody decomposition associated to a Lefschetz fibration, Pacific J. Math., Volume 89 (1980) no. 1, pp. 89-104 http://projecteuclid.org/euclid.pjm/1102779371 | Article | MR: 596919 | Zbl: 0457.14011 [16] Mustafa Korkmaz Noncomplex smooth 4-manifolds with Lefschetz fibrations, Internat. Math. Res. Notices (2001) no. 3, pp. 115-128 | Article | MR: 1810689 | Zbl: 0977.57020 [17] Dusa McDuff; Dietmar Salamon Introduction to symplectic topology, Oxford Mathematical Monographs, The Clarendon Press, Oxford University Press, New York, 1995, viii+425 pages (Oxford Science Publications) | Zbl: 0844.58029 [18] Boris Moishezon Complex surfaces and connected sums of complex projective planes, Lecture Notes in Mathematics, Vol. 603, Springer-Verlag, Berlin-New York, 1977, i+234 pages (With an appendix by R. Livne) | Article | MR: 491730 | Zbl: 0392.32015 [19] Burak Ozbagci; András I. Stipsicz Noncomplex smooth 4 -manifolds with genus- 2 Lefschetz fibrations, Proc. Amer. Math. Soc., Volume 128 (2000) no. 10, pp. 3125-3128 | Article | MR: 1670411 | Zbl: 0951.57015 [20] Burak Ozbagci; András I. Stipsicz Surgery on contact 3-manifolds and Stein surfaces, Bolyai Society Mathematical Studies, 13, Springer-Verlag, Berlin; János Bolyai Mathematical Society, Budapest, 2004, 281 pages | Article | MR: 2114165 | Zbl: 1067.57024 [21] Nermin Salepci Real Lefschetz fibrations, Université Louis Pasteur. 
Institut de Recherche Mathématique Avancée (IRMA), Strasbourg, 2007, ii+135 pages (Thèse, Université Louis Pasteur, Strasbourg, 2007) | MR: 2780321 | Zbl: 1216.55004 [22] Nermin Salepci Real elements in the mapping class group of {T}^{2} , Topology Appl., Volume 157 (2010) no. 16, pp. 2580-2590 | Article | MR: 2719402 | Zbl: 1213.57024 [23] Nermin Salepci Classification of totally real elliptic Lefschetz fibrations via necklace diagrams, J. Knot Theory Ramifications, Volume 21 (2012) no. 9, 1250089, 28 pages | Article | MR: 2926572 | Zbl: 1270.57063 [24] Nermin Salepci Invariants of totally real Lefschetz fibrations, Pacific J. Math., Volume 256 (2012) no. 2, pp. 407-434 | Article | MR: 2944983 | Zbl: 1282.57027 [25] Bernd Siebert; Gang Tian On hyperelliptic {C}^{\infty } -Lefschetz fibrations of four-manifolds, Commun. Contemp. Math., Volume 1 (1999) no. 2, pp. 255-280 | Article | MR: 1696101 | Zbl: 0948.57018 [26] Bernd Siebert; Gang Tian On the holomorphicity of genus two Lefschetz fibrations, Ann. of Math. (2), Volume 161 (2005) no. 2, pp. 959-1020 | Article | MR: 2153404 | Zbl: 1090.53072 [27] Ivan Smith Lefschetz fibrations and the Hodge bundle, Geom. Topol., Volume 3 (1999), p. 211-233 (electronic) | Article | MR: 1701812 | Zbl: 0929.53047 [28] Ivan Smith Symplectic Geometry of Lefschetz Fibrations, University of Oxford, 1999 (Ph.D. University of Oxford) [29] N. E. Steenrod The classification of sphere bundles, Ann. of Math. (2), Volume 45 (1944), pp. 294-311 | Article | MR: 9857 | Zbl: 0060.41412 [30] W. P. Thurston Some simple examples of symplectic manifolds, Proc. Amer. Math. Soc., Volume 55 (1976) no. 2, p. 467-468 | MR: 402764 | Zbl: 0324.53031
Understanding credible intervals (using baseball statistics) | R-bloggers In my last post, I explained the method of empirical Bayes estimation, a way to calculate useful proportions out of many pairs of success/total counts (e.g. 0/1, 3/10, 235/1000). I used the example of estimating baseball batting averages based on x hits in n opportunities. If we run into a player with 0 hits in 2 chances, or 1 hit in 1 chance, we know we can’t trust it, and this method uses information from the overall distribution to improve our guess. Empirical Bayes gives a single value for each player that can be reliably used as an estimate. But sometimes you want to know more than just our “best guess,” and instead wish to know how much uncertainty is present in our point estimate. We normally would use a binomial proportion confidence interval (like a margin of error in a political poll), but this does not bring in information from our whole dataset. For example, the confidence interval for someone who hits 1 time out of 3 chances is (0.008, 0.906). We can indeed be quite confident that that interval contains the true batting average… but from our knowledge of batting averages, we could have drawn a much tighter interval than that! There’s no way that the player’s real batting average is .1 or .8: it probably lies in the .2-.3 region that most other players’ do. Here I’ll show how to compute a credible interval using empirical Bayes. This offers a similar improvement over confidence intervals to the one the empirical Bayes estimate offered over the raw batting average. I’ll start with the same code from the last post, so that you can follow along in R if you like. mutate(eb_estimate = (H + alpha0) / (AB + alpha0 + beta0)) The end result of this process is the eb_estimate variable. This gives us a new value for each proportion; what statisticians call a point estimate. 
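For readers following along in another language, the eb_estimate formula can be sketched in Python as well (the prior parameters alpha0 and beta0 below are illustrative stand-ins in the spirit of the previous post, not the actual fitted values):

```python
alpha0, beta0 = 78.7, 224.9             # assumed Beta prior fit over all players
prior_mean = alpha0 / (alpha0 + beta0)  # overall mean the estimates shrink toward

def eb_estimate(H, AB):
    """Empirical Bayes point estimate: the raw average shrunk toward the prior."""
    return (H + alpha0) / (AB + alpha0 + beta0)

# A 1-for-3 batter is pulled almost all the way to the prior mean,
# while a 300-for-1000 batter keeps most of his raw .300 average.
one_for_three = eb_estimate(1, 3)
steady_hitter = eb_estimate(300, 1000)
```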
Recall that these new values tend to be pushed towards the overall mean (giving this the name “shrinkage”): This shrunken value is generally more useful than the raw estimate: we can use it to sort our data, or feed it into another analysis, without worrying too much about the noise introduced by low counts. But there’s still uncertainty in the empirical Bayes estimate, and the uncertainty is very different for different players. We may want not only a point estimate, but an interval of possible batting averages: one that will be wide for players we know very little about, and narrow for players with more information. Luckily, the Bayesian approach has a method to handle this. Consider that what we’re really doing with empirical Bayes estimation is computing two new values for each player: alpha1 and beta1. These are the posterior shape parameters for each distribution, after the prior (estimated on the whole dataset) has been updated. We can visualize this posterior distribution for each player. I’m going to pick a few of my favorites from the 1998 Yankee lineup. yankee_1998 <- c("brosisc01", "jeterde01", "knoblch01", "martiti02", "posadjo01", "strawda01", "willibe02") yankee_1998_career <- career_eb %>% filter(playerID %in% yankee_1998) yankee_beta <- yankee_1998_career %>% inflate(x = seq(.18, .33, .0002)) %>% mutate(density = dbeta(x, alpha1, beta1)) ggplot(yankee_beta, aes(x, density, color = name)) + geom_line() + stat_function(fun = function(x) dbeta(x, alpha0, beta0), lty = 2, color = "black") The prior is shown as a dashed curve. Each of these curves is our probability distribution of what the player’s batting average could be, after updating based on that player’s performance. That’s what we’re really estimating with this method: those posterior beta distributions. These density curves are hard to interpret visually, especially as the number of players increases, and they can’t be summarized into a table or text. 
We’d instead prefer to create a credible interval, which says that some percentage (e.g. 95%) of the posterior distribution lies within a particular region. Here’s Derek Jeter’s credible interval: You can compute the edges of the interval quite easily using the qbeta (quantile of beta) function in R. We just provide it the posterior alpha1 and beta1 parameters: yankee_1998_career <- yankee_1998_career %>% mutate(low = qbeta(.025, alpha1, beta1), high = qbeta(.975, alpha1, beta1)) yankee_1998_career %>% select(-alpha1, -beta1, -eb_estimate)

playerID    name                 H      AB   average    low    high
brosisc01   Scott Brosius     1001    3889    0.257   0.244   0.271
jeterde01   Derek Jeter       3316   10614    0.312   0.302   0.320
knoblch01   Chuck Knoblauch   1839    6366    0.289   0.277   0.298
martiti02   Tino Martinez     1925    7111    0.271   0.260   0.280
posadjo01   Jorge Posada      1664    6092    0.273   0.262   0.283
strawda01   Darryl Strawberry 1401    5418    0.259   0.247   0.270
willibe02   Bernie Williams   2336    7869    0.297   0.286   0.305

I usually like to view intervals in a plot like this: The vertical dashed red line is alpha_0 / (alpha_0 + beta_0): the mean batting average across history (based on our beta fit). The earlier plot showing each posterior beta distribution communicated more information, but this is far more readable. Credible intervals versus confidence intervals Note that posterior credible intervals are similar to frequentist confidence intervals, but they are not the same thing. There’s a philosophical difference, in that frequentists treat the true parameter as fixed while Bayesians treat it as a probability distribution. Here’s one great explanation of the distinction. But there’s also a very practical difference, in that credible intervals take prior information into account. 
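For the same computation outside R, SciPy's beta.ppf plays the role of qbeta. This is a sketch, not the post's actual code, and the prior parameters here are assumed stand-ins for the fit from the previous post:

```python
from scipy.stats import beta

alpha0, beta0 = 78.7, 224.9   # assumed Beta prior fit over all players

def credible_interval(H, AB, level=0.95):
    """Equal-tailed credible interval from the posterior Beta(alpha1, beta1)."""
    alpha1 = alpha0 + H        # posterior shape parameters after updating
    beta1 = beta0 + AB - H     # the prior with H hits in AB at-bats
    tail = (1 - level) / 2
    return beta.ppf(tail, alpha1, beta1), beta.ppf(1 - tail, alpha1, beta1)

# Derek Jeter's career line; with this prior the result is close to (.302, .320)
low, high = credible_interval(3316, 10614)
```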
If I take 20 random players and construct both confidence intervals (specifically a binomial proportion confidence interval) and posterior credible intervals for each, it could look something like this: These are sorted in order of how many times a player went up to bat (thus, how much information we have about them). Notice that once there’s enough information, the credible intervals and confidence intervals are nearly identical. But for the 0/3 and 0/6 cases, the credible interval is much narrower. This is because empirical Bayes brings in our knowledge from the full data, just as it did for the point estimate. Like most applied statisticians, I don’t consider myself too attached to the Bayesian or frequentist philosophies, but rather use whatever method is useful for a given situation. But while I’ve seen non-Bayesian approaches to point estimate shrinkage (James-Stein estimation being the most famous example), I haven’t yet seen a principled way of shrinking confidence intervals. This makes empirical Bayes posteriors pretty useful! What’s next: Batting Hall of Fame We now have a credible interval for each player, including lower and upper bounds for their batting average. What if you’re collecting players for a “hall of fame”, and you want to find batters that have an average greater than a particular threshold (e.g. greater than .3)? Or forget about baseball: you could be working with genes to test if they’re associated with a disease, and want to collect a set for further study. Or you could be working with advertisements on your site, and want to examine those that are performing especially well or poorly. This is the problem of hypothesis testing. In my next post in this series about empirical Bayes and baseball, I’ll use the posterior distributions we’ve created to define Bayesian classifiers (the (rough) equivalent to frequentist hypothesis testing), and come up with a “hall of fame” of batters while managing the multiple hypothesis testing problems. 
I’ll also show how you can compare two batters to see if one is better than the other. Appendix: Connection between confidence and credible intervals (Note: This appendix is substantially more technical than the rest of the post, and is not necessary for understanding and implementing the method). Above I mention that credible intervals are not the same thing as confidence intervals and do not necessarily provide the same information. But in fact, two of the methods for constructing frequentist confidence intervals, Clopper-Pearson (the default in R’s binom.test function, shown in the plot above) and Jeffreys, actually use the quantiles of the Beta distribution in very similar ways, which is the reason credible and confidence intervals start looking identical once there’s enough information. You can find the formulae here. Here they are for comparison, again using the qbeta function in R: Bayesian credible interval qbeta(.025, alpha0 + x, beta0 + n - x) qbeta(.975, alpha0 + x, beta0 + n - x) Jeffreys qbeta(.025, 1 / 2 + x, 1 / 2 + n - x) qbeta(.975, 1 / 2 + x, 1 / 2 + n - x) Clopper-Pearson qbeta(.025, x, n - x + 1) qbeta(.975, x + 1, n - x) Notice that the Jeffreys interval is identical to the Bayesian credible interval when alpha_0 = 1/2 and beta_0 = 1/2. This is called an uninformative prior or a Jeffreys prior, and is basically pretending that we know nothing about batting averages. The Clopper-Pearson interval is a bit odder, since its priors are different for the lower and upper bounds (and in fact neither is a proper prior, since alpha_0 = 0 for the low bound and beta_0 = 0 for the high bound, both illegal for the beta). This is because Clopper-Pearson is actually derived from quantiles of the cumulative binomial distribution (see here!), and is simply equivalent to beta through a neat mathematical trick. What matters is that this makes it more conservative (wider) than the Jeffreys interval, with a lower lower bound and a higher upper bound. 
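These formulae are easy to check numerically. Here is a small SciPy sketch for the 1-hit-in-3-chances example from earlier (Clopper-Pearson should reproduce the (0.008, 0.906) interval quoted above):

```python
from scipy.stats import beta

x, n = 1, 3   # 1 hit in 3 chances

# Jeffreys: a Beta(1/2, 1/2) prior updated with the data
jeffreys = (beta.ppf(.025, 1/2 + x, 1/2 + n - x),
            beta.ppf(.975, 1/2 + x, 1/2 + n - x))

# Clopper-Pearson: different (improper) priors for the two bounds
clopper = (beta.ppf(.025, x, n - x + 1),
           beta.ppf(.975, x + 1, n - x))

# Clopper-Pearson is the wider, more conservative interval
assert clopper[0] < jeffreys[0] and jeffreys[1] < clopper[1]
```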
Most important is that the Bayesian credible interval, the Clopper-Pearson interval, and the Jeffreys interval all start looking more and more identical when: the evidence is more informative (large n), or the prior is less informative (small alpha_0 and beta_0). This fits what we saw in the graph above. Bayesian methods are especially helpful (relative to frequentist methods) when the prior makes up a large share of the information. It is not necessarily the case for all methods that there is a close equivalent between a confidence interval and a credible interval with an uninformative prior. But it happens more often than you might think! As Rasmus Bååth puts it, “Inside every classical test there is a Bayesian model trying to get out.”