# Algebraic fractions

## Definition

An algebraic fraction is a fractional expression in which the numerator and denominator are polynomials in one or more variables. The most common operations with algebraic fractions are simplification, addition, subtraction, multiplication and division.

## Solved Exercises
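A typical exercise of this kind (this worked example is illustrative, not one of the site's originals) is to add two algebraic fractions by rewriting them over a common denominator:

$$\frac{x}{x+1} + \frac{2}{x-1} = \frac{x(x-1) + 2(x+1)}{(x+1)(x-1)} = \frac{x^2 + x + 2}{x^2 - 1}, \qquad x \neq \pm 1.$$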
# zbMATH — the first resource for mathematics

Certification of real inequalities: templates and sums of squares. (English) Zbl 1328.90101

Summary: We consider the problem of certifying lower bounds for real-valued multivariate transcendental functions. The functions we are dealing with are nonlinear and involve semialgebraic operations as well as some transcendental functions like $\cos$, $\arctan$, $\exp$, etc. Our general framework is to use different approximation methods to relax the original problem into polynomial optimization problems, which we solve by sparse sums of squares relaxations. In particular, we combine the ideas of the maxplus approximations (originally introduced in optimal control) and of the linear templates (originally introduced in static analysis by abstract interpretation). The nonlinear templates control the complexity of the semialgebraic relaxations at the price of coarsening the maxplus approximations. In that way, we arrive at a new, template-based, certified global optimization method, which exploits both the precision of sums of squares relaxations and the scalability of abstraction methods. We analyze the performance of the method on problems from the global optimization literature, as well as medium-size inequalities issued from the Flyspeck project.

##### MSC:
- 90C22 Semidefinite programming
- 90C26 Nonconvex programming, global optimization
- 90C59 Approximation methods and heuristics in mathematical programming
- 11E25 Sums of squares and representations by other particular quadratic forms
- 41A10 Approximation by polynomials
- 41A50 Best approximation, Chebyshev systems

##### Software:
Flyspeck; Intsolver; kepler98; NLCertify; Sollya
# What does "/2" in the structure of calcium benzoate mean?

Please see the following figure. It is from my son's high school chemistry book. He wants to know what the /2 after $\ce{-COOCa}$ means. The book is not in English, and I have translated the relevant caption into English. Searching Google has not turned up anything usable, and my major not being chemistry probably does not help.

The benzoate group is $\rm{C_6H_5COO^-}$, whereas calcium is $\rm{Ca^{2+}}$. For the structure to be fully correct, there must be two benzoate groups per calcium ion. Written the other way round, this would be $\rm{(C_6H_5COO)_2Ca}$. Drawing the full structure would be far too messy, so it is simpler to assign half a calcium to each acid residue, which is what the "/2" indicates.
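To make the charge bookkeeping explicit (a standard check, not something taken from the book itself):

$$\ce{Ca^2+ + 2 C6H5COO- -> (C6H5COO)2Ca}, \qquad (+2) + 2\times(-1) = 0,$$

so each carboxylate group formally "owns" half of one calcium ion, which is exactly what writing $\ce{-COOCa}$/2 on a single benzoate residue expresses.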
# Adjusting wavelength to experimental data

See code in GitLab. Author: Jason Bayer jason.bayer@ihu-liryc.fr

This tutorial demonstrates how to adjust parameters in tissue simulations to match experimental data for conduction velocity, action potential duration, and wavelength. To run the experiments of this tutorial, change directories as follows:

## Experimental data

The mapping of electrical activity with optical imaging on the epicardium of human ventricles provides an accurate measurement of CV 1 and APD 2. The data from these studies are shown below and were recorded during baseline pacing with a cycle length of 1000 ms.

| Tissue state | $$CV_{l}$$ (cm/s) | $$CV_{t}$$ (cm/s) | $$APD_{80}$$ (ms) | $${\lambda}_{l}$$ (cm) | $${\lambda}_{t}$$ (cm) |
|---|---|---|---|---|---|
| Nonfailing | 92 | 22 | 340 | 34 | 8 |

## 1D cable model

A 1.5 cm cable of epicardial ventricular myocytes is used to initially adjust CV and APD to experimentally derived values. The model domain was discretized with linear finite elements with an average edge length of 0.02 cm.

## 2D sheet model

A 1.5 cm x 1.5 cm sheet of epicardial tissue is used to verify the CV and APD derived in the 1D cable model. The model domain is discretized using quadrilateral finite elements with an average edge length of 0.02 cm, and a longitudinal fiber direction in each element parallel to the X-axis of the sheet. The mesh was generated using the command below according to the mesher tutorial.

./mesher -size[0] 1.5 -size[1] 1.5 -size[2] 0.0 -resolution[0] 200.0 -resolution[1] 200.0 -resolution[2] 0.0

## Ionic model

For this example, we use the most recent version of the ten Tusscher ionic model for human ventricular myocytes 3 that was modified for the study 4. This ionic model is labeled GTT2_fast in openCARP's LIMPET library. After a quick literature search, one will find that the slow outward rectifying potassium current $$I_{K_{s}}$$ is heterogeneous across the human ventricular wall 5. Therefore, the maximal conductance $$G_{K_{s}}$$ of $$I_{K_{s}}$$ is adjusted in order to match the APD of 340 ms derived experimentally. Note that the default value for $$G_{K_{s}}$$ in epicardial ventricular myocytes is 0.392 nS/pF.

## Pacing protocol

The left side of the 1D cable model and the center of the 2D sheet model are paced with 5-ms-long stimuli at twice capture amplitude, for a cycle length and number of beats chosen by the user.

## Conduction velocity

To determine initial conditions for the tissue conductivities along ($$\sigma_{il}$$, $$\sigma_{el}$$) and transverse ($$\sigma_{it}$$, $$\sigma_{et}$$) to the fibers in the models, tuneCV is used as described in the tutorial Tuning Conduction Velocities. The commands to obtain the two conductivities are listed below.

./tuneCV --converge true --tol 0.0001 --velocity 0.92 --model GTT2_fast --sourceModel monodomain --resolution 200.0
./tuneCV --converge true --tol 0.0001 --velocity 0.22 --model GTT2_fast --sourceModel monodomain --resolution 200.0

The initial conductivities resulting from tuneCV are $$\sigma_{il}=0.3544$$ S/m, $$\sigma_{el}=1.27$$ S/m, $$\sigma_{it}=0.024$$ S/m, and $$\sigma_{et}=0.0862$$ S/m.

To compute CV, activation times are computed for the last beat of the pacing protocol using the openCARP option LATs (see the tutorial on electrical mapping), with activation time recorded at the threshold crossing of -10 mV. CV is then computed along the cable by taking the difference in activation times at the locations 1.0 cm and 0.5 cm divided by the distance between the two points; a small stand-alone sketch of this calculation is given below.
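The CV and wavelength numbers above boil down to simple arithmetic, so they are easy to check by hand. The following minimal sketch is not part of the tutorial's run.py; the activation times are made-up numbers for illustration, and only the distance/time CV formula from the text and the standard wavelength definition $$\lambda = CV \times APD$$ are assumed.

```python
def conduction_velocity_cm_per_s(t_act_a_ms, t_act_b_ms, distance_cm=0.5):
    """CV from activation times (ms) at two cable locations distance_cm apart."""
    return distance_cm / ((t_act_b_ms - t_act_a_ms) / 1000.0)

def wavelength_cm(cv_cm_per_s, apd_ms):
    """Wavelength lambda = CV * APD, with APD converted from ms to s."""
    return cv_cm_per_s * (apd_ms / 1000.0)

# Hypothetical activation times at x = 0.5 cm and x = 1.0 cm on the cable:
cv_l = conduction_velocity_cm_per_s(10.0, 15.4)   # ~92.6 cm/s
# Wavelengths for the experimental targets in the table above:
print(wavelength_cm(92.0, 340.0))                 # ~31 cm (longitudinal)
print(wavelength_cm(22.0, 340.0))                 # ~7.5 cm (transverse)
```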
For the sheet model, CV is computed along the longitudinal and transverse fiber directions by taking the difference in activation times at the locations illustrated in the figure below. Specifically, $$CV_{l}$$ = 0.25/(L2-L1) and $$CV_{t}$$ = 0.25/(T2-T1). The locations of L1 and T1 are 0.25 cm away from the tissue center, and L2 and T2 are 0.5 cm away from the tissue center.

## Action potential duration

Action potential duration is computed at 80% repolarization ($$APD_{80}$$) according to [Bayer2016]. This is achieved by using the igbutils function igbapd as illustrated below.

./igbapd --repol=80 --vup=-10 --peak-value=plateau ./vm.igb

## Usage

The following optional arguments are available (default values are indicated):

./run.py --help

- --dimension  Options: {cable,sheet}, Default: cable. Choose cable for quick 1D parameter adjustments, then the 2D sheet to verify the adjustments.
- --GKs  Default: 0.392 nS/pF. Maximal conductance of IKs.
- --Gil  Default: 0.3544 S/m. Intracellular longitudinal tissue conductivity.
- --Gel  Default: 1.27 S/m. Extracellular longitudinal tissue conductivity.
- --Git  Default: 0.024 S/m. Intracellular transverse tissue conductivity.
- --Get  Default: 0.0862 S/m. Extracellular transverse tissue conductivity.
- --nbeats  Default: 3. Number of beats for the pacing protocol. This number should be much larger to achieve steady state.
- --bcl  Default: 1000. Basic cycle length for the pacing protocol.
- --timess  Default: 1000. Time before applying the pacing protocol.

After running run.py, the results for APD, CV, and wavelength ($$\lambda$$) can be found in the file adjustment_results.txt within the output subdirectory for the simulation. If the program is run with the --visualize option, meshalyzer will automatically load $$V_m(t)$$ for the last beat of the pacing protocol. Output files for activation and APD are also produced for each simulation; they can be found in the output directory and loaded into meshalyzer.

Tasks:

1. Determine the value for IKs to obtain the experimental value for APD in the cable.
2. Place this value in the sheet model to verify that the model behaves in the same manner as the simple cable model.
3. Determine a set of Gil and Gel (keeping the ratio Gil/Git the same) so that both the longitudinal and transverse wavelengths are less than 8 cm.

Solutions:

1. A GKs=0.25 nS/pF is needed to obtain the APD of 340 ms.

   ./run.py --GKs 0.25 --dimension cable

2. The following command produces the same APD in the sheet model.

   ./run.py --GKs 0.25 --dimension sheet

3. The following commands produce longitudinal and transverse wavelengths less than 8 cm.

   ./run.py --GKs 0.25 --Gil 0.02 --Git 0.0135 --dimension cable
   ./run.py --GKs 0.25 --Gil 0.02 --Git 0.0135 --dimension sheet

References

1. Glukhov AV, Fedorov VV, Kalish PW, Ravikumar VK, Lou Q, Janks D, Schuessler RB, Moazami N, Efimov IR. Conduction remodeling in human end-stage nonischemic left ventricular cardiomyopathy. Circulation, 125(15):1835-1847, 2012. [Pubmed]
2. Glukhov AV, Fedorov VV, Lou Q, Ravikumar VK, Kalish PW, Schuessler RB, Moazami N, Efimov IR. Transmural dispersion of repolarization in failing and nonfailing human ventricle. Circ Res, 106(5):981-991, 2010. [Pubmed]
3. ten Tusscher KHWJ, Panfilov AV. Alternans and spiral breakup in a human ventricular tissue model. Am J Physiol Heart Circ Physiol, 291(3):H1088-H1100, 2006. [Pubmed]
4. Bayer JD, Lalani GG, Vigmond EJ, Narayan SM, Trayanova NA. Mechanisms linking electrical alternans and clinical ventricular arrhythmia in human heart failure. Heart Rhythm, 13(9):1922-1931, 2016. [Pubmed]
5. Pereon Y, Demolombe S, Baro I, Drouin E, Charpentier F, Escande D. Differential expression of KvLQT1 isoforms across the human ventricular wall. Am J Physiol Heart Circ Physiol, 278(6):H1908-H1915, 2000. [Pubmed]
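For readers curious what the $$APD_{80}$$ measurement amounts to, here is a rough stand-alone sketch of the same idea applied to a plain NumPy membrane-voltage trace. It is not a substitute for igbapd, which works directly on .igb files and (with --peak-value=plateau, as in the command above) references the plateau value; the thresholds and array handling here are illustrative assumptions.

```python
import numpy as np

def apd80(t_ms, vm_mv, act_thresh_mv=-10.0):
    """Rough APD80 from a single action-potential trace (numpy arrays).

    Activation: first upstroke crossing of act_thresh_mv (as in the LATs setting above).
    Repolarization: first post-peak sample below the 80% repolarization level.
    """
    v_rest = vm_mv[0]                                      # assume the trace starts at rest
    peak_idx = int(np.argmax(vm_mv))
    v_repol = v_rest + 0.2 * (vm_mv[peak_idx] - v_rest)    # 80% of the way back to rest
    act_idx = int(np.argmax(vm_mv > act_thresh_mv))        # first sample above threshold
    repol_idx = peak_idx + int(np.argmax(vm_mv[peak_idx:] < v_repol))
    return t_ms[repol_idx] - t_ms[act_idx]
```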
# M19-32

Jane is 5 years younger than George. George is 2 years older than Alice. If the aggregate age of Jane, George, and Alice is 68 years, how old is Jane?

A. 20
B. 22
C. 23
D. 25
E. 28

Official Solution:

Jane is 5 years younger than George: $$J=G-5$$. George is 2 years older than Alice: $$A=G-2$$. The aggregate age of Jane, George, and Alice is 68 years: $$J+G+A=68$$. Substitute: $$(G-5)+G+(G-2)=68$$, which gives $$G=25$$. Since $$J=G-5$$, then $$J=25-5=20$$.

Answer: A

A user's solution:

J = G-5, G = 2+A, A+G+J = 68, J = ?
(G-2)+G+(G-5) = 68, so 3G = 75, G = 25, and J = 25-5 = 20. The answer is A.
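For anyone who wants to double-check the arithmetic outside the forum (this snippet is mine, not part of the original thread), the same system of equations can be handed to a computer algebra system:

```python
from sympy import symbols, Eq, solve

J, G, A = symbols("J G A")
ages = solve([Eq(J, G - 5), Eq(A, G - 2), Eq(J + G + A, 68)], [J, G, A])
print(ages)   # {A: 23, G: 25, J: 20}, so Jane is 20 and the answer is A
```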
# Tag Info 9 As a student I was involved in the same problem as you are. Let me explain to you in the simplest words without any math. Convolution: It is used to convolute two function. May sound redundant but I´ll put an example: You want to convolute (in a non math term to "combine") a unit cell (which can contain anything you want: protein, image, etc) and a ... 7 I can tell you of at least three applications related to audio. Auto-correlation can be used over a changing block (a collection of) many audio samples to find the pitch. Very useful for musical and speech related applications. Cross-correlation is used all the time in hearing research as a model for what the left and ear and the right ear use to figure ... 7 Let $\theta_a$ and $\theta_c$ respectively denote the maximum magnitudes of the off-peak or out-of-phase periodic autocorrelation functions and the periodic crosscorrelation functions of a set of $K$ sequences of length $N$ and energy $\sum_{n=0}^{N-1}|x[n]]|^2 = N$. In a seminal paper published in 1974, Welch proved that $$\max\big(\theta_a, \theta_c\big)\... 7 No. Quoting Wikipedia's article Independence (probability theory): If X and Y are independent random variables, then the expectation operator \operatorname{E} has the property$$\operatorname{E}[X Y] = \operatorname{E}[X]\operatorname{E}[Y].$$Consider your X(t_1) and Y(t_2) as X and Y in this answer. If both \operatorname{E}[X] \ne ... 6 I guess you can compute for each pixel the correlation coefficient between patches centered on this pixel in the two images of interest. Here is an example where I downloaded the figure attached here and tried to compute the correlation in such a way. The output looks different from the one of the article, but it was to be expected since the resolution is ... 6 This is basically what @hooman suggests: fit a parabola to the three points near the peak of the sample cross-correlation of the data. Using the formula for p here:$$ p = \frac{1}{2} \frac{\alpha - \gamma}{\alpha - 2\beta + \gamma} $$where \alpha,\beta, and \gamma are the values of the sample cross-correlation just before the peak, at the peak, and ... 6 You're correct as the Cross Correlation function vanishes. This has the implicit assumption the process has zero mean (Actually, at least one of them). Namely, in order to have {R}_{XY} \left( \tau \right) = 0 having X \left( t \right) \perp Y \left( t \right) isn't enough but at least of them has zero mean (Namely, \mathbb{E} \left[ X \left( t \... 5 I suppose you mean the cross-correlation at lag zero. Well take an Hilbert space H (i.e. a metric space in which you can define a scalar product \langle\cdot ,\cdot\rangle). Then x,y\in H are orthogonal if \langle x,y\rangle=0, by definition. If your Hilbert Space is L_2(\mathbb{R}) (the space of real square integrable functions) then the scalar ... 5 What are reasons to choose for cross-correlation or cross-covariance when comparing signals with non-zero mean? Well, part of the issue is that cross-correlation as defined in your equation:$$(f \star g)[n]\ \stackrel{\mathrm{def}}{=} \sum_{m=-\infty}^{\infty} f^*[m]\ g[m+n].$$will not exist (or be infinite) if f and g have non-zero mean. So, in ... 5 The general topic of finding similarities between signals is wide ranging: are the signals of same sampling, length, offset, shift or scale? where do they take their values (discrete, real, complex)? are they stationary? noisy? what do you consider similar (whole signals, chunks, specific features)? 
which are the invariances looked for? and most important:... 5 Lagrange parabolic estimator The standard Lagrange polynomial parabolic interpolation peak-finding formula from Peter's answer, $$p = \frac{1}{2} \frac{\alpha - \gamma}{\alpha - 2\beta + \gamma}$$ has bias as a function of the true delay $d$ if the cross-correlation peak is that of a critically sampled sinc. If the sampling frequency is increased, the ... 5 The cross-spectral density is in the frequency domain while the cross-correlation function is in the time domain. The two are Fourier transform pairs: the FT of the cross-correlation function is the cross-spectral density. The two provide the same information, just that one is in the time domain and the other is in the frequency domain. This is just as ... 5 Cross correlation is a measure of similarity between two signals, where one signal is allowed to be time-shifted. In this sense, the correlation is not a single number, but a function of the time shift. We say, "these two signals have a certain correlation $R(\Delta)$ for a time shift $\Delta$". Intuitively, two signals that tend to have the same sign (both ... 5 As your plot shows, the second form allows for the correlation peak to be negative. Now, what does a strong negative cross correlation mean? It means the signals are very similar, except one has a negative sign in front of it, i.e., $x_1 \approx -x_2$. Whether or not this makes sense depends a lot on the actual application. In the application you describe, ... 5 Since this is an FIR, the group delay is D=(N-1)/2=20 samples. No, since this is a linear phase (i.e. symmetric or anti-symmetric) filter, the group delay is half the length! (Being an FIR isn't sufficient.) The issue is that I get two peaks in the cross correlation, one at zero lag and another at 20 lag. Write down the formula for auto-correlation at ... 4 I suspect your problem occurs due to some scaling issues. Basically you need to normalize your research image to the pattern template by subtracting the mean value of the template. And it is better to calculate the ratio of correlation to the standard deviation of both images. I don't know which programming language you are using. I wrote a Matlab code for you ... 4 You're basically doing a bank of hypotheses to find your signal using a matched filter, though you use a slightly different method. First of all, you should leave the signal in the time domain and calculate the cross correlation, or their multiplication in the frequency domain. Yet, since your signal doesn't have unknown phase (or delay) multiplication will ... 4 I'm afraid your statement isn't true. This can best be seen in a suitable choice of basis, one that simplifies the cross correlation. This basis is of course the shift-invariant periodic Fourier basis on your support interval. Let's label the basis vectors $F_n$ for integer $n$. The cross correlation of two different such basis vectors vanishes, because the ... 4 This would be a cumbersome way to detect heart beats (or the QRS complex), if that is what you are trying to do ultimately. A little bit about what you are trying to do currently: your observations are correct, and to these I would like to add that no two heart beats are the same and therefore, strictly speaking, your template will be aligning just with ... 4 On closer inspection, I discovered that the erroneous correlation result resembles the correct result, but shifted up and to the left. The former was displayed in scientific format, so it was hard to see the pattern at first.
The reason is that taking the conjugate is equivalent to flipping the whole zero-padded kernel, and not just the original kernel (... 4 The function xcorr calculates the correlation of 2 signals. The correlation is known to be a good estimator (the MLE) for delay estimation under Gaussian noise. Yet, as can be seen in your data, you're not using it in the cases it is meant to be used. If we assume you have a model of a known signal with Additive White Gaussian Noise (AWGN or any other additive white ... 4 When using a constant tone audio beacon, beware of room echoes causing multi-path interference and distortion, especially around the leading and trailing portions of your received waveforms. Try using a frequency sweep instead of a constant tone for your transmit waveform. This might provide you with a sharper correlation peak that is less likely to have ... 4 Assuming finite power signals: $$\lVert x \rVert^2 \triangleq \lim_{N \to \infty} \ \frac{1}{2N+1} \sum\limits_{n=-N}^{+N} \big|x[n] \big|^2 \ < +\infty$$ this is a Hilbert space sorta thingie. Define the inner product $$\langle x,y \rangle \triangleq \lim_{N \to \infty} \ \frac{1}{2N+1} \sum\limits_{n=-N}^{+N} x[n] \cdot \overline{y}[n]$$ where $\... 4 If you are searching for similarity between two signals in the frequency domain, you can go for coherence. Coherence indicates frequency components common to both signals. 4 You can fit a curve to the points around the peak of the cross-correlation obtained by xcorr and find the peak of the fitted curve. Ideally you know the cross correlation function of your signals and you fit that function. For practical purposes a parabola also would do. As a rule of thumb, for this approach to work properly the bandwidth of your signal should ... 4 What you have (conceptually) is not a 2D array but a collection of 1D arrays. correlate2D is designed to perform a 2D correlation calculation, so that's not what you need. Iterating through all pairs is not a big ask really - you can still use numpy to perform the cross correlation, you'll just need to have two loops (nested) to determine which signals to ... 4 For GPS (simplifying for now by omitting corrections for ionosphere and orbit position and relativistic clock offsets), we determine the "pseudo-range" to each satellite (SV), which will be the relative delay between all the received satellites we have correlated to, relative to our local clock, using correlation as you described (delay each locally generated ... 4 It means the best match to the template happens outside the image. For instance, let's say your template is 5 by 5 and you got an answer of -1, -1. It means the part of the image which best matches your template is centered at [-1, -1] and you only have part of it in your image. This is really an extreme case. P. S. If you share your data (2 images) we'll be ... 4 You can use the formulas presented in the answers to: How to calculate a delay (correlation peak) between two signals with a precision smaller than the sampling period? To recap, find the largest value, called $\beta$. Take also the values of the samples just to the left of it, $\alpha$, and just to the right of it, $\gamma$. Then calculate the peak ... (a small Python sketch of this three-point interpolation is given after these excerpts)
4 I have a PRN generator that I have validated with live captured signals that is available on the Mathworks Exchange site at this address and equally runs in Octave (Update: I also pasted the core of this in a code block below): https://www.mathworks.com/matlabcentral/fileexchange/14670-gps-c-a-code-generator The two tap coder is as given in the diagram in ...
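As a concrete illustration of the three-point (parabolic) peak interpolation mentioned in several of the excerpts above, here is a minimal NumPy sketch. The function name and the assumption that the correlation peak does not sit at the very edge of the sequence are mine, not from any particular answer.

```python
import numpy as np

def xcorr_peak_subsample(x, y):
    """Estimate the (fractional) lag of x relative to y from their cross-correlation,
    refining the integer peak with p = 0.5*(alpha - gamma)/(alpha - 2*beta + gamma)."""
    r = np.correlate(x, y, mode="full")
    k = int(np.argmax(r))                          # integer-lag peak (assumed not at an edge)
    alpha, beta, gamma = r[k - 1], r[k], r[k + 1]
    p = 0.5 * (alpha - gamma) / (alpha - 2.0 * beta + gamma)
    return (k - (len(y) - 1)) + p                  # lag in samples, with sub-sample correction
```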
# Parametrization of $S^3$ embedded in $\mathbb R^4$? I would like to know of any parametrization of the standard 3-sphere: {$(x_1,x_2,x_3,x_4): x_1^2+x_2^2+x_3^2+x_4^2=1$} embedded in $\mathbb R^4$. I know of parametrizations for $S^1$, for $S^2$ , but I cannot think of how to parametrize $S^3$ as above. The closest I found in a search was a formula using quaternions; is it possible (I would prefer, if possible) to avoid using quaternions. Thanks for any ideas - If $$\sin^2 u + \cos^2 u =1$$ then $$(\sin v \sin u)^2 + (\sin v \cos u)^2 = \sin^2 v$$ so $$(\sin v \sin u)^2 + (\sin v \cos u)^2 +\cos^2 v = 1.$$ Can you repeat the same procedure once more? After that, you will have to delimit the values of the parameters if you want to parametrize the sphere exactly once. Alternatively, you can use the formula $$(a^2-b^2-c^2-d^2)^2 + (2ab)^2 + (2ac)^2 + (2ad)^2 = (a^2+b^2+c^2+d^2)^2,$$ which gives you a parametrization of the sphere by rational functions. - How is it possible to parametrize the 3-sphere with just two parameters $u,v$ , like you did in the top part? It seems that if you could use just 2 parameters, you would get the 2-sphere $S^2$. –  user99680 Nov 8 '13 at 1:54 @user99680 Exactly, what I wrote is the parametrization of the $2$-sphere. Which I got from the parametrization of the circle by a simple manipulation. Now I'm saying that you should apply the same manipulation once more: multiply both sides of the equation by $\sin^2 w$ for some third parameter $w$, and then... –  Bruno Joyal Nov 8 '13 at 1:56 :Oh, I see, thanks; let me try it. –  user99680 Nov 8 '13 at 1:59 @user99680 My pleasure. Let me know if you need more help. –  Bruno Joyal Nov 8 '13 at 2:03 :So I get: $(sinwsinvsinu)^2+(sinwsinvcosu)^2+(sinwcosv)^2+cos^2w=1$ . Is this correct? –  user99680 Nov 8 '13 at 2:08 You can repeat the "sines and cosines" process one more time if you want. Like our friend @Bruno Joyal said: For the 1-sphere: $X(\phi)=(\cos(\phi),\sin(\phi))$, so that its squared coordinates sums up to 1. $\cos^2(\phi) + \sin^2(\phi) = 1$ For the 2-sphere: $X(\phi,\psi)=(\cos(\phi)\cdot\cos(\psi),\sin(\phi)\cdot \cos(\psi),\sin(\psi))$, so that its squared coordinates sums up to 1. $\cos^2(\phi)\cdot\cos^2(\psi) + \sin^2(\phi) \cdot \cos^2(\psi) + \cos^2(\psi) =$ $=(\cos^2(\phi) + \sin^2(\phi))\cdot \cos^2(\psi)+\sin^2(\psi)=1$ Repeat the process by multiplying the coordinate functions by $\cos(\theta)$ and adding another coordinate function $\sin(\theta)$, obtaining: $X(\phi,\psi,\theta)=(\cos(\phi)\cdot\cos(\psi)\cdot \cos(\theta),\sin(\phi)\cdot \cos(\psi)\cos(\theta),\sin(\psi)\cdot\cos(\theta),\sin(\theta))$ You may quickly verify that its squared coordinates sums up to 1. This last $X$ is a parametrization of a 3-sphere in $\mathbb{R}^4$. You may repeat this process as much as you like to obtain the parametrization of a n-sphere in $\mathbb{R}^{n+1}$. As pointed out by @user99680, you may not parametrize a 3-sphere with just 2 parameters as a 3-sphere is what we call a 3-manifold. Hence it has 3 dimensions and every little piece of it is homeomorphic to an open set in $\mathbb{R}^3$. If there was a way to parametrize a 3-sphere with just 2 parameters, every little neighbourhood of it would be homeomorphic to an open set of $\mathbb{R}^2$. This would imply that we could establish an homeomorphism from an open set of $\mathbb{R}^3$ to an open set of $\mathbb{R}^2$, which is absurd. -
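To see why the rational identity used in the first answer holds, write $s = b^2 + c^2 + d^2$; then

$$(a^2 - s)^2 + (2ab)^2 + (2ac)^2 + (2ad)^2 = (a^2 - s)^2 + 4a^2 s = a^4 + 2a^2 s + s^2 = (a^2 + s)^2,$$

which is exactly the displayed identity. Dividing through by $(a^2+b^2+c^2+d^2)^2$ then gives a point of the unit $3$-sphere for every $(a,b,c,d) \neq 0$, i.e. a parametrization by rational functions.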
President Alison Etheridge is IMS President, 2017–18. She writes: Well, that was it. Jon Wellner handed over the gavel (and I rapidly handed it back to Elyse for safe keeping). Tati took a picture and, before I had a chance to think about it, cheerily persuaded me to write something for…
Unusual p-values after weighting

I'm still new to R and most probably this is a rookie question, but maybe some of you could help me understand what is happening. I'm analyzing the results of an experiment with three treatments and three outcomes. For simplicity I will only take the control, treatment 1, and the three outcome measures. Before running any statistical test, I needed to create a weighting variable because the sample is not representative of the population. To do so, I've used the anesrake package. This is the code for the weighting:

Sex <- c(.49, .51)
agecat <- c(.085, .136, .184, .194, .401)
names(Sex) <- c("Male", "Female")
names(agecat) <- c("18-24", "25-34", "35-44", "45-54", "Mas que 55")
targets <- list(Sex, agecat)
names(targets) <- c("Sex", "agecat")
# I create a unique id in my main dataframe
main_data_id$caseid <- 1:length(main_data_id$Sex)
main_data_id$Sex <- as.factor(main_data_id$Sex)
main_data_id$agecat <- as.factor(main_data_id$agecat)
main_data_id$education <- as.factor(main_data_id$education)
# I check the difference between the population and sample distributions
anesrakefinder(targets, main_data_id, choosemethod = "total")  # all greater than 5% points
main_data_id$caseid <- as.numeric(main_data_id$caseid)
weighted_data <- anesrake(targets, main_data_id, caseid = main_data_id$caseid,
                          verbose = FALSE, cap = 5, choosemethod = "total",
                          type = "pctlim", pctlim = .05, nlim = 5,
                          iterate = TRUE, force1 = TRUE)
summary(weighted_data)
# add weights to the dataset
main_data_id$weightvec <- unlist(weighted_data[1])
n <- length(main_data_id$Sex)

After having created the weighting variable, I'm now performing t-tests (weighted and unweighted) to compare the mean of measures 1/2/3 between control and treatment. When comparing the results of the weighted and unweighted t-tests, I've noticed something unusual. The treatment was not significant on measures 2 and 3, and there the weighted and unweighted results are very similar. Example:

Unweighted:

Welch Two Sample t-test
data: treatment1_2$Measure1 by treatment1_2$treatment
t = -0.62172, df = 509.92, p-value = 0.5344
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval: -0.4920101 0.2554664
sample estimates: mean in group 0 = 7.320463, mean in group 1 = 7.438735

Weighted:

$coefficients
  t.value    df           p.value
  0.8140750  505.6230093  0.4159852
$additional
  Difference  Mean.x     Mean.y     Std. Err
  0.1553317   7.3958085  7.2404768  0.1908076

But on measure 1 the treatment was significant in the normal t-test, while in the weighted one the p-value is completely off. Example:

Unweighted:

Welch Two Sample t-test
data: treatment2_3$Measure1 by treatment2_3$treatment
t = 3.5345, df = 508.84, p-value = 0.0004458
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval: 0.308046 1.079077
sample estimates: mean in group 0 = 7.438735, mean in group 1 = 6.745174

Weighted:

[1] "Two Sample Weighted T-Test (Welch)"
$coefficients
  t.value       df            p.value
  4.133672e+00  4.976646e+02  4.189322e-05
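Two generic survey-weighting quantities are useful when interpreting results like these (these are standard formulas, not the internals of whichever weighted t-test implementation produced the output above):

$$\bar{x}_w = \frac{\sum_i w_i x_i}{\sum_i w_i}, \qquad n_{\text{eff}} = \frac{\left(\sum_i w_i\right)^2}{\sum_i w_i^2} \le n.$$

Weighting moves the group means toward the weighted estimates and effectively replaces $n$ by something closer to $n_{\text{eff}}$ in the standard errors, so weighted and unweighted p-values can diverge in either direction, especially when a few cases carry large weights.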
Guess the answers. The quantum harmonic oscillator is the quantum analogue of the classical simple harmonic oscillator. We set ℏ, ω and the mass equal to 1. For the case of the harmonic oscillator, the potential energy is quadratic and hence the total Hamiltonian looks like: $H = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \frac{1}{2}kx^2$ (1). Two- and three-dimensional harmonic oscillators. We first discuss the exactly solvable case of the simple harmonic oscillator. Freely available research-based interactive simulations with accompanying activities for the learning and teaching of quantum physics and quantum mechanics from the introductory to the advanced level. In class we discussed that classically a mass oscillating in a harmonic oscillator potential is more likely to be found at the extremities of the oscillation, where it has the highest potential energy and lowest kinetic energy. Expectation value of $\hat{x}^{2}$ and $\hat{p}^{2}$ for the harmonic oscillator. If we ignore the mass of the springs and the box, this one works. How to verify the uncertainty principle for a quantum harmonic oscillator. Plug this formula into the TISE and you'll see that it works as long as a = 1/2 and E ... . This paper presents a variant of the multiscale quantum harmonic oscillator algorithm for multimodal optimization named MQHOA-MMO. The linear harmonic oscillator, even though it may represent rather non-elementary objects like a solid and a molecule, provides a window into the most elementary structure of the physical world. It is useful to exhibit the solution as an aid in constructing approximations for more complicated systems. For the motion of a classical 2D isotropic harmonic oscillator, the angular momentum about the ... is symmetric in ... . The noncommutativity in the new mode induces energy level splitting, and is ... . How can a rose bloom in December? Amazing but true, there it is, a yellow winter rose. Histogram of the radial wavefunction. Then, using the optimal vacuum obtained, we construct the ... . It can be seen as the motion of a small mass attached to a spring, or a particle oscillating in a well shaped as a parabola. Bright, like a moon beam on a clear night in June. Edit: I also updated the linked answer to include the analogue of this approach in two dimensions. SYNOPSIS: The harmonic oscillator's quantum mechanical solution involves Hermite polynomials, which are introduced here in various guises, any one of which the reader may ... . A particle in a square well has a wave function (at time t = 0) $\psi(x) = \sqrt{2/a}\,\sin(2\pi x/a)$ for $0 < x < a$, and $\psi(x) = 0$ otherwise. In 1D, the dipole system has discrete energy levels. The quantum harmonic oscillator is a model built in analogy with the model of a classical harmonic oscillator. The wave function is the product of a Hermite function and an exponential function; if we simply replace ..., we can see the ground state consists of the s-orbit, the 1st excited state consists of the p-orbit, and the 2nd excited state consists of the d-orbit. Simple Harmonic Oscillator, February 23, 2015. One of the most important problems in quantum mechanics is the simple harmonic oscillator. Wave function: now consider the wavefunction, $\psi_n(x)$, for the eigenstates.
Ground State Wavefunction of Two Particles in a Harmonic Oscillator Potential 4 In nonrelativistic Quantum Mechanics, is the expectation value of a sum of operators always equal to the sum of the expectation values?. Calculate the force constant of the oscillator. Andrei Tokmakoff, MIT Department of Chemistry, 3/10/2009 6- 12 6. k is called the force constant. You just get used to them. The quantum harmonic oscillator (in 1 D) The Hamiltonian for the harmonic oscillator is prepared by relating the potential energy to Hooke’s law: V(x) = 1 2 Kx2: H = −(¯h2 2m)(d2 dx2)+ 1 2 Kx2 and Hψ= Eψ. 11) Summary: Energy level of three different cases. It consists of a mass m, which experiences a single force F, which pulls the mass in the direction of the point x = 0 and depends only on the position x of the mass and a constant k. The dashed vertical lines represent the position of x = 1 and -1. The ground state of a simple quantum harmonic oscillator is a Gaussian function. How to Verify the Uncertainty Principle for a Quantum Harmonic Oscillator. Ogawa3, and K. It is useful to exhibit the solution as an aid in constructing approximations for more complicated systems. In the center of the applet, you will see the probability distribution of the particle's position. (1) supply both the energy spectrum of the oscillator E= E n and its wave function, = n(x); j (x)j2 is a probability density to find the oscillator at the. The Equation for a Harmonic-Oscillator Model of a Diatomic Molecule Contains the Reduced Mass of the Molecule For a diatomic molecule, there is only one vibrational mode, so there will be only a single set of vibrational wavefunctions with associated energies for this system. 1 Quantum Mechanics - The Harmonic Oscillator 1. It provided a tremendous boost to the eld of statistical mechanics, because it was fully consistent with experimental observations of the day. Now we want the eigenfunction coresponding to our eigenvalue. We take the dipole system as an example. HARMONIC OSCILLATOR - EIGENFUNCTIONS IN MOMENTUM SPACE 3 A= m! ˇh¯ 1=4 (16) and H n is a Hermite polynomial. at least three approaches to analytically solving the TISE for the simple harmonic oscillator: 1. Guess the answers. Compare your results to the classical motion x(t) of a. Two Dimensional Harmonic Oscillator in Cylindrical Coordinates. adjacent energy levels is 3. In fact, it's possible to have more than threefold degeneracy for a 3D isotropic harmonic oscillator — for example, E 200 = E 020 = E 002 = E 110 = E 101 = E 011. It is obvious that our solution in Cartesian coordinates is simply, Normalization of wave function Timothy D. The simple harmonic oscillator, a nonrelativistic particle in a potential $$\frac{1}{2}kx^2$$, is an excellent model for a wide range of systems in nature. We can see that this amounts to replac-ing x!pand m!!1 m!, so we get n(p)= 1 (ˇhm!¯ )1=4 1 p 2nn! H n p p hm!¯ e p2=2hm!¯ (17) In particular, the ground state is. Many potentials look like a harmonic oscillator near their minimum. More generally it is a superposition. The quantum harmonic oscillator is the quantum-mechanical analog of the classical harmonic oscillator. In the two iterations, MQHOA-MMO only does one thing: sampling according to the wave function at different scales. where k is a constant called the eigenvalue. Harmonic oscillator; Morse oscillator; Current-biased Phase Qubit; Flux-biased Phase Qubit; 2D examples. Physics 422 - 01 Homework Set 4 1. wavefunction. For math, science, nutrition, history. 
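Since several of the fragments above mention verifying the uncertainty principle and computing the expectation values of $\hat{x}^2$ and $\hat{p}^2$ for the ground state, here is a small SymPy check. It assumes the standard normalized ground state $\psi_0(x) = (m\omega/\pi\hbar)^{1/4} e^{-m\omega x^2/2\hbar}$ (for which $\langle x\rangle = \langle p\rangle = 0$); the snippet is illustrative and not taken from any of the quoted sources.

```python
from sympy import symbols, integrate, exp, pi, oo, sqrt, Rational, simplify, diff

x, m, w, hbar = symbols("x m omega hbar", positive=True)
psi0 = (m * w / (pi * hbar)) ** Rational(1, 4) * exp(-m * w * x**2 / (2 * hbar))

norm = integrate(psi0**2, (x, -oo, oo))                           # 1 (normalized)
x2 = integrate(x**2 * psi0**2, (x, -oo, oo))                      # <x^2> = hbar/(2*m*omega)
p2 = integrate(-hbar**2 * psi0 * diff(psi0, x, 2), (x, -oo, oo))  # <p^2> = m*omega*hbar/2
print(simplify(norm), simplify(x2), simplify(p2))
print(simplify(sqrt(x2 * p2)))                                    # Delta x * Delta p = hbar/2
```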
more practice with the H atom. Pictorially, this suggests that J points almost along the zaxis. Ground State Wavefunction of Two Particles in a Harmonic Oscillator Potential 4 In nonrelativistic Quantum Mechanics, is the expectation value of a sum of operators always equal to the sum of the expectation values?. 600 A Energy Wave Functions of Harmonic Oscillator A. Adding an anharmonic contribution to the potential generally changes the form of the trajectories (obtained by solving Newton's equations of motion), into nonperiodic, complicated curves. We will solve the time-independent Schrödinger equation for a particle with the harmonic oscillator potential energy, and. Again, I need help simply starting. the 2D harmonic oscillator. The phase of the real and imaginary parts change with time but the probability density is independent of time. Fig 1: The plot of the quantum mechanical gravitational potential plus harmonic oscillator potential as a function of internuclear distance ‘ ’. This is a purely QM phenomenon! Tunneling is a general feature of QM systems, especially those with very low mass like e- and H. The linear rigid rotor model consists of two point masses located at fixed distances from their center of mass. -----EN 1 = H NN 1 =∫ (ψ N 0)* H1 ψ N 0dτ, N = 1 for first excited state H1 = H - H0 H0 = -h2/(2m) {d2/dx2} + k x2/2 H1 = c x3 + d x4 For the harmonic oscillator, α = 2πνm/h = 4π2νm/h & v = 0 is the ground. 4 Profile of the absorbing imaginary potential. Use the ground-state wavefunction of the simple harmonic oscillator to find x avg, (x 2) avg, and Δx. Media in category "Harmonic oscillators" The following 91 files are in this category, out of 91 total. The quantum harmonic oscillator is the quantum-mechanical analog of the classical harmonic oscillator. 1 Chemistry 2 Lecture 5 The Simple Harmonic Oscillator Learning outcomes • Be able to draw the wavefunctions for the first few solutions to the Schrödinger equation for the harmonic oscillator • Be able to calculate the energy separation between the vibrational levels for the. In this paper we will study the 2D-harmonic oscillator in 1:1 resonance. Harmonic oscillators are ubiquitous in physics and engineering, and so the analysis of a straightforward oscillating system such as a mass on a spring gives insights into harmonic motion in more complicated and nonintuitive systems, such as those. HARMONIC OSCILLATOR - EIGENFUNCTIONS IN MOMENTUM SPACE 3 A= m! ˇh¯ 1=4 (16) and H n is a Hermite polynomial. As for the cubic potential, the energy of a 3D isotropic harmonic oscillator is degenerate. It is especially useful because arbitrary potential can be approximated by a harmonic potential in the vicinity of the equilibrium point. Learn about position, velocity, and acceleration vectors. n is your n_x and m is your n_y. For small displacements, this is just a harmonic oscillator. This simulation shows time-dependent 2D quantum bound state wavefunctions for a harmonic oscillator potential. Ogawa3, and K. A set of benchmark test functions including. The Fock-Darwin states are the natural basis functions for a system of interacting electrons trapped inside a 2D quantum dot. 1 Green’s functions The harmonic oscillator equation is mx + kx= 0 (1) This has the solution x= Asin(!t) + Bcos(!t); != r k m (2) where A;Bare arbitrary constants re ecting the fact that we have two arbitrary initial conditions (position and velocity). 
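The classical solution quoted just above, $x(t) = A\sin(\omega t) + B\cos(\omega t)$ with $\omega = \sqrt{k/m}$, is easy to sanity-check numerically. The integrator choice (semi-implicit Euler) and the parameter values below are my own illustrative picks, not part of the quoted text.

```python
import numpy as np

# Integrate m x'' + k x = 0 and compare with the analytic solution.
# Starting from rest at x = 1 gives A = 0, B = 1, i.e. x(t) = cos(w t).
m, k = 1.0, 1.0
w = np.sqrt(k / m)
dt, n_steps = 1e-3, 10_000
x, v = 1.0, 0.0
for _ in range(n_steps):
    v += -(k / m) * x * dt     # update velocity first (semi-implicit Euler)
    x += v * dt
print(x, np.cos(w * n_steps * dt))   # numerical vs analytic; they agree to roughly 1e-3 here
```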
The wave function is the product of the Hermite functions and exponential function If we simply replace , we can see the ground state consists of s-orbit, the 1st excited state consists of p-orbit, and the 2nd excited state consists of d-orbit. In fact, we may cast any Hamiltonian H = p2 2 + V(x)= H 0 − 1 2 x2. I'd like to find the normalized ground state wavefunction for the anharmonic oscillator (Duffing) whose potential for which there is no analytic solution; an oscillator with a quartic potential, in addition to the quadratic potential. In the two iterations, MQHOA-MMO only does one thing: sampling according to the wave function at different scales. Solutions to the quantum harmonic oscillator. oscillator in sections 2 and 3. Quantum Mechanics Non-Relativistic Theory, volume III of Course of Theoretical Physics. Solving that equation allows one to calculate the stationary wave function of the harmonic oscillator and the corresponding values of the energy. 3 Expectation Values 9. We introduce a mesonic field Φ(x1 ,x2) that depends on the position of both quarks, and then derive the field equations from a covariant lagrangian L(x1, x2). In physics, the harmonic oscillator is a system that experiences a restoring force proportional to the displacement from equilibrium = −. (c) Is this wave function for the ground state or for the first excited state?. The model captures well. 24) The probability that the particle is at a particular xat a particular time t is given by ˆ(x;t) = (x x(t)), and we can perform the temporal average to get the. Explain the origin of this recurrence. I've learned a lot from the help I have received here on SolutionLibrary, and I'm going to try doing this one on myself and hopefully I'll do it right. SYNOPSIS The Harmonic Oscillator's Quantum Mechanical solu-tion involves Hermite Polynomials, which are introduced here in various guises any one of which the reader may. The significance of equations 26 and 32 is that we know exactly which energies correspond to which excited state of the harmonic oscillator. 3D Symmetric HO in Spherical Coordinates *. The second term containing bx 4, however, has a value 3 b 4 α 2 and so makes a contribution towards the ground state energy of the oscillator. k is called the force constant. The Harmonic Oscillator, a Review Here, we review the physics of the one-dimensional harmonic oscillator, a quantum system describing a 1D particle with Hamiltonian H^ = p^2 2m + 1 2 m!2^x2: (1) As we have seen, a key problem is to understand the energy eigenstates of this Hamiltonian, i. A simple sine wave, when graphed, represents a balanced parabola extended in a curved line up to the zenith and down to the apex with no sudden, jerky movements. Example notebooks 1D examples. For math, science, nutrition, history. jpeg 800 × 600; 119 KB. The main differences are that the wave function is nonvanishing only for !L 2 0 is Φ 0f (x) = (m2ω/(πħ)) ¼ exp(-mωx 2 /ħ). Two and three-dimensional harmonic osciilators. Lewis-Riesenfeld quantization and SU(1, 1) coherent states for 2D damped harmonic oscillator. The ground state is a Gaussian distribution with width x 0 = q ~ m!. The exact energy eigenvalues and the wave functions are obtained in terms of potential parameters, magnetic field strength, AB flux field, and magnetic quantum. Suppose we measure the average deviation from equilibrium for a harmonic oscillator in its ground state. 
A one dimensional harmonic oscillator has an infinite series of equally spaced energy states, with , where is a positive integer or zero, and is the classical frequency of the oscillator. by Reinaldo Baretti Machín ( UPR-Humacao ) The energy formula of the two dimensional harmonic oscillator in cylindrical coordinates is found by numerical integration of Schrodinger equation. The time-dependent wave function The evolution of the ground state of the harmonic oscillator in the presence of a time-dependent driving force has an exact solution. (20 points) Consider as the unperturbed Hamiltonian the two-dimensional harmonic oscillator: where we have made the assumption that the angular frequency w is the same in both the r and u directions a) Denote the energy eigenstates as |n y), where n is the quantum number for oscillations in the x-direction and ny is the quantum number for. Quantum Mechanics Problem Sheet 6 Basics 1. Transformed harmonic oscillator wave functions Next: Parametrization of the LST Up: Transformed Harmonic Oscillator Basis Previous: Local-scaling point transformations The anisotropic three-dimensional HO potential with three different oscillator lengths. As for the cubic potential, the energy of a 3D isotropic harmonic oscillator is degenerate. She needed a physical example of a 2D anisotropic harmonic oscillator (where x and y have different frequencies). Subject: Image Created Date: 10/27/2007 12:08:02 AM. Write an integral giving the probability that the particle will go beyond these classically-allowed points. Ask Question Asked 2 years ago. 3 Wave Function Comparison for Ground State of the In nite Potential Well. e is quantized. A sine wave or sinusoid is a mathematical curve that describes a smooth periodic oscillation. Solving the Schrodinger equation for the harmonic oscillator potential produces a set of distinct wavefunctions and energy levels. Harmonic oscillator wave functions and probability density plots using spreadsheets Popat S. 4 The Two-Dimensional Central-Force Problem The 2D harmonic oscillator is a 2D central force problem (as discussed in TZD Many physical systems involve a particle that moves under the influence of a central force; that is, a force that always points exactly toward, or away from, a force center O. com (Received 20 December 2010 , accepted 28 January 2011) Abstract. In Equation ( 15 ), f x is the operator and can be examined in the forms of power of the coordinate x η , exponential function e − 2 c x , and Gaussian function e − c x 2. (b) Find b and the total energy E. Quantum Wave Function Visualization - Duration: Coherent State of the Harmonic Oscillator in 2D (Quantum Mechanics). 3 Expectation Values 9. The original dimension-9 algebra can be identi ed as u(3) = u(1) su(3). $\begingroup$ In Jens' answer, isn't the 1/(2 a^2) bit there to take into account the factor of 1/2 in front of the laplacian? Also, the Partition is there because he is representing 2d space in a 1d vector (basically, he discretises space, then take the 2d matrix and set the rows one after the other to each other so as to form a 1d vector; the Partition undoes this). A sine wave is a continuous wave. Figure 5 The quantum harmonic oscillator energy levels superimposed on the potential energy function. Post navigation ‹ Previous What is a SSB Modulation and Its Applications. A Brief Introduction to the Quantum Harmonic Oscillator Salvish Goomanee King’s College London, UK Email address: salvish. 
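To illustrate the equally spaced spectrum E_n = (n + 1/2)ħω quoted above, the following is a minimal finite-difference sketch (not the page's own numerical-integration code); the grid size, the domain and the units m = ω = ħ = 1 are assumptions made for the example.

```python
import numpy as np

# Finite-difference Hamiltonian for H = -1/2 d^2/dx^2 + 1/2 x^2 with m = omega = hbar = 1.
# The lowest eigenvalues should come out close to 0.5, 1.5, 2.5, ..., i.e. equally spaced by hbar*omega.
n = 1000
x, h = np.linspace(-8.0, 8.0, n, retstep=True)
diag = 1.0 / h ** 2 + 0.5 * x ** 2          # 3-point Laplacian diagonal + potential
off = -0.5 / h ** 2 * np.ones(n - 1)        # off-diagonal part of the Laplacian
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)
print(np.round(E[:5], 4))                   # ~ [0.5, 1.5, 2.5, 3.5, 4.5]
print(np.round(np.diff(E[:5]), 4))          # ~ [1, 1, 1, 1]  ->  equally spaced levels
```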
Raising operator is formed using a finite difference operator, and when acted on ground state wave function, produces excited states. Harmonic oscillator wave function using Schrodinger and equations of the harmonic oscillator are derived. Calculate the expectation values of X(t) and P(t) as a function of time. The color indicates the phase. There are different approaches to solving the quantum harmonic oscillator. 14(b)] Confirm that the wavefunction for the first excited state of a one-dimensional linear harmonic oscillator given in Table 8. Time-Dependent 2D Harmonic Oscillator in Presence of the Aharanov-Bohm Effect Article (PDF Available) in International Journal of Theoretical Physics 45(9):1791-1797 · November 2006 with 130 Reads. A harmonic oscillator (quantum or classical) is a particle in a potential energy well given by V ( x )=½ kx ². Write an integral giving the probability that the particle will go beyond these classically-allowed points. Or different wave functions corresponding to the same energy level. Matrix elements over the harmonic oscillator wave function are defined as follows: (15) ν ∣ f x ∣ ν ′ = ∫ − ∞ ∞ ψ ν α, x f x ψ ν ′ α ′, x ′ d x. 1 Classical Case The classical motion for an oscillator that starts from rest at location x 0 is x(t) = x 0 cos(!t): (9. Clone via HTTPS Clone with Git or checkout with SVN using the repository’s web address. Inviting, like a flre in the hearth of an otherwise dark. It occurs often in pure and applied mathematics, as well as physics, engineering, signal processing and many other fields. Review : 1-D a†a algebra of U(1) representations Review : Translate T(a) and/or Boost B(b) to construct coherent state Review : Time evolution of coherent state (and "squeezed" states) 2-D a†a algebra of U(2) representations and R(3) angular momentum operators 2D-Oscillator basic states and operations Commutation relations Bose-Einstein symmetry vs Pauli-Fermi-Dirac (anti)symmetry. A simple harmonic oscillator is an oscillator that is neither driven nor damped. Andrei Tokmakoff, MIT Department of Chemistry, 3/10/2009 6- 12 6. Quantum Harmonic Oscillator 6 By letting we can rewrite : Quantization of Energy Recall that in the course of this derivation, the following substitutions were made: and: therefore: Since is a non-negative integer, then can only take on discrete values, i. Harmonic oscillator; Morse oscillator; Current-biased Phase Qubit; Flux-biased Phase Qubit; 2D examples. James Clerk Maxwell unknowingly discovered a correct relativistic, quantum theory for the light quantum, forty-three years before Einstein postulated the photon's existence. We prove a spectrum localization theorem and obtain a regularized trace formula for a compactly supported perturbation of this operator. Consider a diatomic molecule AB separated by a distance with an equilbrium bond length. Motion of a particle on a ring. It is then shown that it gives the same results as the wave function in the position basis. Use the same method, just change the wavefunction to that for the first excited state. A set of benchmark test functions including. Independent of the initial conditions, the trajectories in a 2D harmonic oscillator are ellipses. A simple, harmonic oscillator at the point x=0 generates a wave on a rope. Transformed harmonic oscillator wave functions Next: Parametrization of the LST Up: Transformed Harmonic Oscillator Basis Previous: Local-scaling point transformations The anisotropic three-dimensional HO potential with three different oscillator lengths. 
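The matrix elements ⟨ν|f(x)|ν′⟩ defined in Equation (15) can be checked numerically for the simplest case f(x) = x. The sketch below is illustrative only and assumes m = ω = ħ = 1; it builds the normalized oscillator eigenfunctions from SciPy's physicists' Hermite polynomials, and only the elements with |ν − ν′| = 1 should come out nonzero.

```python
import numpy as np
from math import factorial
from scipy.special import eval_hermite

# Numerical check of <nu| x |nu'> over harmonic-oscillator eigenfunctions (m = omega = hbar = 1).
x = np.linspace(-10.0, 10.0, 4001)

def psi(n, x):
    # normalised eigenfunction: pi^(-1/4) / sqrt(2^n n!) * H_n(x) * exp(-x^2/2)
    return (np.pi ** -0.25) / np.sqrt(2.0 ** n * factorial(n)) * eval_hermite(n, x) * np.exp(-x ** 2 / 2)

M = np.array([[np.trapz(psi(i, x) * x * psi(j, x), x) for j in range(4)] for i in range(4)])
print(np.round(M, 4))
# expected nonzero entries: <0|x|1> = sqrt(1/2) ~ 0.7071, <1|x|2> = 1.0, <2|x|3> = sqrt(3/2) ~ 1.2247
```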
(b) Find b and the total energy E. The Harmonic Oscillator, a Review Here, we review the physics of the one-dimensional harmonic oscillator, a quantum system describing a 1D particle with Hamiltonian H^ = p^2 2m + 1 2 m!2^x2: (1) As we have seen, a key problem is to understand the energy eigenstates of this Hamiltonian, i. This is because the imaginary part of the. Varga1;4 1Department of Physics, Niigata University, Niigata 950-21, Japan 2Graduate School of Science and Technology, Niigata University, Niigata 950-21, Japan 3 RIKEN, Hirosawa, Wako, Saitama 351-01, Japan. Figure (1) show the time evolution for a number of time steps of the real and imaginary parts of the wavefunction and the probability density for the stationary state n = 3 of the truncated harmonic oscillator. oscillator in sections 2 and 3. 42 Example Consider the 2D harmonic oscillator V 1 2 mω 2 x 2 y 2 If we measure from PHYS 44 at University of Edinburgh. 2D Quantum Harmonic Oscillator. nuclear wavefunction on the ground state with the time-evolution of the same wavepacket on the when initially projected onto the excited state Ft t t( )= ϕϕge( ) ( ). p By substituting in the Schrödinger equation for the harmonic oscillator, show that the ground-state vibrational wave function is an eigenfunction of the total energy operator. 14(b)] Confirm that the wavefunction for the first excited state of a one-dimensional linear harmonic oscillator given in Table 8. The Quantum Harmonic Oscillator. A harmonic oscillator (quantum or classical) is a particle in a potential energy well given by V ( x )=½ kx ². A one-dimensional harmonic oscillator wave function is. Quantum Harmonic Oscillator. The Finite Well. Since we now have the eigenvalue, we do not want to keep recalculating the. The fixed distance between the two masses and the values of the masses are the only characteristics of the rigid model. PROBLEM SET SOLUTIONS CHAPTER 9, Levine, Quantum Chemistry, 5th Ed. We define a new set of ladder operators for the 2D system as a linear combination of the x and y ladder operators and construct the SU(2) coherent states, where these is the wave-function of the 1D oscillator, and n( ) are the Hermite polynomials. Normalize wave function. Schmidt Department of Physics and Astronomy Arizona State University. The harmonic oscillator The one-dimensional harmonic oscillator is arguably the most important ele-mentary mechanical system. goomanee@kcl. This can be written in dimensionless form as H0 Ñw = 1 2 p p0 2 + 1 2 x x0 2. We just include Output = wfs OutputFormat = axis_x. The short of it is that it's the kinetic energy minus the potential energy of a given mass*. This levels is known as degenerate levels. 1 Classical Case The classical motion for an oscillator that starts from rest at location x 0 is x(t) = x 0 cos(!t): (9. Now we want the eigenfunction coresponding to our eigenvalue. Harmonic oscillator (PDF: 18 pages, 250 KB). is a model that describes systems with a characteristic energy spectrum, given by a ladder of. A Brief Introduction to the Quantum Harmonic Oscillator Salvish Goomanee King’s College London, UK Email address: salvish. 1 is a solution of the Schrödinger equation for the oscillator and that its energy is ω. The plot of the potential energy U(x) of the oscillator versus its position x is a parabola (Figure 7. In quantum mechanics a harmonic oscillator with mass mand frequency!is described by the following Schr¨odinger’s equation: ~2 2m d2 dx2 + 1 2 m!2x2 (x) = E (x): (1) The solution of Eq. 
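The 2D isotropic oscillator V = ½mω²(x² + y²) mentioned above has levels E = (n_x + n_y + 1)ħω whose degeneracy grows with the level; a tiny sketch that enumerates them (ħω = 1; the cutoff nmax is an arbitrary choice for the example):

```python
from collections import Counter

# Energy levels of the 2D isotropic harmonic oscillator, E = (n_x + n_y + 1) in units of hbar*omega.
# The level with N = n_x + n_y has degeneracy N + 1.
nmax = 5
levels = Counter()
for nx in range(nmax + 1):
    for ny in range(nmax + 1):
        if nx + ny <= nmax:
            levels[nx + ny + 1] += 1

for E, g in sorted(levels.items()):
    print(f"E = {E} hbar*omega   degeneracy = {g}")
# E = 1 -> 1, E = 2 -> 2, E = 3 -> 3, ...  (the degenerate levels referred to above)
```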
The red line is the expectation value for energy. ψ = A x e − b x 2 (a) Show that ψ satisfies Equation 40. James Clerk Maxwell unknowingly discovered a correct relativistic, quantum theory for the light quantum, forty-three years before Einstein postulated the photon's existence. The aim of this. It is especially useful because arbitrary potential can be approximated by a harmonic potential in the vicinity of the equilibrium point. for an anharmonic oscillator. 1 2-D Harmonic Oscillator. 1) where we will consider the integer nas finite, so that it is also true that m≫ 1. Thus, you. But let me consider the 1-dimensional harmonic oscillator, to avoid extraneous complications. If we ignore the mass of the springs and the box, this one works. Calculation of energy eigenvalues for the quantum 0 is the Hamiltonian for a harmonic oscillator with the mass and the angular frequency squeezed vacuum state as a one-parameter trial wavefunction, and minimize the energy of the system by variation [4]. Newton's law of motion F = ma is generally non-linear, since F(x) is usually a non-linear function of x. goomanee@kcl. HARMONIC OSCILLATOR AND COHERENT STATES Figure 5. Harmonic Oscillator and Coherent States 5. Locate the nodes of the harmonic oscillator wave function with v = 5. Transformed harmonic oscillator wave functions Next: Parametrization of the LST Up: Transformed Harmonic Oscillator Basis Previous: Local-scaling point transformations The anisotropic three-dimensional HO potential with three different oscillator lengths. The quantities L'+1=2 n are the generalized. for an anharmonic oscillator. Using the ground state solution, we take the position and. (20 points) Consider as the unperturbed Hamiltonian the two-dimensional harmonic oscillator: where we have made the assumption that the angular frequency w is the same in both the r and u directions a) Denote the energy eigenstates as |n y), where n is the quantum number for oscillations in the x-direction and ny is the quantum number for. 2 The wavefunction is separable in Cartesian coordinates, giving a product of three one-dimensional oscillators with total energies. Quantum harmonic oscillator is an important model system taught in upper level physics and physical chemistry courses. Time-Dependent 2D Harmonic Oscillator in Presence of the Aharanov-Bohm Effect Article (PDF Available) in International Journal of Theoretical Physics 45(9):1791-1797 · November 2006 with 130 Reads. The wavefunction contains all the information about the state of the system. When a system is in an eigenstate of observable A (i. How to Verify the Uncertainty Principle for a Quantum Harmonic Oscillator. 2D harmonic oscillator + 1D double well potential (type II): 0 = 1/ , 0 = ℏ/ , = 0, 2 2𝑉 = 1 2 − + 2 + 2 IV. The corrections, due to the boundary and the space dimension, to the ground-stste energy and wave function are calculated by using a linear approximation method which is linear in energy and by. Schrodinger s three regions (we already did this!) 2. Thus, it is. Introduction We return now to the study of a 1-d stationary problem: that of the simple harmonic oscillator (SHO, in short). More generally it is a superposition. Motion of a particle on a ring. Ground State Wavefunction of Two Particles in a Harmonic Oscillator Potential 4 In nonrelativistic Quantum Mechanics, is the expectation value of a sum of operators always equal to the sum of the expectation values?. 
wavefunction, the wavefunction of the state at the bottom of the ladder,whichistheground state ofthesimpleharmonicoscillator,has energy E = 1 2 h¯ω. The time-independent Schrödinger equation for a 2D harmonic oscillator with commensurate frequencies can generally given by. Vibration and Rotation of Molecules Chapter 18 Molecular Energy Translational Vibrational motion -harmonic oscillator, KE and PE -classical approach Center of mass coordinates Rotational wave function. Quantum Harmonic Oscillator 6 By letting we can rewrite : Quantization of Energy Recall that in the course of this derivation, the following substitutions were made: and: therefore: Since is a non-negative integer, then can only take on discrete values, i. wavefunction. Normalize wave function. Therefore, we can replace t in y = f(x) = Acosωt by t + x / v for the wave travelling in negative x-direction. Adding an anharmonic contribution to the potential generally changes the form of the trajectories (obtained by solving Newtons equations of motion) into nonperiodic complicated curves. 02; grid = N[a Range[-n, n]]; derivative2 = NDSolve`FiniteDifferenceDerivative[2, grid]["DifferentiationMatrix"]. Easy interview question got harder. As for the cubic potential, the energy of a 3D isotropic harmonic oscillator is degenerate. p By substituting in the Schrödinger equation for the harmonic oscillator, show that the ground-state vibrational wave function is an eigenfunction of the total energy operator. In following section, 2. Plot the wavefunction of the final state for the two-dimensional harmonic oscillator. Harmonic Oscillator. The Fock-Darwin states are the natural basis functions for a system of interacting electrons trapped inside a 2D quantum dot. polar coordinates in two dimensions.
# A Course of Pure Mathematics: free PDF download

Download A Course of Pure Mathematics by G. H. Hardy as a free PDF. Hardy, the mathematician portrayed in the film The Man Who Knew Infinity, helped the famous Indian mathematician Srinivasa Ramanujan gain recognition. There can be few textbooks of mathematics as well known as Hardy's Pure Mathematics. Since its publication in 1908, it has been a classic work to which successive generations of budding mathematicians have turned at the beginning of their undergraduate courses. In its pages, Hardy combines the enthusiasm of a missionary with the rigor of a purist in his exposition of the fundamental ideas of the differential and integral calculus, of the properties of infinite series, and of other topics involving the notion of limit.

download link: http://noteable.site/2S9/A_Course_Of_Pure_Mathematics.pdf
# Homework Help: Evidence for Quark Production

1. Mar 16, 2015 ### unscientific

1. The problem statement, all variables and given/known data
(a) What are their vertex and propagator factors?
(b) Find the value of R.
(c) Explain the peaks at 3, 10 and 100 GeV.

2. Relevant equations

3. The attempt at a solution

Part (a) They have the same propagator factor $\frac{1}{P \cdot P}$. The vertex factor for muon production is $(1)(-1)g_{EM}^2 = -g_{EM}^2$. The vertex factor for hadron production is $\sum -q^2$.

Part (b) In the range of 2 GeV to 20 GeV, the quark types produced are u, d, s, c, b.
$$R = 3 \times \frac{e^2\left( -\frac{4}{9} -\frac{1}{9} - \frac{1}{9} - \frac{4}{9} - \frac{1}{9} \right)}{-e^2} = \frac{11}{3}$$
The factor of 3 comes from the 3 colours, R, G and B.

Part (c) Quark masses: u, d ~ $10^{-3}\,\text{GeV}$, s ~ $0.1\,\text{GeV}$, c ~ $1\,\text{GeV}$, b ~ $4\,\text{GeV}$, t ~ $170\,\text{GeV}$. The peak at 3 GeV comes from production of charm quarks, the peak at 10 GeV from production of bottom quarks, and the peak at 100 GeV from production of top quarks.

2. Mar 16, 2015 ### Staff: Mentor
Not all of them are possible over the full range of 2 to 20 GeV. Not the general production; that leads to the flat R you calculated. There is something more special going on at those peaks (related to the quarks). Compare that to the mass of two top quarks. Can that be right?

3. Mar 18, 2015 ### unscientific
The masses of the d, u, s, c, b quarks are 0.005, 0.003, 0.1, 1.2 and 4.2 GeV respectively. That's clearly lower than 20 GeV, so what is restricting their formation? If it's not the heavier quarks that are forming, I'm guessing it's the formation of hadrons? You're right in saying that two top quarks would have an energy of 340 GeV, but can't a combination of a top and other quarks be formed?

4. Mar 18, 2015 ### Staff: Mentor
At 20 GeV all of them are possible, but at 2 GeV that is not true. There is a different particle at 90 GeV. The top quark decays before it hadronizes, and even if it did form hadrons they would be above the top mass. Hadrons do not have an energy significantly above the quark masses - of the order of 1 GeV more is possible, but not 80 GeV.

5. Mar 18, 2015 ### unscientific
Sorry, I should have been more specific. Here are the ranges for quark production:
uds: 2-3 GeV
udsc: 3-10 GeV
udscb: 10-20 GeV
To summarize, the peak at 3 GeV corresponds to the J/psi $(c \bar c)$ meson being produced, the peak at 10 GeV corresponds to the upsilon $(b \bar b)$ meson being produced, and the peak at 90 GeV corresponds to the $Z^0$ boson. But the Z boson is produced by coupling to all the standard model fermions.
Hadronic: $f = u, d, s \dots$
Leptonic: $f = e^-, \mu^-, \tau^-$
Other (invisible): $f = \nu_{e, \mu, \tau}$
None of these have mass 45 GeV. How does the Z boson get its mass of 90 GeV?
Last edited: Mar 18, 2015

6. Mar 18, 2015 ### Staff: Mentor
Right. It is an elementary particle with this mass. It is not a bound state of other particles.
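For reference, the ratio R = 3 Σ_q (q/e)² discussed in the thread can be tabulated for the quark sets accessible in each energy range; the short sketch below is only a numerical restatement of the arithmetic in the posts above.

```python
from fractions import Fraction

# R = sigma(e+e- -> hadrons) / sigma(e+e- -> mu+mu-) = N_c * sum over accessible quark flavours of (q/e)^2
charges = {"u": Fraction(2, 3), "d": Fraction(-1, 3), "s": Fraction(-1, 3),
           "c": Fraction(2, 3), "b": Fraction(-1, 3)}
N_c = 3  # colour factor

def R(flavours):
    return N_c * sum(charges[f] ** 2 for f in flavours)

print(R("uds"))     # 2      (roughly 2-3 GeV centre-of-mass energy)
print(R("udsc"))    # 10/3   (above the charm threshold)
print(R("udscb"))   # 11/3   (above the bottom threshold, as computed in the thread)
```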
# Counterclockwise Rotation Calculator So what's the torque about that point due to each of the three forces that act on the object? Use the rule I stated in my first post to determine whether a given torque is clockwise or counterclockwise. All bullets and factory loads have a published G1 Ballistic Coefficient. Those that rotate the plane counterclockwise (to the left) are called levorotatory (from the Latin laevus, "left"). Rule When we rotate a figure of 90 degrees counterclockwise, each point of the given figure has to be changed from (x,y) to (-y. A rotation in the x-y plane by an angle θ measured counterclockwise from the positive x-axis is represented by the real 2×2 special orthogonal matrix,2 cosθ −sinθ sinθ cosθ. This is the MODE used in computing problems involving COMPLEX NUMBERS. To cancel rotation, just use a G69. Then, simply connect the points to create the new figure. If the preimage is rotated in a clockwise direction, the angle of rotation is negative. About goggintoric. You can type them in yourself or let the calculator create the values. The amount of this rotation is obtained with the aid of the disc. Take this as a given. This would work with both horizontal and vertical lines. motion calculator," begin at zero and move around the GDF counterclockwise. RotationMatrix[\[Theta]] gives the 2D rotation matrix that rotates 2D vectors counterclockwise by \[Theta] radians. Find the degree measure of the angle for each rotation of 3600. When a geometric object is rotated, it is rotated about a given point through a given angle. →x represents a counterclockwise rotation by the angle α followed by a counterclockwise rotation by the angle β. The rotation matrix is easy get from the transform matrix, but be careful. 439 drinking straw for demonstration purposes straightedge Playing Division Dash Student Reference Book, p. Had the meter been marked Kh 3. Rule for 90° counterclockwise rotation:. As the Earth spins in a counter-clockwise direction on its axis, anything flying or flowing over a long distance above its surface is deflected. Then indicate the lens rotation, direction, and vertex. If this figure is rotated 90 ° counterclockwise, find the vertices of the rotated figure and graph. The relative orientation between two orthogonal righthanded 3D cartesian coordinate systems, let's call them xyz and ABC, is described by a real orthogonal 3x3 rotation matrix R, which is commonly parameterized by three so-called Euler angles α, β and γ. It contains well written, well thought and well explained computer science and programming articles, quizzes and practice/competitive programming/company interview Questions. Checking on a calculator: sin(135) = 0. Selecting the axis of rotation at the 1 cm mark will eliminate any torques produced by the pivot (force P) and thus, the torque produced by. Covers the terminology and notation for the four quadrants of the plane, and answers some typical homework problems related to quadrants. Join the coolest social network. The hub is weighted so that it does not rotate, but it contains gears to count the number of wheel revolutions—it then calculates the distance traveled. Rotations and complex eigenvalues Math 130 Linear Algebra D Joyce, Fall 2015 Rotations are important linear operators, but they don't have real eigenvalues. Buttons in the pop-up window. Chapter 10 - Rotation and Rolling II. rotation of the dial equals 1 cubic foot of water or 7. For example, for part (a) the axis is O. How to determine pump rotation by impeller design. 
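Here is a small sketch of the counterclockwise rotation matrix quoted above, checking that a 90 degree rotation sends (x, y) to (-y, x); the sample point is arbitrary.

```python
import numpy as np

def rotation_matrix(theta):
    """2x2 matrix for a counterclockwise rotation by theta radians about the origin."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

p = np.array([3.0, 1.0])
print(rotation_matrix(np.pi / 2) @ p)   # ~ [-1, 3]   ->  (x, y) -> (-y, x) for 90 deg counterclockwise
print(rotation_matrix(np.pi) @ p)       # ~ [-3, -1]  ->  180 deg rotation
```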
If you are using a servo controller though and are mixing Hitec and Futaba servos it may cause a problem. About the Rotation Sensor. What am I missing? What am I missing? Note A friend suggested that it's the coordinate system that's being rotated, but the wolfram site (linked above) seems to explicitly exclude that from being the cause of my misunderstanding (see (1) and (3) on the wolfram site). The most common rotation angles are 90 degrees, 180 degrees, 270 degrees etc. 100 in valve lift. 4) Problem 9. This means that to reverse the direction of motion, change positive values to negative, and negative values to positive. Usually, you will be asked to rotate a shape around the origin, which is the point (0, 0) on a coordinate plane. A geometric interpretation of multiplication. (5), the moment of inertia depends on the axis of rotation. Related Topics. Because of this discrete nature of step – wise rotation of a stepper motor, they are often employed in industrial automation, CNC systems, etc. information concerning the direction of the rotation and the amount of the rotation necessary to move from the initial side of the angle to the terminal side. REFLECTIONS. Asking some people whether their propeller is Left or Right Hand, Clockwise Rotating or Counter Clockwise Rotating can often generate a blank look. Calculate the specific rotation of (2 R,3 R)-tartaric acid based on the following observation: A 0. RotationMatrix[{u, v}] gives the matrix that rotates the vector u to the direction of the vector v in any dimension. Selecting the axis of rotation at the 1 cm mark will eliminate any torques produced by the pivot (force P) and thus, the torque produced by. For less bottom-out resistance,, turn the knob counterclockwise. C ONSIDER THE FIRST QUADRANT point (a, b), and let us reflect it about the y-axis. center of rotation, and image points is x. Since Zwift was released, there has been a steady run of updates and new features. Learn about the rules for 90 degree clockwise rotation about the origin. The clamp arm rotates 90° as it extends away from the work piece providing clearance and access for unload/ load operations. The converse is also true. But the other thing is, if you think about it, a lot of the rotations that you might want to do in R3 can be described by a rotation around the x-axis first-- which we did in this video-- then by rotation around the y-axis and then maybe some rotation around the z-axis. Here, Δ A ' B ' O is obtained by rotating Δ A B O by 180 ° about the origin. informing of the 90° rotation overlap (use this warning for safe operation). Featured Item: Optical Rotation Carbon is the core element around which the chemistry of life has evolved. Define counterclockwise rotation. There are integer number variants of the functions as well. "Center" is the 'center of rotation. Solving the 5x5x5 (Professor) Cube. 24 Remove the servo sleeve retainers. In this lesson you’ll learn about the concepts and the basics of Translation, Reflection, Dilation, and Rotation. A clockwise (typically abbreviated as CW) motion is one that proceeds in the same direction as a clock's hands: from the top to the right, then down and then to the left, and back up to the top. Press ENTER. Using this convention, a vector with a direction of 30 degrees is a vector that has been rotated 30 degrees in a counterclockwise direction relative to due east. 
The hub is weighted so that it does not rotate, but it contains gears to count the number of wheel revolutions—it then calculates the distance traveled. For example the matrix. So lets plug that in for all of ur points. 07/08/2019 17:25:35. Referring to the above figure (Goldstein 1980), the equation for the "fixed" vector in the transformed coordinate system (i. Forward direction - Reverse power 4. Rotation Input Rotation Metering Slots Figure 6 22 Remove the two dowel pins from cylinder barrel face, see figure 5. check the examination booklet before you start. The direction of rotation caused by this force is shown with a curved arrow. Typical Clutch Installation. clockwise or counterclockwise? (b) an axis through C, perpendicular to the page. This post has been set up as the UNOFFICIAL Zwift User manual, to keep abreast of the changes and document the tips, tricks, issues and information which has been announced and also discovered by the world of Zwift riders. In 1852 Foucault used a gyroscope to demonstrate the Earth's rotation. If this direction is parallel to the x-axis, only the x-component of the field will survive and the y-component will be removed. Rotations with matrices. A rotation by 90° about the origin can be seen in the picture below in which A is rotated to its image A'. so a simple check if oldAngle > new Angle will fail to tell you that you went left. Reverse direction - Reverse power Self Locking. Still, Polaris is famous because the entire northern sky wheels around it. clockwise or counterclockwise? The torque here is net torque, so I don't understand how to apply the right-hand rule here. 4) Problem 9. It is an online Geometry tool requires side length of a square. FixPicture. Upgrade unit to 1 RPM and Counterclockwise Rotation. (a) Find the standard matrix A for the linear transformation T. But such an ellipse can always be obtained by starting with one in the standard position, and applying a rotation and/or a translation. Its three points (x, y) are displayed in a vertex matrix. The positive sense of the translation and rotation are also shown in the figure. Calculating net torque is a common exercise in physics classes, and it is usually taught during an introduction to rotational equilibrium. Many people think Polaris is the sky's brightest star. Referring to the sketch above, let $\vec{r}=x\vec{e_{x}}+y\vec{e_{y}}$ be rotated counterclockwise by angle $\theta$ radians to the vector [math. The triangles formed by the. The latter curves are. In a rotation, the original figure and its image are congruent. Pji MtitProjection Matrix The 4The 4××4 projection matrix is really just a linear 4 projection matrix is really just a linear transformation in homogeneous space It doesn’t actually perform the projection, but. Direction of the torque in this situation is “-” because force rotates the object in counterclockwise direction. When the line has reached the position M'N' , its original point of tangent A has reached the position K , having traced the involute curve AK during the motion. FLATHEAD ENGINE BASICS by "rumbleseat" FORD FLATHEAD WEIGHT: 1934 through 1948 V-B 85/90/100 hp flathead engines weigh 525 lbs (with cast iron heads). Take this as a given. Calculate the lever-arm for the 100-g mass, using the above equation, and compare it with the experimental value. Still, Polaris is famous because the entire northern sky wheels around it. Rule When we rotate a figure of 90 degrees counterclockwise,. 
This is why under the "specification" section for each servo we list the direction of rotation as either clockwise or counter clockwise. Polarimetry – Optical Rotation vs. of the figure after a rotation of y) 8. Add this item to your cart with the item above if you would like it to be 1 RPM and ALSO have a Counterclockwise Rotation Motor built into the unit. in this solar system) rotate. The figure is rotated counterclockwise or clockwise about a point. And since a rotation matrix commutes with its transpose, it is a normal matrix, so can be diagonalized. Borrowing aviation terminology, these rotations will be referred to as yaw, pitch, and roll: A yaw is a counterclockwise rotation of about the -axis. Now rotate the crank counterclockwise until you have dropped at least 0. Take the solar system, for example. 15 m diameter and goes through 200,000 rotations, how many kilometers should the odometer read? Microwave ovens rotate at a rate of about 6 rev/min. Noting that any identity matrix is a rotation matrix, and that matrix multiplication is associative, we may summarize all. Rotations and complex eigenvalues Math 130 Linear Algebra D Joyce, Fall 2015 Rotations are important linear operators, but they don't have real eigenvalues. Add this item to your cart with the item above if you would like it to be 1 RPM and ALSO have a Counterclockwise Rotation Motor built into the unit. The product and inverse of rotations, or combinations of reflection and rotation, are again matrices of the same type. Then indicate the lens rotation, direction, and vertex. If impacting the anterior maxilla, ccw rotation occurs and the mandible follows. Other than the fairly exotic ability to follow a “NURBS” path, most g-code controllers only support two kinds of motion: linear and. Clockwise and counterclockwise rotation can be assessed only in the chest-leads (V1 - V6). Do the results make sense in terms of your answer in part (a)? Recall the trigono-metric. From this point forward, all rotations will be counterclockwise, unless stated. When a figure is turned about a given point, the transformation is called a. Counterclockwise rotation Positive angle (a) Clockwise rotation Negative angle (b) Counterclockwise. Below, find out the proper blade rotation for the summer, the reasons why, and how exactly to right your fan’s course. 4 Euler angles 12 Uniform random rotation matrices 13 See also 14 Notes. Calculate the net torque (magnitude and direction) on the beam about the following axes. Mohr's Circle Equation •The circle with that equation is called a Mohr's Circle, named after the German Civil Engineer Otto Mohr. This rotation is counter-clockwise. The rotation angle to the principal axis is θ p which is 1/2 the angle from the line AB to the horizontal line FG. Determine the angle of rotation. Right ascension is measured from this point eastward along the equator, counterclockwise as viewed from the northern celestial pole. Odometer: the odometer records total water use in a similar way as the odometer in your car records miles driven. Because all rotational motions have an axis of rotation, a torque must be defined about a rotational axis. The sun and all the planets all formed around five billion years ago when a huge cloud of gas and dust in space collapsed because of gravitation. Because ˇ 2 >0, it is a counterclockwise rotation. In Figure 1, this normal is the unit normal represented by the line OA. Polarimetry – Optical Rotation vs. 
A moment also has a sense; A clockwise rotation about the center of moments will be considered a positive moment; while a counter-clockwise rotation about the center of moments will be considered negative. The Rotation Period and Day Length of the Moon. Journal of Genetics 170: 2027-2030. Cat transmissions are proven in the oil and gas industry and widely known for their exceptional power, leading durability, ease of operation, and shifting options. For about one month before and after, it appears to move `backwards' across the sky. So, the terminal side will be one-fourth of the way around the circle, moving counterclockwise from the positive x-axis. The second butterfly curve shows what. dir – 1 and counterclockwise dir – 0. As with straight-line motion, we can define the positive direction based on what's convenient in a particular case. In other words, it is the multiplication of force and the shortest distance between application point of force and the fixed axis. In this section we are going to take a look at a theorem that is a higher dimensional version of Green's Theorem. One available option is a stroke limiter that allows a precise stopping at any degree less than maximum. Home >; Techtips >; A Primer on Ignition Timing for your Classic Ford; Popular Subjects: Chevy; LS Engines; Engine Building; Ford; Mopar; Transmissions & Drivelines. Clutch surface is tempered to provide effective rust protection. Clockwise and Counterclockwise Clockwise. A clockwise rotation is a negative moment and a counterclockwise rotation is a positive moment. Heavy Duty Four Wing Galvanized Steel Props are constructed with galvanized steel blades for long lasting durability and strength. Try it yourself: Most screws and bolts are tightened, and faucets/taps are closed, by turning clockwise. Consider the following. Driveline geometry. 4a Oil well pumping rig, adapted from Meriam and Kraige (1992). Translation gives the option to move up,down,left,or right one unit. Coordinates. The dynamometer graphs (rod load versus rod position) that are used. Heavy Duty Four Wing Galvanized Steel Props Are Used For General, Industrial And Commercial Applications. If the axes are rotated counterclockwise, then the point itself appears to rotate clockwise, with respect to fixed axes. This problem can easily be fixed but not as easy as flipping a switch on a transmitter. So, the terminal side will be one-fourth of the way around the circle, moving counterclockwise from the positive x-axis. Solutions for Review Problems 1. Standard 1in. The ray is allowed to rotate. 1 day ago · So, added 2 280mm Rads to the Loop, one EK Coolstream 45mm, and One AlphaCool UT60, along with a D5 Pump/Res Combo, went in the loop just fine, and did reduce overall temps by about 7c under load, but instead of jumping straight to 110c with Prime 95 Small FFTs at 27c Room Temp, I am Jumping Straight to 103c at the same 27c Room Test Instantly, but only Climbing a degree or 2 Above that after. Reverse direction - Reverse power Self Locking. Solving the 5x5x5 (Professor) Cube. Cat transmissions are proven in the oil and gas industry and widely known for their exceptional power, leading durability, ease of operation, and shifting options. The inverse of a rotation matrix is its transpose, which is also a rotation matrix: The product of two rotation matrices is a rotation matrix: For n greater than 2, multiplication of n×n rotation matrices is not commutative. NEMA MG-1-1998 Rev 2 Para 14. 
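To make the sign convention concrete, the following short sketch sums moments about the origin using the z-component of r x F, which is positive for a counterclockwise moment; the statics convention quoted above (clockwise positive) is simply the opposite sign. The positions and forces are made-up illustrative numbers.

```python
import numpy as np

# Net moment about the origin from several forces applied in the x-y plane.
# tau_z = x*Fy - y*Fx > 0 means a counterclockwise moment.
positions = np.array([[0.5, 0.0], [1.0, 0.0], [1.5, 0.0]])   # application points (m), illustrative
forces    = np.array([[0.0, 10.0], [0.0, -4.0], [3.0, 2.0]])  # forces (N), illustrative

tau = positions[:, 0] * forces[:, 1] - positions[:, 1] * forces[:, 0]
print(tau)          # individual moments about the origin
print(tau.sum())    # net moment: positive -> counterclockwise, negative -> clockwise
```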
The Beam Calculator allows for the analysis of stresses and deflections in straight beams. What is one fourth of a rotation going counterclockwise? 1/4 of 360 degrees = 90 degrees which is a right angle Which figure a square or trapezoid will rotate onto itself in 90?. Rotations; Rotations; Clockwise Rotation Exploration; Counter-clockwise Rotation Exploration; Scale Factor Right Triangle Exploration; Reflections; Exploring Translations; Dilation about the Origin; Rotations of Polygons; Rotate Point About Origin; Rotating 180 degrees about the origin. Secondly, you need to know an angle of rotation that tells you exactly how far to rotate and last you need a direction, either clockwise or counterclockwise, so to make this little more spefic, I've drawn a little diagram here that shows rotating point a to a prime about point p x degrees counterclockwise, so again the f8need to know is if you. Clean the knob thoroughly. This calculator seems to take my negative coordinate as indication of counter-clockwise rotation. In this picture, we have a different situation where the object is fixed to the wall with an angle to the horizontal. While there are still many other ways to bypass a combination padlock, including the use of a shim, bolt cutters, locksmiths, or even a blowtorch, none are quite as elegant and non-destructive as old school combination cracking. While their origins are mysterious, their uses are pragmatic. Shaft is supported by 2 bearings. Another type of Rotational Equilibrium problem in AP Physics that is commenly seen on exams is the Ladder Syle Problem. Clutch surface is tempered to provide effective rust protection. Use a remainder instead of the actual measured value. If we want to rotate a figure we operate similar to when we create a reflection. In simpler terms, imagine gluing a triangle to the second hand of a clock that is spinning backwards. Checking on a calculator: sin(135) = 0. • If the rotation is in a counterclockwise direction, the angle formed is a positive angle. What is the sign of the torque in the figure? Torques are measured in the units of force times distance. rotation, it is far from obvious that AB will also be a rotation (around some mysterious third axis). The main cause of the Coriolis effect is the Earth's rotation. This website provides free access to calculators designed by Associate Professor Michael Goggin, cataract and refractive surgeon, based in Adelaide, Australia. Are positive numbers clockwise or counter-clockwise? I can't remember myself, but there's a little cheat that helps me look smart in front of a class. The angle of rotation, is the calculation of how many degrees a shape or an object should be turned if it needs to look the same as its original position. 0 mL with water and placed in a 1. shaft can be used for tillers, compressors, log splitters, edgers and more. The (counterclockwise) rotation matrices are the orthogonal matrices of determinant 1. Calculating clockwise/anti-clockwise angles from a point. The rotation of an angle in standard position originates from the initial ray. So, a net torque will cause an object to rotate with an angular acceleration. Since the direction of a positive angle in a circle is counterclockwise, we take counterclockwise rotations as being positive and clockwise rotations as negative. This is a clockwise rotation. The rotation angle is defined to be positive for a rotation that is counterclockwise when viewed by an observer looking along the rotation axis towards the origin. 
The different phases of the moon include new moon, full moon, first quarter, third quarter, waxing and waning crescent, and waxing and waning gibbous. To understand the relationship between linear and angular speed. We will focus on rotation about a single axis of rotation, which is analogous to one-dimensional straight-line motion. Rotation can be done clockwise as well as counterclockwise. 2) When labeling bends that will be rotated, refer to the amount of rotation as the horizontal and/or. 8L/351 Water Pumps, Mechanical with Counterclockwise Water Pump Rotation and get Free Shipping on Orders Over $99 at Summit Racing!. Transformation of Graphs Using Matrices - Rotations A rotation is a transformation in a plane that turns every point of a preimage through a specified angle and direction about a fixed point. Some people would refer to the voltages created by this generator to be "counter-clockwise" because if you start with A: the A-Phase Voltage reaches its peak first, followed by the C-Phase Voltage, and then. To get the precise relationship between angular and linear velocity, we again consider a pit on the rotating CD. Calculate the correct lens to order for soft toric lenses. Determin the c ordinates of point (x, y) after rotatio s Bf 900, 1800 2700, and 3600. Rotation Worksheets Rotation worksheets have numerous practice pages to rotate a point, rotate triangles, quadrilaterals and shapes both clockwise and counterclockwise (anticlockwise). Note: This is only an upgrade option, you must add the item you wish to upgrade to your shopping cart as well. Around V3 or V4 the R waves become larger than the S waves and this is called the 'transitional zone'. What is the image after a rotation 270° counterclockwise about the origin?. Spinning the crank clockwise again, turn the engine until the dial reads 0. Young stars rotate faster than old stars, and massive stars tend to rotate faster than low-mass stars. Obtaining rotation is much easier when the force is applied far from the hinges as the force F 3 in the figure (hence the placement of door handles opposite the hinges). You can shortcut some of these steps by defining rotation directly on axial coordinates, but hex vectors don't work for offset coordinates and I don't know a shortcut for offset coordinates. The angle is 0° to +180° measured as a counter-clockwise rotation from the positive x axis, or is 0° to -180° measured as a clockwise rotation from the positive x axis,. Letter A, which we place in the middle, labels the point where the two lines meet, and is called the vertex of the angle. A formula which transforms a given coordinate system by rotating it through a counterclockwise angle about an axis. [Geometry Goal 3] • Investigate the relationship between rotations and degrees. Rotation about the Origin 3. The angle 135° has a reference angle of 45°, so its sin will be the same. translate it 8 units to the right, then reflect it over the line y =-3 Session 1—Mathematics (No Calculator) Mathematics Grade 8 Page 11 GO ON TO THE NEXT PAGE. A geometric interpretation of multiplication. The main cause of the Coriolis effect is the Earth's rotation. full rotation of the disc corresponds to 7. To calculate the speed and angular velocity of objects. 199º u u A u B qL3 24 EI (150 lb in. Loading 90 Degree Rotations. Loading Rotation about a Point. Reverse direction - Reverse power Self Locking Imagine the gear ratio in the previous sign drive is 60:1 instead of 5:1. Negative Angle. As in the 2D case, the first matrix, , is special. 
There are four possible cases: 1. New coordinates by rotation of axes Calculator - High. The rotation angle is the counter-clockwise motion of your index finger during the rotation. Learn how to rotate a figure and different points about a fixed point. Using this convention, a vector with a direction of 30 degrees is a vector that has been rotated 30 degrees in a counterclockwise direction relative to due east. How can we calculate the required torque to rotate 1200 kg of mass? I would like to calculate torque to rotate 1200 kg of mass which is attached in 1. Determin the c ordinates of point (x, y) after rotatio s Bf 900, 1800 2700, and 3600. a translation 8 units to the right and 1 unit up followed by a 90° counterclockwise rotation about the origin 6. Look at the front of the pump. Each rotation is specified by an angle of rotation. This is the same dimensions as work. Learn about the rules for 90 degree clockwise rotation about the origin. Impellers must turn in a direction so that the fluid is pushed (not scooped) through the pump (see sample drawing below). In order to find the remaining reaction forces, you will need to find the sum of the forces in both the x and y direction. \$\begingroup\$@AlexBrown: Assuming the x-axis points right and the y-axis points up, (-y,x) is indeed the counter-clockwise rotation of (x,y). G69 G-Code: Cancel Rotation. All valves, guides and seats are checked and replaced if needed. G3600 A4 engines provide a wide range of power options to fit your gas compression application. It contains well written, well thought and well explained computer science and programming articles, quizzes and practice/competitive programming/company interview Questions. Home >; Techtips >; A Primer on Ignition Timing for your Classic Ford; Popular Subjects: Chevy; LS Engines; Engine Building; Ford; Mopar; Transmissions & Drivelines. In the manner. Liberty Lift is a manufacturer and supplier of beam pumping units and brings a management team with decades of artificial lift experience, including the design and engineering of many competitive products used today. matrix R2019. Easily find your stainless or aluminum prop with our PropFinder. Using this convention, a vector with a direction of 30 degrees is a vector that has been rotated 30 degrees in a counterclockwise direction relative to due east. Moreover, the latter is obtained from the former by rotation through 90 o in the positive (counterclockwise) direction. Ace model numbers which include a “CW” have a clockwise rotation; all other models are counterclockwise rotation. Clean the knob thoroughly. You can also review infomration about other course website hosting platforms and options. For Exercises 7—9, use the graph shown. 24 Remove the servo sleeve retainers. This rotation is counterclockwise, which is considered positive. I'm unsure how many turns they are expecting; the question seems incomplete. But such an ellipse can always be obtained by starting with one in the standard position, and applying a rotation and/or a translation. Forward direction - Forward power 2. The rotation matrix is given by. Clockwise rotations are negative. The Unit Circle Written by tutor ShuJen W. The latter is obtained by expanding the corresponding linear transformation matrix by one row and column, filling the extra space with zeros except for the lower-right corner, which must be set to 1. We will focus on rotation about a single axis of rotation, which is analogous to one-dimensional straight-line motion. 
Being able to quicky convert wind speed values from units like knots, beaufort, m/s and km/h to another is quite helpful when you're in a pinch. They will, how-ever, have complex eigenvalues. This MODE is commonly used for GENERAL CALCULATIONS. Can someone please help me and let me know what the equation is? Thanks. Above it is shown as a vector pointing to the right and labeled d. Keratometry & Toric Adjustor Calculators. Get the White Ball into the Black Hole. rotates points in the xy-Cartesian plane counter-clockwise through an angle θ about the origin of the Cartesian coordinate system. full rotation of the disc corresponds to 7. How do you do a 90 degree counter-clockwise rotation around a point? I know around the origin it's$(-y,x)$, but what would it be around a point? $$(-y - a,x - b)$$ Where$(a,b)$is the rotation point. If you turn off the motor. Transformation of Graphs Using Matrices - Rotations A rotation is a transformation in a plane that turns every point of a preimage through a specified angle and direction about a fixed point. In the Northern Hemisphere the rotation appears counter-clockwise, while from the Southern Hemisphere the spin looks clockwise. Best Answer: Medusa is correct, but assuming we are talking about rotating coordinates around the origin, the point (x, y) goes to (cos(t)x + sin(t)y, -sin(t)x + cos(t)y ) for a counter-clockwise rotation of t degrees. When direct coupling shafts, always MATCH THE OPPOSITE ROTATION pump with the shaft. The presentation covering such content will be done by the instructor in own handwriting, using video and with the help of several examples with solution. The positive sense of the translation and rotation are also shown in the figure. The second butterfly curve shows what. Above it is shown as a vector pointing to the right and labeled d. A rotation in the x–y plane by an angle θ measured counterclockwise from the positive x-axis is represented by the real 2×2 special orthogonal matrix,2 cosθ −sinθ sinθ cosθ. STRESS TRANSFORMATION AND MOHR’S CIRCLE p p column with comp ressive load free-body #1 free-b ody#2 p θ σ x x σ x y x y x y σ yy Figure 5. Rotation of Axes 7 x y 24 24 4 4 xˆ yˆ 4x2 2 4xy 1 7y2 5 24 u 5 arcsin(œ5/5) Figure 5 There is an easily applied formula that can be used to determine which conic will be produced once the rotation has been performed. The amount of rotation is called the angle of rotation and it is measured in degrees. Buttons in the pop-up window. the Same Direction of Rotation as Their Direction of Revolution. 3 θ θ Example of Mohr's Circle for Moment of Inertia. For example, a door’s center of rotation is at its hinges. Our CRT application group has a "magnetic cage". →x represents a counterclockwise rotation by the angle α followed by a counterclockwise rotation by the angle β. Topics include seasons, moon phases, coordinate systems, light, and more. The units for angular velocity are radians per second (rad/s). Reverse direction - Reverse power Self Locking Imagine the gear ratio in the previous sign drive is 60:1 instead of 5:1. From this angle of arm 20 must be subtracted the slope of the unit vector [Prai- (H -misjtated in a counterclockwise or negative direction. Scroll down the page for more examples and solutions on rotation about the origin in the coordinate plane. Identifying a Rotation Rotate the puzzle piece 270 degrees clockwise about point p. The present study investigated the effects of a 5-day rotating work schedule in the advance, or "counter-clockwise", vs. 
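One caution on the rotation-about-a-point question quoted above: the answer (-y - a, x - b) omits the translate-back step. The standard recipe is to translate the centre (a, b) to the origin, rotate, and translate back, which for a 90 degree counterclockwise turn gives (x, y) -> (a - (y - b), b + (x - a)). A small illustrative sketch, with an arbitrary point and centre:

```python
import math

def rotate_about(point, center, theta):
    """Rotate `point` counterclockwise by `theta` radians about `center` (2D)."""
    x, y = point
    a, b = center
    dx, dy = x - a, y - b                      # translate the centre to the origin
    c, s = math.cos(theta), math.sin(theta)
    return (a + c * dx - s * dy,               # rotate, then translate back
            b + s * dx + c * dy)

print(rotate_about((3, 4), (1, 2), math.pi / 2))   # ~ (-1.0, 4.0), i.e. (a - (y - b), b + (x - a))
```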
Reflection gives the option to reflect over the x-axis, the y-axis, y = x, or y = -x. The first 25 consecutive patients with high occlusal plane angulation, dysfunction, and pain who were treated with temporomandibular joint (TMJ) total joint prostheses and simultaneous maxillomandibular counterclockwise rotation were evaluated before surgery (T1), immediately after surgery (T2), and at a later follow-up. The stress tensor at a point in a machine element with respect to a Cartesian coordinate system is given by the following array. A positive degree measurement means you're rotating counterclockwise, whereas a negative degree measurement means you're rotating clockwise. The clutch is idle until engaged by an on-off toggle switch located at the driver's seat. Revolution is the movement of the Earth around the Sun.
Hinge loss can be defined using $\text{max}(0, 1-y_i\mathbf{w}^T\mathbf{x}_i)$ and the log loss can be defined as $\text{log}(1 + \exp(-y_i\mathbf{w}^T\mathbf{x}_i))$ I have the following questions: 1. Are there any disadvantages of hinge loss (e.g. sensitive to outliers as mentioned in http://www.unc.edu/~yfliu/papers/rsvm.pdf) ? 2. What are the differences, advantages, disadvantages of one compared to the other? Logarithmic loss minimization leads to well-behaved probabilistic outputs. Hinge loss leads to some (not guaranteed) sparsity on the dual, but it doesn't help at probability estimation. Instead, it punishes misclassifications (that's why it's so useful to determine margins): diminishing hinge-loss comes with diminishing across margin misclassifications. So, summarizing: • Logarithmic loss leads to better probability estimation at the cost of accuracy • Hinge loss leads to better accuracy and some sparsity at the cost of much less sensitivity regarding probabilities • +1. Minimizing logistic loss corresponds to maximizing binomial likelihood. Minimizing squared-error loss corresponds to maximizing Gaussian likelihood (it's just OLS regression; for 2-class classification it's actually equivalent to LDA). Do you know if minimizing hinge loss corresponds to maximizing some other likelihood? I.e. is there any probabilistic model corresponding to the hinge loss? – amoeba Mar 28 '18 at 15:51 • @amoeba It's an interesting question, but SVMs are inherently not-based on statistical modelling. Having said that, check this answer by Glen_b. The whole thread is about it, but for the epsilon-insensitive hinge instead. – Firebug Mar 28 '18 at 16:03 What are the impacts of choosing different loss functions in classification to approximate 0-1 loss I just want to add more on another big advantages of logistic loss: probabilistic interpretation. An example, can be found here Specifically, logistic regression is a classical model in statistics literature. (See, What does the name "Logistic Regression" mean? for the naming.) There are many important concept related to logistic loss, such as maximize log likelihood estimation, likelihood ratio tests, as well as assumptions on binomial. Here are some related discussions. Likelihood ratio test in R Why isn't Logistic Regression called Logistic Classification? Is there i.i.d. assumption on logistic regression? Difference between logit and probit models
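A minimal sketch comparing the two losses as a function of the margin m = y wᵀx illustrates the points above: the hinge loss is exactly zero once the margin exceeds 1 (hence the sparsity), while the log loss stays smooth and strictly positive, which is what makes a probabilistic reading possible. The grid of margins is arbitrary.

```python
import numpy as np

# Hinge loss and logistic (log) loss for a single example with label y in {-1, +1}
# and raw model score s = w.x, written as functions of the margin m = y*s.
def hinge_loss(margin):
    return np.maximum(0.0, 1.0 - margin)

def log_loss(margin):
    # log(1 + exp(-m)), written with logaddexp for numerical stability
    return np.logaddexp(0.0, -margin)

m = np.linspace(-3, 3, 7)
print(np.round(hinge_loss(m), 3))  # exactly 0 once the margin exceeds 1 -> no gradient there (sparsity)
print(np.round(log_loss(m), 3))    # always positive, decays smoothly -> compatible with probability estimates
```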
# How to use the Aroon Oscillator to measure the strength of a trend

## What is the Aroon Oscillator?

The Aroon Oscillator (ARO) was developed by Tushar Chande and is similar to the Relative Strength Index. It attempts to show when a new trend is starting and is quite useful in capturing trends and identifying range-bound markets. The indicator consists of two lines, Up and Down. They measure how long it has been since the highest high or the lowest low occurred within an n-period range. You can read more about Aroon in his book, The New Technical Trader. This post and the model spreadsheet show how to use the Aroon Oscillator to measure the strength of a trend.

## How to Calculate the Aroon Oscillator?

$Aroon_{up} = \left(\frac{n - \text{PeriodsFromHighestHigh}}{n}\right) \times 100$

$Aroon_{down} = \left(\frac{n - \text{PeriodsFromLowestLow}}{n}\right) \times 100$

### What are the key takeaways from the Aroon Oscillator?

The Aroon Oscillator is a powerful tool for measuring the strength of a trend. When the Aroon Up value stays between 70 and 100, an upward trend is formed. When the Aroon Down stays between 70 and 100, a downward trend is formed. A strong upward trend is formed when Aroon Up is above 70 while Aroon Down stays below 30 at the same time. Similarly, a strong downward trend is formed when the Aroon Down is above 70 while the Aroon Up is below 30. The Up and Down indicators are complementary.

For a 14-day lookback period in which the highest high occurred 2 periods ago, the Aroon Up indicator is simply calculated as:

AroonUp = ((Lookback - # of periods since highest high) / Lookback) x 100
AroonUp = ((14 - 2) / 14) x 100 ≈ 85.71

Both the Up and Down indicators are expressed as percentages. The main question being addressed here is: how recent are the highest high and the lowest low within the past "n" lookback periods? On the same note, AroonDown is calculated using the lowest low in the past "n" lookback periods.

AroonDown = ((Lookback - # of periods since lowest low) / Lookback) x 100

The Aroon Down indicator is subtracted from the Aroon Up indicator to obtain the Aroon Oscillator.

AroonOscillator = AroonUp - AroonDown

Always be on the lookout for crossovers. When the Aroon Up crosses above the Aroon Down, it indicates a strengthening of the upward trend. When the Aroon Down crosses above the Aroon Up, it indicates a weakening of the upward trend.

## Limitations of the Aroon Oscillator

The Aroon Oscillator can signal good entry points, but it can also give bad or false signals. Like all technical indicators, Aroon is a lagging indicator and is therefore vulnerable to sudden spikes in the asset price. Traders should have an efficient exit strategy for such volatility. The UI spreadsheet retrieves quote information from Bloomberg OpenMarkets through a lightweight data-interchange format called JSON, or JavaScript Object Notation. If this is your first time programming JSON in Excel, you must add certain COM objects to retrieve the Bloomberg Markets Web Service through VBA. Under Developer -> Visual Basic -> Tools -> References, add the highlighted objects.
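The post builds the indicator in a spreadsheet; as a language-neutral reference, here is a hedged pandas sketch of the same AroonUp/AroonDown arithmetic. The function name, the default 14-period lookback and the "High"/"Low" column names in the usage comment are assumptions for the example, not part of the original spreadsheet.

```python
import pandas as pd

def aroon(high, low, lookback=14):
    """Aroon Up, Aroon Down and Aroon Oscillator from high/low price series (pandas Series)."""
    # number of periods since the highest high / lowest low within each rolling window
    since_high = high.rolling(lookback + 1).apply(lambda w: lookback - w.argmax(), raw=True)
    since_low = low.rolling(lookback + 1).apply(lambda w: lookback - w.argmin(), raw=True)
    aroon_up = (lookback - since_high) / lookback * 100
    aroon_down = (lookback - since_low) / lookback * 100
    return aroon_up, aroon_down, aroon_up - aroon_down

# usage on a hypothetical OHLC DataFrame `df` with "High" and "Low" columns:
# up, down, osc = aroon(df["High"], df["Low"], lookback=14)
```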
## Sunday, 9 July 2017

### Python matplotlib: insets and aligned legends

Sometimes you need to create a plot with several lines and assign a legend entry to each of them, which quite often ends in a cumbersome and clunky legend. In my opinion, a more elegant way is to include the legend as aligned text next to each plot line, which is possible using Python and matplotlib. Furthermore, to better highlight some details it is also possible to use an inset, i.e. to add a box which zooms in on a particular detail of the plot. The Python code follows below. The style is optimised for an IEEEtran journal.

## Thursday, 9 February 2017

### Peer review template based on IEEEtran and pandoc

When you start to be involved in scientific publishing, one of the side-effects is to become a reviewer for some journals. This side-effect is an extremely important part of the scientific publishing process, which should ensure the quality of the published papers. It also has some other advantages, such as being able to view papers a few months before they are published. The major downside is the time required for a good review, which can run to days. To simplify the writing process I decided to create a simple workflow based on $$\LaTeX$$, IEEEtran and pandoc. The final output is made up of a .pdf file, formatted as an IEEEtran journal, and a .rst file, formatted as reStructuredText. The .pdf is ideal for viewing the math, figures, etc., while the .rst file is ideal for pasting into the comments field, which is the part that will be sent in the email to the authors.

### International Morse Code

This is a nerdy post, just to share a cheat-sheet which contains the International Morse Code along with the NATO phonetic alphabet. I created this, using various resources online, just to kill some time...

## Wednesday, 8 February 2017

### Import an RF design from a Gerber file into KiCad

KiCad is a wonderful piece of open-source software for PCB design, which can cover the complete workflow from the schematic to the final layout. Unfortunately, it is not intended for RF circuit and antenna design. In these cases, a suitable program should be used, which could be ADS, CST, HFSS, or others. These are highly expensive and, unfortunately, the open-source tools are still not as complete. A good one is Qucs, but it is missing some features and is quite complex to use. Moreover, it has no layout function. Long story short, I was faced with the need to create a PCB incorporating some custom Wireless Power Transfer coils, which I could export to a Gerber file from CST, some antennas, again from CST, and some matching circuits from ADS. The idea was to import the different parts from the Gerber files into different footprints, which would be assigned to schematic blocks and then placed in the PCB. There is no way to do this simply with some pre-baked function; a few tricks are needed. I will list two possible ways that can be used. The first is easier, but the result can be less clean from a conceptual point of view; in practice, the resulting board is the same.

## Monday, 9 May 2016

### Visually engaging periodic plots using Python

In order to obtain periodic images with a technical feel to be embedded in a website, I decided to compose them with Python and Matplotlib. The plots are essentially some sums and multiplications of "noisy" sines.
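The original scripts are not reproduced here, so the following is only a minimal sketch of the two ideas from the matplotlib post (aligned text labels instead of a legend box, plus a zoomed inset), drawn for the kind of "noisy sine" curves mentioned in the last post; all data and placement values are made up for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = np.linspace(0, 4 * np.pi, 400)

fig, ax = plt.subplots(figsize=(6, 3))
for k, label in zip((1, 2, 3), ("sin(x)", "sin(2x)", "sin(3x)")):
    y = np.sin(k * x) + 0.05 * rng.standard_normal(x.size) + 2.5 * (3 - k)
    (line,) = ax.plot(x, y, lw=1)
    # aligned text label at the right edge, in the line's own colour
    ax.text(x[-1] * 1.02, y[-1], label, color=line.get_color(),
            va="center", ha="left")

# inset that zooms in on the first half-period of the lowest curve
axins = ax.inset_axes([0.60, 0.05, 0.35, 0.30])
axins.plot(x, np.sin(3 * x), lw=1)
axins.set_xlim(0, np.pi / 3)
axins.set_xticks([])
axins.set_yticks([])

ax.set_xlim(0, 4.8 * np.pi)   # leave room for the text labels
ax.set_xlabel("x")
fig.tight_layout()
plt.show()
```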
# Classical Fundamentals of Statistical and Thermal Physics by Frederick Reif

## For those who have used this book
Poll: option 1 – 37.5%; option 2 – 62.5%; option 3 – 0 vote(s) (0.0%); option 4 "Strongly don't Recommend" – 0 vote(s) (0.0%)

1. Jan 19, 2013 ### Greg Bernhardt Code (Text): 1. Introduction to Statistical Methods Random Walk and Binomial Distribution / General Discussion of the Random Walk 2. Statistical Description of Systems of Particles Statistical Formulation of the Mechanical Problem / Interaction between Macroscopic Systems 3. Statistical Thermodynamics Irreversibility and the Attainment of Equilibrium / Thermal Interaction between Macroscopic Systems / General Interaction between Macroscopic Systems / Summary of Fundamental Results 4. Macroscopic Parameters and Their Measurement 5. Simple Applications of Macroscopic Thermodynamics Properties of Ideal Gases / General Relations for a Homogeneous Substance / Free Expansion and Throttling Processes / Heat Engines and Refrigerators 6. Basic Methods and Results of Statistical Mechanics Ensembles Representative of Situations of Physical Interest / Approximation Methods / Generalizations and Alternative Approaches 7. Simple Applications of Statistical Mechanics General Method of Approach / Ideal Monatomic Gas / The Equipartition Theorem / Paramagnetism / Kinetic Theory of Dilute Gases in Equilibrium 8. Equilibrium between Phases or Chemical Species General Equilibrium Conditions / Equilibrium between Phases / Systems with Several Components; Chemical Equilibrium 9. Quantum Statistics of Ideal Gases Maxwell-Boltzmann, Bose-Einstein, and Fermi-Dirac Statistics / Ideal Gas in the Classical Limit / Black-Body Radiation / Conduction Electrons in Metals 10. Systems of Interacting Particles Solids / Nonideal Classical Gas / Ferromagnetism 11. Magnetism and Low Temperatures 12. Elementary Kinetic Theory of Transport Processes 13. Transport Theory Using the Relaxation Time Approximation 14. Near-Exact Formulation of Transport Theory 15. Irreversible Processes and Fluctuations Transition Probabilities and Master Equation / Simple Discussion of Brownian Motion / Detailed Analysis of Brownian Motion / Calculation of Probability Distributions / Fourier Analysis of Random Functions / General Discussion of Irreversible Processes Appendices Last edited: May 6, 2017

2. Jan 20, 2013 ### Dr Transport If you know this book, you should have the basics of Stat Mech down. It doesn't do anything with regard to modern techniques (field theory), but it is still a very well written book.

3. Jan 22, 2013 ### DrewD Great book. I would have strongly recommended this except that I felt there were some organizational issues with the book. Mainly, I remember a number of times where he would put a reference to a formula three chapters back, and after a minute of searching for it, the equation would turn out to be something like $E_{tot}=E_1+E_2$. Great book, but I'd love a slightly better edited copy. (Yes, I'm being sort of nitpicky, but Stat Mech is a rough enough subject.)

4. Jan 26, 2013 ### vela Staff Emeritus I have to admit, I didn't care for this book when I took stat mech as an undergrad. I remember trying to read it, and it seemed like Reif took a long time to get to the point.

5. Jan 26, 2013 ### Jorriss This is how I feel about the book. It's a 'fine' book, but not exceptional.

6. Jan 26, 2013 ### jasonRF I taught myself classical stat mech and some kinetic theory from this book. The fact that it was so wordy was fine, since I didn't have a professor to guide me and I learned a lot from the text.
I did have to take notes, and ended up making a several page list of important definitions and equations to help me wade through the later chapters. This was necessary since the book isn't made to easily find things in, as noted by other reviewers. Many of the problems are interesting, and I had fun working them out. Some of the early chapters probably had too few problems, though. Most were do-able for me, but a few left me stumped. A number of years later I ran across the book by Schroeder, which I think would have been better for the basic stat mech (but covers no kinetic theory); it seems to make things clear and concise and is much easier to read. Overall I enjoyed the book, but if this is your first exposure to stat mech I recommend Schroeder over Reif (Schroeder is cheaper, too!). Others may know of even easier books. jason
# Regarding Gravitation and Gravitational Fields

## Homework Statement
A satellite is designed to orbit Earth at an altitude above its surface that will place it in a gravitational field with a strength of 4.5 N/kg. a) Calculate the distance above the surface of Earth at which the satellite must orbit. b) Assuming the orbit is circular, calculate the acceleration of the satellite and its direction. c) At what speed must the satellite travel in order to maintain this orbit?

## Homework Equations
g = (G*Me)/r^2, where Me is Earth's mass

## The Attempt at a Solution
For a), I used the above equation with the given g value (4.5) substituted in, and got 3.0 x 10^3 km as my final answer, but b) and c) are where I have the problem. In order to calculate the acceleration for b), I believe that I need to calculate the velocity (v = sqrt(G*Me/r)) first and then substitute it into a = v^2/r. However, if I end up getting an answer for part b), didn't I just do c) as well? Because for c) I need to calculate the velocity as well. I'd like to know whether or not I was wrong about this before I go any further. Thank you in advance for your comments.

## Answers and Replies
Chi Meson Science Advisor Homework Helper Hint: orbital motion is "free-fall." What is the acceleration of free fall? The way you calculated b) is not wrong, and it will work. However, how did you arrive at your equation for velocity? To derive it, you would start by equating the centripetal force with the gravitational force. What is the centripetal acceleration? Can you figure it out without first calculating the velocity? acceleration in free-fall is 9.8 m/s/s right? Only near the surface of the Earth, because the gravitational force is GMm/r^2 = mg, and g = 9.8 m/s^2 only at the surface. Chi Meson Science Advisor Homework Helper acceleration in free-fall is 9.8 m/s/s right? only where g is 9.8 N/kg
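Not part of the original thread, but for readers who want to check the numbers, here is a quick numerical sketch of all three parts using standard values for G, Earth's mass and radius (the constants in the textbook may differ slightly).

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M_earth = 5.972e24   # kg
R_earth = 6.371e6    # m
g_orbit = 4.5        # N/kg, given field strength at the orbit

# a) radius where the field strength is 4.5 N/kg, then altitude above the surface
r = math.sqrt(G * M_earth / g_orbit)
print(f"altitude = {(r - R_earth)/1e3:.0f} km")   # ~3.0e3 km, matching the OP

# b) for a circular orbit the centripetal acceleration equals the local g,
#    directed toward Earth's centre
a = g_orbit

# c) orbital speed from a = v^2 / r
v = math.sqrt(a * r)
print(f"speed = {v/1e3:.2f} km/s")                # ~6.5 km/s
```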
We show some advanced settings for solving the one-body Schrödinger equation; this builds on Solving one-body problems. In the following, syst is a finalized kwant system with leads and we import the onebody module from tkwant:

from tkwant import onebody

## 2.7.1. Boundary conditions¶
Special boundary conditions have to be provided in order to solve the dynamic equations for an open quantum system (with leads). For onebody.WaveFunction they must be precalculated:

scattering_states = kwant.wave_function(syst, energy=1, params={'time': 0})
psi_st = scattering_states(0)[0]  # e.g. lead 0, first mode
psi = onebody.WaveFunction.from_kwant(syst, psi_st, boundaries=boundaries, energy=1.)

For the scattering state solver, boundary conditions are calculated on the fly. One can provide different boundary conditions via the keyword boundaries:

psi = onebody.ScatteringStates(syst, energy=1, lead=0, boundaries=boundaries)[mode]

For closed quantum systems (without leads), no boundary conditions are needed. A tutorial on boundary conditions is given in Boundary conditions. An example script which shows alternative boundary conditions is given in Alternative boundary conditions.

## 2.7.2. Time integration¶
The time integration can be changed by prebinding values to the onebody solver with functools.partial. In the current example, we change the relative tolerance rtol of the time-stepping algorithm:

import functools as ft
solver_type = ft.partial(tkwant.onebody.solvers.default, rtol=1E-5)
psi = onebody.WaveFunction.from_kwant(syst=syst, boundaries=boundaries, psi_init=psi_st, energy=1., solver_type=solver_type)

## 2.7.3. Time-dependent perturbation¶
When the method onebody.WaveFunction.from_kwant() is used, the time-dependent perturbation $$W(t)$$ is extracted from the Hamiltonian of a Kwant system. By default, Tkwant uses cubic spline interpolation to interpolate $$W(t)$$ in time with a static discretization time $$dt$$. The interpolation is used for performance reasons, in order to minimize the number of calls to Kwant. One can change the discretization time $$dt$$, which by default is $$dt = 1$$, to a different value:

import functools as ft
perturbation_type = ft.partial(tkwant.onebody.kernels.PerturbationInterpolator, dt=0.5)
psi = onebody.WaveFunction.from_kwant(syst=syst, psi_init=psi_st, energy=1., boundaries=boundaries, perturbation_type=perturbation_type)

Setting $$dt = 0$$ will switch off interpolation and always evaluate the exact $$W(t)$$ function. Alternatively, one can switch off interpolation directly with

psi = onebody.WaveFunction.from_kwant(syst=syst, psi_init=psi_st, energy=1., boundaries=boundaries, perturbation_type=tkwant.onebody.kernels.PerturbationExtractor)

## 2.7.4. Saving and restarting states¶
Sometimes we might like to save a state in order to resume a calculation at a later stage. An easy way is to use the pickle package:

import pickle
psi = onebody.WaveFunction.from_kwant(syst=syst, psi_init=psi_st, energy=1., boundaries=boundaries, kernel_type=onebody.kernels.Scipy)
saved = pickle.dumps(psi)

Saving a state currently works only with the tkwant.onebody.kernels.Scipy kernel. The saved object saved can be stored. Recovering the state later on in order to continue the calculation is possible with pickle.loads, e.g. psi = pickle.loads(saved).
## Richard Healey Print publication date: 2007 Print ISBN-13: 9780199287963 Published to Oxford Scholarship Online: January 2008 DOI: 10.1093/acprof:oso/9780199287963.001.0001 Show Summary Details Page of PRINTED FROM OXFORD SCHOLARSHIP ONLINE (www.oxfordscholarship.com). (c) Copyright Oxford University Press, 2017. All Rights Reserved. Under the terms of the licence agreement, an individual user may print out a PDF of a single chapter of a monograph in OSO for personal use (for details see http://www.oxfordscholarship.com/page/privacy-policy). Subscriber: null; date: 23 May 2017 # (p.265) Appendix E Algebraic Quantum Field Theory Source: Gauging What's Real Publisher: Oxford University Press This appendix shows how algebraic quantum field theory provides a clear mathematical framework within which it is possible to raise and answer questions about the relations among various representations of the states and observables of a quantum field theory. It motivates and explains the idea of an abstract Weyl algebra of field observables and points out the interpretative significance of the fact that the Stone–von Neumann theorem does not extend to its representations. It says what is meant by a Fock representation, and explains how this is related to the occupation number representation of a system of quantum particles. Much of it relies on the paper by Ruetsche (2002) and the appendix to that by Earman and Fraser (2006). The Heisenberg relations $Display mathematics$ (E.1) generalize formally to equal‐time commutation relations (ETCRs) for field systems such as the following for operators corresponding to a real classical scalar field ϕ(x, t): $Display mathematics$ (E.2) as well as anticommutation relations for field operators acting on states of fermionic systems such as electrons and quarks. But the presence of the delta function δ3(xx ) means that these field commutators are not really well defined. To arrive at a well‐defined algebraic generalization of the Heisenberg relations it is necessary to introduce “smeared” field operators—field operators parametrized by a family of “test” functions peaked around points like (x, t) that fall off sufficiently fast away from there (perhaps restricted even to functions of compact support). This gives rise to a basic algebra of operators of the form ̂ϕ(f x, t)), ̂π(f(x, t)) for a real scalar field, with analogous generalizations for fields of other kinds (complex, vector, etc.). As appendix D explained, it is (p.266) also necessary to replace the Heisenberg form of the canonical commutation relations by a Weyl form in which all operators are bounded and can therefore be defined on all vectors in a Hilbert space on which they act. Just as the pair of vectors (a, b) defining the Weyl operator Ŵ(a, b) for a particle system serves to pick out a point in the finite‐dimensional phase space of that particle system, so also a pair of test functions (g, f) picks out a point in the infinite‐dimensional phase space of a field system. Particle Weyl operators Ŵ(a, b) therefore generalize to field Weyl operators Ŵ(g, f). Now on the classical phase space for a field theory like that of the Klein–Gordon field there is a so‐called symplectic form σ(f, g) that generalizes the form (a.db.c) on the phase space of a classical particle system. 
The multiplication rule 7.8 accordingly generalizes to $Display mathematics$ (E.3) which specifies a so‐called abstract Weyl algebra for the Klein–Gordon field and provides the required rigorous form of the ETCR's E.2.1 The explicit expression for the symplectic form in this case is given by the following integral over a space‐like “equal‐time” hyperplane Σ $Display mathematics$ (E.4) We now face the problem of characterizing the representations of the Weyl algebra specified by equation E.3. This is the analogous problem for a quantum field theory to that considered in appendix D for a quantum particle theory. The problem is now set in the context of an algebraic approach to quantum field theory, so before we continue it is appropriate to reflect on just what that amounts to. In the algebraic approach to quantum field theory, observables are represented by an abstract algebra  of operators, and states are represented by linear functionals s on this algebra. So if Â12 are elements of , then $Display mathematics$ (E.5) Such a state is intended to yield the expectation value for a measurement of an arbitrary observable in .  itself is taken to be a C* algebra: a complete, normed (p.267) vector space over the complex numbers whose elements may be multiplied in such a way that ∀Â1, Â2 ∈ , ‖Â1Â2‖ ≤ ‖Â1‖ ‖Â2‖, with an involution operation * satisfying conditions modeled on those of the Hilbert space adjoint operation, plus ∀ ∈ , ‖ * Â‖=‖Â‖2.2 Abstract states s on  are linear functionals satisfying $Display mathematics$ (E.6) $Display mathematics$ (E.7) The bounded operators of a Hilbert space ℬ(𝖧) constitute one concrete realization of a C* algebra. In the context of the algebraic approach to quantum field theory, we seek a representation in some Hilbert space of an abstract C* algebra of smeared field operators with states on them. Every representation of the Weyl algebra specified by the Weyl relations E.2 will give rise to such a representation, since the Weyl algebra constitutes a C* algebra. Arepresentation of an abstract C* algebra  on a Hilbert space 𝖧 is a * ‐homomorphism π: ℬ(𝖧) of that algebra into the algebra of bounded linear operators on 𝖧, i.e. a structure‐preserving map of elements of  onto a C* algebra constituted by elements of that algebra which satisfies the condition $Display mathematics$ (E.8) Such a representation is faithful if and only if π(Â) = 0 →  = 0, and irreducible if and only if the only subspaces of the Hilbert space 𝖧 left invariant by the operators {π(Â):  ∈  are 𝖧 and the null subspace {0}. Every representation of a Weyl C* algebra is faithful. Two representations π,π of an abstract C* algebra  are unitarily equivalent if and only if there is a unitary map U:ℬ(𝖧π) → 𝒝(𝖧π) such that π(Â) = Uπ(Â)U −1 for all  ∈ . The Stone–von Neumann theorem does not generalize to representations of field Weyl algebras like those specified by E.3. While such an algebra does possess Hilbert space representations, these are not all unitarily equivalent to one another. Indeed, there is a continuous infinity of inequivalent representations of equation E.3’s algebra. One important kind of representation is called a Fock representation. This is related to the occupation number representation for the quantum harmonic oscillator considered in appendix D To get the idea of a Fock representation, recall the discussion of the real Klein–Gordon field in chapter 5, section 5.1. 
(p.268) The general solution to the classical Klein–Gordon equation (5.1) $Display mathematics$ (E.9) may be expressed as $Display mathematics$ (E.10) where $ω k 2 = k 2 + m 2$ corresponds to the relativistic energy–momentum relation E 2 = p 2 c 2 + m 2 c 4 with E = hωk, p = h k and here and in the rest of this appendix we have chosen units so that c = h = 1. The canonical conjugate field π(x μ) is defined by $Display mathematics$ (E.11) where ℒ is the Klein–Gordon Lagrangian density $𝒧 = 1 2 [ ( ∂ μ φ ) ( ∂ μ φ ) − m 2 φ 2 ]$. On quantization, ϕ,π become operators ̂ϕ, ̂π, and the solution to the quantized Klein–Gordon equation is $Display mathematics$ (E.12) where the commutation relations for the operators â(k) and its adjoint â(k) that follow from this and equations E.2 are $Display mathematics$ (E.13) $Display mathematics$ (E.14) If we define a so‐called number operator (k)≡ â (k)â(k), then these give $Display mathematics$ (E.15) $Display mathematics$ (E.16) It follows that â(k),â(k) act respectively as raising and lowering operators on eigenstates |n k 〉 of the number operator with (k)|n k〉 = δ3(0)n k|n k〉: $Display mathematics$ (E.17) $Display mathematics$ (E.18) (p.269) Hence â(k)|n k 〉 is an eigenstate of (k) corresponding to eigenvalue n k + 1. Similarly, â(k)|n k 〉 is an eigenstate of (k) corresponding to eigenvalue n k − 1. Provided the system has a unique state of lowest energy, by repeatedly applying the lowering operators one arrives at that ground state—the so‐called vacuum state |0 〉 —a simultaneous eigenstate of every number operator (k) with eigenvalue n k = 0. The Hamiltonian operator Ĥ for the Klein–Gordon field has the form $Display mathematics$ (E.19) which becomes $Display mathematics$ (E.20) The commutation relations for the raising and lowering operators then give $Display mathematics$ (E.21) If one follows custom in ignoring as unmeasurable the infinite zero‐point energy associated with the delta function, one can therefore try to interpret the total energy of a Klein–Gordon field as consisting of the sum of the energies ωk of all its constituent quanta of momentum k. Similarly, the total momentum represented by the operator $Display mathematics$ (E.22) might be interpreted as consisting of the sum of the momenta of all its constituent quanta. A total number operator may also be defined as $Display mathematics$ (E.23) whose eigenvalues might indicate the total number of quanta present in the field. The vacuum state satisfies |0〉 = 0|0〉, in accordance with its interpretation as a state in which no quanta are present. Other states of the quantized Klein–Gordon field may then be built up from the vacuum state by successive applications of linear combinations of raising and lowering operators; indeed every state in the representation may be approximated to (p.270) arbitrary precision in this way. A state |n k 〉 that can be formally “created” from the vacuum state |0 〉 by application of the raising (or “creation”) operator â(k) $Display mathematics$ (E.24) is a simultaneous eigenstate of and Ĥ with eigenvalues kk respectively. It is naturally thought to contain one quantum whose energy and momentum values obey the usual relativistic relation. Repeated action with this and other “creation” operators is naturally thought to result in a state containing multiple quanta of various energies and momenta, always obeying this relation. But a typical state will be a superposition of such states, with no determinate number of quanta, and no determinate energy or momentum. 
The algebraic approach makes it possible to place this heuristic treatment of a Fock representation on a sounder mathematical footing, and to state precisely what counts as a Fock representation of an abstract Weyl algebra. Instead of focusing on field operators defined at each space‐time point, one considers a corresponding abstract algebra of operators which have Hilbert space representations as “smeared” fields. In a Fock representation of a Weyl algebra, a creation or annihilation operator is parametrized not by momentum, but by an element of a complex Hilbert space 𝖧1 (called, suggestively, the one‐particle Hilbert space). For all f, g ∈ 𝖧1, their commutation relations are $Display mathematics$ (E.25) permitting the definition of a number operator (f) = a (f)a(f) with $Display mathematics$ (E.26) $Display mathematics$ (E.27) and a total number operator = ∑i a (f i)a(f i) over an orthonormal basis {f i} for 𝖧1. A symmetric Fock space 𝖥(𝖧1) is built up from 𝖧1 as the infinite direct sum of symmetrized tensor products of 𝖧1 with itself: 𝖥(𝖧1) = ℂ ⊕ s(𝖧1) ⊕ s(𝖧1 ⊗ 𝖧1) ⊕ . . .. The creation and annihilation operators are defined over a common dense domain D of 𝖥(𝖧1).3 A representation of the Weyl algebra specified by E.3 is a Fock representation in 𝖥(𝖧1) if and only if there is a unique vacuum state |0 〉 in D with a(f)|0 〉 = 0 for all f ∈ 𝖧1, and D is the span of {a (f 1)a (f 2) . . . a (f n)|0 〉 }. In a Fock representation, the total number operator is a densely defined self‐adjoint operator independent of the basis used to define it with spectrum {0,1,2,. . . }. Any representation of (p.271) the Weyl algebra defined by E.3 with such a number operator is either a Fock representation or a direct sum of Fock representations. But the Fock representation of a free quantum field like the Klein–Gordon field is only one among an infinite number of unitarily inequivalent representations of the Weyl form of the basic ECTRs. One way to get a handle on this multiplicity is to associate representations of a Weyl algebra with states defined on that algebra. An abstract state s on an abstract Weyl algebra  (with identity Î) is a map from  into real numbers satisfying $Display mathematics$ (E.28) $Display mathematics$ (E.29) $Display mathematics$ (E.30) A state s is pure just in case it cannot be expressed as a linear sum of other states. A representation of  in a Hilbert space 𝖧 is a map π: → ℬ(𝖧) from  into the set ℬ(𝖧) of bounded self‐adjoint operators on 𝖧 such that the images of elements of  themselves constitute a concrete Weyl algebra under the corresponding algebraic operations on ℬ(𝖧). Since  is a C* algebra, each state s on  defines a representation πs of the operators in  by self‐adjoint operators on a Hilbert space 𝖧s, in accordance with the Gelfand–Naimark–Segal theorem: Any abstract state s on a C* algebra  gives rise to a unique (up to unitary equivalence) faithful representation (πs,𝖧s) of  and vector Ωs ∈ 𝖧s such that $Display mathematics$ (E.31) and such that the set {πs(Â)Ωs: ∈  is dense in 𝖧s. This representation is irreducible if s is pure.4 Each vector |ψ 〉 in the space 𝖧 of a representation of  defines an abstract state s by s(Â) = <ψ|π(Â)|ψ > , and so to any vector that represents a state in a representation of  there corresponds a unique abstract state on . But if the GNS representations of abstract states s, s are not unitarily equivalent, then s cannot be represented as a vector or density operator on 𝖧s . 
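The following standard computation is not part of Healey's text, but it illustrates how the number operator defined above counts quanta; it assumes the usual normalization of the commutation relations E.25, namely $[a(f), a^{\dagger}(g)] = \langle f, g\rangle\,\hat{1}$, and a normalized one-particle vector $f \in \mathsf{H}_1$ with $\lVert f \rVert = 1$. Since $a(f)|0\rangle = 0$,

$$
\begin{aligned}
\hat{N}(f)\,a^{\dagger}(f)|0\rangle
&= a^{\dagger}(f)\,a(f)\,a^{\dagger}(f)|0\rangle
 = a^{\dagger}(f)\bigl(a^{\dagger}(f)\,a(f) + \langle f, f\rangle\,\hat{1}\bigr)|0\rangle \\
&= \lVert f \rVert^{2}\,a^{\dagger}(f)|0\rangle
 = a^{\dagger}(f)|0\rangle ,
\end{aligned}
$$

so $a^{\dagger}(f)|0\rangle$ is an eigenstate of $\hat{N}(f)$ with eigenvalue 1; iterating the same commutation argument shows that $a^{\dagger}(f_1)\cdots a^{\dagger}(f_n)|0\rangle$, for orthonormal $f_i$, is an eigenstate of the total number operator with eigenvalue $n$.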
Since a representation π will map the elements of  into a proper subset of the set of bounded self‐adjoint operators on 𝖧, a concrete Hilbert space representation of  will contain additional candidates for physical magnitudes represented by operators in ℬ(𝖧), over and above those represented by elements of . ## Notes: (1) The Weyl algebra itself is constituted by a set of abstract operators {Â} generated from the Ŵ(g, f) satisfying E.3 as well as Ŵ* (g, f) = Ŵ(−g, − f). It is closed under complex linear combinations. The * operation satisfies (cÂ)* =cÂ* , where c is the complex conjugate of c. The algebra possesses a unique norm ‖Â‖ satisfying ‖Â* Â‖=‖Â‖2. The Weyl algebra is also closed under this norm, making it a C* algebra. (2) Specifically, we have the following conditions: $Display mathematics$ (3) A set of vectors in a Hilbert space is dense just in case every vector in the space is arbitrarily close in the Hilbert space norm to a member of that set. (4) Recall that an irreducible representation is one in which the only subspaces of 𝖧s that are invariant under the operators πs(Â) are 𝖧s and the null subspace.
NULL Countries | Regions Countries | Regions Article Types Article Types Year Volume Issue Pages IMR Press / JOMH / Volume 17 / Issue 4 / DOI: 10.31083/jomh.2021.025 Open Access Original Research Effect of a combination of aerobic exercise and dietary modification on liver function in overweight and obese men Show Less 1 Research Academy of Grand Health, Ningbo University, Ningbo City, Zhejiang, China 2 Faculty of Sport Science, Ningbo University, Ningbo City, Zhejiang, China J. Mens. Health 2021 , 17(4), 176–182; https://doi.org/10.31083/jomh.2021.025 Submitted: 11 January 2021 | Accepted: 8 February 2021 | Published: 30 September 2021 Abstract Background: Obesity is not only associated with cardiovascular diseases but also a primary cause of liver dysfunction and other related diseases. This study’s aim was to determine the impact of a combination of dietary modification and aerobic exercise on liver function in overweight and obese adult males. Methods: 45 overweight or obese men were randomly divided between the control group (n = 22) and intervention group (n = 23). Subjects in the intervention group were provided with dietary modification and aerobic exercise programmes. Dietary modification is a diet which restricts calorie intake and balances nutrients. Before and after 12 weeks of intervention, participants’ anthropometric characteristics and biochemical parameters relating to liver function including aspartate aminotransferase (AST), alanine aminotransferase (ALT), alkaline phosphatase (ALP) and gamma-glutamyl transferase (GGT) were measured. Results: 12 weeks of aerobic exercise and dietary modification resulted in average weight loss of 10.6%, and body mass index, waist circumference and fat percentage decreased by 10.2%, 9.4% and 14.5% (p $<$ 0.05). AST, ALT, GGT and ALP in the intervention group reduced by 20.6%, 18.1%, 37.7% and 6.1% (p $<$ 0.05). Compared to the control group, AST, ALT, GGT and ALP in the intervention group were markedly lower (p $<$ 0.05). Furthermore, there was a markedly positive relationship between the reduction rates of body weight and GGT (p $<$ 0.05). Conclusion: 12 weeks of aerobic exercise and dietary modification caused significant weight, waist circumference and body fat percentage reduction in overweight and obese men and their liver function was improved. The findings can provide a scientific reference for the improvement of liver function and prevention of liver diseases among overweight and obese people. Keywords Aerobic exercise Dietary modification Liver function Overweight and obesity Weight reduction 1. Introduction Due to high energy intake and low levels of physical activity, the prevalence of obesity has increased rapidly [1]. Obesity, as a chronic disease, has become one of the most serious public health threats of the 21st century [2]. Many studies have noted that obesity and being overweight are generally accompanied by one or more risk factors of cardiovascular disease [3-5]. Obesity not only has an association with cardiovascular diseases but also a major cause of liver dysfunction and other related diseases. Recent studies have proven that being overweight or obese can result in liver dysfunction including impaired hepatic mitochondrial function, liver cirrhosis and fibrosis, and can even lead to the occurrence of non-alcoholic fatty liver disease [6-8]. Therefore, for overweight and obese people, improving their liver function and preventing fatty liver diseases is of great significance. 
The liver plays a crucial role in lipid metabolism. Lipids can accumulate in the liver due to an imbalance that exists between the delivery of fat derived from adipose tissue stores or food intake to the liver and the consumption of fat as an ingredient of lipoproteins [9]. This partly explains why such a close relationship exists between liver diseases and obesity. At the same time, several epidemiological research projects have demonstrated that almost all patients who suffer from non-alcoholic steatohepatitis are between 10% and 40% over their ideal weight. Although adults with normal body weight can suffer from non-alcoholic steatohepatitis, it is more frequently detected in those who are overweight or obese [10]. It is essential for overweight and obese individuals to choose an appropriate method for reducing their body weight. Generally, aerobic exercise and dietary modification are considered to be effective and non-medical treatments for the monitoring and management of body weight. Previous studies have proven that aerobic exercise intervention is better for improving cardiopulmonary function and decreasing the risk of cardiovascular disease in comparison to dietary modification intervention [11,12]. At the same time, dietary modification intervention may be more effective than aerobic exercise intervention for facilitating a reduction in body weight and body fat [13,14]. Therefore, a combined aerobic exercise and dietary modification intervention is more frequently used to facilitate weight reduction among overweight and obese people. However, there is still debate surrounding whether this combined intervention can improve liver function in different individuals [15-17] and the impact the combined intervention has on the liver function of overweight and obese adults remains unclear. Liver function is measured by several biochemical parameters including aspartate aminotransferase (AST), alanine aminotransferase (ALT), alkaline phosphatase (ALP) and gamma-glutamyl transferase (GGT) [18,19]. This study’s aim was to examine the effect of a combined aerobic exercise and dietary modification intervention on weight reduction and liver function in adult overweight and obese males. The study’s hypothesis was that 12 weeks of dietary modification and aerobic exercise intervention could reduce body weight and improve the liver function of overweight and obese adult males. 2. Methods 2.1 Study design The study was a randomised controlled trial which used a control group. By utilising a random number generator, participants were randomly assigned to either the control group (n = 22) or the intervention group (n = 23) and participants were provided with aerobic exercise and dietary modification for 12 weeks. Participants’ anthropometric characteristics and blood biochemical indicators relating to liver function were measured both before and after the 12-week intervention period. The purposes and procedures of the research were all explained to participants and it was requested that they read and sign an informed consent form before participation. The research protocol was in full compliance with the latest modification of the Ethics Guidelines of the Declaration of Helsinki which was reviewed and approved by the Human Ethics Board of Ningbo University. 
2.2 Participants In this study, overweight was defined as a body mass index (BMI) of 24.0 to 27.9 kg/m${}^{2}$ and obesity was regarded as a BMI $\geq$ 28.0 kg/m${}^{2}$, based on the guidelines for preventing and controlling obesity in Chinese adults [20]. Subjects were recruited using local newspaper advertisements and posters for this study. Subjects who fulfilled the following inclusion criteria were included in our study: 1) adult men aged 20 years or older; 2) a BMI now more than 24 kg/m${}^{2}$; 3) body weight change of no more than 5 kg in the previous 6 months; 4) no exercise habits, meaning a total exercise time of less than 150 minutes per week; 5) no current or past disorders relating to the respiratory or cardiovascular systems; 6) no injuries or musculoskeletal disorders which affect participation in physical activity. A total of 45 men fulfilled these criteria. The subjects’ baseline characteristics are shown in Table 1. Table 1.The baseline characteristics of the subjects Control group (n = 22) Intervention group (n = 23) P-value Age (yrs) 49.5 ± 11.4 50.8 ± 10.9 0.92 Height (cm) 167.2 ± 5.6 167.9 ± 6.8 0.36 Weight (kg) 80.0 ± 9.8 81.1 ± 10.6 0.52 BMI (kg/m${}^{2}$) 28.3 ± 2.1 28.7 ± 2.6 0.14 2.3 Measurements 2.3.1 Anthropometric characteristics Subjects were required to wear light clothing with no shoes when their body height and weight were measured. BMI was then calculated as body weight in kilograms divided by the square of body height in metres (kg/m${}^{2}$). Each subject’s waist circumference was measured with a measuring tape. Bioelectrical impedance analysis was used for assessing body fat percentage using the bipolar foot-to-foot technique (BF-689; Tanita). Before measurements were taken, participants were required to fast for 10 to 12 hours and they had to have an empty bladder. Accuracy was 0.1 kilogram (kg) for the measuring of fat mass and 0.1% for the assessing of body fat percentage with a 50 kHz intensity of frequency of induction. It was reported that the repeatability coefficient of the measurements was 0.985 and the technical error was 0.639 [21]. 2.3.2 Liver function Following 10 to 12 hours of fasting, each participant’s venous blood sample was obtained from the antecubital vein and delivered directly to the central laboratory for the analysis of liver function parameters, including AST, ALT, GGT and ALP. Liver enzymes were tested enzymatically using an automatic analyser (Ci8200; Abbott Architect) which used standard protocols based on the manufacturer’s instructions. All laboratory assays were tested without knowing any of the subjects’ information. 2.4 Interventions 2.4.1 Aerobic exercise The intervention was a moderate-intensity aerobic exercise programme consisting of a 90-minute session for 12 weeks (three times per week). Each 90-minute session included 15 minutes of warm-up and stretching exercises, 60 minutes of jogging or brisk walking and 15 minutes of cool-down and stretching exercises. Exercise intensity was expressed by the percentage value of maximal oxygen consumption (VO${}_{2}$max) and each subject’s VO${}_{2}$max and corresponding heart rate were obtained before the intervention. The linear relationship between the VO${}_{2}$, ratings of perceived exertion and heart rate has been proven [22]. Subjects exercised at 50% to 60% of their VO${}_{2}$max for the first four weeks of the intervention and this was gradually increased. For the final four weeks, the exercise intensity was set 60% to 70% of the subject’s VO${}_{2}$max. 
Indoor exercises were conducted on rainy days (seven times) using stair-stepping or ergometric cycling and the intensity of the indoor exercise matched that of the outdoors exercise as closely as possible. Several principal researchers and experienced fitness trainers were involved in the aerobic exercise intervention. 2.4.2 Dietary modification The dietary modification programme consisted of one 90-minute session each week for 12 weeks and included theoretical and practical knowledge of diet management. During the session, participants were taught how to restrict their calorie intake to 1,680 kcal per day (a mean of 840 kcal of carbohydrates, 420 kcal of fat and 420 kcal of protein). The dietary modification programme’s main objective was to help subjects restrict their calorie intake and obtain a well-balanced intake of carbohydrates, protein, fat, vitamins, minerals and amino acids. Our research team had more than 10 years of experience in instructing subjects with this dietary modification. Their experience has proven that the dietary modification intervention is incredibly effective and safe for decreasing body weight and helping subjects form healthy eating habits. Additional detailed information relating to the dietary modification programme has been previously published [23]. 2.5 Statistical analyses In this study, the test data were presented as mean $\pm$ SD. Within-group differences between baseline and follow-up changes were tested using a paired t-test. Univariate analyses of variance (ANOVA) were conducted to assess between-group statistical differences. Pearson’s correlations were used for determining the relationship of reduction rate between anthropometric characteristics and liver function. Data analysis was conducted using the IBM Statistical Package for Social Sciences (SPSS, version 22.0). For statistical analysis, p $<$ 0.05 was considered to be statistically significant. 3. Results 3.1 Changes in anthropometric characteristics following the intervention programme The results of differences in anthropometric characteristics following 12 weeks of the intervention programme are shown in Table 2. No significant changes in body weight, waist circumference, BMI and body fat percentage were observed after 12 weeks in the control group (p $>$ 0.05). However, in the intervention group, body weight decreased from 81.1 $\pm$ 10.6 kg to 72.4 $\pm$ 9.2 kg (a decrease of 10.6%, p $<$ 0.05), BMI decreased from 28.7 $\pm$ 2.6 kg/m${}^{2}$ to 25.7 $\pm$ 2.3 kg/m${}^{2}$ (a reduction of 10.2%, p $<$ 0.05), waist circumference decreased from 99.0 $\pm$ 8.4 cm to 89.6 $\pm$ 7.8 cm (a decrease of 9.4%, p $<$ 0.05) and body fat percentage decreased from 25.0 $\pm$ 4.3% to 21.5 $\pm$ 5.0% (a reduction of 14.5%, p $<$ 0.05) after 12 weeks of intervention. The intervention group’s body weight, waist circumference, BMI and body fat percentage showed significant decreases in comparison to the control group (p $<$ 0.05). Table 2.Changes in anthropometric characteristics following 12 weeks of the intervention programme Control group Intervention group Pre-intervention Post-intervention Pre-intervention Post-intervention Weight (kg) 80.0 ± 9.8 80.2 ± 9.5 81.1 ± 10.6 72.4 ± 9.2${}^{*,\}$ BMI (kg/m${}^{2}$) 28.3 ± 2.1 28.3 ± 2.4 28.7 ± 2.6 25.7 ± 2.3${}^{*,\}$ Waist circumference (cm) 98.5 ± 9.0 98.8 ± 9.7 99.0 ± 8.4 89.6 ± 7.8${}^{*,\}$ Body fat percentage (%) 25.4 ± 4.6 25.6 ± 5.1 25.0 ± 4.3 21.5 ± 5.0${}^{*,\}$ Note: ${}^{*}$ means p $<$ 0.05 vs. pre-intervention; ${}^{\}$ means p $<$ 0.05 vs. 
control group. 3.2 Changes in liver function parameters after the intervention programme The results of changes in liver function parameters following 12 weeks of the intervention programme are shown in Table 3. In the control group, no significant changes were observed in AST, ALT, GGT and ALP after 12 weeks (p $>$ 0.05). However, in the intervention group, AST decreased from 26.4 $\pm$ 9.7 U/L to 19.7 $\pm$ 5.6 U/L (a decrease of 20.6%, p $<$ 0.05), ALT decreased from 32.4 $\pm$ 22.7 U/L to 23.0 $\pm$ 10.4 U/L (a reduction of 18.1%, p $<$ 0.05), GGT decreased from 40.5 $\pm$ 19.0 U/L to 23.2 $\pm$ 9.9 U/L (a decrease of 37.7%, p $<$ 0.05) and ALP decreased from 203.0 $\pm$ 46.0 U/L to 188.7 $\pm$ 42.4 U/L (a reduction of 6.1%, p $<$ 0.05) following the intervention programme. The intervention group’s AST, ALT, GGT and ALP showed significant decreases in comparison to the control group (p $<$ 0.05). Table 3.Changes in liver function parameters following 12 weeks of the intervention programme Control group Intervention group Pre-intervention Post-intervention Pre-intervention Post-intervention AST (U/L) 27.1 ± 10.4 27.3 ± 9.8 26.4 ± 9.7 19.7 ± 5.6${}^{*,\}$ ALT (U/L) 32.8 ± 20.7 34.4 ± 19.3 32.4 ± 22.7 23.0 ± 10.4${}^{*,\}$ GGT (U/L) 39.5 ± 18.4 41.4 ± 19.2 40.5 ± 19.0 23.2 ± 9.9${}^{*,\}$ ALP (U/L) 200.7 ± 50.6 207.2 ± 45.7 203.0 ± 46.0 188.7 ± 42.4${}^{*,\}$ Note: AST, aspartate aminotransferase; ALT, alanine aminotransferase; GGT, gamma-glutamyl transferase; ALP, alkaline phosphatase. ${}^{*}$ means p $<$ 0.05 vs. pre-intervention; ${}^{\}$ means p $<$ 0.05 vs. control group. 3.3 Correlations of reduction rate between anthropometric characteristics and liver function parameters The results of correlations of reduction rate between anthropometric characteristics and liver function parameters are shown in Table 4. It can be seen from the table that only the GGT reduction rate displays a marked correlation with the decline rate of body weight, waist circumference, BMI and body fat percentage (r = 0.56 to 0.79, p $<$ 0.05). No significant correlation was found between the decline rate of obesity and the reduction rates of AST, ALT and ALP (p $>$ 0.05). Table 4.Correlations of reduction rate between anthropometric characteristics and liver function parameters AST Δ ALT Δ GGT Δ ALP Δ Weight reduction -0.12 -0.09 0.76* -0.10 BMI reduction -0.11 -0.06 0.72* -0.16 Waist circumference reduction 0.03 0.04 0.79* 0.04 Body fat percentage reduction -0.26 -0.28 0.56* 0.28 Note: AST, aspartate aminotransferase; ALT, alanine aminotransferase; GGT, gamma-glutamyl transferase; ALP, alkaline phosphatase. $\Delta$ means reduction rate; ${}^{*}$ means the significance level p $<$ 0.05. 4. Discussion Being overweight or obese makes individuals more vulnerable to liver dysfunction. Previous studies have shown that liver cirrhosis and fibrosis, impaired hepatic mitochondrial function and non-alcoholic fatty liver disease are all related to obesity [6-8]. Weight reduction can be helpful in facilitating the improvement of liver function. A combined aerobic exercise and dietary modification intervention is frequently used to aid weight reduction in overweight and obese individuals, but its effect on liver function remained unclear. Therefore, the aim of this study was to conduct an exploration of the impact of a combination of aerobic exercise and dietary modification on liver function in overweight and obese adult males. 
The results showed that 12 weeks of aerobic exercise and dietary modification led to significant weight, waist circumference and body fat percentage reduction in overweight and obese men and improved their liver function. Obesity has become an increasing global public health problem in recent years. According to World Health Organization statistics [24], the prevalence of being overweight and obese among individuals aged 18 years and older is 39%, among which the prevalence of obesity is 13%. The fundamental cause of being overweight and obese is believed to be an increased intake of energy-dense foods and low levels of physical activity. Being overweight or obese can lead to a variety of metabolic disorders, including hypertension, hyperlipidaemia, type 2 diabetes and non-alcoholic fatty liver disease [25-27]. Many studies have shown that weight reduction significantly prevents the occurrence and development of the aforementioned metabolic disorders [28,29]. Therefore, it is both beneficial and healthy for overweight and obese adults to control their body weight or lower their BMI. The liver is an important organ which is involved in maintaining the balance of lipid metabolism. When the body is in a state of illness, there are abnormalities in the lipid metabolism and a large number of lipid components enter the liver cells in order to make the liver synthesise the fat which has increased and accumulated, causing swelling, degeneration and even apoptosis in the liver cells and thereby resulting in impaired liver function [30,31]. Being overweight or obese can cause abnormal lipid metabolism, which can result in liver dysfunction. Verrijken et al. [32] reported that obesity is positively and significantly related to liver function parameters, including AST, ALT, GGT and ALP, and participants with high levels of visceral adipose tissue ($\geq$ 113 cm${}^{2}$) have a poorer liver function than those with low levels of visceral adipose tissue ($<$ 113 cm${}^{2}$). This study further demonstrated that weight reduction among overweight and obese men can dramatically enhance the values of AST, ALT, GGT and ALP. The study’s findings were consistent with those of Skrypnik et al. [15], who reported that exercise intervention causes a significant improvement to liver function among individuals with abdominal obesity. Based on the above results, it can be concluded that 12 weeks of a combination of aerobic exercise and dietary modification significantly improves the liver function of overweight and obese adult males. Obesity and fatty liver have a close relationship. Previous research has highlighted that being overweight or obese is an independent factor that affects liver fibrosis in those with non-alcoholic fatty liver disease [33,34]. There is a greater likelihood that obesity will cause non-alcoholic steatohepatitis by disturbing Kupffer cell function and sensitising oxidant stress and hepatocytes to endotoxin [35]. Hannah and Harrison [36] demonstrated that a 3% to 5% weight reduction is related to decreased steatosis, while weight reduction of 7% to 10% is associated with fibrosis regression and non-alcoholic steatohepatitis remission. In this study, it was discovered that body weight decreased by 10.6% and the AST, ALT, GGT and ALP decreased by 20.6%, 18.1%, 37.7% and 6.1% following the combination of aerobic exercise and dietary modification intervention. The results show that liver function improvement may be concerned with weight reduction in overweight and obese adult males. 
Aerobic exercise has been reported to decrease liver fat and insulin among obese individuals [37,38]. Keating et al. [39] documented that 45 to 60 minutes of aerobic exercise with 50% VO${}_{2}$max (four days per week) can significantly decrease 28% of intrahepatic lipid and that 30 to 45 minutes of aerobic exercise with 70% VO${}_{2}$max (three days per week) can reduce 29% of intrahepatic lipid. Huang et al. [40] further noted that a combination of dietary modification and aerobic exercise at 70% of the target heart rate can be effective for improving the liver histology of adults with non-alcoholic steatohepatitis (BMI $>$ 25 kg/m${}^{2}$). In addition, Baba et al. [17] discovered that 60% to 70% of maximal heart rate exercise combined with dietary modification can help decrease ALT concentrations among those with non-alcoholic steatohepatitis. The study also discovered that the normalisation of ALT is independent of weight loss. Similarly, this study showed that a combination of 50-70% of maximal heart rate exercise and dietary modification caused significantly improved liver function. In particular, the intervention can be used for the improvement of liver dysfunction and diseases that are triggered by being overweight or obese. GGT is produced by the hepatocyte mitochondria and confined to the cytoplasm and intrahepatic bile duct epithelium [41]. Previous studies have demonstrated that GGT is significantly and negatively associated with an increase in physical activity [42,43]. GGT concentrations greater than 109 U/L and exercise times of less than 60 minutes per week are considered to be risk factors for diabetes [44]. Therefore, increased physical activity or an exercise intervention can help reduce GGT concentrations. Chen et al. [45] noted that a 10-week exercise programme in combination with diet improvement can significantly reduce GGT among those with fatty liver disease. Ohno et al. [46] observed that GGT values decreased by 50% following a 10-week exercise programme for sedentary individuals. In this study, it was also discovered that GGT values demonstrate a significant downward trend following 12 weeks of intervention. Further analysis showed that a reduction in GGT is significantly associated with a decrease in weight, waist circumference, BMI and percentage fat (r = 0.56 to 0.79, p $<$ 0.05). To summarise, exercise alone or exercise combined with dietary improvement can improve GGT and the improvement of GGT is associated with a reduction in body weight. 5. Conclusions It is incredibly important for overweight and obese individuals to reduce their weight and improve their liver function. The results of this study suggest that 12 weeks of a combination of aerobic exercise and dietary modification significantly reduce weight, waist circumference and body fat percentage among overweight and obese men, while significantly improving their liver function. The findings can provide a scientific reference for the improvement of liver function and prevention of liver diseases in overweight and obese people. Author contributions Xiao-Guang Zhao is contributed in study design, manuscript writing, and submission; Hui-Ming Huang is contributed in study design and data analysis; Chen-Ya Du is contributed in manuscript writing and data collecting. Ethics approval and consent to participate The study protocol was reviewed and approved by the Human Ethics Board of Ningbo University (No: RAGH20190715). Acknowledgment The authors thank numerous subjects participated in this study. 
The authors express their gratitude to all the editors and peer reviewers for their comments and suggestions. Funding This research was supported by the Fundamental Research Funds for the Provincial Universities of Zhejiang (SJWY2020005), the Zhejiang Philosophical and Social Science Programme (21NDJC004Z), and the National Social Science Foundation in China (18BTY100). Conflict of interest The authors declare no competing interests. Publisher’s Note: IMR Press stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. Share
# What drives the difference between M1 & M2 money supply (in the US)? From what I understand the only entity that controls M1 in US is the Federal Reserve. Is it true that M2-M1( M2 minus M1; the part of M2 that is NOT in M1 like timed deposits) is controlled by the commercial banks? Edit (by JS): I think the question is of interest and of importance in current market conditions. I attach a picture that I have commonly encountered over the past 12 months: the market capitalization of the S&P 500 vs. M2 money supply. These types of charts have gained popularity since the FED started massive balance sheet expansion in March 2020 to combat the market volatility induced by the Covid pandemic: in a way, such charts provide a "rational" explanation for the "irrationally high" stock market levels. I would myself be interested in a thorough answer to the OP's question, ideally from someone with an Economics background who takes interest in central bank policy. For example: • why do so many analysts use market cap vs. M2 money supply, rather than market cap vs. M1 money supply? • What exactly drives the difference between M2 and M1 money supply (I never studied macro properly, apologies if this is obvious) Ps: it is not possible to upload a picture into a comment, so I chose to edit the OP's question instead. • Hey Mike, I took the liberty to modify your question: pls let me know if that is not ok with you (I am happy to post it as a standalone question, but I think it's best to merge the two questions into one). – Jan Stuller Jan 21 at 9:10 • Thank you Jan. I think the two questions/remarks are indeed very related so they belong in the same place. However I'm sure that part of M1 is controlled by commercial banks( through loans that create more checkable deposits)) and not entirely by the Federal Reserve. I think the only money supply that is fully controlled by the Federal Reserve is the MB( monetary base). – Mike Cocos Jan 21 at 18:28 ## 2 Answers This is in response to the part of your question that asks about M1 versus M2, although it seems you've more or less answered parts of your own question. M1 is the simplest monetary aggregate and includes items most widely used as a medium of exchange (approximately 85% of household purchases are made using M1 balances); it is defined as follows: \begin{align*} \mbox{Aggregate M1} &= \mbox{Currency held by the public} \\ &+ \mbox{Travelers cheques} \\ &+ \mbox{Demand deposits (checking accounts that pay no interest)} \\ &+ \mbox{Other checkable deposits (checking accounts that pay interest)} \end{align*} M2 is a broader definition of money that adds to M1 other assets with check-writing features, such as money market deposit accounts, and other assets that can be turned into cash quickly with very little cost, such as savings deposits. \begin{align*} \mbox{Aggregate M2} &= \mbox{M1} \\ &+ \mbox{Term deposits (deposits locked up for a period of time)} \\ &+ \mbox{Savings deposits} \\ &+ \mbox{Retail money funds (mutual funds investing in safe short-term assets)} \end{align*} Looking at why central banks usually focus on M2 instead of M1 to monitor monetary policy gives us a quick sense for what drives M1 versus M2: 1. Interest rate changes. Higher rates entice people to switch balances in checking accounts, which pay little or no interest, into savings accounts, which pay more interest. This activity causes M1 to shrink but does not affect M2. 
Since people can relatively easily spend money from their savings account balances, it can be misleading to focus on trends in M1. 2. Financial innovations. The dividing line between checking and savings accounts has been steadily blurred with banks getting around the Fed prohibiting interest payments on checking accounts by, for example, creating savings accounts that earn interest but whose balances were automatically transferred into checking accounts when required. This actually led to the definition of M1 being expanded to include such accounts: "other checkable deposits." 3. Financial deregulation. Non-bank financial institutions such as mutual savings banks, credit unions, and savings-and-loans associations were at one time not allowed to have checking accounts, so their deposits were not included in M1. Current monetary aggregates include deposits at all financial institutions. As you can infer from the above discussion, the main driver of growth in M1 versus M2 should be the interest rate offered on money substitutes, as long as the institutional structure of where firms and individuals hold their deposits doesn't undergo a significant change. However, to therefore conclude that that the M2-M1 differential is primarily determined by commercial banks is probably too simplistic. Clearly, one effect of the Fed's massive bond purchases (QE) is to lower interest rates across the yield curve (and therefore more or less equalize the interest rate differential between checking and savings accounts). Added Later In fact, the following article shows how a prior episode of QE led to an increase in the growth rate of M1 versus M2: What's Driving up Money Growth? • Great comment, thank you, really thorough and well explained. So basically it makes more sense to scale the market cap of (say) SPX with M2, rather than M1, because M2 also includes money markets, which are essentially the "checking accounts of professional institutions" , whilst M1 is mostly retail money: correct? – Jan Stuller Jan 22 at 7:09 • Good question but I don't know if I have a good answer. I have not yet come across a rigorous contemporary analysis of why M2 is superior to M1 in this context although I have seen a 1982 study by Sorensen suggesting that there is no difference. However, as per Mike's question and given the dramatic impact of QE on M1 in recent times, it appears as if M1 would be a better proxy. – Sharad Jan 22 at 17:57 • There is no switching cost from M2 to M1 and now all money is essentially M1 (by nature) and M2 (by definition). If and when interest rates move in the opposite direction, and no one is going to park any cash in checking accounts, all money will become M2 by nature and by definition. – Sergei Rodionov Jan 22 at 19:58 Broadly speaking, if something is in M2 and not M1, it's because there's some friction in spending that money, while M1 allows for mostly frictionless transactions. M1 consists of currency in circulation, checkable/demand deposits, and travelers checks. All of these forms of money can be used to facilitate transactions immediately. M2 further incorporates savings accounts, money market accounts/mutual funds, and low-value time deposits. These forms of money all require at least some amount of time or some sort of transaction cost and typically cannot be used for transactions directly and on demand. But, they can be converted into M1 relatively easily and then used for transactions. That's the primary difference. 
• A follow-up question that comes to mind: QE is basically the FED buying US Government Bonds in open market transactions. What would then be the "path" of this printed money into the M2 "orbit"? I imagine the FED buys the bonds directly from an institution (be it a primary dealer of US govies, a corporate bank or a fund): this cash can then directly enter M2 via the money markets: correct? So (simplistically), each time the FED purchases some bonds via printed money, M2 grows by that amount, would you agree? (whilst most likely M1 stays constant, unless the money is "deposited") – Jan Stuller Jan 22 at 10:52 • This is interesting comment. If FED bought directly from bank and that bank then deposited the cash back to FED as excess reserves this would not be counted as M2. However, if indirectly the seller was a retail investor, or corporate, who deposited the proceeds back with a bank then that deposit would be regarded as M2, or?? – Attack68 Jan 22 at 15:41 • @JanStuller, money that enters money market accounts is not necessarily part of M2. M2 consists of low-value money market accounts (< \$100k) or those belonging specifically to individuals. Larger value accounts, like those of institutions, are part of M3. But that money will get to M2 eventually through salaries, dividends, etc., as the corporation transfers money to indviduals. – Amaan M Jan 22 at 17:35 • @Attack68, that sounds correct, but I'd also point out that since the bank's excess reserves have increased, they will lend out money until they're back at the reserve requirement. So, in that sense, whether the Fed buys from institutions or individuals, M2 will increase. – Amaan M Jan 22 at 17:39 • @JanStuller That makes sense, but they are highly correlated, so you'll likely have similar outcomes regardless of which one you use if you're looking at percent changes. – Amaan M Jan 22 at 18:38
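For anyone who wants to reproduce the market-cap-vs-money-supply comparisons discussed above, here is a minimal sketch of pulling the two aggregates from FRED and looking at the M2 − M1 gap. It assumes the pandas_datareader package, an internet connection, and the commonly used FRED series codes M1SL and M2SL; none of this comes from the thread itself. Note also that the Fed's May 2020 redefinition of M1 (which folded savings deposits into M1) makes the gap collapse in recent data, which is what the last comment alludes to.

```python
# Minimal sketch: pull M1 and M2 from FRED and inspect the M2 - M1 gap.
# Assumes pandas_datareader and the FRED series codes "M1SL" / "M2SL"
# (seasonally adjusted, monthly).
import pandas_datareader.data as web

start = "2015-01-01"
m1 = web.DataReader("M1SL", "fred", start)["M1SL"]
m2 = web.DataReader("M2SL", "fred", start)["M2SL"]

gap = m2 - m1      # the non-M1 part of M2: savings, small time deposits, retail money funds
ratio = m1 / m2    # how much of M2 is already spendable on demand

print(gap.tail())
print(ratio.tail())
```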
Let P(E) denote the probability of the event E. Given P(A) = 1 and P(B) = 1/2, the values of P(A | B) and P(B | A) respectively are

A. 1/4, 1/2
B. 1/2, 1/14
C. 1/2, 1
D. 1, 1/2

The answer is D. Since P(A) = 1, the event A occurs almost surely, so P(A ∩ B) = P(B). Hence P(A | B) = P(A ∩ B)/P(B) = 1, i.e. the probability of A given that B has occurred is 1, and P(B | A) = P(A ∩ B)/P(A) = P(B) = 1/2.

## sh3lsh one year ago Graph Theory! 1. sh3lsh Could you help me parse this theorem? $\Sigma \deg^{-}(v) = \Sigma \deg^{+}(v) = |E|$ 2. sh3lsh Let G = (V,E) Underneath the sigmas are supposed to be $v \in V$ 3. sh3lsh (it's a graph with directed edges)
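The identity is just double counting: every directed edge contributes exactly 1 to the out-degree of its tail and 1 to the in-degree of its head, so with the sums taken over $v \in V$, $\sum_{v\in V}\deg^{-}(v)=\sum_{v\in V}\deg^{+}(v)=|E|$. A quick sanity check in plain Python (the edge list is arbitrary, just for illustration):

```python
# Verify sum of in-degrees = sum of out-degrees = |E| for a small digraph.
# Edges are ordered pairs (u, v) meaning u -> v.
edges = [(1, 2), (1, 3), (2, 3), (3, 1), (3, 2)]
vertices = {u for e in edges for u in e}

out_deg = {v: sum(1 for (u, _) in edges if u == v) for v in vertices}
in_deg  = {v: sum(1 for (_, w) in edges if w == v) for v in vertices}

assert sum(out_deg.values()) == sum(in_deg.values()) == len(edges)
print(sum(out_deg.values()), sum(in_deg.values()), len(edges))  # 5 5 5
```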
## Evaluation of Particle Resuspension and Single-layer Rates with Exposure Time and Friction Velocity for Multilayer Deposits in a Turbulent Boundary Layer
## Wednesday, August 29, 2012

### Calculating Mutually Exclusive Fixed Effects

* Let's imagine we would like to estimate how effective 90 different teachers in 3 different grades, each teaching 20 students, are individually.  Every student receives a teacher and all teachers are assigned to students randomly (an important assumption, often violated).
clear
set obs 90
gen teffect = rnormal()+1.5
label var teffect "True Teacher Effect"
gen tid = _n
* This creates 3 different grades for the teachers to be assigned to.
* (The gen command below was dropped from the original listing; 30 teachers per grade matches the tid==1, 31, 61 base teachers used later.)
gen grade = ceil(tid/30)
* We will expand to 20 students per teacher
expand 20
* This is the base student effect level.
gen student_effect = rnormal()
* This is the starting achievement level of the students
gen start_level = rnormal()
* This is a random normal variation in achievement gain over the year
gen u = rnormal()
gen current_level = teffect + student_effect + .75*start_level + 2*grade + u*5
* Multiplying start_level by .75 implies that students retain 75% of the ability that they had going into the school year.
* Now we want to see how well we can infer teacher ability
tab tid, gen(tid_)
* We might want to start with a straightforward regression of current achievement on teacher id
reg current_level tid_2-tid_90 start_level
* Since tid=1 is omitted due to multicollinearity, we will set its effect equal to 0 as a base of reference.
gen reg_res = 0 if tid==1
* Note, you may want to identify the true magnitude of the teacher effect.  This, however, is not possible because all students have received teachers.  Therefore, we can at best hope to estimate how good teachers are relative to each other.
forv i = 2/90 {
	cap replace reg_res = _b[tid_`i'] if tid == `i'
}
* The problem with this is now we have to figure out how to compare teachers.
* One way would be to correlate the estimated teacher effect with the true teacher effect (which we know).
corr reg_res teffect
* This correlation looks really bad, primarily because teachers are only teaching in one grade each and grades have different learning effects.
spearman reg_res teffect
* The Spearman rank correlation fares even worse than the Pearson correlation
twoway (scatter reg_res teffect if grade==1)  ///
       (scatter reg_res teffect if grade==2)  ///
       (scatter reg_res teffect if grade==3)
* We can see that generally there is a correlation between higher teacher effect and higher estimates of teacher effects across all grades.  However, within grades the correlation is even clearer.
* One may attempt to correct this problem by including grade dummies.
* However, the system experiences multicollinearity issues and sometimes drops the grade dummies.
* To control this we will drop the first teacher in each grade.
* (The commands below were dropped from the original listing; they implement the regression just described.)
tab grade, gen(grade_)
reg current_level tid_2-tid_30 tid_32-tid_60 tid_62-tid_90 grade_2 grade_3 start_level
* This regression is still a little fishy, however.
* Within each grade the estimated teacher effects are relative to the omitted teacher.
* Thus if the omitted teacher is high in grade 1 and low in grade 2 then the correlations will be thrown off.
gen reg_GD = 0 if tid==1 | tid==31 | tid == 61
forv i = 2/90 {
	cap replace reg_GD = _b[tid_`i'] if tid == `i'
}
corr reg_GD teffect
spearman reg_GD teffect
* Including the grade dummies greatly improves the teacher estimates.
* An alternative method would be to demean current achievement
bysort grade: egen mean_current_level = mean(current_level)
gen dm_current_level = current_level-mean_current_level
reg dm_current_level tid_2-tid_90 start_level
gen dm_results = 0 if tid==1
forv i = 1/90 {
	cap replace dm_results = _b[tid_`i'] if tid == `i'
}
* The problem with this is now we have to figure out how to compare teachers.
corr dm_results teffect
spearman dm_results teffect
* Finally, an alternative approach may be to do the original regression but demean the teacher estimates by grade post-estimation.
bysort grade: egen mean_reg_res = mean(reg_res)
gen dm_reg_res = reg_res - mean_reg_res
* The problem with this is now we have to figure out how to compare teachers.
corr dm_reg_res teffect
spearman dm_reg_res teffect
twoway (scatter dm_reg_res teffect if grade==1)  ///
       (scatter dm_reg_res teffect if grade==2)  ///
       (scatter dm_reg_res teffect if grade==3)
Thread: Help with parametric equation 1. Help with parametric equation Find parametric equations for the line of intersection of the two planes. P1: 2x − y + z = 1 P2 : x − y + z = 2 <2,-1,1> X <1,-1,1> = <0,-1,1> x = -1 y=t z=3+t I don't know how the book gets this answer. 2. Originally Posted by khuezy Find parametric equations for the line of intersection of the two planes. P1: 2x − y + z = 1 P2 : x − y + z = 2 <2,-1,1> X <1,-1,1> = <0,-1,1> x = -1 y=t z=3+t I don't know how the book gets this answer. Solve simultaneously: 2x − y + z = 1 .... (1) x − y + z = 2 .... (2) (1) - (2): x = -1. Substitute into x = -1 into either (1) or (2): − y + z = 3. Now let y = t, say, where $t \in R$ and solve for z.
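A side note on the cross product in the question: ⟨2,−1,1⟩ × ⟨1,−1,1⟩ works out to ⟨0,−1,−1⟩, not ⟨0,−1,1⟩, and ⟨0,−1,−1⟩ is parallel to the direction ⟨0,1,1⟩ of the book's answer. A short SymPy check of both the direction vector and the final parametric line (SymPy is used here purely for illustration):

```python
from sympy import Matrix, symbols

t = symbols('t')
n1 = Matrix([2, -1, 1])   # normal of P1
n2 = Matrix([1, -1, 1])   # normal of P2

d = n1.cross(n2)          # direction of the line of intersection
print(d.T)                # Matrix([[0, -1, -1]]), i.e. parallel to <0, 1, 1>

# Subtracting the plane equations gives x = -1; with y = t, either plane gives z = 3 + t.
x, y, z = -1, t, 3 + t
print(2*x - y + z, x - y + z)   # 1 and 2: the point satisfies both planes for every t
```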
# Longitudinal wave Longitudinal waves, also known as "l-waves", are waves in which the displacement of the medium is in the same direction as, or the opposite direction to, the direction of travel of the wave. Mechanical longitudinal waves are also called compressional waves or compression waves, because they produce compression and rarefaction when traveling through a medium. The other main type of wave is the transverse wave, in which the displacements of the medium are at right angles to the direction of propagation. Transverse mechanical waves are also called "t-waves" or "shear waves". Plane pressure pulse wave Representation of the propagation of an omnidirectional pulse wave on a 2d grid (empirical shape) ## Examples Longitudinal waves include sound waves (vibrations in pressure, particle displacement, and particle velocity propagated in an elastic medium) and seismic P-waves (created by earthquakes and explosions). In longitudinal waves, the displacement of the medium is parallel to the propagation of the wave. A wave along the length of a stretched Slinky toy, where the distance between coils increases and decreases, is a good visualization. Sound waves in air are longitudinal, pressure waves. ### Sound waves In the case of longitudinal harmonic sound waves, the frequency and wavelength can be described by the formula $y(x,t) = y_0 \cos \Bigg( \omega \left(t-\frac{x}{c} \right) \Bigg)$ where: • y is the displacement of the point on the traveling sound wave; • x is the distance the point has traveled from the wave's source; • t is the time elapsed; • y0 is the amplitude of the oscillations, • c is the speed of the wave; and • ω is the angular frequency of the wave. The quantity x/c is the time that the wave takes to travel the distance x. The ordinary frequency (f) of the wave is given by $f = \frac{\omega}{2 \pi}.$ For sound waves, the amplitude of the wave is the difference between the pressure of the undisturbed air and the maximum pressure caused by the wave. Sound's propagation speed depends on the type, temperature, and composition of the medium through which it propagates. ### Pressure waves In an elastic medium with rigidity, a harmonic pressure wave oscillation has the form, $y(x,t)\, = y_0 \cos(k x - \omega t +\varphi)$ where: • y0 is the amplitude of displacement, • k is the wavenumber, • x is the distance along the axis of propagation, • ω is the angular frequency, • t is the time, and • φ is the phase difference. The restoring force, which acts to return the medium to its original position, is provided by the medium's bulk modulus.[1] ## Electromagnetic Maxwell's equations lead to the prediction of electromagnetic waves in a vacuum, which are transverse (in that the electric fields and magnetic fields vary perpendicularly to the direction of propagation).[2] However, waves can exist in plasmas or confined spaces, called plasma waves, which can be longitudinal, transverse, or a mixture of both.[2][3] Plasma waves can also occur in force-free magnetic fields. [4] In the early development of electromagnetism, there were some like Alexandru Proca (1897-1955) known for developing relativistic quantum field equations bearing his name (Proca's equations) for the massive, vector spin-1 mesons. 
In recent decades some extended electromagnetic theorists, such as Jean-Pierre Vigier and Bo Lehnert of the Swedish Royal Society, have used the Proca equation in an attempt to demonstrate photon mass [5] as a longitudinal electromagnetic component of Maxwell's equations, suggesting that longitudinal electromagnetic waves could exist in a Dirac polarized vacuum. After Heaviside's attempts to generalize Maxwell's equations, Heaviside came to the conclusion that electromagnetic waves were not to be found as longitudinal waves in "free space" or homogeneous media.[6] But Maxwell's equations do lead to the appearance of longitudinal waves under some circumstances, for example, in plasma waves or guided waves. Basically distinct from the "free-space" waves, such as those studied by Hertz in his UHF experiments, are Zenneck waves.[7] The longitudinal modes of a resonant cavity are the particular standing wave patterns formed by waves confined in a cavity. The longitudinal modes correspond to those wavelengths of the wave which are reinforced by constructive interference after many reflections from the cavity's reflecting surfaces. Recently, Haifeng Wang et al. proposed a method that can generate a longitudinal electromagnetic (light) wave in free space, and this wave can propagate without divergence for a few wavelengths.[8] ## References 1. ^ Weisstein, Eric W., "P-Wave". Eric Weisstein's World of Science. 2. ^ a b David J. Griffiths, Introduction to Electrodynamics, ISBN 0-13-805326-X 3. ^ John D. Jackson, Classical Electrodynamics, ISBN 0-471-30932-X. 4. ^ Gerald E. Marsh (1996), Force-free Magnetic Fields, World Scientific, ISBN 981-02-2497-4 5. ^ Lakes, R. (1998). Experimental limits on the photon mass and cosmic magnetic vector potential. Physical review letters, 80(9), 1826-1829 6. ^ Heaviside, Oliver, "Electromagnetic theory". Appendices: D. On compressional electric or magnetic waves. Chelsea Pub Co; 3rd edition (1971) 082840237X 7. ^ Corum, K. L., and J. F. Corum, "The Zenneck surface wave", Nikola Tesla, Lightning observations, and stationary waves, Appendix II. 1994. 8. ^ Haifeng Wang, Luping Shi, Boris Luk'yanchuk, Colin Sheppard and Chong Tow Chong, "Creation of a needle of longitudinally polarized light in vacuum using binary optics," Nature Photonics, Vol.2, pp 501-505, 2008, doi:10.1038/nphoton.2008.127
# A-level Physics/Equation Sheet

Equations, constants, and other useful data that the A-level student of physics is required to memorise.

## Forces and Motion

### Newtonian Mechanics

#### Conventions

• $\vec{v}_0$ denotes initial velocity
• $\vec{v}$ denotes final velocity
• $\vec{a}$ denotes acceleration
• $\vec{s}$ denotes displacement
• $t$ denotes time
• $W$ denotes work done
• $m,\ m_i$ denotes mass or mass of object $i$
• $P$ denotes power
• A vector without its arrow implies the magnitude of the vector, e.g. $b=\left|\vec{b}\right|$

#### Kinematic Equations

• $\vec{v} = \frac {\Delta \vec{s}}{\Delta t} = \frac{d \vec{s}}{dt}$
• $\vec{a} = \frac {\Delta \vec{v}}{\Delta t} = \frac{d \vec{v}}{dt}$
• $v = v_0 + a \Delta t$
• $v^2 = v_0^2 + 2a \Delta s$
• $s = s_0 + v_0 \Delta t + \frac{1}{2}a (\Delta t)^2$
• $\Delta s = \frac{v_0+v}{2} \cdot \Delta t$

#### Force and Momentum

• $\vec{p} = m \vec{v}$
• $\vec{F_{net}} = m \vec{a}$
• $\vec{F} = \frac{\Delta \vec{p}}{\Delta t} = \frac {m \Delta \vec{v}}{\Delta t} = \frac{d \vec{p}}{dt}$
• $\vec{F_g}=\frac{Gm_1m_2}{r^2}$

#### Work and Energy

• $E_g = mgh$ (for small heights where $g$ can be treated as constant)
• $E_g = \frac{-Gm_1m_2}{r}$ (for any height)
• $E_K = \frac{1}{2}mv^2$
• $W = \vec{F} \cdot \Delta \vec{x} = \int \vec{F} \cdot d \vec{x}$
• $P = \frac {\Delta W}{\Delta t} = \frac{dW}{dt}$
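A small worked check of the kinematic relations above, with arbitrarily chosen numbers (plain Python, not part of the original sheet):

```python
# A body starts at v0 = 3 m/s and accelerates at a = 2 m/s^2 for t = 4 s.
v0, a, t = 3.0, 2.0, 4.0

v = v0 + a * t                  # v = v0 + a*t          -> 11 m/s
s = v0 * t + 0.5 * a * t**2     # s = v0*t + a*t^2/2    -> 28 m
s_avg = (v0 + v) / 2 * t        # s = (v0 + v)/2 * t    -> 28 m (consistent)
v_sq = v0**2 + 2 * a * s        # v^2 = v0^2 + 2*a*s    -> 121 = 11^2 (consistent)

print(v, s, s_avg, v_sq)
```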
# Has anyone seen this sort of graph property used before? Consider the following property of a graph $G$: The graph $G$ has no independent cutset of vertices, $S$, such that the number of components of $G-S$ is more than $|S|$ (the size of $S$). (That is, cannot delete 1 vertex and leave 2+ components, cannot delete 2 independent vertices and leave 3+ components etc.) For some as-yet-unexplained reason, this property has arisen in a couple of questions relating to chromatic roots; needing a name we called this property $\alpha$-1-tough, which uses the notation from graph toughness plus the adjective $\alpha$ to indicate "independent". Basically we believe that $\alpha$-1-tough graphs are well-behaved with respect to chromatic polynomials; the evidence is that various small graphs that violate certain reasonably well-founded and natural conjectures are very clearly NOT $\alpha$-1-tough. Having failed miserably at all attempts to prove anything sensible using this property, I wondered if anyone anywhere has seen this, or a similar, graph property appear anywhere. (I have posted a longer article about this on my (shared) blog, but am not sure of the policy about posting links to your own stuff so I won't do so just in case.) Edit: The blog entry is http://symomega.wordpress.com/2012/01/06/chromatic-roots-the-multiplicity-of-2/ - I think you should link to the relevant blog entry. Anyone who wants to investigate this would appreciate knowing more details. –  Joseph O'Rourke Jan 11 '12 at 13:26 Ok, now added... just didn't want anyone to think that I'm trying to drive traffic to my blog (not that there would be any point). –  Gordon Royle Jan 11 '12 at 22:50 gordon.royle@uwa.edu.au –  Gordon Royle Jan 23 '12 at 23:10
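For completeness, the property is cheap to test by brute force on small graphs, which is useful when hunting for counterexamples. Below is a sketch using networkx; the function name and the two test graphs are just illustrative choices, not anything from the original discussion:

```python
# Brute-force check of the "alpha-1-tough" property described above:
# no independent vertex set S such that G - S has more than |S| components.
from itertools import combinations
import networkx as nx

def is_alpha_1_tough(G):
    nodes = list(G.nodes())
    for k in range(1, len(nodes)):
        for S in combinations(nodes, k):
            S = set(S)
            # S must be an independent set ...
            if any(G.has_edge(u, v) for u, v in combinations(S, 2)):
                continue
            H = G.copy()
            H.remove_nodes_from(S)
            if H.number_of_nodes() == 0:
                continue
            # ... and deleting it must not leave more than |S| components.
            if nx.number_connected_components(H) > len(S):
                return False
    return True

print(is_alpha_1_tough(nx.cycle_graph(5)))   # True: deleting k independent vertices of a cycle leaves exactly k paths
print(is_alpha_1_tough(nx.star_graph(3)))    # False: deleting the centre alone leaves 3 components
```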
# Documentation

Quadratic programming is the problem of finding a vector x that minimizes a quadratic function, possibly subject to linear constraints:

`$\underset{x}{\mathrm{min}}\frac{1}{2}{x}^{T}Hx+{c}^{T}x$`

such that A·x ≤ b, Aeq·x = beq, l ≤ x ≤ u.

### `interior-point-convex` `quadprog` Algorithm

The `interior-point-convex` algorithm performs the following steps:

#### Presolve/Postsolve

The algorithm begins by attempting to simplify the problem by removing redundancies and simplifying constraints. The tasks performed during the presolve step include:

• Check if any variables have equal upper and lower bounds. If so, check for feasibility, and then fix and remove the variables.
• Check if any linear inequality constraint involves just one variable. If so, check for feasibility, and change the linear constraint to a bound.
• Check if any linear equality constraint involves just one variable. If so, check for feasibility, and then fix and remove the variable.
• Check if any linear constraint matrix has zero rows. If so, check for feasibility, and delete the rows.
• Check if the bounds and linear constraints are consistent.
• Check if any variables appear only as linear terms in the objective function and do not appear in any linear constraint. If so, check for feasibility and boundedness, and fix the variables at their appropriate bounds.
• Change any linear inequality constraints to linear equality constraints by adding slack variables.

If the algorithm detects an infeasible or unbounded problem, it halts and issues an appropriate exit message. The algorithm might arrive at a single feasible point, which represents the solution. If the algorithm does not detect an infeasible or unbounded problem in the presolve step, it continues, if necessary, with the other steps. At the end, the algorithm reconstructs the original problem, undoing any presolve transformations. This final step is the postsolve step. For details, see Gould and Toint [63].

#### Generate Initial Point

The initial point `x0` for the algorithm is:

1. Initialize `x0` to `ones(n,1)`, where `n` is the number of rows in H.
2. For components that have both an upper bound `ub` and a lower bound `lb`, if a component of `x0` is not strictly inside the bounds, the component is set to `(ub + lb)/2`.
3. For components that have only one bound, modify the component if necessary to lie strictly inside the bound.

#### Predictor-Corrector

Similar to the `fmincon` interior-point algorithm, the `interior-point-convex` algorithm tries to find a point where the Karush-Kuhn-Tucker (KKT) conditions hold. For the quadratic programming problem described in Quadratic Programming Definition, these conditions require (stated here in terms of the residuals defined below, all of which must vanish at a solution):

`$\begin{array}{c}Hx+c-{A}_{eq}^{T}y-{\overline{A}}^{T}z=0\\ {A}_{eq}x-{b}_{eq}=0\\ \overline{A}x-\overline{b}-s=0\\ {s}_{i}{z}_{i}=0,\phantom{\rule{1em}{0ex}}s\ge 0,\phantom{\rule{1em}{0ex}}z\ge 0.\end{array}$`

Here

• $\overline{A}$ is the extended linear inequality matrix that includes bounds written as linear inequalities. $\overline{b}$ is the corresponding linear inequality vector, including bounds.
• s is the vector of slacks that convert inequality constraints to equalities. s has length m, the number of linear inequalities and bounds.
• z is the vector of Lagrange multipliers corresponding to s.
• y is the vector of Lagrange multipliers associated with the equality constraints.

The algorithm first predicts a step from the Newton-Raphson formula, then computes a corrector step. The corrector attempts to better enforce the nonlinear constraint $s_i z_i = 0$.
Definitions for the predictor step: • rd, the dual residual: `${r}_{d}=Hx+c-{A}_{eq}^{T}y-{\overline{A}}^{T}z.$` • req, the primal equality constraint residual: `${r}_{eq}={A}_{eq}x-{b}_{eq}.$` • rineq, the primal inequality constraint residual, which includes bounds and slacks: `${r}_{ineq}=\overline{A}x-\overline{b}-s.$` • rsz, the complementarity residual: rsz = Sz. S is the diagonal matrix of slack terms, z is the column matrix of Lagrange multipliers. • rc, the average complementarity: `${r}_{c}=\frac{{s}^{T}z}{m}.$` In a Newton step, the changes in x, s, y, and z, are given by: `$\left(\begin{array}{cccc}H& 0& -{A}_{eq}^{T}& -{\overline{A}}^{T}\\ {A}_{eq}& 0& 0& 0\\ \overline{A}& -I& 0& 0\\ 0& Z& 0& S\end{array}\right)\left(\begin{array}{c}\Delta x\\ \Delta s\\ \Delta y\\ \Delta z\end{array}\right)=-\left(\begin{array}{c}{r}_{d}\\ {r}_{eq}\\ {r}_{ineq}\\ {r}_{sz}\end{array}\right).$` However, a full Newton step might be infeasible, because of the positivity constraints on s and z. Therefore, `quadprog` shortens the step, if necessary, to maintain positivity. Additionally, to maintain a "centered" position in the interior, instead of trying to solve sizi = 0, the algorithm takes a positive parameter σ, and tries to solve sizi = σrc. `quadprog` replaces rsz in the Newton step equation with rsz + ΔsΔz – σrc1, where 1 is the vector of ones. Also, `quadprog` reorders the Newton equations to obtain a symmetric, more numerically stable system for the predictor step calculation. For details, see Mehrotra [47]. #### Multiple Corrections After calculating the corrected Newton step, `quadprog` can perform more calculations to get both a longer current step, and to prepare for better subsequent steps. These multiple correction calculations can improve both performance and robustness. For details, see Gondzio [62]. #### Total Relative Error `quadprog` calculates a merit function φ at every iteration. The merit function is a measure of feasibility, and is also called total relative error. `quadprog` stops if the merit function grows too large. In this case, `quadprog` declares the problem to be infeasible. The merit function is related to the KKT conditions for the problem—see Predictor-Corrector. Use the following definitions: `$\begin{array}{c}\rho =\mathrm{max}\left(1,‖H‖,‖\overline{A}‖,‖{A}_{eq}‖,‖c‖,‖\overline{b}‖,‖{b}_{eq}‖\right)\\ {r}_{\text{eq}}={A}_{\text{eq}}x-{b}_{\text{eq}}\\ {r}_{\text{ineq}}=\overline{A}x-\overline{b}+s\\ {r}_{\text{d}}=Hx+c+{A}_{\text{eq}}^{T}{\lambda }_{\text{eq}}+{\overline{A}}^{T}{\overline{\lambda }}_{\text{ineq}}\\ g={x}^{T}Hx+{f}^{T}x-{\overline{b}}^{T}{\overline{\lambda }}_{\text{ineq}}-{b}_{\text{eq}}^{T}{\lambda }_{\text{eq}}.\end{array}$` The notation $\overline{A}$ and $\overline{b}$ means the linear inequality coefficients, augmented with terms to represent bounds. The notation ${\overline{\lambda }}_{\text{ineq}}$ similarly represents Lagrange multipliers for the linear inequality constraints, including bound constraints. This was called z in Predictor-Corrector, and ${\lambda }_{\text{eq}}$ was called y. The merit function φ is `$\frac{1}{\rho }\left(\mathrm{max}\left({‖{r}_{\text{eq}}‖}_{\infty },{‖{r}_{\text{ineq}}‖}_{\infty },{‖{r}_{\text{d}}‖}_{\infty }\right)+g\right).$` ### `trust-region-reflective``quadprog` Algorithm Many of the methods used in Optimization Toolbox™ solvers are based on trust regions, a simple yet powerful concept in optimization. 
To understand the trust-region approach to optimization, consider the unconstrained minimization problem, minimize f(x), where the function takes vector arguments and returns scalars. Suppose you are at a point x in n-space and you want to improve, i.e., move to a point with a lower function value. The basic idea is to approximate f with a simpler function q, which reasonably reflects the behavior of function f in a neighborhood N around the point x. This neighborhood is the trust region. A trial step s is computed by minimizing (or approximately minimizing) over N. This is the trust-region subproblem,

`$\underset{s}{\mathrm{min}}\left\{q\left(s\right),\text{ }s\in N\right\}.$` (9-1)

The current point is updated to be x + s if f(x + s) < f(x); otherwise, the current point remains unchanged and N, the region of trust, is shrunk and the trial step computation is repeated.

The key questions in defining a specific trust-region approach to minimizing f(x) are how to choose and compute the approximation q (defined at the current point x), how to choose and modify the trust region N, and how accurately to solve the trust-region subproblem. This section focuses on the unconstrained problem. Later sections discuss additional complications due to the presence of constraints on the variables.

In the standard trust-region method ([48]), the quadratic approximation q is defined by the first two terms of the Taylor approximation to F at x; the neighborhood N is usually spherical or ellipsoidal in shape. Mathematically the trust-region subproblem is typically stated

`$\mathrm{min}\left\{\frac{1}{2}{s}^{T}Hs+{s}^{T}g\text{ such that }‖Ds‖\le \Delta \right\},$` (9-2)

where g is the gradient of f at the current point x, H is the Hessian matrix (the symmetric matrix of second derivatives), D is a diagonal scaling matrix, Δ is a positive scalar, and ∥ . ∥ is the 2-norm. Good algorithms exist for solving Equation 9-2 (see [48]); such algorithms typically involve the computation of a full eigensystem and a Newton process applied to the secular equation

`$\frac{1}{\Delta }-\frac{1}{‖s‖}=0.$`

Such algorithms provide an accurate solution to Equation 9-2. However, they require time proportional to several factorizations of H. Therefore, for large-scale problems a different approach is needed. Several approximation and heuristic strategies, based on Equation 9-2, have been proposed in the literature ([42] and [50]). The approximation approach followed in Optimization Toolbox solvers is to restrict the trust-region subproblem to a two-dimensional subspace S ([39] and [42]). Once the subspace S has been computed, the work to solve Equation 9-2 is trivial even if full eigenvalue/eigenvector information is needed (since in the subspace, the problem is only two-dimensional). The dominant work has now shifted to the determination of the subspace.

The two-dimensional subspace S is determined with the aid of a preconditioned conjugate gradient process described below. The solver defines S as the linear space spanned by s1 and s2, where s1 is in the direction of the gradient g, and s2 is either an approximate Newton direction, i.e., a solution to

$H\cdot {s}_{2}=-g,$ (9-3)

or a direction of negative curvature,

${s}_{2}^{T}\cdot H\cdot {s}_{2}<0.$ (9-4)

The philosophy behind this choice of S is to force global convergence (via the steepest descent direction or negative curvature direction) and achieve fast local convergence (via the Newton step, when it exists). A sketch of unconstrained minimization using trust-region ideas is now easy to give:

1. Formulate the two-dimensional trust-region subproblem.
2. Solve Equation 9-2 to determine the trial step s.
3. If f(x + s) < f(x), then x = x + s.
4. Adjust Δ.
These four steps are repeated until convergence. The trust-region dimension Δ is adjusted according to standard rules. In particular, it is decreased if the trial step is not accepted, i.e., f(x + s) ≥ f(x). See [46] and [49] for a discussion of this aspect.

Optimization Toolbox solvers treat a few important special cases of f with specialized functions: nonlinear least-squares, quadratic functions, and linear least-squares. However, the underlying algorithmic ideas are the same as for the general case. These special cases are discussed in later sections.

The subspace trust-region method is used to determine a search direction. However, instead of restricting the step to (possibly) one reflection step, as in the nonlinear minimization case, a piecewise reflective line search is conducted at each iteration. See [45] for details of the line search.

A popular way to solve large symmetric positive definite systems of linear equations Hp = –g is the method of Preconditioned Conjugate Gradients (PCG). This iterative approach requires the ability to calculate matrix-vector products of the form H·v where v is an arbitrary vector. The symmetric positive definite matrix M is a preconditioner for H. That is, $M = C^2$, where $C^{-1}HC^{-1}$ is a well-conditioned matrix or a matrix with clustered eigenvalues. In a minimization context, you can assume that the Hessian matrix H is symmetric. However, H is guaranteed to be positive definite only in the neighborhood of a strong minimizer. Algorithm PCG exits when a direction of negative (or zero) curvature is encountered, i.e., $d^T H d \le 0$. The PCG output direction, p, is either a direction of negative curvature or an approximate (tol controls how approximate) solution to the Newton system Hp = –g. In either case p is used to help define the two-dimensional subspace used in the trust-region approach discussed in Trust-Region Methods for Nonlinear Minimization.

#### Linear Equality Constraints

Linear constraints complicate the situation described for unconstrained minimization. However, the underlying ideas described previously can be carried through in a clean and efficient way. The trust-region methods in Optimization Toolbox solvers generate strictly feasible iterates. The general linear equality constrained minimization problem can be written

`$\mathrm{min}\left\{f\left(x\right)\text{ such that }Ax=b\right\},$` (9-5)

where A is an m-by-n matrix (m ≤ n). Some Optimization Toolbox solvers preprocess A to remove strict linear dependencies using a technique based on the LU factorization of $A^T$ [46]. Here A is assumed to be of rank m.

The method used to solve Equation 9-5 differs from the unconstrained approach in two significant ways. First, an initial feasible point x0 is computed, using a sparse least-squares step, so that Ax0 = b. Second, Algorithm PCG is replaced with Reduced Preconditioned Conjugate Gradients (RPCG), see [46], in order to compute an approximate reduced Newton step (or a direction of negative curvature in the null space of A). The key linear algebra step involves solving systems of the form

$\left[\begin{array}{cc}C& {\stackrel{˜}{A}}^{T}\\ \stackrel{˜}{A}& 0\end{array}\right]\left[\begin{array}{c}s\\ t\end{array}\right]=\left[\begin{array}{c}r\\ 0\end{array}\right],$ (9-6)

where $\stackrel{˜}{A}$ approximates A (small nonzeros of A are set to zero provided rank is not lost) and C is a sparse symmetric positive-definite approximation to H, i.e., C = H. See [46] for more details.

#### Box Constraints

The box constrained problem is of the form

`$\mathrm{min}\left\{f\left(x\right)\text{ such that }l\le x\le u\right\},$` (9-7)

where l is a vector of lower bounds, and u is a vector of upper bounds.
Some (or all) of the components of l can be equal to –∞ and some (or all) of the components of u can be equal to ∞. The method generates a sequence of strictly feasible points. Two techniques are used to maintain feasibility while achieving robust convergence behavior. First, a scaled modified Newton step replaces the unconstrained Newton step (to define the two-dimensional subspace S). Second, reflections are used to increase the step size. The scaled modified Newton step arises from examining the Kuhn-Tucker necessary conditions for Equation 9-7, ${\left(D\left(x\right)\right)}^{-2}g=0,$ (9-8) where `$D\left(x\right)=\text{diag}\left({|{v}_{k}|}^{-1/2}\right),$` and the vector v(x) is defined below, for each 1 ≤ i ≤ n: • If gi < 0 and ui < ∞ then vi = xi – ui • If gi ≥ 0 and li > –∞ then vi = xi – li • If gi < 0 and ui = ∞ then vi = –1 • If gi ≥ 0 and li = –∞ then vi = 1 The nonlinear system Equation 9-8 is not differentiable everywhere. Nondifferentiability occurs when vi = 0. You can avoid such points by maintaining strict feasibility, i.e., restricting l < x < u. The scaled modified Newton step sk for the nonlinear system of equations given by Equation 9-8 is defined as the solution to the linear system $\stackrel{^}{M}D{s}^{N}=-\stackrel{^}{g}$ (9-9) at the kth iteration, where $\stackrel{^}{g}={D}^{-1}g=\text{diag}\left({|v|}^{1/2}\right)g,$ (9-10) and $\stackrel{^}{M}={D}^{-1}H{D}^{-1}+\text{diag}\left(g\right){J}^{v}.$ (9-11) Here Jv plays the role of the Jacobian of |v|. Each diagonal component of the diagonal matrix Jv equals 0, –1, or 1. If all the components of l and u are finite, Jv = diag(sign(g)). At a point where gi = 0, vi might not be differentiable. ${J}_{ii}^{v}=0$ is defined at such a point. Nondifferentiability of this type is not a cause for concern because, for such a component, it is not significant which value vi takes. Further, |vi| will still be discontinuous at this point, but the function |vigi is continuous. Second, reflections are used to increase the step size. A (single) reflection step is defined as follows. Given a step p that intersects a bound constraint, consider the first bound constraint crossed by p; assume it is the ith bound constraint (either the ith upper or ith lower bound). Then the reflection step pR = p except in the ith component, where pRi = –pi.
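As a toy illustration of the KKT conditions discussed above (and emphatically not the quadprog implementation itself): for an equality-constrained convex QP the KKT conditions collapse to a single linear system, which can be solved directly. A minimal NumPy sketch on an arbitrary two-variable problem, using the same sign convention as the dual residual rd above:

```python
# Solve  min 0.5*x'Hx + c'x  subject to  Aeq*x = beq
# by solving the KKT system  [H  -Aeq'; Aeq  0] [x; y] = [-c; beq],
# so that the dual residual Hx + c - Aeq'*y and Aeq*x - beq both vanish.
import numpy as np

H   = np.array([[2.0, 0.5], [0.5, 1.0]])   # symmetric positive definite
c   = np.array([-1.0, -2.0])
Aeq = np.array([[1.0, 1.0]])               # single constraint x1 + x2 = 1
beq = np.array([1.0])

n, m = H.shape[0], Aeq.shape[0]
KKT = np.block([[H, -Aeq.T], [Aeq, np.zeros((m, m))]])
rhs = np.concatenate([-c, beq])

sol = np.linalg.solve(KKT, rhs)
x, y = sol[:n], sol[n:]                    # primal solution and Lagrange multiplier

print("x =", x, "lambda =", y)
print("residuals:", H @ x + c - Aeq.T @ y, Aeq @ x - beq)   # both ~ 0
```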
# Buck converter without load - is it dangerous in long term? As a followup to my previous questions (Q1 - circuit diagram is here, Q2). I disassembled the LED bulbs, cleaned and resoldered the components on the DC-DC converter boards, replaced caps, and chandelier-assembly seems to work well. Background: Originally LED bulb is an assembly comprising of both DC-DC converter and LEDs, and (as I now clearly see) is not serviceable. If any part explodes within the bulb, manufacturer does not expect that bulb's chassis explodes (however while plastic it is made of is not flammable, gases are having hard times getting out of it). Now, when I separated converter and LEDs, I must ensure that any of them do not fail making fire or smoke and dirt. The issue: when I remove the bulb (which is having LEDs only now), converter remains without load. I was lucky to notice that in this case voltage at its output goes at approx 85 V, and output capacitor rated for 50 V heated and was about to explode. I have two same model bulbs with different converters: BP2832A-based (board revision 1.0, English, Chinese) and DU8671-based (board revision 1.1, Chinese). While BP2832A says nothing about nature of its output, DU8671 has much better datasheet which says that output of the circuit based on it should be 40 Vdc ~ 80 Vdc on page 5, above the circuit diagram (more or less same reading I get without load). I suspect (maybe wrongly) that BP2832A should have the same range, as long as I also measure its output to be about 85 V without load. Question: It seems that buck converter is not designed to work without load, right? And resistor of 30 kOhm at output does not make a difference. I suspect that converter tries to reach nominal current set by Rcs resistor, within the defined range (40-80V), and if it does not reach this current, it stays with the clock frequency at the upper limit of voltage level. Is it OK to run converter in this mode? Let's say I will up-rate output cap from 50 V to 100 V so that it withstands max voltage on the output without heating and explosion, but in general, like in my case - if bulb containing LEDs is removed from the socket, can this no-load mode be harmful for buck converter chip in long term, flyback diode and choke? How much current device is expected to consume in this mode? What are the pitfalls from your experience? Question 2: How this voltage 40Vdc ~ 80Vdc range in DU8671 for circuit of page 5 is calculated? Is it based on Tleb and Tdelay timings? Or Toff/Ton? P.S. Bulb's circuit, based on DU8671, is having two parallel 3.3 Ohm resistors on its CS pin, abd my current measurements (120 mA) matches result of formula on page 5. However another bulb based on BP2832A is having the same output current (measured) with almost the same setting, however its formula is missing 2 in the denominator! found the answer, BP2832A's has two formulas with one Ipk and another Iled which is Ipk/2... Update: here're results of the project Board from the bottom, power routing Board from the top And within chandelier assembly Bulbs (with LEDs only) are still heating, but I would say they are ~80 C. Central hub heats a little, but still can be touched by the hand. The design still needs to be tested for durability though. • Schematic? Overview? – winny Jan 31 '18 at 22:03 • Circuit diagram is in Q1, edited. Generally the typical circuit diagram is drawn in both datasheets for BP2832A and for DU8671. The only difference with mine are values of resistors. 
– Anonymous Jan 31 '18 at 22:08

Usually a DC-DC converter for an LED string will be a current regulator, not a voltage regulator. A current regulating converter will use feedback to adjust the switching duty cycle up or down until the load current matches a setpoint. If the load (LED) is removed then the converter will measure no load current and just keep increasing the output voltage. Running a current regulator without a load typically won't damage it, but the voltage will go up until the over-voltage threshold is reached. You should be OK as long as you rate the output capacitors above the over-voltage setpoint. Also, the over-voltage setpoint is usually adjustable on most converter chips by changing a resistor, so you could try reducing the setpoint rather than increasing the capacitor ratings.

• Actually the 50 V cap is already very close to the limit, as the converter outputs 42 V under normal load. Any idea (looking at the datasheets) which resistor to tackle to change this over-voltage setpoint? – Anonymous Jan 31 '18 at 23:13
• @Anonymous The datasheet says that the over-voltage protection level is set by the resistor connected to the ROVP pin (which is pin 2 on the chip). The correct value can be found by using the formula on page 6 of the datasheet. – user4574 Feb 1 '18 at 0:13
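To make the CS-pin arithmetic from the question concrete, here is the back-of-the-envelope version. The 0.4 V current-sense reference is an assumed typical value for this class of LED-driver chips, not a figure taken from either datasheet, so treat the numbers as illustrative only:

```python
# Hypothetical check of the LED current set by the CS resistor(s).
# Vcs_ref = 0.4 V is an assumed typical sense threshold, NOT taken from the datasheet here.
vcs_ref = 0.4                      # volts (assumption)
r_cs = (3.3 * 3.3) / (3.3 + 3.3)   # two 3.3 ohm resistors in parallel -> 1.65 ohm

i_peak = vcs_ref / r_cs            # peak inductor current limit, about 0.24 A
i_led = i_peak / 2                 # average LED current for this buck topology, about 0.12 A

print(f"Rcs = {r_cs:.2f} ohm, Ipk = {i_peak*1000:.0f} mA, Iled = {i_led*1000:.0f} mA")
```

With those assumed values the result lands at roughly 120 mA, consistent with the measurement reported in the question.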
# zbMATH — the first resource for mathematics Boolean powers and stochastic spaces. (English) Zbl 0789.03038 We investigate the relationship between the Boolean power $$\mathbb{R}[\mathbb{B}]$$ of $$\mathbb{R}$$ and the elementary stochastic space $$E$$ in the sense of D. Kappos [Probability algebras and stochastic spaces (1969; Zbl 0196.185)]. We obtain here that these two spaces are isomorphic. In this way, we obtain a stochastic interpretation of the Boolean power structure. The development is similar to Takeuti’s Boolean analysis. The main difference lies in the fact that we use a full Boolean-valued model, known as Boolean power, and a two-step procedure: First we develop a restrictive model (a discrete or a kind of first order model), the Boolean power in which all the axioms of the reals can be transferred immediately, and then we complete it using Cauchy sequences or Dedekind cuts in order to get a model isomorphic to the stochastic space $$V$$. In this way, we avoid the general Scott-Solovay model and we get instead a model which is more appropriate for generalizing the Robinsonian Infinitesimal Analysis to Boolean Analysis. ##### MSC: 03C90 Nonclassical models (Boolean-valued, sheaf, etc.) 60B99 Probability theory on algebraic and topological structures 03H05 Nonstandard models in mathematics Full Text: ##### References: [1] GRÄTZER G.: Universal Algebra. (2nd edition), Springer-Verlang, Berlin, 1979. · Zbl 0412.08001 [2] HALMOS P. R.: Measure Theory. Van Nostrand, Reinhold, Toronto, 1950. · Zbl 0040.16802 [3] KAPPOS D.: Probability Algebras and Stochastic Spaces. Academic Press. New York, 1969. · Zbl 0196.18501 [4] MANSFIELD R.: The theory of Boolean ultrapowers. Ann. Math. Logic 2 (1971), 297-323. · Zbl 0216.29401 · doi:10.1016/0003-4843(71)90017-9 [5] POTTHOLF K.: Boolean ultrapowers. Arch. Math. Logic 16 (1974), 37-48. · Zbl 0285.02045 · doi:10.1007/BF02025117 · eudml:137883 [6] SCOTT D.: A proof of the independence of the Continuum hypothesis. Math. Systems Theory 2 (1967), 89-111. · Zbl 0149.25302 · doi:10.1007/BF01705520 [7] SCOTT D.: Boolean models and non-standard analysis. Applications of Model Theory to Algebra, Analysis and Probability (W. A. J. Luxemburg. Holt. Reinhart & Winston 1969. [8] STROYAN K.-LUXEMBURG W. A. J.: Introduction to the Theory of Infinitesimals. Academic Press, 1976. · Zbl 0336.26002 [9] TAKEUTI G.: Two Applications of Logic to Mathematics. Iwanami & Princeton Univ. Press, 1978. · Zbl 0393.03027 · doi:10.1515/9781400871346 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
# Logic Module¶ ## Introduction¶ The logic module for SymPy allows to form and manipulate logic expressions using symbolic and boolean values. ## Forming logical expressions¶ You can build boolean expressions with the standard python operators & (And), | (Or), ~ (Not): >>> from sympy import * >>> x, y = symbols('x,y') >>> y | (x & y) Or(And(x, y), y) >>> x | y Or(x, y) >>> ~x Not(x) You can also form implications with >> and <<: >>> x >> y Implies(x, y) >>> x << y Implies(y, x) Like most types in SymPy, Boolean expressions inherit from Basic: >>> (y & x).subs({x: True, y: True}) True >>> (x | y).atoms() set([x, y]) ## Boolean functions¶ sympy.logic.boolalg.to_cnf(expr)[source] Convert a propositional logical sentence s to conjunctive normal form. That is, of the form ((A | ~B | ...) & (B | C | ...) & ...) Examples >>> from sympy.logic.boolalg import to_cnf >>> from sympy.abc import A, B, D >>> to_cnf(~(A | B) | D) And(Or(D, Not(A)), Or(D, Not(B))) class sympy.logic.boolalg.And[source] Logical AND function. It evaluates its arguments in order, giving False immediately if any of them are False, and True if they are all True. Examples >>> from sympy.core import symbols >>> from sympy.abc import x, y >>> x & y And(x, y) Attributes nargs class sympy.logic.boolalg.Or[source] Logical OR function It evaluates its arguments in order, giving True immediately if any of them are True, and False if they are all False. Attributes nargs class sympy.logic.boolalg.Not[source] Logical Not function (negation) Note: De Morgan rules applied automatically Attributes nargs class sympy.logic.boolalg.Xor[source] Logical XOR (exclusive OR) function. Attributes nargs class sympy.logic.boolalg.Nand[source] Logical NAND function. It evaluates its arguments in order, giving True immediately if any of them are False, and False if they are all True. Attributes nargs class sympy.logic.boolalg.Nor[source] Logical NOR function. It evaluates its arguments in order, giving False immediately if any of them are True, and True if they are all False. Attributes nargs class sympy.logic.boolalg.Implies[source] Logical implication. A implies B is equivalent to !A v B Attributes nargs class sympy.logic.boolalg.Equivalent[source] Equivalence relation. Equivalent(A, B) is True if and only if A and B are both True or both False Attributes nargs class sympy.logic.boolalg.ITE[source] If then else clause. Attributes nargs ## Inference¶ This module implements some inference routines in propositional logic. The function satisfiable will test that a given boolean expression is satisfiable, that is, you can assign values to the variables to make the sentence True. For example, the expression x & ~x is not satisfiable, since there are no values for x that make this sentence True. On the other hand, (x | y) & (x | ~y) & (~x | y) is satisfiable with both x and y being True. >>> from sympy.logic.inference import satisfiable >>> from sympy import Symbol >>> x = Symbol('x') >>> y = Symbol('y') >>> satisfiable(x & ~x) False >>> satisfiable((x | y) & (x | ~y) & (~x | y)) {x: True, y: True} As you see, when a sentence is satisfiable, it returns a model that makes that sentence True. If it is not satisfiable it will return False sympy.logic.inference.satisfiable(expr, algorithm='dpll2')[source] Check satisfiability of a propositional sentence. Returns a model when it succeeds Examples: >>> from sympy.abc import A, B >>> from sympy.logic.inference import satisfiable >>> satisfiable(A & ~B) {A: True, B: False} >>> satisfiable(A & ~A) False
# Symmetry breaking and Superfluid - Mott Insulator transition I know my question is similar to what mentioned in this post: Symmetry breaking in Bose-Hubbard model. Yet, I don't find it clear. I've in mind a 1D Bose-Hubbard Hamiltonian. Moving from the Mott Insulator phase to the Superfluid Phase, a spontaneuos symmetry breaking occurs. What does it mean? Can you provide a clear explanation of it? • I invite you to check the following posts, where I gave answer for the case of normal metal / superconducting phase transition : physics.stackexchange.com/a/134410/16689 , physics.stackexchange.com/a/69490/16689 , physics.stackexchange.com/q/306515/16689. Please correct your question such that it becomes clear : you discuss spontaneous symmetry breaking, not symmetry breaking, and the transformation you write down explicitly dos not change the Hamiltonian at all. Hence it's trivial to say its solutions are unchanged ... – FraSchelle Feb 13 '17 at 9:20 • The problem is not about changing the phase locally or globally, it's about the possible conservation law following symmetries of the problem. For superfluidity the symmetry is U(1), i.e. you can change the global phase of the system without changing the Hamiltonian. It results (via Noether) that the number of particles is conserved. The spontaneous symmetry breaking related to a symmetry is associated to Goldstone phenomena. What I misunderstood, is that you want to apply a local phase rotation, which is usually known as a U(1) gauge transform (...) – FraSchelle Feb 13 '17 at 9:26 • If a Hamiltonian is invariant with respect to a gauge transform, one says it presents a gauge redundancy, not a symmetry, though both naming exists because of history reasons. A clear way to distinguish them is in the naming, the Goldstone phenomenon associated to symmetry breaking becomes a Higgs mechanism in the case of gauge redundancy. Anyways, the gauge redundancy is broken in superconductors (or charged superfluids if you prefer), in superfluids only the U(1) symmetry is broken. So your question is not clearly defined to many aspects unfortunately. – FraSchelle Feb 13 '17 at 9:29 • I apologize for the imprecisions. I'm quite new to quantum phase transitions. I try to modify my question in order to make it more clear. – AndreaPaco Feb 13 '17 at 9:46 • Thank for editing your question. Yes, the picture of aligning the phase along one direction is a good picture in order to understand the $U\left(1\right)\rightarrow\mathbb{Z}_{2}$ spontaneous symmetry breaking. – FraSchelle Feb 13 '17 at 15:23 $U(1)$ symmetry is not broken, it is spontaneously broken, meaning that although the Hamiltonian/Lagrangian might enjoy the symmetry the ground state does not. For example, the ferromagnetic/paramagnetic transition has a SO(3) symmetry that becomes spontaneously broken when you go into the ferromagnetic phase (the magnetic dipoles all point in the same direction), though we can still rotate the ground state with $SO(3)$ and get another ground state. So what is physically breaking the $U(1)$ symmetry in the superfluid phase? It is the quantum-mechanical phase of the bosons; the ground state of the superfluid aligns the quantum-mechanical phases of the bosons in a particular direction. In the insulating phase, the phases do not align in the ground state • Yes. For example: in magnetic (Ising) system, when the temperature is decreased below a certain crtitical temperature $T_c$ a non zero spontaneous magnetization shows up. 
Provided that no external magnetic field is applied, the direction of this spontaneous magnetization can be either up or down, with 50% probability. The fact that the spins (partially) align in a specific direction is indeed the symmetry breaking, because in the Hamiltonian there is no privileged direction. Can you please explain in a similar way the symmetry breaking relevant to the SF/MI phase transition? – AndreaPaco Feb 12 '17 at 0:59 • Yes. The point is that in the ground state, the relative phase difference between sites is always $0$, and they will evolve in time coherently. At $t=0$, the phases will all lock on to a particular value (which breaks the symmetry), and from that point onward everything evolves together. – Aaron Feb 12 '17 at 18:19
# Rydberg Formula

## Rydberg Formula

The Rydberg formula is a mathematical formula used to predict the wavelength of light resulting from an electron moving between energy levels of an atom. If the state of an electron in a hydrogen atom is slightly perturbed, then the electron can make a transition to another stationary state, and the transition will emit a photon with a certain wavelength. When an electron shifts from an orbital with high energy to a lower energy state, a photon of light is generated; a photon of light is absorbed by the atom when the electron moves from a low energy to a higher energy state. The Rydberg formula is given by: $$\frac{1}{\lambda }=R{{Z}^{2}}\left( \frac{1}{n_{1}^{2}}-\frac{1}{n_{2}^{2}} \right)$$  ; Where, $$\lambda$$ = Wavelength of the photon, $$R$$ = Rydberg Constant =$$1.097\times {{10}^{7}}{{m}^{-1}}$$, $$Z$$ = Atomic number of the atom, $${{n}_{1}}$$ And $${{n}_{2}}$$ are integers, where$${{n}_{2}}>{{n}_{1}}$$. How to find the wavelength using the Rydberg formula? Problem: Find the wavelength of the electromagnetic radiation that is emitted when an electron relaxes from n=3 to n=1. Solution: Given, $$Rydberg\,\,Cons\tan t(R)=1.0974\times {{10}^{7}}{{m}^{-1}}$$, $$Z=1$$, $${{n}_{1}}=1\,\,\,\And \,\,\,{{n}_{2}}=3$$, Rydberg formula: $$\frac{1}{\lambda }=R{{Z}^{2}}\left( \frac{1}{n_{1}^{2}}-\frac{1}{n_{2}^{2}} \right)$$, $$\frac{1}{\lambda }=1.0974\times {{10}^{7}}\left( \frac{1}{{{1}^{2}}}-\frac{1}{{{3}^{2}}} \right)=1.0974\times {{10}^{7}}\left( \frac{1}{1}-\frac{1}{9} \right)$$, $$\frac{1}{\lambda }=1.0974\times {{10}^{7}}(0.889)=0.9755886\times {{10}^{7}}$$, $$\therefore \,\,Wavelength(\lambda )=1.025\times {{10}^{-7}}m.$$.
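The same calculation, wrapped as a small function (plain Python, using the constant quoted above):

```python
# Wavelength of the photon emitted when an electron drops from n2 to n1 (Rydberg formula).
R = 1.0974e7  # Rydberg constant, m^-1

def rydberg_wavelength(n1, n2, Z=1):
    """Return the emitted wavelength in metres for a transition n2 -> n1 (n2 > n1)."""
    inv_lambda = R * Z**2 * (1.0 / n1**2 - 1.0 / n2**2)
    return 1.0 / inv_lambda

print(rydberg_wavelength(1, 3))   # ~1.03e-07 m, matching the worked example above
```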
# Win Probability Of Poker Hands This produces 7-card hands with 3 pairs. More hot questions question feed Mathematics Tour Help Chat Contact Feedback Mobile Company Stack Overflow Stack Overflow Business Developer Jobs About Press Legal Privacy Policy Stack Exchange Network Technology Life / Arts Culture / Recreation Science Other Stack Overflow Server Fault Super User Web Applications Ask Ubuntu Webmasters Game Development TeX - LaTeX Software Engineering Unix & Linux Ask Different (Apple) WordPress Development Geographic Information Systems Electrical Engineering Android Enthusiasts Information Security Database Administrators Drupal Answers SharePoint User Experience Mathematica Salesforce ExpressionEngine® Answers Stack Overflow em Português Blender Network Engineering Cryptography Code Review Magento Software Recommendations Signal Processing Emacs Raspberry Pi Stack Overflow на русском Programming Puzzles & Code Golf Stack Overflow en español Ethereum Data Science Arduino Bitcoin more (30) Photography Science Fiction & Fantasy Graphic Design Movies & TV Music:There are sets of 5 distinct ranks from which we must remove the 10 sets corresponding to straights.     Odds of being dealt two cards higher than J      The probability of being dealt two cards higher than J is 3.619%. Blaise Pascal (1623-1662) also contributed to probability theory. This probability is 0422569. Hot Network Questions How to help a friend through a rough breakup when his actions are testing the limits of my patience?It would be simple if all one had to do to become a winning player was to memorize the following Poker tables.Your browser or device may not support Javascript or it may be disabled. ## The player to the left of the big blind must either call or raise the big blind bet Two cards shall be dealt down to each player, starting with the person to the dealer's left. Portland Maine Poker Club We offer a hold'em poker odds calculator, an Omaha odds calculator, a Parx Casino Online Promo Code free poker tracker, hand Learn your exact chances of winning in any given hand. Only the players who have not folded have a chance to win the round. Gun Lake Casino Restaurants Select all 4 Wpt La Poker Classic Main Event 2019 suits for those cards.Cite as: All-in: Probabilities of Poker Hands Each of the 2,598,960 possible hands of poker win probability of poker hands is equally likely when dealt 5 cards from a standard poker deck.Probability of facing a better A a) The probability worst casino house edge than a specific opponent will have AA when you have an Ax hand There are 50 cards remaining (you hold two, one of which is an ace), three of which are aces. nunneleygroup.com There is win probability of poker hands usually a limit to the number of raises a player may make, typically three.For example 8 drinking roulette allegro , 9 . $P(\text{High Card Hand})=\frac{1302540}{2598960}=\frac{1277}{2548}.\ _\square$ Probability of One Pair Hand Show Probability $P(\text{One Pair Hand})=\frac{352}{833}\approx 0.422569$ Show Computation $\text{One Pair Hand Frequency}=\binom{13}{1}\binom{4}{2}\binom{12}{3}\binom{4}{1}^3=1098240$ First select 1 rank out of the 13 for the pair. For example 7 , . Westfield Casino Phone Number Probability of seeing a specific flop This calculation does not take your or your opponent's cards into consideration, but calculates the probability of seeing a specific flop with 52 cards in the deck.P represents the probability you wish to convert to odds. 
1 If your hole cards are suited, and there are two more of your juegos gratis del casino tragamonedas cleopatra suit on win probability of poker hands the board, you can most often treat any flush as the nuts since it's very rare that you will be up against another person with two hole cards of your suit. Closest Casino To Wenatchee Wa ## Blackjack Gioco Gratis Italiano Any five cards that do not form any higher poker hand. Chances of Holding Various Poker Hands in the First Five Cards Dealt When the Joker is Wild Making a 53-Card Pack Table 9 Rank of Hands Number of Possible Ways Hand can be Made Chance of Being Dealt in Original 5 Cards Five of a Kind 13 1 in 220,745.0 Royal Flush 24 1 in 119,570.2 Straight Flush 216 1 in 13,285.5 Four of a Kind 3,120 Full House http://www.natesholdem.com/pre-flop-odds.php Flop Odds, Probability, Texas Holdem Poker, Tips, Odds, Tells Nate's Holdem Classic: Roulette Programmieren We could determine the number of high card hands by removing the hands which have already been counted in one of the previous categories.       Odds of being dealt AK suited      The probability of being dealt AK suited is 0.301%. All cards count as its poker value. https://suedebar.co.uk/camrose-casino-buffet Http://www.pokercalculatoronline.com/ Poker Odds Calculator - PokerCalculatorOnline.com This is not your typical poker odds calculator. Then select a rank (out of the remaining 12) and a suit for the final card in the hand. At PokerStars, we deal many varieties of poker, some of which use different hand rankings. Using the CardsChat odds calculator effectively gives you a simple snapshot of what you can expect and, like any good poker calculator, makes it easy to make the right decision quickly. For example A . Probability of improving on the river Again, simple odds and outs. Texas Hold'em, specifically No Limit Texas Hold'em.Probabilities of Poker Hands Each of the 2,598,960 possible hands of poker is equally likely when dealt 5 cards from a standard poker deck. If your table is loose, as if often the case online, you can play a bit looser yourself. When Will Online Gambling Be Legal In South Africa Because of this, one can use probability by outcomes to compute the probabilities of each classification of Maxim Lykov Official Poker Rankings poker hand. What are the odds? https://mnvsports.rs/leos-casino-liverpool-menu The player to the dealer's left must win probability of poker hands make a "small blind" aristocrat slots itunes bet. 81.9:1 Example 2:The following tables look at two win probability of poker hands party poker reviews different sets of rules. Poker Odds Calculator - Free Texas Hold'em Poker Odds Calculator | PokerNews Poker probabilities - Statistics Odds Calculator Poker probability - Wikipedia statistics - How do I programmatically calculate Poker Odds Poker odds calculate the chances of you holding a winning hand. ## Sc2 Poker Defence Revolution Merge List Odds of being dealt AA or KK      The probability of being dealt AA or KK is 0.904%. During each round of play, players are dealt cards from a standard 52-card deck, and the goal of each player is to have the best 5-card hand at the .. 
In such a case the following formula can be used: In any variant of poker, the aim is to constitute a hand of five cards, making You can go on and ask the probability of winning the hand at It would be simple if all one had to do to become a winning player was to memorize the following Poker tables.Poker Combinations for 1 to 8 DecksExpand Hand 1 8 5 of a kind 728 1179648 4 of a kind 624 334233600 3 of a kind 54912 Infinite Decks The following table shows the number of combinations if each card was dealt from a separate deck, which would be mathematically equivalent to an infinite number of decks. Free Roulette High Limit Then, select 2 distinct suits out of the 4 for each of those pairs. Learn as you play Someone that is lacking experience win probability of poker hands should be looking to learn with every hand, with a poker limpar slot memoria ram calculator that becomes much easier. So eliminating identical hands that ignore relative suit values, there are only 134,459 distinct hands.5-Card Stud with Partially Wild Joker Hand Probability five of a kind 1 0.000063 four of a kind 624 0.007155 3 of a kind 54912 1 5-Card Stud with Fully-Wild Joker Hand Probability five of a kind 13 0.000063 four of a meadows casino bridal shower kind win probability of poker hands 624 0.007155 3 of a kind 54912 1 The next table shows the combinations and probability with two fully-wild jokers. Determined to know why his strategy was unsuccessful, he consulted with Pascal. ( ). In 1494, Fra Luca Paccioli released his work Summa de arithmetica, geometria, proportioni e proportionalita which was the first written text on probability. How should I calculate odds in my head quickly? Three-of-a-kind:Such Lock Poker Merge Network a hand must have 6 distinct ranks. Party Pooper Casino For example, if I'm drawing both to a set and to a flush, e.g. Why so few deaths win probability of poker hands of final fantasy a realm reborn vanity slots Americans in the Bataan Death March? $P(\text{High Card casino en ligne roulette avis Hand})=\frac{1302540}{2598960}=\frac{1277}{2548}.\ _\square$ Probability of One Pair Hand Show Probability $P(\text{One Pair Hand})=\frac{352}{833}\approx 0.422569$ Show Computation $\text{One Pair Hand Frequency}=\binom{13}{1}\binom{4}{2}\binom{12}{3}\binom{4}{1}^3=1098240$ First select 1 rank out of the 13 for the win probability of poker hands pair. Only the top five cards matter. ## You can then come back here and specify hand ranges for any of the players you like • This means there are 45 - 34 = 990choices not producing a flush. • NEW888. • Copy Bloques Autocad Casino Gratis every other value from a column An Englishman Playing 'Ell With the Great Lakes! • $P(\text{Royal Flush Hand})=\frac{4}{2598960}=\frac{1}{649740}.\ _\square$ Each of these probabilities assumes that you are only dealt 5 cards.Examples , 2 Tie. •      Odds of being dealt two suited cards      The probability of being dealt two suited cards is 23.529%. • Notice that this category includes 75% of all hands dealt.The frequencies are calculated in a manner similar to that shown for 5-card hands, except additional complications arise due to the extra two cards in the 7-card poker hand. 
• 5-card lowball, no straights or flushes (table fragment): straight flush ..., four of a kind 624, straight ..., three of a kind 54,912 ...
• 5-card lowball, straights and flushes enforced (table fragment): ... 0.000014, four of a kind 624 (0.003925), three of a kind 54,912 ...
• Omaha: in Omaha the player may use any 2 of his own 4 cards, and any 3 of the 5 community cards, to form the best high and the best low poker hand.
• The binomial coefficient can be used to calculate certain combinations of cards.
• The poker odds chart below shows the probabilities of obtaining various winning hands in Texas Hold'em poker.
• Calculate how much money you could win and the odds you're getting: the probabilities calculated below are based on drawing 5 cards from a shuffled poker deck.

1. New and improved Poker Odds Calculator.
2. Odds against improving the hand in draw poker when drawing two cards to three of a kind (Table 5): odds against any improvement 8.5 to 1; odds against making a full house 15.5 to 1; odds against making four of a kind 22.5 to 1. Chances of improving the hand in draw poker when drawing one card to three of a kind plus a kicker (Table 6): odds against any improvement 11 to 1; odds against making a full house 15 to 1; odds against making four of a kind 46 to 1. These two tables show that the best chance for improvement with three of a kind is to draw two cards and not hold a kicker. Knowing your odds of winning at any point in a hand is a good base of understanding, but poker is a game of incomplete information and you won't have access to your opponent's actual hand to make your decisions.
3. This is a really far-fetched draw, and our only reason for including it is to show just how far-fetched it is.
4. For instance, with a royal flush, there are 4 ways to draw one, and 2,598,956 ways to draw something else (2,598,960 - 4), so the odds against drawing a royal flush are 2,598,956 : 4. Pot odds: 4 to a flush, 19.57% (4.11 to 1); 4 to an outside straight, 17.39% (4.75 to 1); 4 to an inside straight, 8.70%. Hand Strength Calculator: I'm proud to present my new and improved Poker Odds Calculator.
5. This card is called the "river." Another round of betting will ensue, starting with the player to the dealer's left.
6. E.g. AK vs 99 or AJ vs 77.
7. If we sum the preceding numbers, we obtain 133,784,560 and we can be confident the numbers are correct.

• Here you can select suit combinations for any of your selections from the previous tab.
• The types of 5-card poker hands in decreasing rank are: straight flush, 4 of a kind, full house, flush, straight, 3 of a kind, two pairs, a pair, high card. The total number of 7-card poker hands is $\binom{52}{7} = 133{,}784{,}560$.
• If you have a draw other than the ones we've listed above, and want to figure out your odds for it, this is the way.
• @SubhadeepDey That's alternative notation for the binomial coefficient. There are three main types.
• Frequency of 7-card poker hands: in some popular variations of poker such as Texas Hold 'Em, a player uses the best five-card poker hand out of seven cards.

## For poker players, stochastics is the most interesting part of studying probability

1. It is NOT a requirement that the player use both of his own cards.
2. The number of distinct 5-card poker hands that are possible from 7 cards is 4,824. The probability is 0.000240.
3. There are many ways to calculate different kinds of poker odds.
4.
This gives us flushes with 6 suited cards. For example, if I'm drawing both to a set and to a flush, e.g.
• This means duplicate counting can be troublesome, as can omission of certain hands.
• IF YOU MEAN TO EXCLUDE ROYAL FLUSHES, SUBTRACT 4 (SEE THE NEXT TYPE OF HAND):
• Four-card stud with two jokers (fragment): four of a kind 195 (0.023032), three of a kind 10,488 ... The next table is for a seven-card stud game with one fully wild joker.
• Finally, suppose we have 5 cards in the same suit.
• Probability calculation for a variant of poker where you get dealt a seven-card hand and use all of the cards.
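The hand frequencies quoted above all come from straightforward binomial-coefficient counts. As a minimal sketch (my own illustration, not taken from any of the calculators mentioned above), Python's math.comb reproduces two of the figures, the one-pair frequency and the royal-flush odds:

```python
from math import comb

total_hands = comb(52, 5)                                   # 2,598,960 five-card hands

one_pair = comb(13, 1) * comb(4, 2) * comb(12, 3) * 4 ** 3  # 1,098,240
royal_flush = 4                                             # one per suit

print(one_pair, one_pair / total_hands)      # 1098240, about 0.4226
print(royal_flush / total_hands)             # about 1.539e-06, i.e. 1 in 649,740
```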
1. ## Definite Integral Any hints on where to start? Integral from 1 to 0 of (x*e^x)/(x+1)^2 dx

2. Originally Posted by millerst Any hints on where to start? Integral from 1 to 0 of (x*e^x)/(x+1)^2 dx Make the following observation $\frac{x\cdot e^x}{(1+x)^2}=\frac{e^x}{1+x}-\frac{e^x}{(1+x)^2}$

3. Or we should say that clearly, $\frac{xe^{x}}{(1+x)^2} = \frac{ e^x (x+1) - e^x }{ (1+x)^2}$ Let $D$ be $\frac{d}{dx}$ (the differential operator) $\frac{ e^x (x+1) - e^x }{ (1+x)^2} = \frac{ (x+1) D(e^x) - D(x+1)\, e^x }{ (1+x)^2 }$ can you see something special?

4. Originally Posted by simplependulum Or we should say that clearly, $\frac{xe^{x}}{(1+x)^2} = \frac{ e^x (x+1) - e^x }{ (1+x)^2}$ Let $D$ be $\frac{d}{dx}$ (the differential operator) $\frac{ e^x (x+1) - e^x }{ (1+x)^2} = \frac{ (x+1) D(e^x) - D(x+1)\, e^x }{ (1+x)^2 }$ can you see something special? That works too. But I was actually making the observation $\frac{xe^x}{(1+x)^2}=\frac{e^x}{1+x}-\frac{e^x}{(1+x)^2}=D\left(e^x\right)\cdot\frac{1}{1+x}+e^x\cdot D\left(\frac{1}{1+x}\right)$ Same thing, two different ways.
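For completeness, here is the finishing step the thread stops short of; this is a standard evaluation rather than something the original posters wrote, and it reads the limits as 0 to 1. Both observations above say that the integrand is exactly $\frac{d}{dx}\left(\frac{e^x}{1+x}\right)$, so

$$\int_0^1 \frac{x e^x}{(1+x)^2}\,dx = \left[\frac{e^x}{1+x}\right]_0^1 = \frac{e}{2} - 1 \approx 0.359.$$

If the limits really are taken from 1 to 0 as written, the value is $1 - \frac{e}{2}$.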
Let $G = (V, T, S, P)$ be a context-free grammar such that every one of its productions is of the form $A \rightarrow ν$, with $|ν| = k > 1$. The derivation tree for any string $W \in L (G)$ has a height $h$ such that 1. $h < \frac{(|W|-1)}{k-1}$ 2. $\log_{k} |W| \leq h$ 3. $\log_{k} |W| < h < \frac{(|W|-1)}{k-1}$ 4. $\log_{k} |W| \leq h \leq \frac{(|W|-1)}{k-1}$ Comment: is it D?
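A sketch of the standard argument (my own summary, not part of the original question) supports option 4. Every internal node of the derivation tree has exactly $k$ children, so a tree of height $h$ has at most $k^h$ leaves, giving $|W| \leq k^h$, i.e. $\log_k |W| \leq h$. Conversely, a tree in which every internal node has $k$ children and which has $m$ internal nodes has $1 + m(k-1)$ leaves, and a longest root-to-leaf path already contributes at least $h$ internal nodes, so $|W| \geq 1 + h(k-1)$, i.e. $h \leq \frac{|W|-1}{k-1}$. Both bounds can be attained, so the non-strict inequalities of option 4 are the right ones.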
Prove the identity (hint: use the law of cosines): $\frac{\cos A}{a} = \frac{b^2 + c^2 - a^2}{2abc}$ Using the above fact, now prove (hint: what would cos B and cos C look like?): $\frac{\cos A}{a} + \frac{\cos B}{b} + \frac{\cos C}{c} = \frac{a^2 + b^2 + c^2}{2abc}$ 2. Originally Posted by adhesive Prove the identity (hint: use the law of cosines): $\frac{\cos A}{a} = \frac{b^2 + c^2 - a^2}{2abc}$ Using the above fact, now prove (hint: what would cos B and cos C look like?): $\frac{\cos A}{a} + \frac{\cos B}{b} + \frac{\cos C}{c} = \frac{a^2 + b^2 + c^2}{2abc}$ The Law of Cosines is $a^2 = b^2+c^2-2bc\cos A$ Re-arranging: $-2bc\cos A = a^2-b^2-c^2$ Multiplying both sides by -1: $2bc\cos{A}=b^2+c^2-a^2$ Dividing both sides by 2abc: $\frac{\cos{A}}{a}=\frac{b^2+c^2-a^2}{2abc}$ Similarly, by changing around variables, you can get: $\frac{\cos B}{b}=\frac{a^2+c^2-b^2}{2abc}$ $\frac{\cos C}{c} = \frac{a^2+b^2-c^2}{2abc}$ So if you simply add them all up: $\frac{\cos A}{a}+\frac{\cos B}{b} + \frac{\cos C}{c} = \frac{b^2+c^2-a^2}{2abc}+\frac{a^2+c^2-b^2}{2abc}+\frac{a^2+b^2-c^2}{2abc}$ $=\frac{a^2+b^2+c^2}{2abc}$ 3. Fantastic, thank you very much!
# mg.metric geometry – Equal products of triangle areas Claim. Given a hexagon circumscribed about an ellipse, let $$A_1,A_2,A_3,A_4,A_5,A_6$$ be the vertices of the hexagon and let $$B$$ be the intersection point of its principal diagonals. Denote the area of triangle $$\triangle A_1A_2B$$ by $$K_1$$, the area of $$\triangle A_2A_3B$$ by $$K_2$$, the area of $$\triangle A_3A_4B$$ by $$K_3$$, the area of $$\triangle A_4A_5B$$ by $$K_4$$, the area of $$\triangle A_5A_6B$$ by $$K_5$$, and the area of $$\triangle A_1A_6B$$ by $$K_6$$. Then $$K_1 \cdot K_3 \cdot K_5=K_2 \cdot K_4 \cdot K_6.$$
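Not a proof, but a quick numerical sanity check of the claim is easy to set up; the ellipse, the tangency parameters, and all names below are my own arbitrary choices. Take six tangent lines to an ellipse, intersect consecutive tangents to get the vertices, use the concurrency of the principal diagonals of a hexagon circumscribed about a conic (Brianchon's theorem) to locate B, and compare the two products of areas.

```python
import numpy as np

a, b = 3.0, 2.0                       # semi-axes of the ellipse x^2/a^2 + y^2/b^2 = 1
ts = [0.3, 1.1, 2.0, 3.1, 4.2, 5.3]   # parameters of the six tangency points

def tangent(t):
    # tangent at (a cos t, b sin t): (cos t / a) x + (sin t / b) y = 1
    return np.array([np.cos(t) / a, np.sin(t) / b]), 1.0

def meet(l1, l2):
    (c1, r1), (c2, r2) = l1, l2
    return np.linalg.solve(np.array([c1, c2]), np.array([r1, r2]))

lines = [tangent(t) for t in ts]
A = [meet(lines[i], lines[(i + 1) % 6]) for i in range(6)]   # hexagon vertices

def cross_point(P, Q, X, Y):
    # intersection of line PQ with line XY
    M = np.column_stack([Q - P, -(Y - X)])
    s = np.linalg.solve(M, X - P)[0]
    return P + s * (Q - P)

B = cross_point(A[0], A[3], A[1], A[4])   # diagonals A1A4, A2A5 (and A3A6) concur here

def area(P, Q, R):
    u, v = Q - P, R - P
    return 0.5 * abs(u[0] * v[1] - u[1] * v[0])

K = [area(A[i], A[(i + 1) % 6], B) for i in range(6)]
print(K[0] * K[2] * K[4], K[1] * K[3] * K[5])   # the two products agree to rounding error
```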
economics taxes inequality # Pigovian Taxes and Income [ See here for a more general and abstract (yet less mathy) proof. ] ## Background The standard economic response to a negative externality is a Pigovian tax. If a bee grower's bees sting the neighboring kids, then the government should impose a tax on beekeeping in residential areas, to discourage this. However, I believe this analysis is oversimplified, because economists make the implicit assumption that a dollar tax to you is the same as a dollar tax to me. This isn't true. A minimal amount of self-reflection should reveal that the billionth dollar you make isn't as valuable to you as the thirteenth. Expert opinion and studies show that income has diminishing returns in making you happy ("Diminishing marginal utility of income? Caveat emptor"). Altering this assumption fundamentally changes how Pigovian taxes should be levied. ## A Simple Model A simple way to model consumption is to theorize that Alice can divvy up her income to consume multiple goods. We will further assume that Alice's utility grows logarithmically with each good's consumption. From this, some calculus is enough to show that Alice will always spend the same amount on each good - regardless of how the price changes. To use the economic term, this implies that each good is unit elastic. This allows us to model each good individually and compute Alice's utility as $$U = \ln{\frac{I}{P+T}}$$ where $I$ is the income Alice has allocated to the good, $P$ is the price of the good, and $T$ is the tax levied on the consumption of the good. To consider Alice's contribution to social utility instead, we need to account for the revenue raised by a sales tax. To do this, we will define $R$ as the utility society gets from each marginal dollar of tax revenue. This gives us a social utility function: $$U = \ln{\frac{I}{P+T}} + RT\frac{I}{P+T}$$ Solving this for the optimal tax just requires taking the derivative of $U$ with respect to $T$, setting that derivative to 0, and then solving for $T$. This yields $$T = IRP - P$$ The main takeaway is that our model implies the tax rate should increase with income - that is, it implies a very progressive tax. This makes sense, because our model isn't incorporating any changes in labor decisions. This is a shortcoming of the model, but seeing as our primary purpose is to model Pigovian taxes, and these taxes are generally small as a proportion of income, this will probably have a minimal impact on our conclusions. ## Pigovian Taxes in the Model So, let's add a negative externality to our model that scales linearly with consumption of a good. We'll let $x$ denote the size of the externality. Alice's contribution to social utility becomes $$U = \ln{\frac{I}{P+T}} + RT\frac{I}{P+T} - x\frac{I}{P+T}$$ Using calculus to solve again yields $$T = IRP - P + Ix$$ Note that $IRP - P$ is the same tax as we got above (for a good with no externalities). The part of the tax that internalizes the externality is the $Ix$ term. This implies the sales tax should be increased in proportion to the size of both the externality and Alice's income. ## Generalizing Due to results like the fundamental theorems of welfare economics and the Atkinson-Stiglitz Theorem, the default response of most economists to calls for progressive Pigovian taxes is that they are irrelevant. The default belief is that policy should maximize efficiency rather than equality, with one exception: a progressive income taxation/transfer system.
So, for the above argument to be accepted as interesting, we have to show that even if we have an optimal income tax/transfer system, progressive Pigovian taxes are still desirable. Another way we should generalize this is to show the same results hold for goods that aren't unit-elastic. Unfortunately, generalizing in these two ways makes the math significantly more complicated. I actually kind of want to try to publish the results, but I'll merely put a sketch of the proof here. For simplicity, I'll use carbon emission as the thing to tax. Alright, here goes nothing... Suppose (1) the externality from CO2 is small relative to total income and (2) we live in a society with an optimal income tax system. Now, the general worry with any kind of Pigovian tax is that it will distort labor by altering the marginal and effective tax rates at different income levels. For instance, suppose rich people spend a smaller portion of their income on electricity. This implies that the Pigovian tax would fall as a percentage of income as income increases - that is, the Pigovian tax would raise effective marginal income tax rates. To get around this problem, we will give tax credits to people equal to the Pigovian tax applied to the average CO2 caused by people at their income level. For instance, if the average person making \$80k emits 2000 pounds of CO2 per year and the Pigovian tax is \$2 per pound, we'll give everyone making \$80k a \$4000 tax credit. In this way, we ensure the average marginal and effective tax rates at each income level remain the same. In plain English, this means the tax system is no more and no less progressive after the Pigovian tax is implemented. Beyond the average, though, people will face different marginal and effective tax rates depending on their CO2-income elasticity and CO2 consumption. However, if we assume the effect of taxes on labor is locally a line (i.e. the slope is non-zero and the second derivative is reasonably small), then the behavioral change caused by some people facing slightly higher effective tax rates will be perfectly offset by people at the same income level facing slightly lower effective tax rates. The overall effect on labor will be 0, and this strongly suggests that the original optimal income tax system will remain optimal after we add (a) the Pigovian tax and (b) the tax credit discussed above. Now we reach the final step. Nothing in the above analysis assumes the Pigovian tax is a fixed number. In fact, we can make it any function of income we want, and the above results hold so long as its size is small compared to a person's income. Typically, we set the Pigovian tax equal to the surplus lost by the externality. However, what we actually care about is the social welfare (utility) lost, not surplus. So, in order to properly internalize the externality, we need to convert from surplus lost to utility lost to induce the ideal change among consumers. If $u(x)$ yields the utility someone gets from income, then this conversion factor is just $u'(x)$. In particular, if $u(x) = \log(x)$, then $u'(x) = 1/x$, which implies the Pigovian tax should be proportional to income. If, on the other hand, $u(x) = -x^{-0.35}$ (my preferred model), then the Pigovian tax should be proportional to $x^{1.35}$.
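As a sanity check on the simple model from the earlier section, a few lines of symbolic algebra reproduce the two optimal-tax formulas. This is just an illustrative sketch; it assumes SymPy is available and reuses the symbols defined above.

```python
import sympy as sp

I, P, T, R, x = sp.symbols('I P T R x', positive=True)

# Social utility from one unit-elastic good plus the value of the tax revenue it raises
U = sp.log(I / (P + T)) + R * T * I / (P + T)
print(sp.solve(sp.Eq(sp.diff(U, T), 0), T))      # [I*P*R - P]

# Add a linear externality of size x per unit consumed
U_ext = U - x * I / (P + T)
print(sp.solve(sp.Eq(sp.diff(U_ext, T), 0), T))  # [I*P*R + I*x - P]
```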
## Conclusions and Caveats Now, I must admit there are large practical and political concerns with implementing a system based on the above analysis, but I think the claim that Pigovian taxes should be roughly proportional to income seems reasonable. In fact, at least one country levies traffic fines in proportion to your income, and since fines are effectively Pigovian taxes, this is a real-world example of such an implementation (though probably with less mathematical justification). Moreover, there are still some practical applications. For instance, since smokers tend to earn less than non-smokers ("The Economic Consequences of Being a Smoker"), the above analysis suggests that standard estimates of the optimal cigarette tax will be higher than the true optimum. Likewise, all of the above applies equally well to "sin taxes" that help internalize internalities. Wikipedia contributors. (2019, November 9). Pigovian tax. In Wikipedia, The Free Encyclopedia. Retrieved 21:46, November 9, 2019, from https://en.wikipedia.org/w/index.php?title=Pigovian_tax&oldid=925315154 Easterlin, R. A. (2005). Diminishing marginal utility of income? Caveat emptor. Social Indicators Research, 70(3), 243-255. https://doi.org/10.1007/s11205-004-8393-4 Hotchkiss, Julie L. and Pitts, M. Melinda. Even One is Too Much: The Economic Consequences of Being a Smoker (July 1, 2013). FRB Atlanta Working Paper Series 2013-3. http://dx.doi.org/10.2139/ssrn.2359224
Why is the line spacing different between these paragraphs? In the following two paragraphs, the line spacing seems to be wider in the bottom paragraph, and I can't for the life of me fathom why: The code for this part plus a little before and after is {\small \noindent\vb{Systems of First-Order Linear ODEs}\\ %parts omitted for brevity \vspace{10pt} \noindent\vb{Sturm-Liouville Theory}\\ Introduction and motivation to Sturm-Liouville problems, SL eigenvalues problems, expanding functions in eigenfunctions, convergence of expansions, applications to non- homogeneous BVPs and linear PDEs, transforming ODEs to SL form, singular SL problems and the Bessel equation, solving the wave equation in 2D. \vspace{10pt} \noindent\vb{The Laplace Transform}\\ Introduction to the Laplace transform, calculating the Laplace transform for various simple functions and products, using the Laplace transform to solve second-order linear ODEs, and ODEs with discontinuous or impulsive forcing, convolutions, application of convolution theorem to ODEs and integral equations. } \pagebreak \tableofcontents • why the low-level markup \noindent\vb{Systems of First-Order Linear ODEs}\\ instead of a section heading??? You also get warnings about the misuse of \\ which should never be used at the end of a paragraph. Dec 4 '21 at 9:56 Always include a blank line before ending the scope of a size change. The scope of \small ends at the closing }, but the second paragraph has not been typeset yet at that point, so it is set with \baselineskip restored to the value designed for \normalsize, even though it used the small font. Unrelated, but \vspace{10pt} \noindent\vb{The Laplace Transform}\\ generates warnings due to the misuse of \\ and does not include any of the features LaTeX normally uses for a heading, such as preventing a page break after the heading. It would be better marked up as \section*{The Laplace Transform} • Adding the blank line at the end of the \small section helps, thank you. Dec 4 '21 at 10:26
# Question #3ea48 Nov 14, 2015 $\approx 794\ \text{m}$ #### Explanation: The expression for the range is: $d = \frac{v \cos \theta}{g} \left(v \sin \theta + \sqrt{{\left(v \sin \theta\right)}^{2} + 2 g {y}_{0}}\right)$ ${y}_{0}$ is the height of launch which, in this case, is $114 \text{m}$. $\therefore d = \frac{90 \times 0.92}{9.8} \left(90 \times 0.39 + \sqrt{{\left(90 \times 0.39\right)}^{2} + \left(2 \times 9.8 \times 114\right)}\right)$ $d = 8.449 \left(35.1 + \sqrt{1232.0 + 2234.4}\right)$ $d \approx 794\ \text{m}$
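A quick numeric re-check of the range formula above, using the same inputs and nothing beyond Python's standard library:

```python
from math import sqrt

v, g, y0 = 90.0, 9.8, 114.0     # launch speed, gravity, launch height
cos_t, sin_t = 0.92, 0.39       # the cosine and sine values used in the answer

d = (v * cos_t / g) * (v * sin_t + sqrt((v * sin_t) ** 2 + 2 * g * y0))
print(round(d, 1))              # 794.0
```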
# Truth value of for all x in {} and there exist x in {} 1. Sep 12, 2005 ### agro truth value of "for all x in {}" and "there exist x in {}" Suppose T is a true statement. Now, given a nonempty set A, both the statement for all x in A, T and there exist x in A, T are true. However, let E be the empty set. What is the truth value of for all x in E, T and there exist x in E, T ? In the second chapter of Paul R. Halmos' book "Naive Set Theory", he stated that if the variable x doesn't appear in sentence S, then the statements for all x, S and there exist x, S both reduce to S. Is that something that is just agreed upon? In that case, the statement for all x in E, T reduces to T, which is true (even though there's nothing in E), and so does the statement there exist x in E, T (even though there exists nothing in E). I find that counterintuitive, although if it is indeed the agreed-upon rule, I think I just have to get used to it (but any justification would greatly help). Your comments? Thanks, Agro 2. Sep 12, 2005 ### HallsofIvy Staff Emeritus I would be inclined to phrase this as an implication: "If x is in set A, then x is in set A" is always true. In particular, if the hypothesis "x is in set A" is false, the implication is trivially true. That would apply to the empty set but also to other sets. Suppose A= {x,y} and you assert "if x is in set A, then x is in set A". Could I point to "z" and say since z is not in A your statement is false? Of course not. Similarly with A= {}. The hypothesis "x is in set A" is false for all x, so the implication is trivially true. 3. Sep 12, 2005 ### honestrosewater If T is a true statement, then there exist x in A and for all x in A are already superfluous - no matter what A is. In "For all x in A, Px", Px alone cannot be assigned a truth-value; you need to add For all x in A in order to form a statement that can be assigned a truth-value. However, Pb, where b is an individual, can be assigned a truth-value. So your example is just adding For all x in A to what is already a formula: For all x in A, Pb. Adding for all x makes no difference - there's no x in Pb. Does that make sense? 4. Sep 12, 2005 ### AKG It seems to me that to say there exists x in A such that T is the case is to say that there exists x in A (and this x happens to satisfy some condition). But if A is empty, then there exists no x in A, so should the sentence be false? We could deduce that it is false, I think. $$(\exists x)(x \in A\ \wedge \ T) \supset [(\exists x)(x \in A)\ \wedge \ (\exists x)T]$$ $$\neg (\exists x)(x \in A)$$ Therefore: $$\neg [(\exists x)(x \in A)\ \wedge \ (\exists x)T]$$ and hence $$\neg (\exists x)(x \in A\ \wedge \ T)$$ 5. Sep 12, 2005 ### honestrosewater If a statement S can be assigned a truth-value, then S contains no free variables. If S contains no free variables, then adding a quantifier - any quantifier, e.g. For all x in {} - to S has no effect, since the quantifier has no variables to bind.* For example, 0 = 0 contains no free variables - it contains no variables at all. It can be assigned a truth-value, say, true. For all x in {}, 0 = 0 is then still true, because there are no free variables in 0 = 0 for the quantifier to apply to. The OP said that T was a true statement. So T could be 0 = 0. You guys are saying that the truth of 0 = 0 depends on how some variable, say, x, is quantified. But there is no x in 0 = 0, so this doesn't make sense. The rules make sense to me. Does anyone see what I'm saying here? There is no free variable in T.
*As long as the added quantifier isn't positioned so as to change the scope of any other quantifiers in S. In every language that I've seen, adding a quantifier to the left of S, as in the OP's example, does not change the scope of any other quantifiers in S. 6. Sep 12, 2005 ### honestrosewater I'm not familiar with the system allowing you to break up 1) $$\exists x \in A$$ into 2) $$\exists x (x \in A)$$ Is (2) a complete statement? Surely (1) isn't a complete statement? Anyway, notice the quote from the book in the OP: If S is $$(x \in A\ \wedge \ T)$$ then x is a free variable in S, and I think the author's and my comments are still true. Last edited: Sep 12, 2005 7. Sep 12, 2005 ### AKG I am not questioning the truth of T, I am questioning the truth of (Ex)(x in A & T). And to say "Ex in A" is, I believe, short form for "(Ex)(x in A)". I'm not sure what you mean by this. Generally, the universe of discourse is non-empty. But if it is empty, then to say that 'there exists x such that S' obviously implies that 'there exists x', and if there does not exist any x, then there doesn't exist any x such that S. Also, note that there's a difference between saying (Ex)(S) and (Ex)(x in A & S). 8. Sep 13, 2005 ### honestrosewater :rofl: Okay, I'm starting to understand the OP's confusion. 1) If S contains no free variables, then $$\exists x (S)$$ is equivalent to S. Do you agree with (1)? That was my whole point. I thought it was a simple matter and was looking at $\exists x \in S$ as an 'indivisible unit', just as $\exists x$ is an indivisible unit. I'm not saying that I was right in considering $\exists x \in S$ to be indivisible - it just didn't cross my mind that it would be otherwise. Can they be assigned a truth-value? Sure, and the rules for empty universes are different (don't ask me how - I'm not very familiar with them - I think they're called free logics). But the OP didn't say anything about the universe. When you say 'there exists x such that S', it is implied that x varies over the universe, yes? But the OP's phrasing makes me think that A and E are subsets of the universe. Clearly if A is non-empty, whether it refers to the universe or a proper subset of it, the universe is non-empty, so I never even thought that the OP was asking about empty universes or using the rules for empty universes. Meh, I'm not disagreeing with you. I'm just trying to figure out what exactly $\exists x \in A$ means to you. What's the difference? S could be x in B & T. And if the x in Ex varies over the universe, wouldn't Ex(S) be short for Ex(x in U & S), where U is the universe? 9. Sep 13, 2005 ### honestrosewater I am genuinely confused by part of this (hopefully, only for the moment), but I think 1) If S contains no free variables, then $\exists x(S)$ is equivalent to S. is the key to clarifying things. 10. Sep 14, 2005 ### AKG Normally I would, but if that sentence is to be read "there exists x such that S", then there must exist x for the sentence to be true, and if the U.D. is empty, then no x exists and so the sentence in question is false. I hope that we can agree that if the U.D. is empty, the following is true: $$(\forall x)(\neg S)$$ In fact, the following sentence would also be true: $$(\forall x)(S)$$ but we need only focus on the first one. Now if you accept the truth of the sentence $(\forall x)(\neg S)$ then you accept the truth of $\neg (\exists x)(S)$ and thus the falsehood of $(\exists x)(S)$. Set membership is a two-place predicate defined by $\in xy$ iff x is an element of y.
Normally this predicate is written in infix notation so we write $x \in y$ as you know. We certainly need such a predicate, and we certainly need $(\exists x)$ to be an indivisible unit of its own. Given that we can define $(\exists x \in A)(\dots )$ in terms of just the existential quantifier and the set membership predicate, it doesn't make sense to add $(\exists x \in A)(\dots )$ as its own new indivisible unit. I could be wrong, but to me, this seems most sensible. We would define them as follows: $$(\exists x \in A)(\mathbf{P}) := (\exists x)((x\in A) \wedge \mathbf{P})$$ $$(\forall x \in A)(\mathbf{P}) := (\forall x)((x \in A) \supset \mathbf{P})$$ Certainly, $(\exists x)(x \in A)$ can be assigned a truth-value. Given the above, $(\exists x \in A)$ alone cannot be assigned a truth-value. I see where you were confused. I wasn't being precise when I said that $(\exists x \in A)$ can be replaced by $(\exists x)(x \in A)$. Yes. If E is empty though, then the sentence: There exists x in E, T is the sentence: There exists an element x in E such that x satisfies the condition T. But there is no element in E satisfying the condition T, or any condition at all, since there is no element in E at all! Well, the difference should be obvious (note that I mean S to refer to the same sentence in both expressions). Yeah, that seems right to me. 11. Sep 14, 2005 ### Hurkyl Staff Emeritus Well, there are good reasons to consider $(\exists x \in A)$ an indivisible unit: a logic with unbounded quantifiers (like $(\exists x)$) is often more powerful than a logic without them, and there are circumstances where this extra power is undesirable. 12. Sep 15, 2005 ### honestrosewater Okay, S containing no free variables was the important part. S contains only individuals or bound variables, so for example, S could be Pa or $\exists x(Px)$. On one interpretation, $\exists x(Pa)$ and $\exists x (\exists x(Px))$ reduce to Pa and $\exists x(Px)$. IMO, this interpretation makes sense. $\exists x$ only acquires meaning when it quantifies whatever variables fall within its scope. If S doesn't have any free variables, then no variables fall within the scope of $\exists x$ - so $\exists x$ doesn't mean anything in that case; it's superfluous. In other words, $\exists x (Px)$ and $\forall x (Px)$ say different things because some variable falls within the scope of the quantifiers. But $\exists x (Pa)$ and $\forall x (Pa)$ say the same thing, Pa, because no variables fall within the scope of the quantifiers. But that's only one way of looking at it. You say that you read $\exists x \in E (S)$ as "There exists an element x in E such that x satisfies the condition S". But it seems like you are really reading it as "There exists an element x in E and that x satisfies the condition S". ?? Also, since S can be assigned a truth-value, I'm not sure how it is a 'condition'. The interesting thing is that $\exists x(S)$ seems to be equivalent to $\exists x \in U (S)$, which, by your definitions (which seem fine to me), $$(\exists x \in A)(\mathbf{P}) := (\exists x)((x\in A) \wedge \mathbf{P})$$ $$(\forall x \in A)(\mathbf{P}) := (\forall x)((x \in A) \supset \mathbf{P})$$ means that $\exists x(S)$ is equivalent to $\exists x (x \in U \wedge S)$. If U is non-empty, $\exists x (x \in U)$ is always true, so this interpretation doesn't really change anything. But if U is empty, $\exists x (x \in U)$ is always false, so $\exists x (x \in U \wedge S)$ is always false. Er, I forgot what my point was.
I think it was that this interpretation assigns a truth-value to $\exists x$. Can you distribute the quantifier in $\exists x (x \in U \wedge S)$, getting $(\exists x (x \in U)) \wedge (\dots)$? Last edited: Sep 15, 2005 13. Sep 15, 2005 ### AKG Do you see a real difference? Why not? 14. Sep 16, 2005 ### honestrosewater I agree with you and your source on this: $$\begin{array}{|c|c|c|c|c|c|}\hline 1&2&3&4&5&6 \\ \hline \forall x (Px)&\neg (\exists x (\neg Px))&T&T&F&F\\\hline \neg (\forall x (Px))& \exists x (\neg Px)&F&F&T&T\\\hline \forall x (\neg Px)& \neg (\exists x (Px))&T&F&T&F\\\hline \neg(\forall x (\neg Px))&\exists x (Px)&F&T&F&T\\\hline \end{array}$$ This table is from an old post of mine - I've not been disagreeing with you about this. Columns 1 and 2 are equivalent. Column 3 lists the truth-values for the empty universe. But notice both in my table and your source that the quantifiers bind a variable. $\mathbf{P}$ could be Qb, where b is an individual. If the universe is empty, there are no individuals, so Qb is false, i.e., $\mathbf{P}$ is false. $$(\forall x \in \{\})(\mathbf{P})$$ should then also be false. If $\mathbf{P}$ was Qx, then $$(\forall x \in \{\})(\mathbf{P})$$ should be true. That is what I'm saying. Do you see a difference between $\mathbf{P}$ containing free variables and not containing free variables? 15. Sep 16, 2005 ### AKG Yes, I see. But I still think that $(\exists x)(P)$ means that there exists an x such that P, and if there exists no x, then there does not exist an x such that P; in fact there does not exist an x such that ~P, or Qx, or anything else. To say that there exists an x such that P is to say that there is some x satisfying the condition that P is true, where P may or may not contain x. If no x exists, then no x exists to satisfy that condition, or any condition. 16. Sep 16, 2005 ### honestrosewater Yeah, I see what you're saying. If it isn't the free variable thing, I guess you take Ex to mean more than I take it to mean. But it seems like we've covered all of the bases and haven't really gotten anywhere, so I don't know what else to say. But if P is already true or false, how does it still need to be satisfied? Er, I don't know, maybe it's time to let it be. 17. Sep 16, 2005 ### AKG Yeah, maybe. I see your point, and what I'm suggesting is just a matter of what I'm reading into the sentences; I'm not sure on the technical details. I'll go hunting around the net to see if there is an answer either way to this question. I think I found one here. Go to the webpage and search for (hit Ctrl+F) "alternative approach". Points 3 and 4 under that heading, in conjunction with the definitions for truth and falsity which follow, seem to support my idea that a universally quantified sentence over an empty domain is always true, and an existentially quantified sentence over an empty domain is always false. If you scroll up to the first definition for truth in L2 (this "alternative approach" you will search for is the alternative approach - 'Tarski style' - for defining truth for L2), you'll see something said about the truth of quantified sentences over empty domains, but that approach seems like it would take some more study to understand, whereas the Tarski-style approach seems to give a very clear answer. I'm not sure if the non-Tarski approach actually says something different about the truth of such sentences because, as I said, it is a little harder to understand, but the page in general seems interesting so check it out if you like. 18.
Sep 17, 2005 ### honestrosewater Eh, I didn't read the whole page, but if I understood things correctly, yes, what they call predicates include formulas with no free variables. You can make up whatever rules you want, of course; I just don't understand how that approach works - it doesn't make sense to me. I'm getting Hodges' book when the next edition comes out. He allows for empty domains, so I'll see if what he does makes sense to me. I have two problems with this approach: 1) It seems to treat $\exists x$ sort of like, say, P &, where P is a formula (formulas being the only strings that can be assigned truth-values). $\exists x$ cannot be assigned a truth-value; it is incomplete in some way. P & also cannot be assigned a truth-value; it is incomplete in some way. The difference is that while part of P & is a formula, in my view, no part of $\exists x$ is a formula. That is, you seem to read $\exists x$ as if it has a subformula: "There exists an x & this x ...". You take "there exists an x" to be a formula - in that it can be either true or false. If the universe is non-empty, you take "there exists an x" to be true, so you read $\exists x$ as T &, where T is a true formula; if the universe is empty, you take "there exists an x" to be false, so you read $\exists x$ as F &, where F is a false formula. This is the only way that I can make sense of $\exists x$ being able to affect the truth-value of any formula to which it's attached. Do you see the connection I'm making? The truth-value of T & Q depends only on Q, while the truth-value of F & Q is always F. This is the same way you use $\exists x$. Is that the way you see things? I suppose you should then treat $\forall x$ as P v (P or), since F v Q depends on Q and T v Q is always T. But that gets into the second problem. 2) There doesn't seem to be an analogous way to treat $\forall x$. "For all x & this x..." isn't a natural reading. If $\exists x$ is making a claim about existence (if it's actually claiming that something does exist), then $\forall x$ must also make some kind of claim about existence, since only one of the quantifiers is necessary ($\exists x(P) \Leftrightarrow \neg(\forall x (\neg P))$ and so on). The normal reading of $\forall x$, "for all x", doesn't seem to make any claim about the existence of anything. And in keeping with the treatment of $\exists x$ as P &, you should treat $\forall x$ as P v (or whatever), but it doesn't make sense to me to treat $\forall x$ as containing a subformula - or, rather, it makes even less sense than treating $\exists x$ as containing a subformula, or it doesn't make sense in the same way. Bah, I don't know if any of these explanations are making sense. The clearest way I can put it is that I think quantifiers don't say anything in themselves; they only say something about the variables to which they apply. So if they don't apply to any variables, they just don't say anything at all. Again, when the quantifiers come into play, I think we agree on their effect. I think we only disagree about when the quantifiers come into play (and when they are superfluous). Last edited: Sep 17, 2005 19.
Sep 17, 2005 ### honestrosewater Oh, I should add that in my main logic book, only the universal quantifier is used (in the object language) and its valuation (the s* function in the link) is: $$(\forall x \alpha)^{v} = \left\{\begin{array}{cl}T & \mbox{ if } \alpha^{v(x/u)} = T \mbox{ for every } u \in U, \\ F & \mbox{ otherwise} \end{array}\right.$$ Where x is a variable, v is a valuation, $\alpha$ is a formula, u is an individual in the universe U, and (x/u) means that every instance of x has been replaced with u (this is very similar to the language in your link, if not the same). If there are no instances of x in $\alpha$, then replacing them with u doesn't make a difference (if it even makes sense). And if $\alpha$ is itself a universal formula, then you're just repeating a process that's already been done, so it still makes no difference - if I understand everything anyway. I can't read half of what my book says because I've forgotten many of the definitions. But one of the problems is to show that if x is not free in $\alpha$, then $$\forall x \alpha \leftrightarrow \alpha \leftrightarrow \exists x \alpha$$. Of course, they assume that U is non-empty, so this doesn't really help us. I'm just saying that my interpretation does work in that case. 20. Sep 17, 2005 ### Hurkyl Staff Emeritus Sigh, it's always hard to chime in when it's hard to figure out just what people are asking! (I hope I don't mix up proposition and predicate in what follows -- I always forget which is which) Ex is, in fact, incomplete. In the specification of the language of formal logic, the only place such a thing appears is as part of a string that looks like Ex:P or Ex in A:P. I think it's incorrect to say that the quantifiers act on the variables: each quantifier Ex is something that operates on a predicate to produce a predicate of one fewer variable. (We always infer that in Ex:P, x is a free variable in P, even if it doesn't explicitly appear. I think I'm talking semantics here, and not syntax.) I rather like the geometric interpretation of formal logic, which I first saw in the context of real algebraic geometry. Any predicate in n free variables corresponds to a surface in R^n. For example, the predicate: x*x + y*y = 1 corresponds to the unit circle in R^2. (Of course, we can also treat this as a predicate of 3 free variables, x, y and z, and say that this is a cylinder in R^3, et cetera.) The existential quantifier is just projection. The predicate: Ex: x*x + y*y = 1 has one free variable -- it corresponds to the interval [-1, 1] in R. The operation of Ex is to project points in R^2 onto their second coordinate. The universal quantifier has a similar (but not as nice) geometric interpretation. I don't remember why I thought this was relevant, though.
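As a small aside that is not from the thread itself: the convention the later posts converge on (a universally quantified statement over an empty domain is vacuously true, while an existentially quantified one is false) is mirrored by Python's built-in all and any, which can make the rule easier to internalize:

```python
def P(x):
    return x > 0          # any predicate will do

empty_domain = []

print(all(P(x) for x in empty_domain))   # True:  "for all x in {}, P(x)" is vacuously true
print(any(P(x) for x in empty_domain))   # False: "there exists x in {} with P(x)" is false
```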
# Help in Integration How do I calculate $$\int { { \left( 1-{ x }^{ m } \right) }^{ \frac { 1 }{ n } }{ \left( 1-{ x }^{ n } \right) }^{ \frac { 1 }{ m } }dx }$$ for positive integers m and n? P.S.: I have no idea how to solve this. Note by Rishabh Deep Singh 1 month, 1 week ago
# Iterables¶ Lists are iterable, which is pretty much the definition of an iterable [1]: l = [1, 2, 3, 4] [2]: for element in l: print(element) 1 2 3 4 [3]: d = {1: 'one', 2:'two'} [4]: d [4]: {1: 'one', 2: 'two'} Pairwise iteration: over keys and values in parallel [5]: for key, value in d.items(): print(key, value) 1 one 2 two Question: can I use a dictionary to search for the value and get the key as an answer? Answer: use pairwise iteration like shown above, and search for the value manually. Beware though that this is linear search and thus not nearly as fast as a dictionary key search. [6]: for key, value in d.items(): if value == 'two': print(key) break 2 Iterating over the dictionary itself (not using any iteration method of it) iterates over the keys [7]: for key in d: print(key) 1 2 Iterating over the values [8]: for value in d.values(): print(value) one two ## set constructor¶ A set literal [9]: s = {1,2,3} s [9]: {1, 2, 3} Constructing a set from an iterable (in this case a string) absorbs what it iterates over. [10]: s = set('abc') s [10]: {'a', 'b', 'c'} Consequently, you can make a set from a dictionary [11]: s = set(d) s [11]: {1, 2} # Fast vs. Simple¶ [12]: l = [1,2,3,4,5,6,7,8,9] The in operator on a list can only search through it from beginning to end. Here we use 9 comparisons. (In a list with millions of elements we would take at most millions of comparisons, which is not fast.) [13]: 9 in l [13]: True Manually implementing what the in operator does. [14]: answer = False for elem in l: # linear search!! if elem == 9: answer = True break answer [14]: True Using a set is a better way to determine membership. It is implemented as a hash table internally. [15]: s = {1,2,3,4,5,6,7,8,9} 9 in s [15]: True Insertion order is not guaranteed to be preserved by a set, although it is in the simplest cases. [16]: for elem in s: print(elem) 1 2 3 4 5 6 7 8 9 # for, Iterables, range and Generators¶ [17]: for i in [0,1,2,3]: print(i) 0 1 2 3 This is the same as above, from a functionality point of view. Only cheaper, memory-wise, because the 4 integers are not all kept in memory. (Think of millions of integers, again.) [18]: for i in range(4): print(i) 0 1 2 3 The iterator protocol, explained. [19]: r = range(4) [20]: it = iter(r) [21]: next(it) [21]: 0 [22]: next(it) [22]: 1 [23]: next(it) [23]: 2 [24]: next(it) [24]: 3 # Tuples, Tuple Unpacking, Returning Multiple Values from Functions¶ Johannes: "what's this?" [25]: def f(): return 1, # comma? Tuple unpacking: syntactic sugar [68]: a, b = 1, 2 is the same as [69]: (a, b) = (1, 2) This allows us to swap two variables in one statement, for example [70]: a, b = b, a Returning multiple values is the same as returning a tuple [71]: def f(): return (1, 2, 3) This is the same as … [29]: def f(): return 1, 2, 3 [30]: retval = f() What is returned in both cases is a tuple [31]: retval [31]: (1, 2, 3) [32]: type(retval) [32]: tuple The same is more expressively written as … [37]: a, b, c = f() # tuple unpacking Back to Johannes' question: 1, is a one-tuple [33]: def f(): return 1, [34]: retval = f() [72]: retval [72]: (1,) The same concept - tuple unpacking - is used in pairwise iteration btw. [45]: d = { 1: 'one', 2: 'two'} [46]: for key, value in d.items(): print(key, value) 1 one 2 two # Object Oriented Programming¶ An empty class [47]: class Message: pass Creating an object of that class [48]: m = Message() [49]: type(m) [49]: __main__.Message A constructor, to be called when an object is created [50]: class Message: # prio # dlc # msg1 # ...
def __init__(self, prio, dlc, msg1): print('prio:', prio, 'dlc:', dlc, 'msg1:', msg1) [51]: m = Message(1, 5, 'whatever message that could be') prio: 1 dlc: 5 msg1: whatever message that could be The same, only using keyword parameters for better readability and maintainability [52]: m = Message(prio=1, dlc=5, msg1='whatever message that could be') prio: 1 dlc: 5 msg1: whatever message that could be Order is irrelevant when using keyword parameters [53]: m = Message(dlc=5, prio=1, msg1='whatever message that could be') prio: 1 dlc: 5 msg1: whatever message that could be [54]: m [54]: <__main__.Message at 0x7f41f5ff26a0> self is the object that is being created. You can use it to hold members (to remember values). [55]: class Message: def __init__(self, prio, dlc, msg1): self.prio = prio self.dlc = dlc self.msg1 = msg1 [56]: m = Message(dlc=5, prio=1, msg1='whatever message that could be') print('prio:', m.prio) print('dlc:', m.dlc) print('msg1:', m.msg1) prio: 1 dlc: 5 msg1: whatever message that could be [57]: msglist = [] msglist.append(Message(dlc=5, prio=1, msg1='whatever message that could be')) msglist.append(Message(prio=5, dlc=1, msg1='another wtf message')) [58]: msglist [58]: [<__main__.Message at 0x7f41f5ff4160>, <__main__.Message at 0x7f41f5ff41c0>] # datetime¶ Date and time is a complex matter. The datetime module has all of it. [59]: import datetime [60]: now = datetime.datetime.now() now [60]: datetime.datetime(2020, 10, 28, 12, 34, 19, 291130) [61]: type(now) [61]: datetime.datetime [62]: import time now_timestamp = time.time() [63]: now_timestamp [63]: 1603884859.3412576 [64]: now = datetime.datetime.fromtimestamp(now_timestamp) now [64]: datetime.datetime(2020, 10, 28, 12, 34, 19, 341258) [65]: then = datetime.datetime(2019, 10, 22) [66]: now - then [66]: datetime.timedelta(days=372, seconds=45259, microseconds=341258)
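A possible follow-up cell (my own illustration, not part of the original notebook) ties the pieces together: a comprehension over the msglist of Message objects built above, and collapsing the timedelta from the datetime example into a plain number of seconds.

```python
# Messages with priority 1, using the msglist defined earlier
high_priority = [m.msg1 for m in msglist if m.prio == 1]
print(high_priority)

# A timedelta can be collapsed to seconds with total_seconds()
print((now - then).total_seconds())
```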
## Section: New Results ### Wireless Networks Participants: Yue Li, Imad Alawe, Quang Pham, Patrick Maillé, Yassine Hadjadj-Aoul, César Viho, Gerardo Rubino Mobile wireless networks' improvements. Software Defined Networking (SDN) is one of the key enablers for evolving the mobile network architecture towards 5G. SDN involves the separation of control and data plane functions, which leads, in the context of 5G, to considering the separation of the control and data plane functions of the different gateways of the Evolved Packet Core (EPC), namely the Serving and Packet data Gateways (S and P-GW). Indeed, the envisioned solutions propose to separate the S/P-GW into two entities: the S/P-GW-C, which integrates the control plane functions, and the S/P-GW-U, which handles the User Equipment (UE) data plane traffic. There are two major approaches to create and update user plane forwarding rules for such a partition: (i) considering an SDN controller for the S/P-GW-C (SDNEPC) or (ii) using a direct specific interface to control the S/P-GW-U (enhancedEPC). In [38], we evaluate, using a testbed, those two visions against the classical virtual EPC (vEPC), where all the elements of the EPC are virtualized. Besides evaluating the capacity of the vEPC to manage and scale to UE requests, we compare the performances of the solutions in terms of the time needed to create the user data plane. The obtained results allow several remarks to be drawn, which may help to dimension the vEPC's components as well as to improve the S/P-GW-U management procedure. One of the requirements of 5G is to support a massive number of connected devices, considering many use cases such as IoT and massive Machine Type Communication (MTC). While this represents an interesting opportunity for operators to grow their business, it will need new mechanisms to scale and manage the envisioned high number of devices and their generated traffic. In particular, the signaling traffic will overload the 5G core Network Function (NF) in charge of authentication and mobility, namely the Access and Mobility Management Function (AMF). The objective of [37] is to provide an algorithm based on Control Theory allowing: (i) to balance the load on the AMF instances in order to maintain an optimal response time with limited computing latency; (ii) to scale the AMF instances out or in (using NFV techniques) depending on the network load, to save energy and avoid wasting resources. The obtained results indicate the superiority of our algorithm in ensuring fair load balancing while scaling dynamically with the traffic load. In [64] we go further by using new advances in machine learning, more specifically Recurrent Neural Networks (RNN), to accurately predict the arrival traffic pattern of devices. The main objective of the proposed approach is to react early to congestion by proactively scaling the AMF VNF so as to absorb such congestion while respecting the traffic constraints. Energy consumption improvements. Recently in cellular networks, the focus has moved to seeking ways to increase energy efficiency by better adapting to existing users' behaviors. In [17], we go a step further and study a new type of disruptive service by trying to answer the question "What are the potential energy efficiency gains if some of the users are willing to tolerate delays?".
We present an analytical model of the energy usage of LTE base stations, which provides lower bounds on the possible energy gains under a decentralized, noncooperative setup. The model is analyzed in six different scenarios (such as micro-macro cell interaction and coverage redundancy) for varying traffic and user-tolerable delays. We show that it is possible to reduce the power consumption by up to 30%. Computation offloading in mobile networks. Mobile edge computing (MEC) emerges as a promising paradigm that extends cloud computing to the edge of pervasive radio access networks, in close vicinity to mobile users, drastically reducing the latency of end-to-end access to computing resources. Moreover, MEC enables access to up-to-date information on users' network quality via the radio network information service (RNIS) application programming interface (API), allowing novel applications tailored to the users' context to be built. In [25] and [49], we present a novel framework for offloading computation tasks from a user device to a server hosted in the mobile edge (ME) with the highest CPU availability. Besides taking advantage of the proximity of the MEC server, the main innovation of the proposed solution is to rely on the RNIS API to drive the user equipment (UE) decision to offload, or not, computing tasks for a given application. The contributions are twofold. First, we propose the design of an application hosted in the ME, which estimates the current value of the round trip time (RTT) between the UE and the ME, according to radio quality indicators available through the RNIS API, and provides it to the UE. Second, we present a novel computation-offloading algorithm which, based on the estimated RTT coupled with other parameters (e.g., energy consumption), decides when to offload the UE's application computing tasks to the MEC server. The effectiveness of the proposed framework is demonstrated via testbed experiments featuring a face recognition application. Services improvement in wireless heterogeneous networks. With the rapid growth of HTTP-based Adaptive Streaming (HAS) multimedia video services on the Internet, improving the Quality of Experience (QoE) of video delivery will be in high demand in wireless heterogeneous networks. The coexistence of various access technologies with overlapping coverage, such as 3G/LTE and Wi-Fi, is the main characteristic of network heterogeneity. Since contemporary mobile devices are usually equipped with multiple radio interfaces, mobile users can utilize multiple access links simultaneously for additional capacity or reliability. However, network and video quality selection can have a notable impact on the QoE of DASH clients, given the video service's requirements, the wireless channel profiles and the costs of the different links. In this context, the emerging Multi-access Edge Computing (MEC) standard gives new opportunities to improve DASH performance, by moving IT and cloud computing capabilities down to the edge of the mobile network. In [45], we propose a MEC-assisted architecture for improving the performance of DASH-based streaming (a standard implementation of a HAS framework) in wireless heterogeneous networks. With the proposed algorithm running as a MEC service, the overall QoE and fairness of DASH clients are improved in real time in case of network congestion. QoE-aware routing in wireless networks. This year we continued our research on QoE-based routing optimization for wireless mesh networks.
The difficulties of the problem are analyzed, and centralized and decentralized algorithms are proposed. The quality of the solution, the computational complexity of the proposed algorithm, and fairness are our main concerns. Several centralized approximation algorithms have already been proposed in order to address the complexity and the quality of possible solutions. This year, we focused mainly on distributed algorithms to complement the existing centralized algorithms. We propose decentralized heuristic algorithms based on the well-known Optimized Link-State Routing (OLSR) protocol. Control packets of OLSR are modified so as to be able to convey QoE-related information. The routing algorithm chooses the paths heuristically. After that, we studied message passing algorithms in order to find near-optimal routing solutions in cooperative distributed networks. These algorithms have been published in [27], [13]. Sensor networks. In the literature, it is common to consider that sensor nodes in a cluster-based event-driven Wireless Sensor Network (WSN) use a Carrier Sense Multiple Access (CSMA) protocol with a fixed transmission probability to control data transmission. However, due to the highly variable environment in these networks, a fixed transmission probability may lead to a significant amount of extra energy consumption. In view of this, three different transmission probability strategies for event-driven WSNs were studied in [51]: the optimal one, the "fixed" approach and a third "adaptive" method. As expected, the optimal strategy achieves the best results in terms of energy consumption, but its implementation in a practical system is not feasible. The commonly used fixed transmission strategy (the probability for any node to attempt transmission is a constant) is the simplest approach, but it does not adapt to changes in the system's conditions and achieves the worst performance. In the paper, we find that our proposed adaptive transmission strategy, where that probability is changed depending on specific conditions and in a very precise way, is quite easy to implement and achieves results very close to those of the optimal method. The three strategies are analyzed in terms of energy consumption but also regarding the cluster formation latency. In [28], we also investigate cluster head selection schemes. Specifically, we consider two intelligent schemes based on the fuzzy $C$-means and $k$-medoids algorithms, and a random selection with no intelligence. We show that the use of intelligent schemes greatly improves the performance of the system, but their use entails higher complexity and some selection delay. The main performance metrics considered in this work are energy consumption, successful transmission probability and cluster formation latency. As an additional feature of this work, we study the effect of errors in the wireless channel and the impact on the performance of the system under the different considered transmission probability schemes. Transmission delay, throughput and energy are also important criteria to consider in wireless sensor networks (WSNs). The IEEE 802.15.4 standard was conceived with the objective of reducing resource consumption in both WSNs and Wireless Personal Area Networks (WPANs). In such networks, slotted CSMA/CA still occupies a prominent place as a channel access control mechanism, with its inherent simplicity and reduced complexity.
In [26], we propose to introduce a network allocation vector (NAV) to reduce energy consumption and collisions in IEEE 802.15.4 networks. A Markov chain-based analytical model of the fragmentation mechanism under saturated traffic is given, as well as a model of the energy consumption using the NAV mechanism. The obtained results show that the fragmentation technique improves the throughput, the access delay and the bandwidth occupation at the same time. They also show that using the NAV significantly reduces the energy consumption when the fragmentation technique is applied in slotted CSMA/CA under saturated traffic conditions.
# Comments on "An Update of the HLS Estimate of the Muon g-2" by M. Benayoun {\it et al.}, arXiv:1210.7184v3 1 BaBar LAL - Laboratoire de l'Accélérateur Linéaire Abstract: In a recent paper \cite{benayoun} M. Benayoun {\it et al.} use a specific model to compare results on the existing data for the cross section of the process $e^+e^-\rightarrow \pi^+\pi^-$ and state conclusions about the inconsistency of the BABAR results with those from the other experiments. We show that a direct model-independent comparison of the data at hand contradicts this claim. Clear discrepancies with the results of Ref. \cite{benayoun} are pointed out. As a consequence we do not believe that the lower value and the smaller uncertainty obtained for the prediction of the muon magnetic anomaly are reliable results. ### Citation M. Davier, B. Malaescu. Comments on "An Update of the HLS Estimate of the Muon g-2" by M. Benayoun {\it et al.}, arXiv:1210.7184v3. European Physical Journal C: Particles and Fields, Springer Verlag (Germany), 2013, 73, pp.2597. ⟨10.1140/epjc/s10052-013-2597-1⟩. ⟨in2p3-00915014⟩
## Seminars and Colloquia by Series Friday, September 11, 2009 - 15:00 , Location: Skiles 255 , Jinwoo Shin , MIT , Organizer: Prasad Tetali We consider the #P complete problem of counting the number of independent sets in a given graph. Our interest is in understanding the effectiveness of the popular Belief Propagation (BP) heuristic. BP is a simple and iterative algorithm that is known to have at least one fixed point. Each fixed point corresponds to a stationary point of the Bethe free energy (introduced by Yedidia, Freeman and Weiss (2004) in recognition of Hans Bethe's earlier work (1935)). The evaluation of the Bethe Free Energy at such a stationary point (or BP fixed point) leads to the Bethe approximation to the number of independent sets of the given graph. In general BP is not known to converge nor is an efficient, convergent procedure for finding stationary points of the Bethe free energy known. Further, effectiveness of Bethe approximation is not well understood. As the first result of this paper, we propose a BP-like algorithm that always converges to a BP fixed point for any graph. Further, it finds an \epsilon approximate fixed point in poly(n, 2^d, 1/\epsilon) iterations for a graph of n nodes with max-degree d. As the next step, we study the quality of this approximation. Using the recently developed 'loop series' approach by Chertkov and Chernyak, we establish that for any graph of n nodes with max-degree d and girth larger than 8d log n, the multiplicative error decays as 1 + O(n^-\gamma) for some \gamma > 0. This provides a deterministic counting algorithm that leads to strictly different results compared to a recent result of Weitz (2006). Finally as a consequence of our results, we prove that the Bethe approximation is exceedingly good for a random 3-regular graph conditioned on the Shortest Cycle Cover Conjecture of Alon and Tarsi (1985) being true. (Joint work with Venkat Chandrasekaran, Michael Chertkov, David Gamarnik and Devavrat Shah) Friday, September 4, 2009 - 15:00 , Location: Skiles 255 , Karthekeyan Chandrasekaran , College of Computing , Organizer: Prasad Tetali Lovasz Local Lemma (LLL) is a powerful result in probability theory that states that the probability that none of a set of bad events happens is nonzero if the probability of each event is small compared to the number of events that depend on it. It is often used in combination with the probabilistic method for non-constructive existence proofs. A prominent application of LLL is to k-CNF formulas, where LLL implies that, if every clause in the formula shares variables with at most d \le 2^k/e other clauses then such a formula has a satisfying assignment. Recently, a randomized algorithm to efficiently construct a satisfying assignment was given by Moser. Subsequently Moser and Tardos gave a randomized algorithm to construct the structures guaranteed by the LLL in a very general algorithmic framework. We will address the main problem left open by Moser and Tardos of derandomizing their algorithm efficiently when the number of other events that any bad event depends on is possibly unbounded. An interesting special case of the open problem is the k-CNF problem when k = \omega(1), that is, when k is more than a constant. Friday, August 21, 2009 - 15:00 , Location: Skiles 255 , Satoru Iwata , Kyoto University , Organizer: Prasad Tetali In this lecture, I will explain the greedy approximation algorithm on submodular function maximization due to Nemhauser, Wolsey, and Fisher. 
Then I will apply this algorithm to the problem of approximating an monotone submodular functions by another submodular function with succinct representation. This approximation method is based on the maximum volume ellipsoid inscribed in a centrally symmetric convex body. This is joint work with Michel Goemans, Nick Harvey, and Vahab Mirrokni. Wednesday, August 19, 2009 - 15:00 , Location: Skiles 255 , Satoru Iwata , Kyoto University , Organizer: Prasad Tetali In this lecture, I will review combinatorial algorithms for minimizing submodular functions. In particular, I will present a new combinatorial algorithm obtained in my recent joint work with Jim Orlin. Friday, August 14, 2009 - 15:05 , Location: Skiles 255 , Prof. Satoru Iwata , Kyoto University , Organizer: Prasad Tetali In this lecture, I will explain connections between graph theory and submodular optimization. The topics include theorems of Nash-Williams on orientation and detachment of graphs. Thursday, May 21, 2009 - 11:00 , Location: Skiles 255 , Joshua Cooper , Department of Mathematics, University of South Carolina , Organizer: Prasad Tetali We consider the Ulam "liar" and "pathological liar" games, natural and well-studied variants of "20 questions" in which the adversarial respondent is permitted to lie some fraction of the time. We give an improved upper bound for the optimal strategy (aka minimum-size covering code), coming within a triply iterated log factor of the so-called "sphere covering" lower bound. The approach is twofold: (1) use a greedy-type strategy until the game is nearly over, then (2) switch to applying the "liar machine" to the remaining Berlekamp position vector. The liar machine is a deterministic (countable) automaton which we show to be very close in behavior to a simple random walk, and this resemblance translates into a nearly optimal strategy for the pathological liar game. Friday, April 24, 2009 - 15:00 , Location: Skiles 255 , Mokshay Madiman , Department of Statistics, Yale University , Organizer: Prasad Tetali We develop an information-theoretic foundation for compound Poisson approximation and limit theorems (analogous to the corresponding developments for the central limit theorem and for simple Poisson approximation). First, sufficient conditions are given under which the compound Poisson distribution has maximal entropy within a natural class of probability measures on the nonnegative integers. In particular, it is shown that a maximum entropy property is valid if the measures under consideration are log-concave, but that it fails in general. Second, approximation bounds in the (strong) relative entropy sense are given for distributional approximation of sums of independent nonnegative integer valued random variables by compound Poisson distributions. The proof techniques involve the use of a notion of local information quantities that generalize the classical Fisher information used for normal approximation, as well as the use of ingredients from Stein's method for compound Poisson approximation. This work is joint with Andrew Barbour (Zurich), Oliver Johnson (Bristol) and Ioannis Kontoyiannis (AUEB). Friday, April 17, 2009 - 15:00 , Location: Skiles 255 , Guantao Chen , Georgia State University , Organizer: Prasad Tetali Let G be a graph and K be a field. We associate to G a projective toric variety X_G over K, the cut variety of the graph G. The cut ideal I_G of the graph G is the ideal defining the cut variety. 
In this talk, we show that, if G is a subgraph of a subdivision of a book or an outerplanar graph, then the minimal generators are quadrics. Furthermore we describe the generators of the cut ideal of a subdivision of a book. Tuesday, April 7, 2009 - 11:00 , Location: Skiles 255 , Adam Marcus , Yale University , Organizer: Prasad Tetali The entropy function has a number of nice properties that make it a useful counting tool, especially when one wants to bound a set with respect to the set's projections. In this talk, I will show a method developed by Mokshay Madiman, Prasad Tetali, and myself that builds on the work of Gyarmati, Matolcsi and Ruzsa as well as the work of Ballister and Bollobas. The goal will be to give a black-box method for generating projection bounds and to show some applications by giving new bounds on the sizes of Abelian and non-Abelian sumsets. Friday, April 3, 2009 - 15:00 , Location: Skiles 255 , Alexandra Kolla , UC Berkeley , Organizer: Prasad Tetali I will present an approximation algorithm for the following problem: Given a graph G and a parameter k, find k edges to add to G as to maximize its algebraic connectivity. This problem is known to be NP-hard and prior to this work no algorithm was known with provable approximation guarantee. The algorithm uses a novel way of sparsifying (patching) part of a graph using few edges.
# Issues with training SSD on own dataset

I'm new to ML and trying to train an SSD300 with some Keras code I found on GitHub: github.com/pierluigiferrari/ssd_keras. For training I'm using my own (very small) dataset of objects that are not in any of the bigger known datasets. My dataset has the following characteristics:

• objects have very different sizes in images (from around 20x40 to 250x200)
• there is only one class labeled in the images
• images are in RGB
• all images are sized to fit in 300x300
• the dataset contains 319 images for training and validation

Now my problem is that the validation loss doesn't converge, but the training loss does. See this image showing the loss functions over the epochs; I trained 120 epochs with 1000 steps each. When I try to use the trained weights coming out of this training, I get zero detections in the image. It seems like the model didn't learn anything. I'm using pretrained weights for the underlying VGG-16 network provided in the GitHub repository; it is trained on the ImageNet dataset. My parameters are as follows:

img_height = 300 # Height of the model input images
img_width = 300 # Width of the model input images
img_channels = 3 # Number of color channels of the model input images
mean_color = [123, 117, 104] # The per-channel mean of the images in the dataset. Do not change this value if you're using any of the pre-trained weights.
swap_channels = [2, 1, 0] # The color channel order in the original SSD is BGR, so we'll have the model reverse the color channel order of the input images.
n_classes = 1 # Number of positive classes, e.g. 20 for Pascal VOC, 80 for MS COCO
scales_pascal = [0.1, 0.2, 0.37, 0.54, 0.71, 0.88, 1.05] # The anchor box scaling factors used in the original SSD300 for the Pascal VOC datasets
scales_coco = [0.07, 0.15, 0.33, 0.51, 0.69, 0.87, 1.05] # The anchor box scaling factors used in the original SSD300 for the MS COCO datasets
scales = scales_pascal
aspect_ratios = [[1.0, 2.0, 0.5], [1.0, 2.0, 0.5, 3.0, 1.0/3.0], [1.0, 2.0, 0.5, 3.0, 1.0/3.0], [1.0, 2.0, 0.5, 3.0, 1.0/3.0], [1.0, 2.0, 0.5], [1.0, 2.0, 0.5]] # The anchor box aspect ratios used in the original SSD300; the order matters
two_boxes_for_ar1 = True
steps = [8, 16, 32, 64, 100, 300] # The space between two adjacent anchor box center points for each predictor layer.
offsets = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5] # The offsets of the first anchor box center points from the top and left borders of the image as a fraction of the step size for each predictor layer.
clip_boxes = False # Whether or not to clip the anchor boxes to lie entirely within the image boundaries
variances = [0.1, 0.1, 0.2, 0.2] # The variances by which the encoded target coordinates are divided as in the original implementation
normalize_coords = True

1. How should I interpret the loss function? Is it because of the small dataset, because of wrong parameters, or something else?
2. Do I have to train my own classifier (VGG-16), or can I use the pretrained one even when my objects don't appear in the pretraining dataset?
3. Do I have to train for a longer time, i.e. for more epochs?

As additional information: I already trained a Faster R-CNN model with the exact same dataset. It worked quite well and gives me good results.

What you are experiencing is called overfitting, and it happens because of your very small dataset. All the model cares about is performance on the training dataset, so given the opportunity, it will simply attempt to memorize it.
This is what happens in your case: you are feeding a model that contains over 130 million parameters with fewer than 319 training images.
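A sketch of how you might counter this, assuming the variable names (`model`, `optimizer`, `ssd_loss`, `train_generator`, `val_generator`, `train_dataset_size`, `val_dataset_size`, `batch_size`) come from the repository's training notebook and that the VGG-16 layer-name prefixes below match your build (check `model.summary()`): freeze the pretrained backbone so only the detection heads are fitted, and stop on validation loss instead of training a fixed 120 × 1000 steps.

```python
from math import ceil
from keras.callbacks import EarlyStopping, ModelCheckpoint

# Assumed layer-name prefixes for the VGG-16 base; verify against model.summary().
frozen_prefixes = ('conv1_', 'conv2_', 'conv3_', 'conv4_', 'conv5_', 'fc6', 'fc7')

for layer in model.layers:
    if layer.name.startswith(frozen_prefixes):
        layer.trainable = False  # keep the ImageNet features fixed; train only the SSD heads

# Recompile after changing trainability (same loss/optimizer as in your training script).
model.compile(optimizer=optimizer, loss=ssd_loss.compute_loss)

callbacks = [
    EarlyStopping(monitor='val_loss', patience=10),  # stop once validation loss stalls
    ModelCheckpoint('ssd300_best.h5', monitor='val_loss', save_best_only=True),
]

history = model.fit_generator(
    train_generator,
    steps_per_epoch=ceil(train_dataset_size / batch_size),  # ~300 images means a few dozen steps, not 1000
    epochs=200,
    callbacks=callbacks,
    validation_data=val_generator,
    validation_steps=ceil(val_dataset_size / batch_size),
)
```

Beyond that, lean on the repository's data augmentation chain and, if at all possible, collect more images; with roughly 300 examples no hyperparameter change alone will close the gap between training and validation loss.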
# Irreducible non-symmetric matrix with only real eigenvalues

I'm looking for a counterexample. Some notation: A matrix $A$ is irreducible if there is no permutation matrix $P$ such that $$P^{-1} A P = \begin{bmatrix} E & G \\ 0 & F \end{bmatrix}$$ where $E$ and $F$ are square. Two matrices $A$ and $B$ are diagonally conjugate if there is a non-singular diagonal matrix $D$ with $D^{-1} A D = B$. Consider a real, irreducible matrix $A$ that is not diagonally conjugate to a symmetric matrix. Is there such a matrix $A$ which has only real eigenvalues? The conditions do rule out typical counterexamples like $\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$ (reducible) and $\begin{bmatrix} 1 & 1 \\ 2 & 1 \end{bmatrix}$ (diagonally conjugate to $\begin{bmatrix} 1 & \sqrt{2} \\ \sqrt{2} & 1 \end{bmatrix}$).

Denote by $T_A \colon \mathbb{R}^n \rightarrow \mathbb{R}^n$ the linear map corresponding to multiplication by $A \in M_n(\mathbb{R})$. The matrix $A$ is irreducible according to your definition if and only if the subspaces $\mathrm{span} \{ e_{i_1}, \ldots, e_{i_k} \}$ are not $T_A$-invariant for any $1 \leq k \leq n - 1$ and $1 \leq i_1 < \ldots < i_k \leq n$, where $(e_1, \ldots, e_n)$ is the standard basis. Translating the condition that $A$ is not diagonally conjugate to a symmetric matrix into a statement about $T_A$ is possible but messier, so we won't bother; the following observation is enough to construct an example: if $A$ is diagonally conjugate to a symmetric matrix, then in particular $A$ is diagonalizable. So in order to construct a counterexample, it is enough to find $A$ with real eigenvalues that is not diagonalizable and such that the spaces $\mathrm{span} \{ e_{i_1}, \ldots, e_{i_k} \}$ are not $T_A$-invariant. The most basic example of $A'$ with real eigenvalues that is not diagonalizable is arguably $$A' = \left( \begin{matrix} 0 & 1 \\ 0 & 0 \end{matrix} \right) \in M_2(\mathbb{R}).$$ Unfortunately, it has $\mathrm{span} \{ e_1 \}$ as an invariant subspace, so let us apply a change of basis sending $e_1$ to $(1,1)$ and $e_2$ to $e_2$. Thus, we can take $$A = \left( \begin{matrix} 1 & 0 \\ 1 & 1 \end{matrix} \right) \left( \begin{matrix} 0 & 1 \\ 0 & 0 \end{matrix} \right) \left( \begin{matrix} 1 & 0 \\ -1 & 1 \end{matrix} \right) = \left( \begin{matrix} -1 & 1 \\ -1 & 1 \end{matrix} \right).$$ The matrix $A$ is irreducible and nilpotent, with only the real eigenvalue $0$, and it is not conjugate to a symmetric matrix.
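A quick check of the claimed properties (a verification sketch, not part of the original answer): $$A^2 = \left( \begin{matrix} -1 & 1 \\ -1 & 1 \end{matrix} \right)^2 = \left( \begin{matrix} 0 & 0 \\ 0 & 0 \end{matrix} \right), \qquad A e_1 = \left( \begin{matrix} -1 \\ -1 \end{matrix} \right), \qquad A e_2 = \left( \begin{matrix} 1 \\ 1 \end{matrix} \right).$$ So $A$ is nilpotent, hence both eigenvalues are $0$ (real); since $A \neq 0$ it is not diagonalizable and therefore not conjugate to any symmetric matrix; and neither $\mathrm{span}\{e_1\}$ nor $\mathrm{span}\{e_2\}$ is $T_A$-invariant, so $A$ is irreducible.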
# Chain rule / Taylor expansion / functional derivative

## Homework Statement

Show that ##\rho(p',s)>\rho(p',s') \implies \left(\frac{\partial\rho}{\partial s}\right)_p\frac{ds}{dz}<0##, where ##p=p(z)##, ##p'=p(z+dz)##, ##s=s(z)##, ##s'=s(z+dz)##.

## Homework Equations

As above.

## The Attempt at a Solution

I have no idea how to approach this. I'm thinking functional derivatives or Taylor expansions, but could someone please give me a clue where to start?
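A minimal sketch of the key step, assuming ##dz>0## and that ##\rho## is differentiable in ##s## at fixed pressure: expand ##s'=s(z+dz)## to first order,
$$\rho(p',s') = \rho\!\left(p',\, s + \frac{ds}{dz}\,dz\right) \approx \rho(p',s) + \left(\frac{\partial\rho}{\partial s}\right)_{p}\frac{ds}{dz}\,dz ,$$
where to first order in ##dz## it does not matter whether the partial derivative is evaluated at ##p## or ##p'##. Substituting this into ##\rho(p',s)>\rho(p',s')## gives ##\left(\frac{\partial\rho}{\partial s}\right)_p\frac{ds}{dz}\,dz<0##, and dividing by ##dz>0## yields the stated inequality.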
# Why we should allow performance enhancing drugs in sport

J Savulescu,1 B Foddy,2 M Clayton2

1 Uehiro Chair of Practical Ethics, University of Oxford, Oxford, UK
2 Murdoch Childrens Research Institute, Melbourne, Victoria, Australia

Correspondence to:
 Professor Savulescu
Flat 2, 3 Bradmore Road, Oxford OX2 6QW, UK; julian.savulescu@philosophy.ox.ac.uk

The legalisation of drugs in sport may be fairer and safer

In 490 BC, the Persian Army landed on the plain of Marathon, 25 miles from Athens. The Athenians sent a messenger named Feidipides to Sparta to ask for help. He ran the 150 miles in two days. The Spartans were late. The Athenians attacked and, although outnumbered five to one, were victorious. Feidipides was sent to run back to Athens to report victory. On arrival, he screamed “We won” and dropped dead from exhaustion. The marathon was run in the first modern Olympics in 1896, and in many ways the athletic ideal of modern athletes is inspired by the myth of the marathon. Their ideal is superhuman performance, at any cost.

## DRUGS IN SPORT

The use of performance enhancing drugs in the modern Olympics is on record as early as the games of the third Olympiad, when Thomas Hicks won the marathon after receiving an injection of strychnine in the middle of the race.1 The first official ban on “stimulating substances” by a sporting organisation was introduced by the International Amateur Athletic Federation in 1928.2 Using drugs to cheat in sport is not new, but it is becoming more effective. In 1976, the East German swimming team won 11 out of 13 Olympic events, and later sued the government for giving them anabolic steroids.3 Yet despite the health risks, and despite the regulating bodies’ attempts to eliminate drugs from sport, the use of illegal substances is widely known to be rife. It hardly raises an eyebrow now when some famous athlete fails a dope test. In 1992, Vicky Rabinowicz interviewed small groups of athletes. She found that Olympic athletes, in general, believed that most successful athletes were using banned substances.4 Much of the writing on the use of drugs in sport is focused on this kind of anecdotal evidence. There is very little rigorous, objective evidence because the athletes are doing something that is taboo, illegal, and sometimes highly dangerous. The anecdotal picture tells us that our attempts to eliminate drugs from sport have failed. In the absence of good evidence, we need an analytical argument to determine what we should do.

## CONDEMNED TO CHEATING?

We are far from the days of amateur sporting competition. Elite athletes can earn tens of millions of dollars every year in prize money alone, and millions more in sponsorships and endorsements. The lure of success is great. But the penalties for cheating are small. A six month or one year ban from competition is a small penalty to pay for further years of multimillion dollar success. Drugs are much more effective today than they were in the days of strychnine and sheep’s testicles. Studies involving the anabolic steroid androgen showed that, even in doses much lower than those used by athletes, muscular strength could be improved by 5–20%.5 Most athletes are also relatively unlikely to ever undergo testing. The International Amateur Athletic Federation estimates that only 10–15% of participating athletes are tested in each major competition.6 The enormous rewards for the winner, the effectiveness of the drugs, and the low rate of testing all combine to create a cheating “game” that is irresistible to athletes. Kjetil Haugen7 investigated the suggestion that athletes face a kind of prisoner’s dilemma regarding drugs.
His game theoretic model shows that, unless the likelihood of athletes being caught doping was raised to unrealistically high levels, or the payoffs for winning were reduced to unrealistically low levels, athletes could all be predicted to cheat. The current situation for athletes ensures that this is likely, even though they are worse off as a whole if everyone takes drugs, than if nobody takes drugs. Drugs such as erythropoietin (EPO) and growth hormone are natural chemicals in the body. As technology advances, drugs have become harder to detect because they mimic natural processes. In a few years, there will be many undetectable drugs. Haugen’s analysis predicts the obvious: that when the risk of being caught is zero, athletes will all choose to cheat. The recent Olympic games in Athens were the first to follow the introduction of a global anti-doping code. From the lead up to the games to the end of competition, 3000 drug tests were carried out: 2600 urine tests and 400 blood tests for the endurance enhancing drug EPO.8 From these, 23 athletes were found to have taken a banned substance—the most ever in an Olympic games.9 Ten of the men’s weightlifting competitors were excluded. The goal of “cleaning” up the sport is unattainable. Further down the track the spectre of genetic enhancement looms dark and large. ## THE SPIRIT OF SPORT So is cheating here to stay? Drugs are against the rules. But we define the rules of sport. If we made drugs legal and freely available, there would be no cheating. The World Anti-Doping Agency code declares a drug illegal if it is performance enhancing, if it is a health risk, or if it violates the “spirit of sport”.10 They define this spirit as follows.11 The spirit of sport is the celebration of the human spirit, body, and mind, and is characterised by the following values: • ethics, fair play and honesty • health • excellence in performance • character and education • fun and joy • teamwork • dedication and commitment • respect for rules and laws • respect for self and other participants • courage • community and solidarity Would legal and freely available drugs violate this “spirit”? Would such a permissive rule be good for sport? Human sport is different from sports involving other animals, such as horse or dog racing. The goal of a horse race is to find the fastest horse. Horses are lined up and flogged. The winner is the one with the best combination of biology, training, and rider. Basically, this is a test of biological potential. This was the old naturalistic Athenian vision of sport: find the strongest, fastest, or most skilled man. Training aims to bring out this potential. Drugs that improve our natural potential are against the spirit of this model of sport. But this is not the only view of sport. Humans are not horses or dogs. We make choices and exercise our own judgment. We choose what kind of training to use and how to run our race. We can display courage, determination, and wisdom. We are not flogged by a jockey on our back but drive ourselves. It is this judgment that competitors exercise when they choose diet, training, and whether to take drugs. We can choose what kind of competitor to be, not just through training, but through biological manipulation. Human sport is different from animal sport because it is creative. Far from being against the spirit of sport, biological manipulation embodies the human spirit—the capacity to improve ourselves on the basis of reason and judgment. When we exercise our reason, we do what only humans do. 
The result will be that the winner is not the person who was born with the best genetic potential to be strongest. Sport would be less of a genetic lottery. The winner will be the person with a combination of the genetic potential, training, psychology, and judgment. Olympic performance would be the result of human creativity and choice, not a very expensive horse race. Classical musicians commonly use β blockers to control their stage fright. These drugs lower heart rate and blood pressure, reducing the physical effects of stress, and it has been shown that the quality of a musical performance is improved if the musician takes these drugs.12 Although elite classical music is arguably as competitive as elite sport, and the rewards are similar, there is no stigma attached to the use of these drugs. We do not think less of the violinist or pianist who uses them. If the audience judges the performance to be improved with drugs, then the drugs are enabling the musician to express him or herself more effectively. The competition between elite musicians has rules—you cannot mime the violin to a backing CD. But there is no rule against the use of chemical enhancements. Is classical music a good metaphor for elite sport? Sachin Tendulkar is known as the “Maestro from Mumbai”. The Associated Press called Maria Sharapova’s 2004 Wimbledon final a “virtuoso performance”.13 Jim Murray14 wrote the following about Michael Jordan in 1996: “You go to see Michael Jordan play for the same reason you went to see Astaire dance, Olivier act or the sun set over Canada. It’s art. It should be painted, not photographed.It’s not a game, it’s a recital. He’s not just a player, he’s a virtuoso. Heifetz with a violin. Horowitz at the piano.” Indeed, it seems reasonable to suggest that the reasons we appreciate sport at its elite level have something to do with competition, but also a great deal to do with the appreciation of an extraordinary performance. Clearly the application of this kind of creativity is limited by the rules of the sport. Riding a motorbike would not be a “creative” solution to winning the Tour de France, and there are good reasons for proscribing this in the rules. If motorbikes were allowed, it would still be a good sport, but it would no longer be a bicycle race. We should not think that allowing cyclists to take EPO would turn the Tour de France into some kind of “drug race”, any more than the various training methods available turn it into a “training race” or a “money race”. Athletes train in different, creative ways, but ultimately they still ride similar bikes, on the same course. The skill of negotiating the steep winding descent will always be there. ## UNFAIR? People do well at sport as a result of the genetic lottery that happened to deal them a winning hand. Genetic tests are available to identify those with the greatest potential. If you have one version of the ACE gene, you will be better at long distance events. If you have another, you will be better at short distance events. Black Africans do better at short distance events because of biologically superior muscle type and bone structure. Sport discriminates against the genetically unfit. Sport is the province of the genetic elite (or freak). The starkest example is the Finnish skier Eero Maentyranta. In 1964, he won three gold medals. 
Subsequently it was found he had a genetic mutation that meant that he “naturally” had 40–50% more red blood cells than average.15 Was it fair that he had significant advantage given to him by chance? The ability to perform well in sporting events is determined by the ability to deliver oxygen to muscles. Oxygen is carried by red blood cells. The more red blood cells, the more oxygen you can carry. This in turn controls an athlete’s performance in aerobic exercise. EPO is a natural hormone that stimulates red blood cell production, raising the packed cell volume (PCV)—the percentage of the blood comprised of red blood cells. EPO is produced in response to anaemia, haemorrhage, pregnancy, or living at altitude. Athletes began injecting recombinant human EPO in the 1970s, and it was officially banned in 1985.16 At sea level, the average person has a PCV of 0.4–0.5. It naturally varies; 5% of people have a packed cell volume above 0.5,17 and that of elite athletes is more likely to exceed 0.5, either because their high packed cell volume has led them to success in sport or because of their training.18 Raising the PCV too high can cause health problems. The risk of harm rapidly rises as PCV gets above 50%. One study showed that in men whose PCV was 0.51 or more, risk of stroke was significantly raised (relative risk  =  2.5), after adjustment for other causes of stroke.19 At these levels, raised PCV combined with hypertension would cause a ninefold increase in stroke risk. In endurance sports, dehydration causes an athlete’s blood to thicken, further raising blood viscosity and pressure.20 What begins as a relatively low risk of stroke or heart attack can rise acutely during exercise. In the early 1990s, after EPO doping gained popularity but before tests for its presence were available, several Dutch cyclists died in their sleep due to inexplicable cardiac arrest. This has been attributed to high levels of EPO doping.21 The risks from raising an athlete’s PCV too high are real and serious. Use of EPO is endemic in cycling and many other sports. In 1998, the Festina team was expelled from the Tour de France after trainer Willy Voet was caught with 400 vials of performance enhancing drugs.22 The following year, the World Anti-Doping Agency was established as a result of the scandal. However, EPO is extremely hard to detect and its use has continued. Italy’s Olympic anti-doping director observed in 2003 that the amount of EPO sold in Italy outweighed the amount needed for sick people by a factor of six.23 In addition to trying to detect EPO directly, the International Cycling Union requires athletes to have a PCV no higher than 0.5. But 5% of people naturally have a PCV higher than 0.5. Athletes with a naturally high PCV cannot race unless doctors do a number of tests to show that their PCV is natural. Charles Wegelius was a British rider who was banned and then cleared in 2003. He had had his spleen removed in 1998 after an accident, and as the spleen removes red blood cells, its absence resulted in an increased PCV.24 There are other ways to increase the number of red blood cells that are legal. Altitude training can push the PCV to dangerous, even fatal, levels. More recently, hypoxic air machines have been used to simulate altitude training. The body responds by releasing natural EPO and growing more blood cells, so that it can absorb more oxygen with every breath. 
The Hypoxico promotional material quotes Tim Seaman, a US athlete, who claims that the hypoxic air tent has “given my blood the legal ‘boost’ that it needs to be competitive at the world level.”25 There is one way to boost an athlete’s number of red blood cells that is completely undetectable:26 autologous blood doping. In this process, athletes remove some blood, and reinject it after their body has made new blood to replace it. This method was popular before recombinant human EPO became available. “By allowing everyone to take performance enhancing drugs, we level the playing field.” There is no difference between elevating your blood count by altitude training, by using a hypoxic air machine, or by taking EPO. But the last is illegal. Some competitors have high PCVs and an advantage by luck. Some can afford hypoxic air machines. Is this fair? Nature is not fair. Ian Thorpe has enormous feet which give him an advantage that no other swimmer can get, no matter how much they exercise. Some gymnasts are more flexible, and some basketball players are seven feet tall. By allowing everyone to take performance enhancing drugs, we level the playing field. We remove the effects of genetic inequality. Far from being unfair, allowing performance enhancement promotes equality.

## JUST FOR THE RICH?

Would this turn sport into a competition of expensive technology? Forget the romantic ancient Greek ideal. The Olympics is a business. In the four years before the Athens Olympics, Australia spent $547 million on sport funding,27 with $13.8 million just to send the Olympic team to Athens.28 With its highest ever funding, the Australian team brought home 17 gold medals, also its highest. On these figures, a gold medal costs about $32 million. Australia came 4th in the medal tally in Athens despite having the 52nd largest population. Neither the Australian multicultural genetic heritage nor the flat landscape and desert could have endowed Australians with any special advantage. They won because they spent more. Money buys success. They have already embraced strategies and technologies that are inaccessible to the poor. Paradoxically, permitting drugs in sport could reduce economic discrimination. The cost of a hypoxic air machine and tent is about US$7000.29 Sending an athlete to a high altitude training location for months may be even more expensive. This arguably puts legal methods for raising an athlete’s PCV beyond the reach of poorer athletes. It is the illegal forms that level the playing field in this regard. One popular form of recombinant human EPO is called Epogen. At the time of writing, the American chain Walgreens offers Epogen for US$86 for 6000 international units (IU). The maintenance dose of EPO is typically 20 IU per kg body weight, once a week.30 An athlete who weighs 100 kg therefore needs 2000 IU a week, or 8600 IU a month. Epogen costs the athlete about US$122 a month. Even if the Epogen treatment begins four years before an event, it is still cheaper than the hypoxic air machine. There are limits on how much haemoglobin an athlete can produce, however much EPO they inject, so there is a natural cap on the amount of money they can spend on this method. Meanwhile, in 2000, the cost of an in-competition recombinant EPO test was about US$130 per sample.31 This test is significantly more complex than a simple PCV test, which would not distinguish between exogenous and endogenous EPO.
If monetary inequalities are a real concern in sport, then the enormous sums required to test every athlete could instead be spent on grants to provide EPO to poorer athletes, and PCV tests to ensure that athletes have not thickened their blood to unsafe levels. ## UNSAFE? Should there be any limits to drugs in sport? There is one limit: safety. We do not want an Olympics in which people die before, during, or after competition. What matters is health and fitness to compete. Rather than testing for drugs, we should focus more on health and fitness to compete. Forget testing for EPO, monitor the PCV. We need to set a safe level of PCV. In the cycling world, that is 0.5. Anyone with a PCV above that level, whether through the use of drugs, training, or natural mutation, should be prevented from participating on safety grounds. If someone naturally has a PCV of 0.6 and is allowed to compete, then that risk is reasonable and everyone should be allowed to increase their PCV to 0.6. What matters is what is a safe concentration of growth hormone—not whether it is natural or artificial. We need to take safety more seriously. In the 1960s, East German athletes underwent systematic government sanctioned prescription of anabolic steroids, and were awarded millions of dollars in compensation in 2002. Some of the female athletes had been compelled to change their sex because of the large quantities of testosterone they had been given.32 We should permit drugs that are safe, and continue to ban and monitor drugs that are unsafe. There is another argument for this policy based on fairness: provided that a drug is safe, it is unfair to the honest athletes that they have to miss out on an advantage that the cheaters enjoy. Taking EPO up to the safe level, say 0.5, is not a problem. This allows athletes to correct for natural inequality. There are of course some drugs that are harmful in themselves —for example, anabolic steroids. We should focus on detecting these because they are harmful not because they enhance performance. Far from harming athletes, paradoxically, such a proposal may protect our athletes. There would be more rigorous and regular evaluation of an athlete’s health and fitness to perform. Moreover, the current incentive is to develop undetectable drugs, with little concern for safety. If safe performance enhancement drugs were permitted, there would be greater pressure to develop safe drugs. Drugs would tend to become safer. This is perhaps best illustrated by the case of American sailor Kevin Hall. Hall lost his testicles to cancer, meaning that he required testosterone injections to remain healthy. As testosterone is an anabolic steroid, he had to prove to four separate governing bodies that he was not using the substance to gain an advantage.33 Any tests that we do should be sensitive to the health of the athlete; to focus on the substances themselves is dogmatic. Not only this, but health testing can help to mitigate the dangers inherent in sport. For many athletes, sport is not safe enough without drugs. If they suffer from asthma, high blood pressure, or cardiac arrhythmia, sport places their bodies under unique stresses, which raise the likelihood of a chronic or catastrophic harm. 
For example, between 1985 and 1995, at least 121 US athletes collapsed and died directly after or during a training session or competition—most often because they had hypertrophic cardiomyopathy or heart malformations.34 The relatively high incidence of sudden cardiac death in young athletes has prompted the American Heart Association to recommend that all athletes undergo cardiac screening before being allowed to train or compete.35 Sometimes, the treatments for these conditions will raise the performance of an athlete beyond that which they could attain naturally. But safety should come first. If an archer requires β blockers to treat heart disease, we should not be concerned that this will give him or her an advantage over other archers. Or if an anaemic cyclist wants to take EPO, we should be most concerned with the treatment of the anaemia. If we are serious about safety in sport, we should also be prepared to discuss changes to the rules and equipment involved in sports which are themselves inherently dangerous. Formula One motor racing, once the most deadly of sports, has not seen a driver death in over six years, largely because of radical changes in the safety engineering of the tracks and the cars. Meanwhile, professional boxing remains inherently dangerous; David Rickman died during a bout in March 2004, even though he passed a physical examination the day before.36 ## CHILDREN Linford Christie, who served a two year drug ban from athletics competition, said that athletics “is so corrupt now I wouldn’t want my child doing it”.37 But apart from the moral harms to children in competing in a corrupt sport, should we withhold them from professional sport for medical reasons? The case where the athletes are too young to be fully autonomous is different for two important reasons. Firstly, children are much less capable of rejecting training methods and treatments that their coach wishes to use. Secondly, we think it is worth protecting the range of future options open to a child. There is a serious ethical problem with allowing children to make any kind of choice that substantially closes off their options for future lifestyles and career choices. If we do not consider children competent for the purposes of allowing them to make choices that cause them harm, then we should not allow them to decide to direct all of their time to professional gymnastics at age 10. The modifications such a choice can make to a child’s upbringing are as serious, and potentially as harmful, as many of the available performance enhancing drugs. Children who enter elite sport miss large parts of the education and socialisation that their peers receive, and are submitted to intense psychological pressure at an age when they are ill equipped to deal with it. We argue that it is clear that children, who are not empowered to refuse harmful drugs, should not be given them by their coaches or parents. But the same principles that make this point obvious should also make it obvious that these children should not be involved in elite competitive sport in the first place. However, if children are allowed to train as professional athletes, then they should be allowed to take the same drugs, provided that they are no more dangerous than their training is. Haugen’s model showed that one of the biggest problems in fighting drug use was that the size of the rewards for winning could never be overshadowed by the penalties for being caught. 
With this in mind, we can begin to protect children by banning them from professional sport. ## CLIMATE OF CHEATING If we compare the medical harms of the entire worldwide doping problem, they would have to be much less than the worldwide harms stemming from civilian illicit drug use. And yet, per drug user, the amount of money spent on combating drugs in sport outweighs the amount spent on combating civilian drug use by orders of magnitude. We can fairly assume that if medical harms and adherence to law were the only reasons we felt compelled to eradicate doping, then the monetary value we placed on cleaning up sport should be the same, per drug user, as the monetary value we place on eradicating recreational drug use. And yet it is not. Because of this, it should be obvious that it is not medical harms that we think are primarily at stake, but harm to sport as a whole, a purported violation of its spirit. It is a problem for the credibility of elite sport, if everyone is cheating. If it is this climate of cheating that is our primary concern, then we should aim to draft sporting rules to which athletes are willing to adhere. ## PROHIBITION It is one thing to argue that banning performance enhancing drugs has not been successful, or even that it will never be successful. But it should also be noted that the prohibition of a substance that is already in demand carries its own intrinsic harms. The Prohibition of Alcohol in America during the 1920s led to a change in drinking habits that actually increased consumption. Driven from public bars, people began to drink at home, where the alcohol was more readily available, and the incidence of deaths due to alcoholism rose or remained stable, while they dropped widely around the world in countries without prohibition.38 Furthermore, as the quality of the alcohol was unregulated, the incidence of death from poisoned alcohol rose fourfold in five years.39 Even when prohibition leads to a decrease in consumption, it often leads to the creation of a black market to supply the continuing demand, as it did in the Greenland study of alcohol rationing.40 Black markets supply a product that is by definition unregulated, meaning that the use is unregulated and the safety of the product is questionable. The direct risks from prohibiting performance enhancing drugs in sport are similar, but probably much more pronounced. Athletes currently administer performance enhancing substances in doses that are commensurate with the amount of performance gain they wish to attain, rather than the dose that can be considered “safe”. The athletic elite have near unlimited funds and the goal of near unlimited performance, a framework that results in the use of extremely unsafe doses. If athletes are excluded when their bodies are unsafe for competition, this kind of direct consequence from prohibition would be reduced. ## THE PROBLEM OF STRICT LIABILITY Lord Coe, a dual Olympic champion, has defended the doctrine of “strict liability”, as it is currently applied to athletes who use a banned substance:41 “…The rule of strict liability—under which athletes have to be solely and legally responsible for what they consume—must remain supreme. 
We cannot, without blinding reason and cause, move one millimetre from strict liability—if we do, the battle to save sport is lost.” The best reason for adhering to this rule is that, if coaches were made responsible for drugs that they had given to their athletes, then the coach would be banned or fined, and the athlete could still win the event. In this situation, other athletes would still be forced to take drugs in order to be competitive, even though the “cheat” had been caught. But the doctrine of strict liability makes victims of athletes such as those of the East German swim team, who are competing in good faith but have been forced to take drugs. It also seems dogmatically punitive for athletes like British skier Alain Baxter, who accidentally inhaled a banned stimulant when he used the American version of a Vicks decongestant inhaler, without realising that it differed from the British model.42 It seems that strict liability is unfair to athletes, but its absence is equally unfair. Our proposal solves this paradox—when we exclude athletes only on the basis of whether they are healthy enough to compete, the question of responsibility and liability becomes irrelevant. Accidental or unwitting consumption of a risky drug is still risky; the issue of good faith is irrelevant.

## ALTERNATIVE STRATEGIES

Michael Ashenden43 proposes that we keep progressive logs of each athlete’s PCV and hormone concentrations. Significant deviations from the expected value would require follow up testing. The Italian Cycling Federation decided in 2000 that all juniors would be tested to provide a baseline PCV and given a “Hematologic Passport”. Although this strategy is in many ways preferable to the prohibition of doping, it does nothing to correct the dangers facing an athlete who has an unsafe baseline PCV or testosterone concentration.

## TEST FOR HEALTH, NOT DRUGS

The welfare of the athlete must be our primary concern. If a drug does not expose an athlete to excessive risk, we should allow it even if it enhances performance. We have two choices: to vainly try to turn the clock back, or to rethink who we are and what sport is, and to make a new 21st century Olympics. Not a super-Olympics but a more human Olympics. Our crusade against drugs in sport has failed. Rather than fearing drugs in sport, we should embrace them. In 1998, the president of the International Olympic Committee, Juan-Antonio Samaranch, suggested that athletes be allowed to use non-harmful performance enhancing drugs.44 This view makes sense only if, by not using drugs, we are assured that athletes are not being harmed. Performance enhancement is not against the spirit of sport; it is the spirit of sport. To choose to be better is to be human. Athletes should be given this choice. Their welfare should be paramount. But taking drugs is not necessarily cheating. The legalisation of drugs in sport may be fairer and safer.

• This article has been reproduced in Dutch in Geneeskunde en Sport with an editorial comment; the Dutch translation and editorial are available there as pdfs.
## Footnotes

• An earlier, abridged version of this piece was published as “Good sport, bad sport” in
# Illustration: Energies of an Electron with Antisymmetric Wave Function in the Finite Potential Box (Graph)

Sharing and adapting of the illustration is allowed with indication of the link to the illustration.

Graph of the following transcendental equation: $$-\cot\left( \frac{L}{2}\sqrt{\frac{2m}{\hbar^2} \, W^-} \right) ~=~ \sqrt{\frac{V_0}{W^-} ~-~ 1}$$ Here, the right and left sides of the equation were plotted (slightly rescaled) separately as functions of $$W^-$$. This equation was obtained by solving the Schrödinger equation for an electron in a finite potential well described by an antisymmetric wave function; $$W^-$$ corresponds to the energy of the electron. The intersection $$W^-_1$$ is the only allowed energy of the electron inside the potential box: if the electron is in an antisymmetric state, it can occupy only this one energy level.
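A minimal numerical sketch of how the intersection $$W^-_1$$ in the graph can be located. The well width, depth, and particle mass below are example values assumed for illustration (they are not stated in the original), chosen so that exactly one antisymmetric solution lies inside the bracket passed to the root finder.

```python
import numpy as np
from scipy.optimize import brentq

# Assumed example parameters: an electron in a well of width L = 1 nm and depth V0 = 1 eV.
hbar = 1.054571817e-34    # J*s
m    = 9.1093837015e-31   # electron mass, kg
eV   = 1.602176634e-19    # J
L    = 1.0e-9             # m
V0   = 1.0 * eV           # J

def lhs(W):
    """Left-hand side: -cot((L/2) * sqrt(2 m W / hbar^2))."""
    x = 0.5 * L * np.sqrt(2.0 * m * W) / hbar
    return -1.0 / np.tan(x)

def rhs(W):
    """Right-hand side: sqrt(V0 / W - 1)."""
    return np.sqrt(V0 / W - 1.0)

# The antisymmetric bound-state energy W1 is where the two curves intersect.
W1 = brentq(lambda W: lhs(W) - rhs(W), 0.40 * V0, 0.999 * V0)
print(f"W1 = {W1 / eV:.3f} eV")   # roughly 0.7 eV for these example parameters
```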
OBJECTIVE—Understanding how individuals weigh the quality of life associated with complications and treatments is important in assessing the economic value of diabetes care and may provide insight into treatment adherence. We quantify patients’ utilities (a measure of preference) for the full array of diabetes-related complications and treatments. RESEARCH DESIGN AND METHODS—We conducted interviews with a multiethnic sample of 701 adult patients living with diabetes who were attending Chicago area clinics. We elicited utilities (ratings on a 0–1 scale, where 0 represents death and 1 represents perfect health) for hypothetical health states by using time-tradeoff questions. We evaluated 9 complication states (e.g., diabetic retinopathy and blindness) and 10 treatment states (e.g., intensive glucose control vs. conventional glucose control and comprehensive diabetes care [i.e., intensive control of multiple risk factors]). RESULTS—End-stage complications had lower mean utilities than intermediate complications (e.g., blindness 0.38 [SD 0.35] vs. retinopathy 0.53 [0.36], P < 0.01), and end-stage complications had the lowest ratings among all health states. Intensive treatments had lower mean utilities than conventional treatments (e.g., intensive glucose control 0.67 [0.34] vs. conventional glucose control 0.76 [0.31], P < 0.01), and the lowest rated treatment state was comprehensive diabetes care (0.64 [0.34]). Patients rated comprehensive treatment states similarly to intermediate complication states. CONCLUSIONS—End-stage complications have the greatest perceived burden on quality of life; however, comprehensive diabetes treatments also have significant negative quality-of-life effects. Acknowledging these effects of diabetes care will be important for future economic evaluations of novel drug combination therapies and innovations in drug delivery. Diabetes significantly increases an individual's risk of developing multiple microvascular and cardiovascular complications, and the risk of these complications can be significantly reduced with intensive and comprehensive diabetes care (1). Current recommendations for the ideal risk factor targets (e.g., A1C <7%) and specific therapies (e.g., prophylactic aspirin) for diabetes care reflect the findings of multiple clinical trials (24). Although intensive and comprehensive diabetes care may generate significant health benefits, the current level of adoption of comprehensive diabetes care is incomplete. Quality-of-care studies indicate that there has been a steady rise in the proportion of patients taking beneficial medications such as aspirin and that there have been reductions in the proportion of patients with poor risk factor control (5). At the same time, large proportions of patients continue to have poor glycemic (20%), blood pressure (33%), and cholesterol control (40%) (5). These ongoing deficiencies have led to a large public investment in diabetes quality improvement programs (6). The success of these quality improvement efforts depends, in part, on whether or not patients are willing to take the multiple medications that comprise comprehensive diabetes care. Patients’ willingness to adopt this care is likely to be determined, in part, by their perceptions of the relative quality-of-life effects of complications and treatments (7,8). These perceptions are also critical for economic evaluations of quality improvement efforts and treatment innovations. 
The development of combination drugs such as the polypill, a proposed treatment combining an aspirin, a diuretic, an ACE inhibitor, a β-blocker, folic acid, and a statin, is motivated by the desire to simplify the treatment experience (9). Novel insulin delivery methods are intended to eliminate the discomfort associated with insulin injections (10). Whether these innovations will prove to be economically valuable depends on accurately accounting for the adverse quality-of-life effects of treatments and their downstream effects. Quality-of-life effects are reflected in medical cost-effectiveness analyses (CEAs) using quality-of-life weights called utilities. Utilities are quantitative measures of preference on a 0–1 scale, where 0 represents death and 1 represents life in perfect health (11). Despite the importance of understanding the utilities for treatment and complication health states related to diabetes care, there have been no systematic efforts to directly elicit utilities for the full array of complications and treatments that patients may experience. As a result, important complication and treatment states have never been accounted for in prior CEAs, (12). The utilities for several intermediate microvascular complication states (e.g., diabetic neuropathy) are unknown. Accounting for the effects of these states may influence CEA results because the incidences of intermediate complications are high compared with those of end-stage complications (3). Even more striking is the lack of accounting for the quality-of-life effects of treatments. We have previously found that accounting for the quality-of-life effects of treatments can alter the conclusions of CEA for intensive glucose control, and this may prove to be the case for comprehensive diabetes care (13). Thus, we set out to systematically collect, describe, and compare patients’ utilities for the full range of complications and treatments related to diabetes. From May 2004 to May 2006, we conducted face-to-face interviews with patients without dementia who were aged ≥18 years, living with diabetes, and attending clinics affiliated with an academic medical center (University of Chicago, Chicago, IL) and physician offices affiliated with a suburban hospital (MacNeal Hospital, Berwyn, IL). Prospective subjects were initially identified through clinic scheduling software based on ICD-9 codes for diabetes (i.e., 250.xx). Randomly identified patients were sent study recruitment letters. Letters were followed by a telephone call. We performed a screening telephone mini-mental status examination and excluded patients with scores ≤17 (14). We successfully contacted 2,990 patients, and 2,398 of these patients were eligible for the study. A total of 910 patients (38% of eligible subjects) scheduled interviews, and 701 patients (29% of eligible subjects) completed interviews. The average of age of subjects who completed interviews did not differ from that of other eligible patients. Interviews took ∼1 h and were conducted by trained interviewers in English or Spanish. All Spanish interview materials were professionally translated and back translated. We elicited utilities using the time-tradeoff method (15). For each time-tradeoff elicitation, patients were given a description of a hypothetical health state and asked to consider life in that state. The text of all health state descriptions is included in an online appendix (available at http://dx.doi.org/10.2337/dc07-0499). 
The health state descriptions were based upon our prior study of diabetes-related health state utilities (13) and existing descriptions in the literature. Health state descriptions were reviewed with the clinical faculty at the University of Chicago and pilot tested with patients. During the time-tradeoff elicitation, patients were asked to give their preference for 10 years in the health state of interest and a shorter period of time in perfect health. Using the ping-pong method, patients were asked a series of iterative questions where the time in perfect health was systematically altered by yearly increments and questioning was stopped, when the patient was indifferent between a given time choice. The point at which the patient was indifferent between the time choices was used to calculate the utility score (e.g., if 6 years of life in perfect health = 10 years with an amputation, the utility = 0.60). To minimize the effects of order response bias, the order of utility assessments was randomly allocated along two dimensions of the health states: 1) complication states versus treatment states and 2) severe/intensive states versus intermediate/conventional states. The descriptions of several complication health states were based on previous descriptions of life with complications found in the utility literature (blindness [16], diabetic retinopathy [symptomatic] [16], end-stage renal disease on hemodialysis [17], amputation [18], and major and minor stroke [19]). When such descriptions were not available we developed health state descriptions based on clinical experience and from published descriptions of life with such complications (angina-stage II Canadian Heart Association [20], diabetic neuropathy [symptomatic] [18], and diabetic nephropathy [21]). For each treatment state, we described the daily experience of treatments, the laboratory testing associated with treatments, and the likelihood of side effects. Patients were asked not to consider long-term effects of treatments on complications but to focus on the daily quality-of-life effects of treatments. We based our description of intensive and conventional glucose control on the treatment protocols and patient experiences of the U.K. Prospective Diabetes Study (UKPDS) (3). With intensive glucose control, patients were told that they would be more likely to be given multiple oral agents and insulin, that the frequency of major hypoglycemic episodes would be higher, and that the need for self-glucose of monitoring would be greater to achieve A1C <7% in comparison with conventional glucose control (A1C = 7.9%). Similarly, we used the UKPDS blood pressure trial protocols as the basis for descriptions of intensive and conventional blood pressure control (2). Patients were told that with intensive blood pressure control they would be more likely to be given three to four blood pressure agents compared with conventional blood pressure control. Descriptions for the remaining treatment states were based on data from the medical literature (e.g., aspirin [22] and cholesterol-lowering medication [23]). We also queried patients about their perceptions of quality of life with comprehensive diabetes care, which we described as the combination of cholesterol-lowering medication, aspirin, intensive blood pressure control, intensive glucose control, diet, and exercise. This combination represented care that was both comprehensive in breadth but also intensive in terms of risk factor goals. 
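As a toy illustration of the time-tradeoff scoring described above (the health states and responses below are invented for illustration and are not study data), the implied utility is simply the indifference time divided by the 10-year horizon:

```python
def time_tradeoff_utility(indifference_years, horizon_years=10.0):
    """Utility implied by a time-tradeoff response: the respondent is indifferent
    between `horizon_years` in the health state and `indifference_years` in
    perfect health (0 = as bad as death, 1 = perfect health)."""
    return indifference_years / horizon_years

# Example from the text: indifferent between 6 years in perfect health
# and 10 years living with an amputation.
print(time_tradeoff_utility(6))   # 0.6

# Invented responses, for illustration only.
responses = {"blindness": 4, "retinopathy": 5, "intensive glucose control": 7}
utilities = {state: time_tradeoff_utility(years) for state, years in responses.items()}
print(utilities)   # {'blindness': 0.4, 'retinopathy': 0.5, 'intensive glucose control': 0.7}
```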
We also asked patients about a state we called the comprehensive care with polypill state. This state was identical to the comprehensive diabetes care state except that the number of pills taken per day was reduced by the use of the polypill. After utility elicitation, patients were asked about their overall health status, current medications, relationship with their physician, beliefs regarding medications, and willingness to take more medications. Medical records were abstracted for data on current medications, comorbidities (Charlson comorbidity index [24]), and risk factor levels. We performed a 10% rereview and found moderate to excellent agreement among abstractors. The intraclass correlation coefficient for A1C was 0.92. κ statistics for the presence of complications ranged from 0.59 to 0.79. ### Statistical analysis All analyses were performed using SAS statistical software (release 8.1; SAS Institute, Cary, NC). We describe the distribution of utilities using the mean, median, mode, SD, skewness, and kurtosis provide graphical illustration of the distributions of utility scores. Paired t tests were used to compare multiple health state utilities ascertained from the same individuals. Wilcoxon's rank-sum tests were used for comparisons of utilities across subgroups. The mean age of subjects was 63 years; 42% were men, 38% were black, and 24% were Latino (Table 1). The mean duration of diabetes was 9.9 years and the mean Charlson comorbidity index was 2.64 (24). Of the patients, 23% had experienced a microvascular complication, and 30% reported having cardiovascular complications. In comparison with nationally reported risk factors levels, study subjects had lower mean glucose and cholesterol levels but similar blood pressure levels (5). The majority (61%) used oral diabetes medications alone, 25% used insulin as part of their therapy, and 14% used no medications for glucose control. ### Patient utilities for diabetes-related complications Among the complication state utilities, each end-stage complication had a lower mean utility than its intermediate complication counterpart (e.g., major stroke 0.31 vs. minor stroke 0.70, P < 0.01) (Table 2). The complication state with the lowest mean utility was major stroke (0.31). Study patients rated complication utilities for, angina, diabetic neuropathy, and mild kidney disease similarly. In addition, diabetic retinopathy utilities were equivalent to amputation ratings. ### Patient utilities for diabetes-related treatments Each intensive treatment state had a lower mean utility than its conventional counterpart (e.g., intensive glucose control 0.67 vs. conventional glucose control 0.76, P < 0.01) (Table 3). The individual diabetes-related treatment with the lowest mean utility rating was intensive glucose control (0.67) and the comprehensive diabetes care treatment state was the lowest rated treatment state overall (0.64). The highest rated treatment states were life with diet and exercise therapy. The comprehensive care with polypill state had a mean utility (0.66) that was slightly higher but not significantly different from that of comprehensive diabetes care. Among treatment utilities, conventional glucose control was rated similar to conventional blood pressure control, as was cholesterol-lowering medication and conventional blood pressure control. The intensive glucose control and comprehensive care with polypill states were rated equally, and diet therapy was equivalent to exercise therapy. 
### Comparisons of complication and treatment utilities Mean utilities for the comprehensive diabetes care and the comprehensive care with polypill were not statistically different from the mean utilities for angina, diabetic neuropathy, and diabetic nephropathy (P > 0.04). The mean utility for intensive glucose control was not significantly different from that for diabetic neuropathy. All other comparisons were significantly different (P < 0.01). ### Heterogeneity of health state utilities Each health state had significant variation in scores as reflected in large SDs (0.23–0.36) and ranges of observed values (Tables 2 and 3). Many health state utility distributions had a trimodal distribution with variation in weighting 0, 0.5, and 1. For complication states, the end-stage complications had especially heavy left-sided tails near 0, indicated by a positive skewness value. Between 12 and 50% of patients were willing to give up 8 of 10 years in perfect health to avoid life with complications. For treatment states, the mode of all utility distributions was ≥0.95, and distributions tended to have a right-sided deviation, with less prominent left-sided tails, indicated by a negative skewness value. Between 10 and 18% of patients were willing to give up 8 of 10 years of life in perfect health to avoid life with treatments. ### Impact of experience on health state utilities Patients with existing complications had a general tendency to rate life with those complications higher than those without complications. This was only statistically significant for major stroke (0.42 vs. 0.31), diabetic neuropathy (0.70 vs. 0.64), and diabetic retinopathy (0.61 vs. 0.53). In a similar fashion, patients who were taking specific medications had a general tendency to give higher utilities for related treatment states than patients not taking those medications. This was only statistically significant for intensive glucose control (0.76 vs. 0.66) and aspirin (0.84 vs. 0.81). The overall hierarchy of health states was not different among patients with complications/medications and those without them. Patients with diabetes perceive significant differences in the quality-of-life effects of complications and treatments related to their condition. On average, patients rated life with complications, especially end-stage complications, as significantly lower than that of life with treatments. However, we also found that patients perceived comprehensive diabetes care as having significant negative effects on quality of life, and these effects were equivalent to life with several intermediate complications. This quality-of-life burden appeared to arise from the prospect of multiple daily insulin injections rather than the prospect of multiple oral agents. This is implied by the facts that the treatment states with the lowest ratings each included multiple daily injections of insulin and that the utilities for comprehensive diabetes care and comprehensive care with polypill were not significantly different. It is important to note that these differences in mean utilities are directly influenced by the heterogeneity in patient utilities and that this heterogeneity varied by complications and treatments. For complication states, it was common to see a heavy left-sided tail for end-stage complications. For treatment states, the majority of patients actually rated life with treatments as being close to perfect health, indicating that treatments were not burdensome. 
At the same time, an important minority of patients (10–18%) gave ratings indicating that they perceived life with treatments as being a significant burden on quality of life. Our observation that there is significant heterogeneity in patient treatment preferences highlights the importance of incorporating a shared decision-making approach into everyday diabetes care. Acknowledging individual patient treatment preferences may be one of the keys to translating findings from clinical trial populations to general patient populations (8). These utility values may be used in future cost-effectiveness analyses of diabetes care. This study provides directly elicited utilities from a single population of adult patients living with type 2 diabetes. It provides an additional source of utility data that may have particular advantages when one is comparing alternative diagnostic or treatment options (11). Indirect methods of utility elicitation (e.g., EuroQoL) (25,26) have a primary advantage of ease of administration; however, they may be relatively insensitive to important differences for particular treatment decisions. Directly eliciting utilities for specific health states provides a more theoretically sound (11) and sensitive approach to detecting differences in patients’ preferences regarding different health states. The primary limitation of direct elicitation methods is the challenge of collecting such data; however, this study was performed to overcome this limitation. This study also provides utilities for complications and treatments that have not been considered previously in analyses, and accounting for these utilities may shift the balance of CEA results (13). A major insight that has not been extensively studied in previous CEAs of chronic diseases is that any negative quality-of-life effect of treatment can outweigh its benefits over a population. Failure to acknowledge the quality-of-life effects of current treatments may lead to an overestimation of the benefits of ongoing quality improvement efforts and an underestimation of the benefits of treatment innovations (10). It is important to note that these utilities represent patient-derived utilities and that there may still be a need to collect these health state utilities from the general population to accurately reflect the societal perspective in base case CEAs (11). CEAs of diabetes care have tended to rely on utilities that are available in the literature, and these have tended to be patient derived (26). Several limitations of this study should be considered when these results are interpreted. The preferences of this particular patient population may not be representative of those of all patients living with diabetes. All of our patients had an established relationship with a provider, and they may represent a group of patients more adherent to treatment than those in the general population. However, our study sample is ethnically and economically diverse. Our results are also limited by the fact that the validity of utility measurements cannot be directly assessed because there is no gold standard for measuring preferences. However, our patient population had significant experience with the various described health states, the order of our utility results has face validity, and our complication utilities are similar to those collected by the time-tradeoff method (27). Another limitation of the study is that we did not formally assess the reliability of the utility ratings over time. 
Our comparisons of patients with and without experience of complications and medications provide some insight into how these utilities might change over time. Finally, our utility ratings are influenced by the specific descriptions of health states provided during the survey.

This study has important implications for current policies and programs that are designed to enhance the quality of chronic disease management. Many of these programs essentially encourage patients to add more medications to their treatment regimen. In the near future, the results of the Action to Control Cardiovascular Risk in Diabetes trial may actually lead to even lower risk factor goals that will require even greater use of medications to achieve them. Our study results show that taking multiple medications on a routine basis represents a significant burden for many patients. Our study helps elucidate which facets of medication taking concern patients and provides a starting point from which we can think about how to overcome these concerns with patients. Quality of life related to treatments is likely to improve if we can simplify or modify current treatments through treatment innovations. Without such technological innovations, we may still be able to allay patient concerns by educating patients very early in their disease about the true nature of optimal diabetes care, by incorporating their preferences into treatment decisions, and by acknowledging patient preferences and quality-of-life concerns in public health efforts to improve the quality of diabetes care.

Table 1—Demographics and clinical characteristics

| Characteristic | Value |
| --- | --- |
| Age (years) | 63 ± 14 |
| Male (%) | 42 |
| Race/ethnicity: African American | 268 (38) |
| Race/ethnicity: White | 215 (31) |
| Race/ethnicity: Latino | 164 (23) |
| Health insurance: Private (%) | 66 |
| Health insurance: Medicare (%) | 46 |
| Health insurance: Medicaid (%) | 18 |
| Annual income <$10,000 (%) | 20 |
| Annual income $10,000–25,000 (%) | 26 |
| Annual income $25,000–50,000 (%) | 34 |
| Annual income >$50,000 (%) | 21 |
| Duration of diabetes (years) | 9.9 ± 8.6 |
| Self-reported hypertension (%) | 74 |
| Self-reported hypercholesterolemia (%) | 65 |
| Self-reported eye disease (%) | 19 |
| Self-reported kidney disease (%) | 8 |
| Self-reported foot disease (peripheral neuropathy and amputation) (%) | 52 |
| Self-reported heart disease (%) | 30 |
| Self-reported stroke (%) | 11 |
| A1C (%) | 7.45 ± 1.62 |
| A1C <7% (%) | 47 |
| LDL cholesterol (mg/dl) | 97 ± 34 |
| LDL cholesterol <100 mg/dl (%) | 61 |
| Systolic blood pressure (mmHg) | 13 ± 18 |
| Systolic blood pressure <130 mmHg (%) | 42 |
| Diastolic blood pressure (mmHg) | 74 ± 11 |
| Diastolic blood pressure <80 mmHg (%) | 65 |
| Mean number of medications | 6 ± 4 |
| Mean number of glucose-lowering medications (chart report / interview report) | 2 ± 1 / 1 ± 1 |
| Mean number of diabetes-related medications, including blood pressure, cholesterol, and aspirin (chart report / interview report) | 4 ± 2 / 4 ± 2 |
| Glucose-lowering therapy: diet alone (chart report / interview report, %) | 14 / 19 |
| Glucose-lowering therapy: oral medications alone (chart report / interview report, %) | 61 / 58 |
| Glucose-lowering therapy: insulin and oral medications (chart report / interview report, %) | 11 / 10 |
| Glucose-lowering therapy: insulin alone (chart report / interview report, %) | 14 / 13 |
| Aspirin (chart report / interview report, %) | 38 / 40 |
| Cholesterol-lowering drug (chart report / interview report, %) | 61 / 57 |
| Blood pressure–lowering drug (chart report / interview report, %) | 77 / 73 |

Data are means ± SD, n (%), or %. n = 701.

Table 2—Complication utilities

| Complication | Mean | Median | Mode | SD | Skewness | Kurtosis |
| --- | --- | --- | --- | --- | --- | --- |
| Angina | 0.64 | 0.75 | 0.95 | 0.31 | −0.65 | −0.87 |
| Mild stroke | 0.70 | 0.85 | 0.95 | 0.31 | −0.99 | −0.36 |
| Major stroke | 0.31 | 0.26 | 0.05 | 0.31 | 0.90 | −0.46 |
| Diabetic neuropathy | 0.66 | 0.85 | 0.95 | 0.34 | −0.79 | −0.87 |
| Amputation | 0.55 | 0.55 | 0.95 | 0.36 | −0.25 | −1.46 |
| Diabetic retinopathy | 0.53 | 0.50 | 0.05 | 0.36 | −0.17 | −1.53 |
| Blindness | 0.38 | 0.35 | 0.05 | 0.35 | 0.49 | −1.26 |
| Diabetic nephropathy | 0.64 | 0.80 | 0.95 | 0.35 | −0.72 | −1.02 |
| End-stage renal disease | 0.35 | 0.25 | 0.05 | 0.33 | 0.66 | −1.03 |

Table 3—Treatment utilities

| Treatment | Mean | Median | Mode | SD | Skewness | Kurtosis |
| --- | --- | --- | --- | --- | --- | --- |
| Conventional glucose control | 0.76 | 0.95 | 0.95 | 0.31 | −1.46 | 0.68 |
| Intensive glucose control | 0.67 | 0.85 | 0.95 | 0.34 | −0.88 | −0.77 |
| Conventional blood pressure control | 0.77 | 0.95 | 0.95 | 0.30 | −1.52 | 0.88 |
| Intensive blood pressure control | 0.73 | 0.90 | 0.95 | 0.32 | −1.22 | 0.03 |
| Aspirin | 0.80 | 0.95 | 0.95 | 0.29 | −1.78 | 1.80 |
| Cholesterol-lowering drug | 0.78 | 0.95 | 0.95 | 0.29 | −1.60 | 1.19 |
| Comprehensive diabetes care | 0.64 | 0.75 | 0.95 | 0.34 | −0.67 | −1.03 |
| Comprehensive care with polypill | 0.66 | 0.85 | 0.95 | 0.34 | −0.81 | −0.83 |
| Diet | 0.88 | 0.95 | 1.0 | 0.24 | −2.67 | 6.17 |
| Exercise | 0.89 | 1.0 | 1.0 | 0.23 | −2.86 | 7.34 |

This study was supported by a National Institute on Aging Career Development Award (K23-AG021963 to E.S.H.), the National Institute of Diabetes and Digestive and Kidney Diseases Diabetes Research and Training Center (P60 DK20595 to S.E.S.B., E.S.H., and D.O.M.), a Centers for Disease Control and Prevention Potential Extramural Project (U36-CCU319276), and the Chicago Center of Excellence in Health Promotion Economics (P30-CD000147 to E.S.H. and D.O.M.).
We acknowledge the technical support of Nidhi Thakur. 1. Gaede P, Vedel P, Larsen N, Jensen GV, Parving HH, Pedersen O: Multifactorial intervention and cardiovascular disease in patients with type 2 diabetes. N Engl J Med 348 : 383 –393, 2003 2. UK Prospective Diabetes Study Group: Tight blood pressure control and risk of macrovascular and microvascular complications in type 2 diabetes: UKPDS 38. BMJ 317 : 703 –713, 1998 3. UK Prospective Diabetes Study Group: Intensive blood-glucose control with sulphonylureas or insulin compared with conventional treatment and risk of complications in patients with type 2 diabetes (UKPDS 33). Lancet 352 : 837 –853, 1998 4. Standards of medical care in diabetes—2007. Diabetes Care 30 (Suppl. 1) : S4 –S41, 2007 5. Saaddine JB, Cadwell B, Gregg EW, Engelgau MM, Vinicor F, Imperatore G, Narayan KM: Improvements in diabetes processes of care and intermediate outcomes: United States, 1988–2002. Ann Intern Med 144 : 465 –474, 2006 6. Steinbrook R: Facing the diabetes epidemic–mandatory reporting of glycosylated hemoglobin values in New York City. N Engl J Med 354 : 545 –548, 2006 7. Vijan S, Hayward RA, Ronis DL, Hofer TP: The burden of diabetes therapy: implications for the design of effective patient-centered treatment regimens. J Gen Intern Med 20 : 479 –482, 2005 8. UK: Prospective Diabetes Study Group: Quality of life in type 2 diabetic patients is affected by complications but not by intensive policies to improve blood glucose or blood pressure control (UKPDS 37). Diabetes Care 22 : 1125 –1136, 1999 9. Wald NJ, Law MR: A strategy to reduce cardiovascular disease by more than 80%. BMJ 326 : 1419 –1423, 2003 10. McMahon GT, Arky RA: Inhaled insulin for diabetes mellitus. N Engl J Med 356 : 497 –502, 2007 11. Gold MR, Patrick DL, Torrance GW, Fryback DG, Hadorn DC, Kamlet MS, Daniels N, Weinstein MC: Identifying and valuing outcomes. In Cost-Effectiveness in Health and Medicine , Gold MR, Siegel JE, Russell LB, Weinstein MC, Eds. New York, Oxford University Press, 1996 12. The CDC Diabetes Cost-Effectiveness Group: Cost-effectiveness of intensive glycemic control, intensified hypertension control, and serum cholesterol level reduction, for type 2 diabetes. JAMA 287 : 2542 –2551, 2002 13. Huang ES, Jin L, Shook M, Chin MH, Meltzer DO: The impact of patient preferences on the cost-effectiveness of intensive glucose control in older patients with new onset diabetes. Diabetes Care 29 : 259 –264, 2006 14. Folstein MF, Folstein SE, McHugh PR: “Mini-mental state”: a practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res 12 : 189 –198, 1975 15. Neumann PJ, Goldie SJ, Weinstein MC: Preference-based measures in economic evaluation in healthcare. Annu Rev Public Health 21 : 587 –611, 2000 16. Sharma S, Oliver-Fernandez A, Bakal J, Hollands H, Brown GC, Brown MM: Utilities associated with diabetic retinopathy: results from a Canadian sample. Br J Ophthalmol 87 : 259 –261, 2003 17. Churchill DN, Torrance GW, Taylor DW, Barnes CC, Ludwin DS, Shimizu A, Smith EKM: Measurement of quality of life in end-stage renal disease: the time trade-off approach. Clin Invest Med 10 : 14 –20, 1987 18. Eckman MH, Greenfield S, Mackey WC, Wong JB, Kaplan SH, Sullivan L, Dukes K, Pauker SG: Foot infections in diabetic patients: decision and cost-effectiveness analysis. JAMA 273 : 712 –720, 1995 19. 
Shin AY, Porter PJ, Wallace MC, Naglie G: Quality of life of stroke in younger individuals: utility assessment in patients with arteriovenous malformations. Stroke 28 : 2395 –2399, 1997 20. Campeau L: Grading of angina pectoris. Circulation 54 : 522 –523, 1976 21. American Diabetes Association: Diabetic nephropathy. Diabetes Care 26 : S94 –S98, 2003 22. Gage BF, Cardinalli AB, Owens DK: Cost-effectiveness of preference-based antithrombotic therapy for patients with nonvalvular atrial fibrillation. Stroke 29 : 1083 –1091, 1998 23. Downs JR, Oster G, Santanello NC, Air Force Coronary Atherosclerosis Prevention Study Research Group: HMG CoA reductase inhibitors and quality of life. JAMA 269 : 3107 –3108, 1993 24. Charlson ME, Pompei P, Ales KL, MacKenzie CR: A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis 40 : 373 –383, 1987 25. Clarke P, Gray A, Holman R: Estimating utility values for health states of type 2 diabetic patients using the EQ-5D (UKPDS 62). Med Decis Making 22 : 340 –349, 2002 26. Coffey JT, Brandle M, Zhou H, Marriott D, Burke R, Tabaei BP, Engelgau MM, Kaplan RM, Herman WH: Valuing health-related quality of life in diabetes. Diabetes Care 25 : 2238 –2243, 2002 27. Tengs TO, Wallace A: One thousand health-related quality-of-life estimates. Med Care 38 : 583 –637, 2000 Published ahead of print at http://care.diabetesjournals.org on 10 July 2007. DOI: 10.2337/dc07-0499.
# Office of Science Graduate Student Research (SCGSR) Program [Deadline to Apply]

### November 14, 2019

See the organizer's page for details. The goal of the Office of Science Graduate Student Research (SCGSR) program is to prepare graduate students for science, technology, engineering, or mathematics (STEM) careers critically important to the DOE Office of Science mission, by providing graduate thesis research opportunities at DOE laboratories. The SCGSR program provides supplemental awards to outstanding U.S. graduate students to pursue part of their graduate thesis research at a DOE laboratory/facility in areas that address scientific challenges central to the Office of Science mission. The research opportunity is expected to advance the graduate students' overall doctoral thesis while providing access to the expertise, resources, and capabilities available at the DOE laboratories/facilities.

The SCGSR program is sponsored and managed by the DOE Office of Science's Office of Workforce Development for Teachers and Scientists (WDTS), in collaboration with the six Office of Science research programs and the DOE national laboratories/facilities. Online application and awards administration support is provided by the Oak Ridge Institute for Science and Education (ORISE) under Oak Ridge Associated Universities (ORAU).

The SCGSR program provides supplemental funds for graduate awardees to conduct part of their thesis research at a host DOE laboratory/facility in collaboration with a DOE laboratory scientist within a defined award period. Collaborating DOE laboratory scientists may be from any of the participating DOE national laboratories/facilities. The award period for the proposed research project at DOE laboratories/facilities may range from 3 to 12 consecutive months.

View the Office of Science Priority Research Areas for the 2019 Solicitation 2.

## Details

Date: November 14, 2019
Website: https://science.osti.gov/wdts/scgsr/

## Organizer

Department of Energy: Office of Science
Website: https://science.osti.gov/
# Show these approximations of $\cos$, $\sin$ and $\tan$ are exact.

A while back I was looking for an approximation to $\cos(x)$ and I constructed a polynomial with zeros in the same places as the first few zeros of $\cos(x)$: $$c_n(x) = \frac{\prod_{i=1}^n (x-(i-\frac12)\pi) (x-(\frac12-i)\pi)}{\text{normalising constant}}$$ The normalising constant is chosen so that $c_n(0) = 1$. I wrote a program to determine whether this approximation was any good, and was astonished to discover that in the limit of large $n$ it is exact! At least to the numerical accuracy I was using.

I found the same thing for a similar approximation to $\sin(x)$: $$s_n(x) = \frac{x \prod_{i=1}^n (x-i\pi)(x+i\pi)}{\text{normalising constant}}$$ This time the normalising constant is chosen so that $s_n'(0) = 1$.

I also tried making an approximation to $\tan(x)$, by making a function that has poles in the same places as the first few poles of $\tan(x)$: $$t_n(x) = \sum_{i=1}^n \left(\frac{1}{x-(i-\frac12)\pi} + \frac{1}{x-(\frac12-i)\pi}\right)$$ Again I was surprised to discover that $\lim_{n\rightarrow\infty}t_n(x) = \tan(x)$ to numerical accuracy. I think it is also possible to make similar series for $\sec(x)$ etc.

It was several years before I finally learnt how to prove these identities. It required advanced calculus techniques such as residues, and honestly I don't think I could reproduce the proofs now. Moreover, the proofs were unsatisfactory, since each proof coped with a single case. I feel in my gut that something deeper is going on here. Why does every single dumb attempt work? Is it possible to prove a statement along the lines of "If two analytic functions have the same poles and zeros then they are equal"? (Obviously some details will need to be firmed up before this statement is true!)

Alternatively, is it possible to prove the identities by showing that the approximations satisfy the following defining characteristics of $\cos$, $\sin$ and $\tan$ (or some similar set): $$\frac{d}{dx}\lim_{n\rightarrow\infty}c_n(x) = -\lim_{n\rightarrow\infty}s_n(x)$$ $$\frac{d}{dx}\lim_{n\rightarrow\infty}s_n(x) = \lim_{n\rightarrow\infty}c_n(x)$$ $$\lim_{n\rightarrow\infty}t_n(x) = \frac{\lim_{n\rightarrow\infty}s_n(x)}{\lim_{n\rightarrow\infty}c_n(x)}$$ I would be grateful for any insight you can provide.

You are probably interested in the Weierstrass factorization theorem, which is the sort of characterization you have in mind. You can't characterize a holomorphic function only by its zeros (or zeros and poles in the case of a meromorphic function) since, for example, if $f$ is entire then $e^zf(z)$ is also, but you can come close.
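For reference, the limits that the truncated products and sums above converge to are the classical Euler–Weierstrass product and partial-fraction (Mittag-Leffler) expansions; writing them out makes the link to the Weierstrass factorization theorem explicit (signs and normalisations below follow the usual conventions, matching $c_n(0)=1$ and $s_n'(0)=1$):

$$\cos x = \prod_{k=1}^{\infty}\left(1-\frac{x^2}{(k-\frac12)^2\pi^2}\right), \qquad \sin x = x\prod_{k=1}^{\infty}\left(1-\frac{x^2}{k^2\pi^2}\right),$$

$$\tan x = \sum_{k=1}^{\infty}\frac{2x}{(k-\frac12)^2\pi^2 - x^2}.$$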
# The role of wave function in QED

1. Dec 26, 2006

### Quantum River

I am just learning QED and could not understand the role of the wave function. Is the basic equation in QED the Schrodinger equation? Is the difference between quantum mechanics and QED just that they have different Hamiltonians? I have tried to read the original paper of Tomonaga in the 1946 Progress of Theoretical Physics, but I just could not get the original paper. I have no access to Progress of Theoretical Physics. And this is the only paper that concerns the foundations of QED in my knowledge. Dyson's review, The Radiation Theories of Tomonaga, Schwinger, and Feynman, gives a rather short description of the "Outline of the Theoretical Foundations" (Dyson's words). Could anyone recommend me some papers in this direction?

What I am confused about is the role of the wave function in QED. In QM, the wave function means the probability distribution in space (Born)? Then what concept or quantity in QFT means such a thing (a probability distribution in space) or corresponds to the wave function? Is there some correspondence principle between QM and QED? Could the basic QED equation reduce to the QM Schrodinger equation or Dirac equation under some conditions? For example, in the calculation of the Lamb shift, what is the use of the wave function (1s state of hydrogen)? It seems the Lamb shift has some connection with the wave function value at the r = 0 point. Look at Baranger, Bethe, and Feynman's calculation (Phys. Rev., Vol. 92, No. 2, p. 482). They use the wave function phi(r=0) to calculate the Lamb shift. But there is no wave function (of course no 1s wave function) in quantum field theory? When calculating scattering, it is easier to understand the role of QFT, but when considering bound states, I just could not understand how to calculate the bound-state wave function from QFT. Quantum River

Last edited: Dec 26, 2006

2. Dec 26, 2006

### Demystifier

To understand the difference between two different quantum theories one should first understand the difference between the corresponding classical theories. While QM is a theory of particles, QED is a theory of fields. If you understand the difference between particles and fields in classical mechanics, the difference in the quantum case is essentially the same. However, in reality, things are not that simple. QED is not only a theory of fields, it is also a theory of particles (photons and electrons). It does not have an analog in the classical case. The correspondence between particles and fields is essentially the correspondence between wave functions and field operators. Wave functions are certain matrix elements of the field operators. For textbooks where this is explained to some extent see the textbooks of Ryder or Schweber. You may also see http://xxx.lanl.gov/abs/quant-ph/0609163

3. Dec 26, 2006

### Quantum River

But in quantum field theory, particle and field are one thing. The electron is a particle in classical physics and quantum mechanics (Schrodinger equation), but it is a field in quantum field theory. On first thought, QM corresponds to classical mechanics (such as Newtonian physics), while quantum field theory corresponds to classical fields (such as the Maxwellian electromagnetic field). But in quantum field theory we not only quantize the electromagnetic field, but also the electron field (does it exist?). I think the first thought is somewhat naive. "Wave functions are certain matrix elements of the field operators." Could you explain the sentence above more concretely?

4.
Dec 27, 2006

### marlon

That's hardly a decent answer to the OP's question. There is far more going on and you are missing out on the most essential part: how particles are connected to quantum fields in QFT.

TO THE OP: In QFT (e.g., QED), particles arise due to vibrations of the quantum fields. Think of a mattress (i.e., the quantum field) on which "YOU" jump in one place. The surface of the mattress vibrates and, due to this vibration, there is energy coming free from the mattress (i.e., the vibrational energy). If you take into account that energy is the same as mass (E=mc^2) you can see how the vibration of a quantum field can mimic a certain particle with mass m and momentum p. All particles (photons, electrons, etc.) are born this way. For example, suppose we have two electrons (which are vibrations of a quantum field themselves) "sitting" on the mattress. The two electrons cause the mattress to vibrate and the resulting vibrations give us the energy for a particle (a photon, for example), as explained before. This is how two electrons interact with each other via a photon. Keep in mind that I am giving a simplified picture here with some loss of accuracy (like the necessary conditions for the quantum fields and conservation laws). But essentially, in QFT, we have vibrating quantum fields that give us the particles we need. Such quantum fields are written in terms of creation and annihilation operators that act on a quantum state.

Finally, let's compare to QM. The main difference between QFT and QM lies in the basic ingredients: 1) In QM the basic ingredients are wavefunctions. 2) In QFT the basic ingredients are quantum fields. In QFT, the "old" QM wavefunction is now the quantum field written in terms of creation and annihilation operators. In other words, the QM wavefunction has become a field operator in QFT. regards marlon

Last edited: Dec 27, 2006

5. Dec 27, 2006

### hellfire

You can indeed define a "wavefunction" in QFT proceeding in a similar way as for (non-relativistic) QM. In QM the variables that define the configuration space are the positions. The wavefunction is a function of the positions in configuration space. In QFT the variables that define the configuration space are the fields. You can define a functional of the fields in configuration space. This "wavefunctional" would obey a Schrödinger-type equation and would be analogous to the wavefunction in QM. Take a look at Section 5.2 of these lecture notes. What you cannot do in QFT, as far as I know, is define a wavefunction for a particle as in QM. The expression for a localized single particle $\phi(x) \vert 0 \rangle$ "looks like" the expansion in terms of momenta of the eigenstate of position; however, the product of two of them $\langle 0 \vert \phi(x) \phi(y) \vert 0 \rangle$ is not equal to zero (it is equal to the propagator), which would be required in order to define a position basis and a wavefunction as such.

Last edited: Dec 27, 2006

6. Dec 27, 2006

### vanesch Staff Emeritus

This is entirely correct. Only, the Schroedinger equation in QFT is quite a complicated beast, as it is an equation for a functional (and not a partial differential equation of a wavefunction over a finite-dimensional configuration space, as in QM). So nobody actually knows how to deal with it directly. Hence, the Schroedinger equation doesn't get much attention in QFT, as we don't know how to use it.
In fact, the only thing we can do with it is to write down the Schroedinger equation in integrated form, and even in an approximated way, and then even only for the case of t = - infinity to t = + infinity. The result of this is the S-matrix, which is nothing else but the solution to the Schroedinger equation for an initial condition at t = - infinity, taken at the point t = + infinity: its elements are the individual integrals of the Schroedinger equation for specific initial conditions ("incoming particles"). The only relationship we have is that of the Born rule for the final solution of the Schroedinger equation (t = + infinity) when we have initial conditions at t = - infinity. These are hence the squares of the elements of the S-matrix, and when properly put into context, they give you the cross sections for certain reactions. You can use the superposition principle in QFT just as in QM. If you have the solutions (the matrix elements of the S-matrix) for plane waves, you know that the solution to an initial state which is a superposition of plane waves will be the superposition of the corresponding final states. What is usually done (as far as I know, I'm no expert) is to start with a specific bound state (usually obtained in NR QM), then use the superposition principle to "transfer" this to a superposition of final states, and eventually "project" this on other bound states (using the Born rule again, in a basis of bound states). This gives you then the transition probability from the initial bound state to the final bound state.

7. Dec 27, 2006

### Demystifier

http://xxx.lanl.gov/abs/quant-ph/0609163 Also, read Secs. VIII and IX in it, as well as the post of vanesch above.

8. Dec 28, 2006

### reilly

It's important to recognize that any quantum field theory in use is equivalent to a standard quantum theory -- that is, anything that is characterized in terms of particle creation and destruction operators can be reformulated in the ordinary QM use of Fock space. This is very apparent in non-relativistic many-body theory, and is discussed in any number of publications. Note also that particle creation and destruction can occur via transitions between subspaces with different numbers of particles. With the imposition of relativity, life gets more difficult. While we regularly use a momentum operator in QFT, we do not use a position operator. Rather, we use the space-time parametrization (x,y,z,t) as if the spatial coordinates are equivalent to position eigenstates. But we do use wave functions in momentum space, and consider their Fourier transforms as spatial wavefunctions. Yes, we use wavefunctions and state vectors in relativistic QFT -- that's how we compute scattering cross sections, for example. Regards Reilly Atkinson

9. Dec 28, 2006

### CarlB

I agree completely and think that this is very important. I have a couple QFT books that hint at this, but I have no textbooks or other references that spell it out. Do you know of any other sources?

10. Dec 29, 2006

### Truth Finder

11. Dec 30, 2006

### vanesch Staff Emeritus

12. Jan 1, 2007

### Truth Finder

Wrong!!!!!! These equations have been absolutely believed since Feynman......... Thanks, to revise your knowledge about QFTs. Thanks ........... and Brgrds. Nuclear Scientist!!!!!!!

13. Jan 3, 2007

### dextercioby

The eqns need to be checked, as they are written in noncovariant language. But the assertion "Quantum electrodynamics is a generalization of quantum mechanics to include special relativity" is definitely wrong. Daniel.

14.
Jan 3, 2007

### Truth Finder

Dexter, You are right. Dirac included SR in QM. The other 2 equations included field quantization (maybe due to quantization of the source charges). You can check them as much as you can. But a question arises: What is the original? Is it this form or the other or another one "if there is no equivalence"? By the way, I was just asking about the meaning of its implication for Dirac's equation for a nonconservative field. But I am sure that they are right and already believed. I already studied this long before and discussed it with many professors. And I think that it is time now for me to understand its meaning and how the QED founders made it like this. Schwartz Vandslire ------------------------------------------------ Either to work correctly as required, or to leave it.

15. Jan 3, 2007

### Truth Finder

Of course, Dexter, the Lagrangian form and the Hamiltonian form are different faces of the same theory; QED. Schwartz Vandslire ------------------------------------------------ Either to work correctly as required, or to leave it.

16. Jan 3, 2007

### Amr Morsi

T. Finder, That's right, these equations are absolutely right. But the problem is that some scientists now are talking about the origin of field quantization being due to the probability of the photon itself. What do you think? No doubt, the wave functions in QFTs are those of the charges (and particles in general). But isn't it strange to have Dirac's equation put explicitly like this (in its general form)? Got me! I think, as you said below, this is a very good starting point to reach the whole picture, sir. But isn't it somewhat strange that QM and QFTs in general don't speak about individual events, but about a number of same-system events? Got you! Yaaiih!!!! Engineer\ Amr Morsi.

17. Jan 4, 2007

### Truth Finder

OOPS!! The issue of quantum field theories is a puzzle for me till now. But it is quite apparent that there are some differences in scientists' complete consideration of it. However, many of them are talking about the theory as a whole, but still away from the details. Why not, when it was not until Born that the right meaning of the wave function was explained...... I think you pointed to that in one of your previous answers. The last fact is absolutely right. It cannot describe only one trial. But this doesn't have to do with the uncertainty principle? Does it! Scientists are investigating this problem in the meantime. But we cannot deny that quantum mechanics still gets more exact solutions? But it still can't be applied to one experiment! Schwartz Vandslire ------------------------------------------------ Either to work correctly as required, or to leave it.

18. Jan 5, 2007

### Anonym

CarlB: "I agree completely and think that this is very important. I have a couple QFT books that hint at this, but I have no textbooks or other references that spell it out. Do you know of any other sources?" I think that one may be helpful too: Hermann Hesse's book "The Glass Bead Game".

19. Jan 5, 2007

### reilly

Once again, I appeal to history and note that this issue, that QED or QFT can be written in terms of ordinary (relativistic) QM, was pretty much recognized some 80 years ago. First, recall that Heisenberg's matrix solution to the harmonic oscillator problem used so-called step operators, now generally called creation and destruction operators. The rest is history. I presume that Dirac can be cited as a reasonable authority in this matter. His groundbreaking paper on QED (1927) basically uses wavefunctions.
Many papers on QED and QFT prior to WWII used wavefunctions, albeit in creative ways. See, for example, Weisskopf's 1939 paper on the electron's self-energy. (Many of the important papers in QED, from inception to renormalization, can be found in Schwinger's Dover Books collection, Quantum Electrodynamics.) (I'm no expert, but I suspect wave functions may be used in stat mech problems in which the chemical potential is required.) When I taught this stuff, I used a class to discuss this very issue, just to show that there are many ways to frame a problem, but that some are more equal than others. Regards. Reilly Atkinson

20. Jan 5, 2007

### Haelfix

There is far more to field theory than wavefunctions, so I'm of the view that the Dirac equation and, say, the relativistic generalization of the Schrodinger equation are *not* correct. For instance, they miss radiative corrections like the Lamb shift, which only field theory will see. Of course they are retrieved in suitable limits or regimes where such things are not important, and for instance in looking at leading-order terms in the dynamics. Indeed there are many examples where we have absolutely no idea what the wavefunction even *is*, yet we can still calculate most quantities of interest. Take QCD as an example.
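For reference, the recurring statement in this thread that "wave functions are certain matrix elements of the field operators" can be written in one line for a free scalar field (a standard textbook identity, not a claim made by the posters above, and stated here only up to normalization conventions): with $\hat{\phi}(\mathbf{x},t)$ the field operator, $|0\rangle$ the vacuum, and $|\Psi\rangle$ a one-particle state, the position-space wave function can be identified as

$$\psi_\Psi(\mathbf{x},t) = \langle 0\,|\,\hat{\phi}(\mathbf{x},t)\,|\,\Psi\rangle,$$

which for a momentum eigenstate $|\Psi\rangle = |\mathbf{p}\rangle$ reduces to a plane wave proportional to $e^{i(\mathbf{p}\cdot\mathbf{x} - E_{\mathbf{p}}t)}$.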
## Linear Neural Networks The linear networks discussed in this section are similar to the perceptron, but their transfer function is linear rather than hard-limiting. This allows their outputs to take on any value, whereas the perceptron output is limited to either 0 or 1. Linear networks, like the perceptron, can only solve linearly separable problems. Here you design a linear network that, when presented with a set of given input vectors, produces outputs of corresponding target vectors. For each input vector, you can calculate the network's output vector. The difference between an output vector and its target vector is the error. You would like to find values for the network weights and biases such that the sum of the squares of the errors is minimized or below a specific value. This problem is manageable because linear systems have a single error minimum. In most cases, you can calculate a linear network directly, such that its error is a minimum for the given input vectors and target vectors. In other cases, numerical problems prohibit direct calculation. Fortunately, you can always train the network to have a minimum error by using the least mean squares (Widrow-Hoff) algorithm. This section introduces `linearlayer`, a function that creates a linear layer, and `newlind`, a function that designs a linear layer for a specific purpose. ### Neuron Model A linear neuron with R inputs is shown below. This network has the same basic structure as the perceptron. The only difference is that the linear neuron uses a linear transfer function `purelin`. The linear transfer function calculates the neuron's output by simply returning the value passed to it. `$\alpha =purelin\left(n\right)=purelin\left(Wp+b\right)=Wp+b$` This neuron can be trained to learn an affine function of its inputs, or to find a linear approximation to a nonlinear function. A linear network cannot, of course, be made to perform a nonlinear computation. ### Network Architecture The linear network shown below has one layer of S neurons connected to R inputs through a matrix of weights W. Note that the figure on the right defines an S-length output vector a. A single-layer linear network is shown. However, this network is just as capable as multilayer linear networks. For every multilayer linear network, there is an equivalent single-layer linear network. #### Create a Linear Neuron (linearlayer) Consider a single linear neuron with two inputs. The following figure shows the diagram for this network. The weight matrix W in this case has only one row. The network output is `$\alpha =purelin\left(n\right)=purelin\left(Wp+b\right)=Wp+b$` or `$\alpha ={w}_{1,1}{p}_{1}+{w}_{1,2}{p}_{2}+b$` Like the perceptron, the linear network has a decision boundary that is determined by the input vectors for which the net input n is zero. For n = 0 the equation Wp + b = 0 specifies such a decision boundary, as shown below (adapted with thanks from [HDB96]). Input vectors in the upper right gray area lead to an output greater than 0. Input vectors in the lower left white area lead to an output less than 0. Thus, the linear network can be used to classify objects into two categories. However, it can classify in this way only if the objects are linearly separable. Thus, the linear network has the same limitation as the perceptron. You can create this network using `linearlayer`, and configure its dimensions with two values so the input has two elements and the output has one. 
```net = linearlayer; net = configure(net,[0;0],0); ``` The network weights and biases are set to zero by default. You can see the current values with the commands ```W = net.IW{1,1} W = 0 0 ``` and ```b= net.b{1} b = 0 ``` However, you can give the weights any values that you want, such as 2 and 3, respectively, with ```net.IW{1,1} = [2 3]; W = net.IW{1,1} W = 2 3 ``` You can set and check the bias in the same way. ```net.b{1} = [-4]; b = net.b{1} b = -4 ``` You can simulate the linear network for a particular input vector. Try ```p = [5;6]; ``` You can find the network output with the function `sim`. ```a = net(p) a = 24 ``` To summarize, you can create a linear network with `linearlayer`, adjust its elements as you want, and simulate it with `sim`. ### Least Mean Square Error Like the perceptron learning rule, the least mean square error (LMS) algorithm is an example of supervised training, in which the learning rule is provided with a set of examples of desired network behavior: `$\left\{{p}_{1},{t}_{1}\right\},\left\{{p}_{2},{t}_{2}\right\},\dots \left\{{p}_{Q},{t}_{Q}\right\}$` Here pq is an input to the network, and tq is the corresponding target output. As each input is applied to the network, the network output is compared to the target. The error is calculated as the difference between the target output and the network output. The goal is to minimize the average of the sum of these errors. `$mse=\frac{1}{Q}\sum _{k=1}^{Q}e{\left(k\right)}^{2}=\frac{1}{Q}\sum _{k=1}^{Q}{\left(t\left(k\right)-\alpha \left(k\right)\right)}^{2}$` The LMS algorithm adjusts the weights and biases of the linear network so as to minimize this mean square error. Fortunately, the mean square error performance index for the linear network is a quadratic function. Thus, the performance index will either have one global minimum, a weak minimum, or no minimum, depending on the characteristics of the input vectors. Specifically, the characteristics of the input vectors determine whether or not a unique solution exists. ### Linear System Design (newlind) Unlike most other network architectures, linear networks can be designed directly if input/target vector pairs are known. You can obtain specific network values for weights and biases to minimize the mean square error by using the function `newlind`. Suppose that the inputs and targets are ```P = [1 2 3]; T= [2.0 4.1 5.9]; ``` Now you can design a network. ```net = newlind(P,T); ``` You can simulate the network behavior to check that the design was done properly. ```Y = net(P) Y = 2.0500 4.0000 5.9500 ``` Note that the network outputs are quite close to the desired targets. You might try Pattern Association Showing Error Surface. It shows error surfaces for a particular problem, illustrates the design, and plots the designed solution. You can also use the function `newlind` to design linear networks having delays in the input. Such networks are discussed in Linear Networks with Delays. First, however, delays must be discussed. ### Linear Networks with Delays #### Tapped Delay Line You need a new component, the tapped delay line, to make full use of the linear network. Such a delay line is shown below. There the input signal enters from the left and passes through N-1 delays. The output of the tapped delay line (TDL) is an N-dimensional vector, made up of the input signal at the current time, the previous input signal, etc. #### Linear Filter You can combine a tapped delay line with a linear network to create the linear filter shown. 
The output of the filter is given by `$\alpha \left(k\right)=purelin\left(Wp+b\right)=\sum _{i=1}^{R}{w}_{1,i}p\left(k-i+1\right)+b$` The network shown is referred to in the digital signal processing field as a finite impulse response (FIR) filter [WiSt85]. Look at the code used to generate and simulate such a network. Suppose that you want a linear layer that outputs the sequence `T`, given the sequence `P` and two initial input delay states `Pi`.

```
P = {1 2 1 3 3 2};
Pi = {1 3};
T = {5 6 4 20 7 8};
```

You can use `newlind` to design a network with delays to give the appropriate outputs for the inputs. The delay initial outputs are supplied as a third argument, as shown below.

```
net = newlind(P,T,Pi);
```

You can obtain the output of the designed network with

```
Y = net(P,Pi)
```

to give

```
Y = [2.7297] [10.5405] [5.0090] [14.9550] [10.7838] [5.9820]
```

As you can see, the network outputs are not exactly equal to the targets, but they are close and the mean square error is minimized.

### LMS Algorithm (learnwh)

The LMS algorithm, or Widrow-Hoff learning algorithm, is based on an approximate steepest descent procedure. Here again, linear networks are trained on examples of correct behavior. Widrow and Hoff had the insight that they could estimate the mean square error by using the squared error at each iteration. If you take the partial derivative of the squared error with respect to the weights and biases at the kth iteration, you have `$\frac{\partial {e}^{2}\left(k\right)}{\partial {w}_{1,j}}=2e\left(k\right)\frac{\partial e\left(k\right)}{\partial {w}_{1,j}}$` for j = 1,2,…,R and `$\frac{\partial {e}^{2}\left(k\right)}{\partial b}=2e\left(k\right)\frac{\partial e\left(k\right)}{\partial b}$` Next look at the partial derivative with respect to the error. `$\frac{\partial e\left(k\right)}{\partial {w}_{1,j}}=\frac{\partial \left[t\left(k\right)-\alpha \left(k\right)\right]}{\partial {w}_{1,j}}=\frac{\partial }{\partial {w}_{1,j}}\left[t\left(k\right)-\left(Wp\left(k\right)+b\right)\right]$` or `$\frac{\partial e\left(k\right)}{\partial {w}_{1,j}}=\frac{\partial }{\partial {w}_{1,j}}\left[t\left(k\right)-\left(\sum _{i=1}^{R}{w}_{1,i}{p}_{i}\left(k\right)+b\right)\right]$` Here pi(k) is the ith element of the input vector at the kth iteration. This can be simplified to `$\frac{\partial e\left(k\right)}{\partial {w}_{1,j}}=-{p}_{j}\left(k\right)$` and `$\frac{\partial e\left(k\right)}{\partial b}=-1$` Finally, the changes to the weight matrix and the bias will be 2αe(k)p(k) and 2αe(k), respectively. These two equations form the basis of the Widrow-Hoff (LMS) learning algorithm. These results can be extended to the case of multiple neurons, and written in matrix form as `$\begin{array}{l}W\left(k+1\right)=W\left(k\right)+2\alpha e\left(k\right){p}^{T}\left(k\right)\\ b\left(k+1\right)=b\left(k\right)+2\alpha e\left(k\right)\end{array}$` Here the error e and the bias b are vectors, and α is a learning rate. If α is large, learning occurs quickly, but if it is too large it can lead to instability and errors might even increase. To ensure stable learning, the learning rate must be less than the reciprocal of the largest eigenvalue of the correlation matrix pᵀp of the input vectors. Fortunately, there is a toolbox function, `learnwh`, that does all the calculation for you. It calculates the change in weights as

```
dw = lr*e*p'
```

and the bias change as

```
db = lr*e
```

The constant 2, shown a few lines above, has been absorbed into the code learning rate `lr`.
The function `maxlinlr` calculates this maximum stable learning rate `lr` as 0.999 * `P'`*`P`. Type `help learnwh` and `help maxlinlr` for more details about these two functions. ### Linear Classification (train) Linear networks can be trained to perform linear classification with the function `train`. This function applies each vector of a set of input vectors and calculates the network weight and bias increments due to each of the inputs according to `learnp`. Then the network is adjusted with the sum of all these corrections. Each pass through the input vectors is called an epoch. This contrasts with `adapt` which adjusts weights for each input vector as it is presented. Finally, `train` applies the inputs to the new network, calculates the outputs, compares them to the associated targets, and calculates a mean square error. If the error goal is met, or if the maximum number of epochs is reached, the training is stopped, and `train` returns the new network and a training record. Otherwise `train` goes through another epoch. Fortunately, the LMS algorithm converges when this procedure is executed. A simple problem illustrates this procedure. Consider the linear network introduced earlier. Suppose you have the following classification problem. `$\left\{{p}_{1}=\left[\begin{array}{l}2\\ 2\end{array}\right],{t}_{1}=0\right\}\left\{{p}_{2}=\left[\begin{array}{c}1\\ -2\end{array}\right],{t}_{2}=1\right\}\left\{{p}_{3}=\left[\begin{array}{c}-2\\ 2\end{array}\right],{t}_{3}=0\right\}\left\{{p}_{4}=\left[\begin{array}{c}-1\\ 1\end{array}\right],{t}_{4}=1\right\}$` Here there are four input vectors, and you want a network that produces the output corresponding to each input vector when that vector is presented. Use `train` to get the weights and biases for a network that produces the correct targets for each input vector. The initial weights and bias for the new network are 0 by default. Set the error goal to 0.1 rather than accept its default of 0. ```P = [2 1 -2 -1;2 -2 2 1]; T = [0 1 0 1]; net = linearlayer; net.trainParam.goal= 0.1; net = train(net,P,T); ``` The problem runs for 64 epochs, achieving a mean square error of 0.0999. The new weights and bias are ```weights = net.iw{1,1} weights = -0.0615 -0.2194 bias = net.b(1) bias = [0.5899] ``` You can simulate the new network as shown below. ```A = net(P) A = 0.0282 0.9672 0.2741 0.4320 ``` You can also calculate the error. ```err = T - sim(net,P) err = -0.0282 0.0328 -0.2741 0.5680 ``` Note that the targets are not realized exactly. The problem would have run longer in an attempt to get perfect results had a smaller error goal been chosen, but in this problem it is not possible to obtain a goal of 0. The network is limited in its capability. See Limitations and Cautions for examples of various limitations. This example program, Training a Linear Neuron, shows the training of a linear neuron and plots the weight trajectory and error during training. You might also try running the example program `nnd10lc`. It addresses a classic and historically interesting problem, shows how a network can be trained to classify various patterns, and shows how the trained network responds when noisy patterns are presented. ### Limitations and Cautions Linear networks can only learn linear relationships between input and output vectors. Thus, they cannot find solutions to some problems. However, even if a perfect solution does not exist, the linear network will minimize the sum of squared errors if the learning rate `lr` is sufficiently small. 
The network will find as close a solution as is possible given the linear nature of the network's architecture. This property holds because the error surface of a linear network is a multidimensional parabola. Because parabolas have only one minimum, a gradient descent algorithm (such as the LMS rule) must produce a solution at that minimum. Linear networks have various other limitations. Some of them are discussed below. #### Overdetermined Systems Consider an overdetermined system. Suppose that you have a network to be trained with four one-element input vectors and four targets. A perfect solution to wp + b = t for each of the inputs might not exist, for there are four constraining equations, and only one weight and one bias to adjust. However, the LMS rule still minimizes the error. You might try Linear Fit of Nonlinear Problem to see how this is done. #### Underdetermined Systems Consider a single linear neuron with one input. This time, in Underdetermined Problem, train it on only one one-element input vector and its one-element target vector: ```P = [1.0]; T = [0.5]; ``` Note that while there is only one constraint arising from the single input/target pair, there are two variables, the weight and the bias. Having more variables than constraints results in an underdetermined problem with an infinite number of solutions. You can try Underdetermined Problem to explore this topic. #### Linearly Dependent Vectors Normally it is a straightforward job to determine whether or not a linear network can solve a problem. Commonly, if a linear network has at least as many degrees of freedom (S *R + S = number of weights and biases) as constraints (Q = pairs of input/target vectors), then the network can solve the problem. This is true except when the input vectors are linearly dependent and they are applied to a network without biases. In this case, as shown with the example Linearly Dependent Problem, the network cannot solve the problem with zero error. You might want to try Linearly Dependent Problem. #### Too Large a Learning Rate You can always train a linear network with the Widrow-Hoff rule to find the minimum error solution for its weights and biases, as long as the learning rate is small enough. Example Too Large a Learning Rate shows what happens when a neuron with one input and a bias is trained with a learning rate larger than that recommended by `maxlinlr`. The network is trained with two different learning rates to show the results of using too large a learning rate.
# Subcritical approximations to stochastic defocusing mass-critical nonlinear Schrödinger equation on $\mathbb{R}$

Research paper by Chenjie Fan, Weijun Xu

Indexed on: 22 Oct '18. Published on: 22 Oct '18. Published in: arXiv - Mathematics - Analysis of PDEs

#### Abstract

We show robustness of various truncated and subcritical approximations to the stochastic defocusing mass-critical nonlinear Schrödinger equation (NLS) in dimension $d=1$, whose solution was constructed in [FX18] with one particular such approximation. The key ingredient in the proof is a uniform bound of the solutions to the family of deterministic mass-subcritical defocusing NLS.
# Whitening-Free Least-Squares Non-Gaussian Component Analysis

Hiroaki Shiino, Hiroaki Sasaki, Gang Niu, and Masashi Sugiyama

Yahoo Japan Corporation, Kioi Tower, 1-3 Kioicho, Chiyoda-ku, Tokyo 102-8282, Japan. Nara Institute of Science and Technology, 8916-5 Takayama-cho, Ikoma, Nara 630-0192, Japan. The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa-shi, Chiba 277-8561, Japan. RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan.

###### Abstract

Non-Gaussian component analysis (NGCA) is an unsupervised linear dimension reduction method that extracts low-dimensional non-Gaussian “signals” from high-dimensional data contaminated with Gaussian noise. NGCA can be regarded as a generalization of projection pursuit (PP) and independent component analysis (ICA) to multi-dimensional and dependent non-Gaussian components. Indeed, seminal approaches to NGCA are based on PP and ICA. Recently, a novel NGCA approach called least-squares NGCA (LSNGCA) has been developed, which gives a solution analytically through least-squares estimation of log-density gradients and eigendecomposition. However, since pre-whitening of data is involved in LSNGCA, it performs unreliably when the data covariance matrix is ill-conditioned, which is often the case in high-dimensional data analysis. In this paper, we propose a whitening-free variant of LSNGCA and experimentally demonstrate its superiority.

###### Keywords: non-Gaussian component analysis, dimension reduction, unsupervised learning

## 1 Introduction

Dimension reduction is a common technique in high-dimensional data analysis to mitigate the curse of dimensionality [1]. Among various approaches to dimension reduction, we focus on unsupervised linear dimension reduction in this paper. It is known that the distribution of randomly projected data is close to Gaussian [2]. Based on this observation, non-Gaussian component analysis (NGCA) [3] tries to find a subspace that contains non-Gaussian signal components so that Gaussian noise components can be projected out. NGCA is formulated in an elegant semi-parametric framework and non-Gaussian components can be extracted without specifying their distributions. Mathematically, NGCA can be regarded as a generalization of projection pursuit (PP) [4] and independent component analysis (ICA) [5] to multi-dimensional and dependent non-Gaussian components.
The first NGCA algorithm is called multi-index PP (MIPP). PP algorithms such as FastICA [5] use a non-Gaussian index function (NGIF) to find either a super-Gaussian or sub-Gaussian component. MIPP uses a family of such NGIFs to find multiple non-Gaussian components and applies principal component analysis (PCA) to extract a non-Gaussian subspace. However, MIPP requires us to prepare appropriate NGIFs, which is not necessarily straightforward in practice. Furthermore, MIPP requires pre-whitening of data, which can be unreliable when the data covariance matrix is ill-conditioned. To cope with these problems, MIPP has been extended in various ways. The method called iterative metric adaptation for radial kernel functions (IMAK) [6] tries to avoid the manual design of NGIFs by learning the NGIFs from data in the form of radial kernel functions. However, this learning part is computationally highly expensive and pre-whitening is still necessary. Sparse NGCA (SNGCA) [7, 2] tries to avoid pre-whitening by imposing an appropriate constraint so that the solution is independent of the data covariance matrix. However, SNGCA involves semi-definite programming, which is computationally highly demanding, and NGIFs still need to be manually designed.

Recently, a novel approach to NGCA called least-squares NGCA (LSNGCA) has been proposed [8]. Based on the gradient of the log-density function, LSNGCA constructs a vector that belongs to the non-Gaussian subspace from each sample. Then the method of least-squares log-density gradients (LSLDG) [9, 10] is employed to directly estimate the log-density gradient without density estimation. Finally, the principal subspace of the set of vectors generated from all samples is extracted by eigendecomposition. LSNGCA is computationally efficient and no manual design of NGIFs is involved. However, it still requires pre-whitening of data. The existing NGCA methods reviewed above are summarized in Table 1.

In this paper, we propose a novel NGCA method that is computationally efficient, involves no manual design of NGIFs, and requires no pre-whitening. Our proposed method is essentially an extension of LSNGCA in which the covariance of the data is implicitly handled without explicit pre-whitening or explicit constraints. Through experiments, we demonstrate that our proposed method, called whitening-free LSNGCA (WF-LSNGCA), performs very well even when the data covariance matrix is ill-conditioned.

## 2 Non-Gaussian Component Analysis

In this section, we formulate the problem of NGCA and review the MIPP and LSNGCA methods.

### 2.1 Problem Formulation

Suppose that we are given a set of $d$-dimensional i.i.d. samples of size $n$, $\{x_i\}_{i=1}^{n}$, which are generated by the following model:

$$x_i = A s_i + n_i, \qquad (1)$$

where $s_i$ ($i = 1, \dots, n$) is an $m$-dimensional signal vector independently generated from an unknown non-Gaussian distribution (we assume that $m$ is known), $n_i$ is a noise vector independently generated from a centered Gaussian distribution with an unknown covariance matrix $Q$, and $A$ is an unknown $d \times m$ mixing matrix of rank $m$. Under this data generative model, the probability density function that the samples follow can be expressed in the following semi-parametric form [3]:

$$p(x) = f(B^{\top} x)\,\phi_Q(x), \qquad (2)$$

where $f$ is an unknown smooth positive function on $\mathbb{R}^{m}$, $B$ is an unknown $d \times m$ linear mapping, $\phi_Q$ is the centered Gaussian density with the covariance matrix $Q$, and $\top$ denotes the transpose. We note that decomposition (2) is not unique; multiple combinations of $f$ and $B$ can give the same probability density function.
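To make the generative model (1) concrete, here is a small NumPy sketch that draws synthetic data from it. The dimensions, the mixing matrix, and the uniform signal distribution are arbitrary illustrative choices, not the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, m = 1000, 10, 2            # sample size, data dimension, signal dimension (illustrative)
A = rng.normal(size=(d, m))       # unknown mixing matrix of rank m

# Non-Gaussian signal: uniform on a square (any non-Gaussian distribution would do)
s = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(n, m))

# Gaussian noise with an ill-conditioned covariance matrix Q
Q = np.diag(np.linspace(1.0, 100.0, d))
noise = rng.multivariate_normal(np.zeros(d), Q, size=n)

x = s @ A.T + noise               # rows are the samples x_i = A s_i + n_i
print(x.shape)                    # (1000, 10)
```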
Nevertheless, the following $m$-dimensional subspace $E$, called the non-Gaussian index space, can be determined uniquely [11]:

$$E = \mathrm{Null}(B^{\top})^{\perp} = \mathrm{Range}(B), \qquad (3)$$

where $\mathrm{Null}(B^{\top})$ denotes the null space of $B^{\top}$, $\perp$ denotes the orthogonal complement, and $\mathrm{Range}(B)$ denotes the column space of $B$. The goal of NGCA is to estimate the non-Gaussian index space $E$ from the samples $\{x_i\}_{i=1}^{n}$.

### 2.2 Multi-Index Projection Pursuit (MIPP)

MIPP [3] is the first algorithm of NGCA. Let us whiten the samples so that their covariance matrix becomes the identity: $y_i := \Sigma^{-1/2} x_i$, where $\Sigma$ is the covariance matrix of $x$. In practice, $\Sigma$ is replaced by the sample covariance matrix. Then, for an NGIF $h$, the following vector was shown to belong to the non-Gaussian index space [3]:

$$\beta(h) := \mathbb{E}\big[y\,h(y) - \nabla_y h(y)\big],$$

where $\nabla_y$ denotes the differential operator w.r.t. $y$ and $\mathbb{E}$ denotes the expectation over $y$. MIPP generates a set of such vectors from various NGIFs $h_l$:

$$\hat{\beta}_l := \frac{1}{n}\sum_{i=1}^{n}\big[y_i h_l(y_i) - \nabla_y h_l(y_i)\big], \qquad (4)$$

where the expectation is estimated by the sample average. Then $\hat{\beta}_l$ is normalized as

$$\hat{\beta}_l \leftarrow \hat{\beta}_l \Big/ \sqrt{\tfrac{1}{n}\sum_{i=1}^{n}\big\|y_i h_l(y_i) - \nabla_y h_l(y_i)\big\|^2 - \|\hat{\beta}_l\|^2}, \qquad (5)$$

by which the norm of $\hat{\beta}_l$ is proportional to its signal-to-noise ratio. Then vectors with norm less than a pre-specified threshold are eliminated. Finally, PCA is applied to the remaining vectors to obtain an estimate of the non-Gaussian index space $E$.

The behavior of MIPP strongly depends on the choice of the NGIFs. To improve the performance, MIPP actively searches for informative NGIFs as follows. First, the form of $h$ is restricted to $h(y) = s(w^{\top} y)$, where $w$ denotes a unit-norm vector and $s$ is a smooth real function. Then the estimated vector is written as

$$\hat{\beta} = \frac{1}{n}\sum_{i=1}^{n}\big(y_i s(w^{\top} y_i) - s'(w^{\top} y_i)\,w\big),$$

where $s'$ is the derivative of $s$. This equation is actually equivalent to a single iteration of the PP algorithm called FastICA [12]. Based on this fact, the parameter $w$ is optimized by iteratively applying the following update rule until convergence:

$$w \leftarrow \frac{\sum_{i=1}^{n}\big(y_i s(w^{\top} y_i) - s'(w^{\top} y_i)\,w\big)}{\big\|\sum_{i=1}^{n}\big(y_i s(w^{\top} y_i) - s'(w^{\top} y_i)\,w\big)\big\|}.$$

The performance of MIPP has been investigated both theoretically and experimentally [3]. However, MIPP has the weaknesses that the NGIFs must be manually designed and pre-whitening is necessary.

### 2.3 Least-Squares Non-Gaussian Component Analysis (LSNGCA)

LSNGCA [8] is a recently proposed NGCA algorithm that does not require manual design of NGIFs (Table 1). Here the algorithm of LSNGCA is reviewed; it will be used to develop a new method in the next section.

##### Derivation:

For whitened samples $y_i := \Sigma^{-1/2} x_i$, the semi-parametric form of NGCA given in Eq.(2) can be simplified as

$$p(y) = \tilde{f}(\tilde{B}^{\top} y)\,\phi_{I_d}(y), \qquad (6)$$

where $\tilde{f}$ is an unknown smooth positive function on $\mathbb{R}^{m}$ and $\tilde{B}$ is an unknown linear mapping. Under this simplified semi-parametric form, the non-Gaussian index space can be represented as $E = \Sigma^{-1/2}\,\mathrm{Range}(\tilde{B})$. Taking the logarithm and differentiating both sides of Eq.(6) w.r.t. $y$ yield

$$\nabla_y \ln p(y) + y = \tilde{B}\,\nabla_{\tilde{B}^{\top} y} \ln \tilde{f}(\tilde{B}^{\top} y), \qquad (7)$$

where $\nabla_{\tilde{B}^{\top} y}$ denotes the differential operator w.r.t. $\tilde{B}^{\top} y$. This implies that $u(y) := \nabla_y \ln p(y) + y$ belongs to $\mathrm{Range}(\tilde{B})$. Then applying eigendecomposition to the second-moment matrix of the vectors $u(y_i)$ and extracting the leading eigenvectors allow us to recover $\mathrm{Range}(\tilde{B})$, and hence $E$. In LSNGCA, the method of least-squares log-density gradients (LSLDG) [9, 10] is used to estimate the log-density gradient included in $u(y)$, which is briefly reviewed below.

##### LSLDG:

Let $\partial_j$ denote the differential operator w.r.t. the $j$-th element of $y$. LSLDG fits a model $g^{(j)}$ to $\partial_j \ln p(y)$, the $j$-th element of the log-density gradient, under the squared loss:

$$J(g^{(j)}) := \mathbb{E}\big[\big(g^{(j)}(y) - \partial_j \ln p(y)\big)^2\big] - \mathbb{E}\big[\big(\partial_j \ln p(y)\big)^2\big] = \mathbb{E}\big[g^{(j)}(y)^2\big] - 2\,\mathbb{E}\big[g^{(j)}(y)\,\partial_j \ln p(y)\big].$$
(8) The second term in Eq.(8) yields E[g(j)(y)∂jlnp(y)] =∫g(j)(y)(∂jlnp(y))p(y)dy=∫g(j)(y)∂jp(y)dy where the second-last equation follows from integration by parts under the assumption . Then sample approximation yields J(g(j)) =E[g(j)(y)2−2∂jg(j)(y)]≈1nn∑i=1[g(j)(yi)2+2∂jg(j)(yi)]. (9) As a model of the log-density gradient, LSLDG uses a linear-in-parameter form: g(j)(y)=b∑k=1θk,jψk,j(y)=θ⊤jψj(y), (10) where denotes the number of basis functions, is a parameter vector to be estimated, and is a basis function vector. The parameter vector is learned by solving the following regularized empirical optimization problem: ˆθj=argminθj[θ⊤jˆGjθj+2θ⊤jˆhj+λj∥θj∥2], where is the regularization parameter, ˆGj =1nn∑i=1ψj(yi)ψj(yi)⊤,  ˆhj=1nn∑i=1∂jψj(yi). This optimization problem can be analytically solved as ˆθj=−(ˆGj+λjIb)−1ˆhj, where is the -by- identity matrix. Finally, an estimator of the log-density gradient is obtained as ˆg(j)(y)=ˆθ⊤jψj(y). All tuning parameters such as the regularization parameter and parameters included in the basis function can be systematically chosen based on cross-validation w.r.t. Eq.(9). ## 3 Whitening-Free LSNGCA In this section, we propose a novel NGCA algorithm that does not involve pre-whitening. A pseudo-code of the proposed method, which we call whitening-free LSNGCA (WF-LSNGCA), is summarized in Algorithm 1. ### 3.1 Derivation Unlike LSNGCA which used the simplified semi-parametric form (6), we directly use the original semi-parametric form (2) without whitening. Taking the logarithm and differentiating the both sides of Eq.(2) w.r.t.  yield ∇xlnp(x)+Q−1x=B∇B⊤xlnf(B⊤x), (11) where denotes the derivative w.r.t.  and denotes the derivative w.r.t. . Further taking the derivative of Eq.(11) w.r.t.  yields Q−1 =−∇2xlnp(x)+B∇2B⊤xlnf(B⊤x)B⊤, (12) where denotes the second derivative w.r.t. . Substituting Eq.(12) back into Eq.(11) yields ∇xlnp(x)−∇2xlnp(x)x=B(∇B⊤xlnf(B⊤x)−∇2B⊤xlnf(B⊤x)B⊤x). (13) This implies that v(x):=∇xlnp(x)−∇2xlnp(x)x belongs to the non-Gaussian index space . Then we apply eigendecomposition to and extract the leading eigenvectors as an orthonormal basis of non-Gaussian index space . Now the remaining task is to approximate from data, which is discussed below. ### 3.2 Estimation of v(x) Let be the -th element of : v(j)(x)=∂jlnp(x)−(∇x∂jlnp(x))⊤x. To estimate , let us fit a model to it under the squared loss: R(w(j)) :=E[(w(j)(x)−v(j)(x))2]−E[v(j)(x)2] :=E[w(j)(x)2]−2E[w(j)(x)v(j)(x)] :=E[w(j)(x)2]−2E[w(j)(x)∂jlnp(x)]+2E[w(j)(x)(∇x∂jlnp(x))⊤x]. (14) The second term in Eq.(14) yields E[w(j)(x)∂jlnp(x)] =∫w(j)(x)(∂jlnp(x))p(x)dx=∫w(j)(x)∂jp(x)dx where the second-last equation follows from integration by parts under the assumption . included in the third term in Eq.(14) may be replaced with the LSLDG estimator reviewed in Section 2.3. Note that the LSLDG estimator is obtained with non-whitened data in this method. Then we have R(w(j)) ≈E[w(j)(x)2+2∂jw(j)(x)+2w(j)(x)(∇xˆg(j)(x))⊤x] (15) ≈1nn∑i=1[w(j)(xi)2+2∂jw(j)(xi)+2w(j)(xi)(∇xˆg(j)(xi))⊤xi]. Here, let us employ the following linear-in-parameter model as : w(j)(x):=t∑k=1αk,jφk,j(x)=α⊤jφj(x), (16) where denotes the number of basis functions, is a parameter vector to be estimated, and is a basis function vector. The parameter vector is learned by minimizing the following regularized empirical optimization problem: ˆαj=argminαj[α⊤jˆSjαj+2α⊤jˆtj(x)+γj∥αj∥2], where is the regularization parameter, ˆSj =1nn∑i=1φj(xi)φj(xi)⊤, ˆtj =1nn∑i=1(∂jφj(xi)+φj(xi)(∇xˆg(j)(xi))⊤xi). 
This optimization problem can be analytically solved as ˆαj=−(ˆSj+γjIb)−1ˆtj. Finally, an estimator of is obtained as ˆv(j)(x)=ˆα⊤jφj(x). All tuning parameters such as the regularization parameter and parameters included in the basis function can be systematically chosen based on cross-validation w.r.t. Eq.(15). ### 3.3 Theoretical Analysis Here, we investigate the convergence rate of WF-LSNGCA in a parametric setting. Let be the optimal estimate to given by LSLDG based on the linear-in-parameter model , and let S∗j =E[φj(x)φj(x)⊤],  t∗j=E[∂jφj(x)+φj(x)(∇xg∗(j)(x))⊤x], α∗j =argminα{α⊤S∗jα+2α⊤t∗j+γ∗jα⊤α},  w∗(j)(x)=α∗⊤jφj(x), where must be strictly positive definite. In fact, should already be strictly positive definite, and thus is also allowed in our theoretical analysis. We have the following theorem (its proof is given in Section 3.4): ###### Theorem 3.1. As , for any , ∥ˆv(x)−w∗(x)∥2=Op(n−1/2), provided that for all converge in to , i.e., . Theorem 3.1 is based on the theory of perturbed optimizations [13, 14] as well as the convergence of LSLDG shown in [8]. It guarantees that for any , the estimate in WF-LSNGCA converges to the optimal estimate based on the linear-in-parameter model , and it achieves the optimal parametric convergence rate . Note that Theorem 3.1 deals only with the estimation error, and the approximation error is not taken into account. Indeed, approximation errors exist in two places, from to in WF-LSNGCA itself and from to in the plug-in LSLDG estimator. Since the original LSNGCA also relies on LSLDG, it cannot avoid the approximation error introduced by LSLDG. For this reason, the convergence of WF-LSNGCA is expected to be as good as LSNGCA. Theorem 3.1 is basically a theoretical guarantee that is similar to Part One in the proof of Theorem 1 in [8]. Hence, based on Theorem 3.1, we can go along the line of Part Two in the proof of Theorem 1 in [8] and obtain the following corollary. ###### Corollary 1. For eigendecomposition, define matrices and . Given the estimated subspace based on samples and the optimal estimated subspace based on infinite data, denote by the matrix form of an arbitrary orthonormal basis of and by that of . Define the distance between subspaces as D(ˆE,E∗)=infˆE,E∗∥ˆE−E∗∥Fro, where stands for the Frobenius norm. Then, as , D(ˆE,E∗)=Op(n−1/2), provided that for all converge in to and are well-chosen basis functions such that the first eigenvalues of are neither nor . ### 3.4 Proof of Theorem 3.1 ##### Step 1. First of all, we establish the growth condition (see Definition 6.1 in [14]). Denote the expected and empirical objective functions by R∗j(α) =α⊤S∗jα+2α⊤t∗j+γ∗jα⊤α, ˆRj(α) =α⊤ˆSjα+2α⊤ˆtj+γjα⊤α. Then , , and we have ###### Lemma 1. Let be the smallest eigenvalue of , then the following second-order growth condition holds R∗j(α)≥R∗j(α∗j)+ϵj∥α−α∗j∥22. ###### Proof. must be strongly convex with parameter at least . Hence, R∗j(α) ≥R∗j(α∗j)+(∇R∗j(α∗j))⊤(α−α∗j)+(α−α∗j)⊤(S∗j+γ∗jIb)(α−α∗j) ≥R∗j(α∗j)+ϵj∥α−α∗j∥22, where we used the optimality condition . ∎ ##### Step 2. Second, we study the stability (with respect to perturbation) of at . Let u={uS∈Sb+,ut∈Rb,uγ∈R} be a set of perturbation parameters, where is the cone of -by- symmetric positive semi-definite matrices. Define our perturbed objective function by Rj(α,u) =α⊤(S∗j+uS)α+2α⊤(t∗j+ut)+(γ∗j+uγ)α⊤α. It is clear that , and then the stability of at can be characterized as follows. ###### Lemma 2. 
The difference function is Lipschitz continuous in modulus ω(u)=O(∥uS∥Fro+∥ut∥2+|uγ|) on a sufficiently small neighborhood of . ###### Proof. The difference function is Rj(α,u)−R∗j(α)=α⊤uSα+2α⊤ut+uγα⊤α, with a partial gradient ∂∂α(Rj(α,u)−R∗j(α))=2uSα+2ut+2uγα. Notice that due to the -regularization in , such that . Now given a -ball of , i.e., , it is easy to see that , ∥α∥2≤∥α−α∗j∥2+∥α∗j∥2≤δ+M, and consequently ∥∥∥∂∂α(Rj(α,u)−R∗j(α))∥∥∥2≤2(δ+M)(∥uS∥Fro+|uγ|)+2∥ut∥2. This says that the gradient has a bounded norm of order , and proves that the difference function is Lipschitz continuous on the ball , with a Lipschitz constant of the same order. ∎ ##### Step 3. Lemma 1 ensures the unperturbed objective grows quickly when leaves ; Lemma 2 ensures the perturbed objective changes slowly for around , where the slowness is compared with the perturbation it suffers. Based on Lemma 1, Lemma 2, and Proposition 6.1 in [14], ∥ˆαj−α∗j∥2≤ω(u)ϵj=O(∥uS∥Fro+∥ut∥2+|uγ|), since is the exact solution to given , , and . According to the central limit theorem (CLT), . Consider : ˆtj−t∗j =1nn∑i=1∂jφj(xi)−E[∂jφj(x)]+1nn∑i=1φj(xi)(∇xˆg(j)(xi))⊤xi −E[φj(x)(∇xg∗(j)(x))⊤x]. The first half is clearly due to CLT. For the second half, the estimate given by LSLDG converges to for any in according to Part One in the proof of Theorem 1 in [8], and converges to in the same order because the basis functions in are all derivatives of Gaussian functions. Consequently, 1nn∑i=1φj(xi)(∇xˆg(j)(xi))⊤xi−1nn∑i=1φj(xi)(∇xg∗(j)(xi))⊤xi=Op(n−1/2), since converges to for any in , and 1nn∑i=1φj(xi)(∇xg∗(j)(xi))⊤xi−E[φj(x)(∇xg∗(j)(x))⊤x]=Op(n−1/2) due to CLT, which proves . Furthermore, we have already assumed that . Hence, as , ##### Step 4. Finally, for any , the gap of and is bounded by |ˆv(j)(x)−w∗(j)(x)|≤∥ˆαj−α∗j∥2⋅∥φj(x)∥2, where the Cauchy-Schwarz inequality is used. Since the basis functions in are again all derivatives of Gaussian functions, must be bounded uniformly, and then |ˆv(j)(x)−w∗(j)(x)|≤O(∥ˆαj−α∗j∥2)=Op(n−1/2). Applying the same argument for all completes the proof. ∎ ## 4 Experiments In this section, we experimentally investigate the performance of MIPP, LSNGCA, and WF-LSNGCA.111 The source code of the experiments is at https://github.com/hgeno/WFLSNGCA. ### 4.1 Configurations of NGCA Algorithms #### 4.1.1 Mipp We use the MATLAB script which was used in the original MIPP paper [3]. In this script, NGIFs of the form () are used: s1m(z) =z3exp(−z22σ2m),   s2m(z)=tanh(amz), s3m(z) =sin(bmz),   s4m(z)=cos(bmz), where , , and are scalars chosen at the regular intervals from , , and . The cut-off threshold is set at and the number of FastICA iterations is set at (see Section 2.2). #### 4.1.2 Lsngca Following [10], the derivative of the Gaussian kernel is used as the basis function in the linear-in-parameter model (10): ψk,j(y)=∂jexp(−∥y−ck∥22σ2j), where is the Gaussian bandwidth and is the Gaussian center randomly selected from the whitened data samples . The number of basis functions is set at . For model selection, -fold cross-validation is performed with respect to the hold-out error of Eq.(9) using candidate values at the regular intervals in logarithmic scale for Gaussian bandwidth and regularization parameter . #### 4.1.3 Wf-Lsngca Similarly to LSNGCA, the derivative of the Gaussian kernel is used as the basis function in the linear-in-parameter model (16) and the number of basis functions is set as . 
For model selection, cross-validation is performed with respect to the hold-out error of Eq.(15) in the same way as for LSNGCA.

### 4.2 Artificial Datasets

The data consist of non-Gaussian signal components and Gaussian noise components. For the non-Gaussian signal components, we consider the following four distributions plotted in Figure 1: (a) independent Gaussian mixture; (b) dependent super-Gaussian; (c) dependent sub-Gaussian (a uniform distribution); (d) dependent super- and sub-Gaussian. For the Gaussian noise components, we include a parameter which controls the condition number; the larger it is, the more ill-conditioned the data covariance matrix is. The detail is described in Appendix A. We generate samples for each case and standardize each element of the data before applying the NGCA algorithms. The performance of the NGCA algorithms is measured by the following subspace estimation error:

$$\varepsilon(E, \hat{E}) := \frac{1}{2}\sum_{i=1}^{2}\big\|\hat{e}_i - \Pi_E\,\hat{e}_i\big\|^2, \qquad (17)$$

where $E$ is the true non-Gaussian index space, $\hat{E}$ is its estimate, $\Pi_E$ is the orthogonal projection on $E$, and $\{\hat{e}_1, \hat{e}_2\}$ is an orthonormal basis of $\hat{E}$.

The averages and the standard deviations of the subspace estimation error over runs for MIPP, LSNGCA, and WF-LSNGCA are depicted in Figure 2. This shows that, for all 4 cases, the error of MIPP grows rapidly as the conditioning parameter increases. On the other hand, LSNGCA and WF-LSNGCA perform much more stably against the change in this parameter. However, LSNGCA performs poorly for (a). Overall, WF-LSNGCA is shown to be much more robust against ill-conditioning than MIPP and LSNGCA. In terms of computation time, WF-LSNGCA is less efficient than LSNGCA and MIPP, but it is still only a few times slower than LSNGCA, as seen in Figure 3. For this reason, the computational efficiency of WF-LSNGCA should still be acceptable in practice.

### 4.3 Benchmark Datasets

Finally, we evaluate the performance of the NGCA methods using the LIBSVM binary classification benchmark datasets [15] (preprocessed as follows: vehicle: original labels '1' and '2' are converted to the positive label and original labels '3' and '4' to the negative label; SUSY: original label '0' is converted to the negative label; shuttle: only the data labeled '1' and '4' are used, regarded as positive and negative; svmguide1: the original training and test datasets are mixed). From each dataset, points are selected as training (test) samples so that the numbers of positive and negative samples are equal, and the datasets are standardized in each dimension. For each dataset, we append noise dimensions following the standard Gaussian distribution so that all datasets have the same dimensionality. Then we use PCA, MIPP, LSNGCA, and WF-LSNGCA to obtain low-dimensional expressions, and apply the support vector machine (SVM; we used LIBSVM with MATLAB [15]) to evaluate the test misclassification rate. As a baseline, we also evaluate the misclassification rate of the raw SVM without dimension reduction.

The averages and standard deviations of the misclassification rate over runs are summarized in Table 2. As can be seen in the table, the appended Gaussian noise dimensions have negative effects on classification accuracy, and thus the baseline has relatively high misclassification rates. PCA has overall higher misclassification rates than the baseline since a lot of valuable information for each classification problem is lost.
Among the NGCA algorithms, WF-LSNGCA overall compares favorably with the other methods. This indicates that it can find useful low-dimensional expressions for each classification problem without the harmful effects of a pre-whitening procedure.
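For readers who want to see the overall structure in code, the following is a minimal NumPy sketch of the LSNGCA pipeline reviewed in Section 2.3: whitening, LSLDG estimation of the log-density gradient with Gaussian-derivative basis functions, formation of $u(y) = \nabla_y \ln p(y) + y$, and eigendecomposition. The whitening-free method proposed in this paper instead estimates $v(x)$ from Eq.(13) on the raw data and skips the whitening step; that variant is not reproduced here. The bandwidth, regularization parameter, and number of basis functions below are illustrative defaults rather than the cross-validated values used in the experiments.

```python
import numpy as np

def lsngca_subspace(x, m, sigma=1.0, lam=1e-3, n_basis=100, seed=0):
    """Minimal LSNGCA sketch: whiten, estimate grad log p(y) by LSLDG with
    Gaussian-derivative bases, form u(y) = g(y) + y, then eigendecompose."""
    rng = np.random.default_rng(seed)
    n, d = x.shape

    # 1) Pre-whitening (this is the step that WF-LSNGCA avoids);
    #    assumes a non-degenerate sample covariance matrix
    x_c = x - x.mean(axis=0)
    cov = x_c.T @ x_c / n
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T      # Sigma^{-1/2}
    y = x_c @ W

    # 2) LSLDG: for each coordinate j, fit g^(j)(y) = theta_j^T psi_j(y)
    centers = y[rng.choice(n, size=min(n_basis, n), replace=False)]
    diff = y[:, None, :] - centers[None, :, :]        # shape (n, b, d)
    K = np.exp(-np.sum(diff ** 2, axis=2) / (2 * sigma ** 2))

    g = np.zeros_like(y)
    for j in range(d):
        psi = -(diff[:, :, j] / sigma ** 2) * K                           # psi_{k,j}(y_i)
        dpsi = (-1.0 / sigma ** 2 + diff[:, :, j] ** 2 / sigma ** 4) * K  # d psi_{k,j} / d y_j
        G = psi.T @ psi / n
        h = dpsi.mean(axis=0)
        theta = -np.linalg.solve(G + lam * np.eye(G.shape[0]), h)
        g[:, j] = psi @ theta                         # estimated d/dy_j log p at each y_i

    # 3) u(y) = g(y) + y, then the leading eigenvectors of its second moment
    u = g + y
    C = u.T @ u / n
    _, vecs = np.linalg.eigh(C)
    basis_whitened = vecs[:, -m:]         # top-m eigenvectors (eigh sorts ascending)
    return W @ basis_whitened             # columns span the estimate of E = Sigma^{-1/2} Range(B~)
```

For example, calling `lsngca_subspace(x, m=2)` on data from the generative model (1) returns a $d \times 2$ matrix whose column space approximates the non-Gaussian index space $E$; orthonormalize the columns if an orthonormal basis is needed.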
# Fano moduli varieties of vector bundles

Let $M$ be a fine moduli space of vector bundles on a curve which is an algebraic variety as well. The first example of such an object that I have in mind is rank 2, deg 1 VB on a genus 2 curve. This is an intersection of two 4-dimensional quadrics, and it is Fano. If I recall correctly, all moduli spaces of bundles with odd degree on an algebraic curve are fine. My question is: are all fine moduli varieties of VB on an algebraic curve Fano? If not, please give counterexamples. - The moduli space of vector bundles on a curve is fine if the degree is coprime with the rank. So, if you are interested only in the rank 2 case, then indeed any odd degree gives a fine moduli space. But if you are interested in other ranks, then this is not true. –  Sasha Feb 6 '12 at 16:17 Yes, whenever the moduli space of semistable bundles of rank 2 and fixed degree-$1$ determinant is a fine moduli space, then it is a smooth, proper, geometrically connected variety with ample anticanonical bundle. –  Jason Starr Feb 6 '12 at 16:57 I assume you are asking about $SU(r,L)$ (semistable rank-$r$ bundles with determinant $L$) rather than $U(r,d)$ (semistable rank-$r$ bundles with determinant of degree $d$). Drezet-Narasimhan showed that even when $SU(r,L)$ is not a fine moduli space, it is locally factorial with Gorenstein singularities, and that its dualizing sheaf is isomorphic to $\mathscr{L}^{-2(r,c_{1}(L))}$ where $\mathscr{L}$ is the (ample) determinant bundle; consequently $SU(r,L)$ is Fano. $U(r,d)$ is then a Fano fibration over the degree-$d$ Picard variety of the underlying curve via the determinant map (the fiber over a degree-$d$ line bundle $L$ is just $SU(r,L)$). As Sasha pointed out in the comment above, the coprimality of rank and degree is sufficient for smoothness, so the answer is yes. For instance, $SU(r,L)$ is smooth when $r \geq 2$ and $c_{1}(L)=r-1$. –  Yusuf Mustopa Feb 6 '12 at 20:23
# A 100-Year old computer for computing Fourier transforms Many famous machines have been built to do math — like Babbage’s Difference Engine for solving polynomials or Leibniz’s Stepped Reckoner for multiplying and dividing — yet none worked as well as Albert Michelson’s harmonic analyzer. This 19th century mechanical marvel does Fourier analysis: it can find the frequency components of a signal using only gears, springs and levers. We discovered this long-forgotten machine locked in a glass case at the University of Illinois. For your enjoyment, we brought it back to life in this book and in a companion video series — all written and created by Bill Hammack, Steve Kranz and Bruce Carpenter. A free PDF of their book is available at the above link; the book is also available for purchase. Here are the companion videos for the book. # New Education Initiative Replaces K-12 Curriculum With Single Standardized Test As the season of high-stakes testing hits America once again, we have one choice: cry or laugh. The new test will reportedly cover all topics formerly taught in K-12 classrooms, including algebra, World War I, cursive penmanship, pre-algebra, state capitals, biology, letters of the alphabet, environmental science, civics, French, Newtonian mechanics, parts of speech, and the Cold War. Sources said students will also be expected to demonstrate their knowledge of 19th-century American pioneer life, photosynthesis, and telling time. Officials said the initiative would also focus on improving teacher performance by tying teachers’ salaries to the test scores of the students they hand the assessment to. # Student t distribution Source: http://www.xkcd.com/1347/ # Talking about math to Congress From MAA Focus: If you think it’s hard to distill research results into a 15-minute conference presentation, try this: Choose a subject like matrix factorizations or recent progress on the twin prime conjecture. Figure out how to make a nonexpert audience—members of Congress, say—if not fully understand the chosen topic, at least appreciate its significance. Do this in a minute. The clock is ticking. Jerry McNerney of California’s ninth congressional district has risen to such a challenge more than 10 times in the U.S. House of Representatives, where he has served since 2007. The only current member of the House or Senate to hold a doctorate in mathematics (University of New Mexico, 1981), McNerney has read into the congressional record one-minute expositions of such abstruse subjects as vector bundles, synesthesia, and the Large Synoptic Survey Telescope… Boiling down complex material into a minute of talking is tricky, McNerney concedes, but he has been pleased with the results. As a member of the Public Face of Mathematics panel at the 2014 Joint Mathematics Meetings, McNerney told listeners that being coaxed into thinking about math has a positive effect on his congressional fellows. “Instead of all the usual bickering that you get on the House floor, everyone smiles,” he reported. “They say, ‘This is really fun.’” # MIT Scientists Figured Out How to Eavesdrop Using a Potato Chip Bag From Gizmodo: In a scenario straight out of “Enhance, enhance!,” MIT scientists have figured out that the tiny vibrations on ordinary objects like a potato chip bag or a glass of water or even a plant can be reconstructed into intelligible speech. All it takes is a camera and a snappy algorithm. Take a listen. 
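Two of the items above, the harmonic analyzer and the visual-microphone work, come down to the same operation: pulling the frequency components out of a signal. Here is a tiny NumPy sketch of the digital counterpart of what Michelson's machine did mechanically; the 3 Hz and 7 Hz components are made-up test values.

```python
import numpy as np

fs = 100.0                                   # sampling rate in Hz
t = np.arange(0, 1, 1 / fs)                  # one second of samples
signal = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)

spectrum = np.fft.rfft(signal)               # real-input FFT
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The two largest peaks should sit at the 3 Hz and 7 Hz bins
peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(sorted(peaks))                         # [3.0, 7.0]
```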
# Collaborative Mathematics: Challenge 13

I'm a few months late with this, but my colleague Jason Ermer at Collaborative Mathematics has published Challenge 13 on his website: http://www.collaborativemathematics.org/

# Engaging students: Introducing proportions

In my capstone class for future secondary math teachers, I ask my students to come up with ideas for engaging their students with different topics in the secondary mathematics curriculum. In other words, the point of the assignment was not to devise a full-blown lesson plan on this topic. Instead, I asked my students to think about three different ways of getting their students interested in the topic in the first place. I plan to share some of the best of these ideas on this blog (after asking my students' permission, of course). This student submission comes from my former student Delaina Bazaldua. Her topic, from Geometry: introducing proportions.

How can technology (YouTube, Khan Academy [khanacademy.org], Vi Hart, Geometers Sketchpad, graphing calculators, etc.) be used to effectively engage students with this topic? How has this topic appeared in the news? How could you as a teacher create an activity or project that involves your topic?

I found a really good blog from a teacher through Pinterest: http://mathequalslove.blogspot.com/2012/04/sugar-packets-and-proportions.html. This website is really great because it is posted by a teacher who actually tried the lesson. The lesson can be adjusted for a geometry class, but it is really remarkable the way it is without changing a thing, especially as an introduction to proportions before going into deeper questions that involve geometry. Like the video above, it is relatable for students because of how applicable it is to their lives. Likewise, it could also help them eat/drink better! The goal of the lesson is to figure out how many packets of sugar are in a variety of foods and drinks using proportions between packets of sugar and grams of sugar. The engage would include the video of someone eating packets of sugar, students brainstorming ideas of how many packets of sugar are in a drink, and then would escalate to students putting the drinks in order of most sugar to least sugar without looking at the nutritional label. After that, students would be given the fact that there are approximately 4 grams of sugar in a packet of sugar. They would also be given the nutritional labels to calculate how many packets are in the drinks using proportions. I think this is a good lesson because it engages the students by allowing them to relate to something that happens in everyday life when they drink/eat things. It is also a good way to introduce proportions with something concrete like bottles before introducing something that is somewhat abstract, such as shapes drawn on paper, which is how geometry is often seen.

What interesting things can you say about the people who contributed to the discovery and/or the development of this topic?

Perhaps the most famous proportion in history is known as the "Divine Proportion." The research found on the website http://www.goldennumber.net/golden-ratio-history/ can help students appreciate the history behind proportions; despite popular belief, students need to learn the history of the concept they are being taught to fully grasp it.
The website given is really great because it goes through the different names other than divine proportion, such as Golden Ratio and Fibonacci Sequence, and how it was discovered and rediscovered throughout time which is why there are so many unique names that exist now. I also found that the fact that the names that have the words ‘golden’ and ‘divine’ in the name are because of a spiritual background. Understanding divine proportion is important because it is around us every day and it is only a piece of the whole umbrella that engulfs all of probability. It is also applicable to students because it involves them and their physical body along with objects they interact with everyday. I found the topic of divine proportion very interesting and I would hope my students would as well which is why I think this is an extraordinary engage. References: http://mathequalslove.blogspot.com/2012/04/sugar-packets-and-proportions.html http://www.goldennumber.net/golden-ratio-history # Engaging students: Verifying trigonometric identities In my capstone class for future secondary math teachers, I ask my students to come up with ideas for engaging their students with different topics in the secondary mathematics curriculum. In other words, the point of the assignment was not to devise a full-blown lesson plan on this topic. Instead, I asked my students to think about three different ways of getting their students interested in the topic in the first place. I plan to share some of the best of these ideas on this blog (after asking my students’ permission, of course). This student submission comes from my former student Tracy Leeper. Her topic, from Precalculus: verifying trigonometric identities. Many students when first learning about trigonometric identities want to move terms across the equal sign, since that is what they have been taught to do since algebra, however, in proving a trigonometric identity only one side of the equality is worked at a time. Therefore my idea for an activity to help students is to have them look at the identities as a puzzle that needs to be solved. I would provide them with a basic mat divided into two columns with an equal sign printed between the columns, and give them trig identities written out in a variety of forms, such as $\sin^2 \theta + \cos^2 \theta$ on one strip, and $1$ written on another strip. Other examples would also include having $\tan^2 \theta$ on one, and $\sin^2 \theta/\cos^2 \theta$ on another. The students will have to work within one column, and step by step, change one side to eventually reflect the term on the other side, and each strip has to be one possible representation of the same value. By providing the students with the equivalent strips, they will be able to construct the proof of the identity. I feel that giving them the strips will allow them to see different possibilities for how to manipulate the expression, without leaving them feeling lost in the process, and by dividing the mat into columns, they can focus on one side, and see that the equivalency is maintained throughout the proof. The students would need to arrange the strips into the correct order to prove the left hand side is equivalent to the right hand side, while reinforcing the process of not moving anything across the equal sign. Trigonometry identities are used in most of the math courses after pre-calculus, as well as the idea of proving an equivalency. 
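As a concrete instance of the one-column workflow described above (the identity here is my own illustrative pick, not one taken from the activity), a completed mat might read, working only on the left-hand side, $\tan\theta\,\csc\theta = \dfrac{\sin\theta}{\cos\theta}\cdot\dfrac{1}{\sin\theta} = \dfrac{1}{\cos\theta} = \sec\theta$, with each equality supplied by one strip and the right-hand column holding $\sec\theta$ unchanged throughout.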
If the students learn the concept of proving an equivalency that will help them construct proofs for any future math courses, as well as learning to look at something given, and be able to see it as parts of a whole, or just be able to write it a different way to assist with the calculations. If students learn to see that $1 = \sin^2 x + \cos^2 x = \sec^2 x - \tan^2 x = \csc^2 x - \cot^2 x$, their ability to manipulate expressions will dramatically improve, and their confidence in their ability will increase, as well as their understanding of the complexities and relations throughout all of mathematics. The trigonometric identities are the fundamental part of the relationships between the trig functions. These are used in science as well, anytime a concept is taught about a wave pattern. Sound waves, light waves, every kind of wave discussed in science are sinusoidal wave. Anytime motion is calculated, trigonometry is brought into the calculations. All students who wish to progress in the study of science or math need to learn basic trigonometric identities and learn how to prove equivalency for the identities. Since proving trigonometric identities is also a practice in logical reasoning, it will also help students learn to think critically, and learn to defend their conjectures, which is a valuable skill no matter what discipline the student pursues. For learning how to verify trigonometric identities, I like the Professor Rob Bob (Mr. Tarroy’s) videos found on youtube. He’s very energetic, and very thorough in explaining what needs to be done for each identity. He also gives examples for all of the different types of identities that are used. He is very specific about using the proper terms, and he makes sure to point out multiple times that this is an identity, not an equation, so terms cannot be transferred across the equal sign. He also presents options to use for a variety of cases, and that sometimes things don’t work out, but it’s okay, because you can just erase it and start again. I also like that he uses different colored chalk to show the changes that are being made. He is very articulate, and explains things very well, and makes sure to point out that he is providing examples, but it’s important to remember that there are many different ways to prove the identity presented. I enjoyed watching him teach, and I think the students would enjoy his energy as well. # Engaging students: Finding the equation of a circle In my capstone class for future secondary math teachers, I ask my students to come up with ideas for engaging their students with different topics in the secondary mathematics curriculum. In other words, the point of the assignment was not to devise a full-blown lesson plan on this topic. Instead, I asked my students to think about three different ways of getting their students interested in the topic in the first place. I plan to share some of the best of these ideas on this blog (after asking my students’ permission, of course). This student submission comes from my former student Tiffany Wilhoit. Her topic, from Precalculus: finding the equation of a circle. How has this topic appeared in pop culture (movies, TV, current music, video games, etc.)? How has this topic appeared in high culture (art, classical music, theatre, etc.)? Circles are found everywhere! Everyday, multiple times a day, people come across circles. They are found throughout society. The coins students use to buy sodas are circles. 
On the news, we hear about crop circles and circular patterns in the fields around the world. One of the first examples of a circle was the wheel. Many logos for large companies involve circles, such as Coca-Cola, Google Chrome, and Target. Even the Roman Coliseum is circular in shape. Since circles are found everywhere, students will be able to identify and be comfortable with the shape (more than say a hexagon). A great way to get the students engaged in the topic of circles would be to have the brainstorm different places they see circles on a normal day. Then have each student pick an example and print or bring a picture of it. Then have the student take their circle (say the Ferris Wheel of the state fair), and place in centered at the origin. The students could then find the equation of their circle. They could do another example where their circle is centered at another point as well. This would allow the students to become more aware of circles around them, and would also allow them some freedom in the assignment. What interesting things can you say about the people who contributed to the discovery and/or the development of this topic? Circles have been an interesting topic for humans since the beginning. We see the sun as a circle in the sky. The ancient Greeks even believed the circle was the perfect shape. Ancient civilizations built stone circles such as Stonehenge, and circular structures such as the Coliseum. The circle led to the invention of the wheel and gears, as well. The study of geometry is focused largely around the study of circles. The study of circles led to many inventions and ideas. Euclid studied circles, and compared them to other polygons. He found ways to create circles that could circumscribe and inscribe polygons. This created a problem called “squaring a circle”. Ancient Greeks tried to construct a circle and square with the same area using only a compass and straightedge. The problem was never solved, but in 1882 it was proved impossible. However, people still tried to solve the problem and were called “circle squarers”. This became an insult for people who attempted the impossible. Borromean Rings is another puzzle involving circles. Circles have been a part of civilization from the beginning, and it is amazing how much they are still prevalent today. How can technology (YouTube, Khan Academy [khanacademy.org], Vi Hart, Geometers Sketchpad, graphing calculators, etc.) be used to effectively engage students with this topic? The website on www.mathopenref.com/coordgeneralcircle.html is a good site to use when learning to find the equation of a circle. The page contains an applet where the students are able to work with a circle. The circle can be moved so the center is at any point, and the radius can be changed to various sizes. At the top, it shows the equation of the circle shown. This website would allow the students to see how the equation of a circle changes depending on the center and size. This is a good tool to use for the students to explore circles and their equations or to review them before the test. The website also contains some information for the students to read to understand the concept, and there is even an example to try. The website is easy to use, and would not be difficult for students to understand. 
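To make the activity concrete, here is one worked example of the kind a student might produce; the numbers are hypothetical. Suppose a student picks a Ferris wheel with a radius of 30 feet whose center sits 35 feet above the ground, and puts the center on the $y$-axis. The center is then $(0, 35)$ with $r = 30$, so the equation of the circle is $(x-0)^2 + (y-35)^2 = 30^2$, that is, $x^2 + (y-35)^2 = 900$. Re-centering the same wheel at the origin, as in the first suggested version of the assignment, gives $x^2 + y^2 = 900$.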
Resources: http://www-history.mcs.st-and.ac.uk/Curves/Circle.html http://nrich.maths.org/2561 www.mathopenref.com/coordgeneralcircle.html https://circlesonly.wordpress.com/category/history-of-circles/ # Engaging students: Introducing the number e In my capstone class for future secondary math teachers, I ask my students to come up with ideas for engaging their students with different topics in the secondary mathematics curriculum. In other words, the point of the assignment was not to devise a full-blown lesson plan on this topic. Instead, I asked my students to think about three different ways of getting their students interested in the topic in the first place. I plan to share some of the best of these ideas on this blog (after asking my students’ permission, of course). This student submission comes from my former student Nada Al-Ghussain. Her topic, from Precalculus: introducing the number e. How can this topic be used in your students’ future courses in mathematics or science? Not every student loves math, but almost all students use math in his or her advanced courses. Students in microbiology will use the number e, to calculate the number of bacteria that will grow on a plate during a specific time. Biology or pharmacology students hoping to go into the health field will be able to find the time it takes a drug to lose one-half of its pharmacologic activity. By knowing this they will be able to know when a drug expires. Students going into business and finance will take math classes that rely greatly on the number e. It will help them understand and be able to calculate continuous compound interest when needed. Students who do love the math will get to explore the relation of logarithms and exponentials and how they interrelate. As students move into calculus, they are introduced to derivatives and integrals. The number e is unique, since when the area of a region bounded by a hyperbola y= 1/x, the x-axis, and the vertical lines x=1 and x= e is 1. So a quick introduction to e in any level of studies, reminds the students that it is there to simplify our life! What interesting things can you say about the people who contributed to the discovery and/or the development of this topic? In the late 16th century, a Scottish mathematician named John Napier was a great mind that introduced to the world decimal point and Napier’s bones, which simplified calculating large numbers. Napier by the early 17th century was finishing 20 years of developing logarithm theory and tables with base 1/e and constant 10^7. In doing this, multiplication computational time was cut tremendously in astronomy and navigation. Other mathematicians built on this to make lives easier (at least mathematically speaking!) and help develop the logarithmic system we use today. Henry Briggs, an English mathematician saw the benefit of using base 10 instead of Napier’s base 1/e. Together Briggs and Napier revised the system to base 10, were Briggs published 30,000 natural numbers to 14 places [those from 1 to 20,000 and from 90,000 to 100,000]! Napier’s became known as the “natural logarithm” and Briggs as the “common logarithm”. This convinced Johann Kepler of the advantages of logarithms, which led him to discovery of the laws of planetary motions. Kepler’s reputation was instrumental in spreading the use of logarithms throughout Europe. Then no other than Isaac Newton used Kepler’s laws in discovering the law of gravity. 
In the 18th century Swiss mathematician, Leonhard Euler, figured he would have less distraction after becoming blind. Euler’s interest in e stemmed from the need to calculate compounded interest on a sum of money. The limit for compounding interest is expressed by the constant e. So if you invest $1 at a rate of interest of 100% a year and in interest is compounded continually, then you will have$2.71828… at the end of the year. Euler helped show us many ways e can be used and in return published the constant e. It didn’t stop there but other mathematical symbols we use today like i, f(x), Σ, and the generalized acceptance of π are thanks to Euler. How can technology be used to effectively engage students with this topic? Statistics and math used in the same sentence will make most students back hairs stand up! I would engage the students and ask them if they started a new job for one month only, would they rather get 1 million dollars or 1 penny doubled every day for a month? I would give the students a few minutes to contemplate the question, without using any calculators. Then I would take a toll of the number of the students’ choices for each one. I would show them a video regarding the question and idea of compound interest. Students will see how quickly a penny gets transformed into millions of dollars in a short time. Money and short time used in the same sentence will make students fully alert! I would then ask them another question, how many times do you need to fold a newspaper to get to the moon? As a class we would decide that the thickness is 0.001cm and the distance from the Earth to the moon would be given. I would give them some time to formulate a number and then take votes around the class, which should be correct. The video is then played which shows how high folding paper can go! This one helps them see the growth and compare it to the world around them. After the engaged, students are introduced to the number e and its roll in mathematics. Money: watch until 2:35: Paper: References: http://mathworld.wolfram.com/e.html http://betterexplained.com/articles/demystifying-the-natural-logarithm-ln/ http://www.math.wichita.edu/history/men/euler.html http://www.maa.org/publications/periodicals/convergence/john-napier-his-life-his-logs-and-his-bones-introduction
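To make the two engagement questions and the compound-interest limit concrete, here is a small Python sketch. The 31-day month, the 0.001 cm sheet thickness mentioned above, and an Earth-Moon distance of roughly 384,400 km are the assumptions baked in.

```python
import math

# Penny doubled every day for a 31-day month (assume $0.01 on day 1, doubling each later day)
penny_total = 0.01 * 2 ** 30
print(f"Penny doubling after 31 days: ${penny_total:,.2f}")   # roughly $10.7 million

# Folds of a 0.001 cm sheet needed to reach the Moon (~384,400 km away)
thickness_m = 0.001 / 100          # 0.001 cm expressed in metres
moon_m = 384_400_000
folds = math.ceil(math.log2(moon_m / thickness_m))
print(f"Folds needed: {folds}")    # 46 under these assumptions

# Compound interest: (1 + 1/n)^n approaches e as compounding becomes continuous
for n in (1, 12, 365, 10**6):
    print(n, (1 + 1 / n) ** n)
print("e =", math.e)
```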
# Comma formatted list printer

I'm working on a list practice project from the book "Automate the Boring Stuff with Python" which asks for this:

Write a function that takes a list value as an argument and returns a string with all the items separated by a comma and a space, with and inserted before the last item. For example, passing the previous spam list to the function would return 'apples, bananas, tofu, and cats'. But your function should be able to work with any list value passed to it. Be sure to test the case where an empty list [] is passed to your function.

So far I've come up with this:

```python
def comma_code(iterable):
    '''
    Function that loops through every value in a list and prints it with a comma
    after it, except for the last item, for which it adds an "and " at the
    beginning of it. Each item is str() formatted in output to avoid
    concatenation issues.
    '''
    for i, item in enumerate(iterable):
        if i == len(iterable)-1 and len(iterable) != 1:  # Detect if the item is the last on the list and the list doesn't contain only 1 item (BONUS)
            print('and ' + str(iterable[-1]))            # Add 'and ' to the beginning
        elif len(iterable) == 1:                         # BONUS: If list contains only 1 item,
            print('Just ' + str(iterable[-1]))           # replace 'and ' with 'Just '
        else:                                            # For all items other than the last one
            print(str(iterable[i]) + ',', end=" ")       # Add comma to the end and omit line break in print
```

There's heavy commenting because I'm fairly new and I'm trying to leave everything as clear as possible for my future self. Now I wonder if there's a better way of doing this and also (subjective question) if there is something in my code that I should change for better readability and/or style. As I said, I'm fairly new and I would like to pick good coding practices from the beginning. These are a couple of lists I ran through the function:

```python
spam = ['apples', 'bananas', 'tofu', 'cats']
bacon = [3.14, 'cat', 11, 'cat', True]
enty = [1]
```

And this is the working output:

```
apples, bananas, tofu, and cats
3.14, cat, 11, cat, and True
Just 1
```

• "returns a string" - You're not doing that at all. – superb rain Oct 31 '20 at 19:49
• Care to explain? Is it because i'm printing the output rather than returning it? – Vaney Rio Oct 31 '20 at 19:53
• Well... yeah... – superb rain Oct 31 '20 at 19:56
• I fixed it, I just didn't know how to print a "return" value. Thanks for pointing it out, my logic was faulty. – Vaney Rio Oct 31 '20 at 23:19

# An alternate approach

The routine is the same for all types of inputs, except for when len(list) == 1. This is why we can use a generator expression to simplify the routine, and then a simple if statement to work with the exception:

```python
def comma_code(iterable):
    result = ', '.join(str(value) for value in iterable[:-1])
    return f"{result}, and {iterable[-1]}"
```

Now, since we have to deal with a special case, an if-statement can work:

```python
def comma_code(iterable):
    if not iterable:
        return None
    if len(iterable) == 1:
        return f"Just {iterable[0]}"

    result = ', '.join(str(value) for value in iterable[:-1])
    return f"{result}, and {iterable[-1]}"
```

The question hasn't stated what to do if the size of the container is 0, hence I have just returned None. As stated by @Stef in the comments, you can also return an empty string, which is just ''. I would prefer None as it becomes easy to trace back if an issue occurred. Moreover, you can also design a simple function that would check if the container is empty.
If it is, then the function can print a custom warning, which can help you in debugging your code.

# Explanation

```python
result = ', '.join(str(value) for value in iterable[:-1])
```

This is a generator expression: we're basically producing str(value) for every value in iterable and joining the pieces with ', '. iterable[:-1] is used so that we only iterate till the second-last element; to be fully verbose, :-1 says everything in iterable up to, but not including, the last element.

• A gift introducing the generator & f-strings and explaining so gentle 👍 Applied pythonic conciseness like a pro. – hc_dev Oct 31 '20 at 20:53
• result = ', '.join(str(value) for value in iterable[:-1]) Is this applying the concept of list comprehension? I'm familiar with f-string formatting and understand what generator code does for memory optimization, but i'm yet to recognize easily the format for list comprehensions. – Vaney Rio Oct 31 '20 at 21:46
• I tried this, it works great and it's really compact. I only had to figure out how to print the function return values. Thanks. – Vaney Rio Oct 31 '20 at 22:46
• However, I just found an issue in the format. Output shows a comma after every item, including the next to last, so it ends up looking like "Item[1], Item[2], ... Item[-2] , and Item[-1]". Simply removing the ',' after {result} in return f"{result}, and {iterable[-1]}" seems to fix the issue (In case you want to update your code). – Vaney Rio Oct 31 '20 at 23:17
• @VaneyRio Hey, sorry for the late reply, don't you need a , before and? That's how I remember my English to be😁, if not its simply to remove it, as you said just remove it from the f-string – Aryan Parekh Nov 1 '20 at 3:11

### Well done

First I like your attitude "for the future me". Keep that clean-coding practice. It will truly help.

### Handle edge cases aside

I would extract the junction-part (", ", " and ") and particularly the single case ("Just "):

```python
def print_joined(iterable):
    # in unlikely case of nothing (fail-fast)
    if len(iterable) == 0:
        # raise an error or print sorry instead of returning silently
        return
    # special intro if single
    elif len(iterable) == 1:
        print('Just ' + str(iterable[0]))
        return

    ## multiples concatenated
    joined_items = ''
    for i, item in enumerate(iterable):
        junction = ', '                       # default separator
        if i == len(iterable)-2:
            junction = ' and '                # special separator before last
        elif i == len(iterable)-1:
            junction = ''                     # last item ends without separator
        joined_items += str(item) + junction  # you defined item on loop, so use it
    print(joined_items)
```

This alternative is just making the different cases clear. Although a bit verbose and not very pythonic (no generator, no template-based string formatting used), it focuses on robustness. To give the lengthy method body structure and separate different parts for readability I inserted empty lines (vertical spacing) - individual preference of style. All edge-cases are caught in the beginning in order to return/fail fast (avoid an obsolete loop). This is best-practice when it comes to validation. The core purpose can be easily read from the last line (the print line): printing items joined. The side amenities like case-based junction strings are put literally as topping.
Refinement can be introduced in later iterations (extensibility), depending on your experience, e.g.: • enhanced formatting/printing (using Python's convenience functions) • returning a string (making the function a formatter, independent from output interface) • parameterize junction-strings (for customisation/localisation instead of hard-coded) ### Usability: Grammar for using comma before and in lists I wondered about the comma before and and did research. The phenomenon called serial comma or Oxford comma, is described in the Grammarly Blog post Comma Before And. It essentially says: • You usually put a comma before and when it’s connecting two independent clauses. • It’s almost always optional to put a comma before and in a list. • You should not use a comma before and if you’re only mentioning two qualities. Since I assume the function should list pure items (names, nouns, verbs) - not clauses - I implemented a simple and (without serial comma) to also handle 2 items grammatically correctly. Thus as long as the different junctions can't be parameterized yet, you should clarify (implicitly) applied junction-rules in a comment (doc-string). • I tried this because I like the idea of testing for errors before computing anything else, however, there's a problem with this code. Last item (iterable[-1]) shows in the output as: 'iterable[-1] and' rather than 'and iterable[-1]'. Inverting the print statement also changes the order of the ',' to the beginning of every item printed. – Vaney Rio Oct 31 '20 at 21:42 • @VaneyRio Thanks! I forgot to test my solution. Now fixed and updated the code. Note: comparison with 0-based index i to check for last i == len(iterable)-1. – hc_dev Nov 1 '20 at 14:41 • Did some grammar research on Oxford comma and SO suggests related review on Refactor Oxford Comma function (and other functions) – hc_dev Nov 1 '20 at 15:21
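As a small supplement to the answers above, here is a hypothetical quick check of the returning variant against the examples from the question; the function body is the second snippet from the first answer, reproduced so the test runs on its own.

```python
def comma_code(iterable):
    # returning variant from the answer above, reproduced for a self-contained test
    if not iterable:
        return None
    if len(iterable) == 1:
        return f"Just {iterable[0]}"
    result = ', '.join(str(value) for value in iterable[:-1])
    return f"{result}, and {iterable[-1]}"

# the three lists from the question, plus the empty case the book asks about
assert comma_code(['apples', 'bananas', 'tofu', 'cats']) == 'apples, bananas, tofu, and cats'
assert comma_code([3.14, 'cat', 11, 'cat', True]) == '3.14, cat, 11, cat, and True'
assert comma_code([1]) == 'Just 1'
assert comma_code([]) is None
print("all checks passed")
```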
# Microscopic states of Kerr black holes from boundary-bulk correspondence

• It was claimed by the author that black holes can be considered as topological insulators. They both have boundary modes, and those boundary modes can be described by an effective BF theory. In this paper, we analyze the boundary modes on the horizon of black holes with the methods developed for topological insulators. First the BTZ black hole is analysed, and the results are compatible with the previous works. Then we generalize those results to Kerr black holes. Some new results are obtained: dimensionless right- and left-temperatures can be defined and are well behaved both in the Schwarzschild limit $a\rightarrow 0$ and in the extremal limit $a\rightarrow M$. Upon the Kerr/CFT correspondence, we can associate a central charge $c=12 M r_+$ with an arbitrary Kerr black hole. We can identify the microstates of the Kerr black hole with the quantum states of this scalar field. From this identification we can count the number of microstates of the Kerr black hole and give the Bekenstein-Hawking area law for the entropy.
Jing-Bo Wang
• Institute for Gravitation and Astrophysics, College of Physics and Electronic Engineering, Xinyang Normal University, Xinyang, 464000, P. R. China
ch_egor's blog By ch_egor, 19 months ago, translation, Hi! This Sunday will take place All-Russian olympiad for students of 5-8 grades, in the name of Keldysh. Good luck to all the participants! Olympiad is conducted under the guidance of the Moscow Olympiad Scientific Committee, in particular GlebsHP, ch_egor, Endagorion, vintage_Vlad_Makeev, Zlobober, meshanya, cdkrot, voidmax, grphil and, of course, Helen Andreeva. We are happy to announce the Codeforces Round #727 based on the problems of this olympiad! It will be a Div. 2 round, which will take place at Jun/20/2021 13:05 (Moscow time). You might have already participated in rounds based on the school olympiads, prepared by Moscow Olympiad Scientific Committee (rounds 327, 342, 345, 376, 401, 433, 441, 466, 469, 507, 516, 541, 545, 567, 583, 594, 622, 626, 657, 680, 704, 707). The problems of this olympiad were prepared by _tryhard, Siberian, shishyando, Artyom123, TeaTime, Tikhon228 under the supervision of grphil. Thanks to KAN, Aleks5d and isaf27 for their help in organizing the Codeforces version of this contest and MikeMirzayanov for the Codeforces and Polygon. Also I would like to thank the Tinkoff company and personally Tatyana TKolinkova Kolinkova for great help with organizing the competition. Good luck! UPD1: Thanks to _overrated_ and Ormlis for testing. UPD2: Scoring distribution: 500 — 750 — 1250 — 1500 — 2000 — 2500 UPD3: Editorial UPD4: Winners! Div. 2: Div. 1 + Div. 2: • +460 | Write comment? » 19 months ago, # |   +118 uh oh, another russian middle school olympiad • » » 19 months ago, # ^ |   +126 Yet another $Div$ $1.5$ round • » » » 19 months ago, # ^ |   +74 just gave a div 2.5, its time for payback xd • » » » 19 months ago, # ^ |   +8 Your account name said it all ROFL. • » » » 19 months ago, # ^ |   0 Finally, it was Div 2 only, lol • » » » 19 months ago, # ^ |   +3 It turned out to be div-2.75 if not div-3 • » » » » 19 months ago, # ^ |   +4 Div.3 combined with Div.1, more precisely. » 19 months ago, # |   +93 • » » 19 months ago, # ^ |   +5 WYSI • » » 19 months ago, # ^ |   +5 WYSI » 19 months ago, # |   +35 WYSI » 19 months ago, # | ← Rev. 2 →   +20 Dude, what the heck happened here? [I didn't participate in this round] • » » 19 months ago, # ^ |   +26 weak pretests and tight TL/ML. But the problems were good (very hard in fact). • » » » 19 months ago, # ^ |   +25 Bruh,remember the destruction #657 did, Even problem A was 1500 :waturr: » 19 months ago, # |   +79 You might have already participated in rounds based on the school olympiads, prepared by Moscow Olympiad Scientific Committee (rounds 327, 342, 345, 376, 401, 433, 441, 466, 469, 507, 516, 541, 545, 567, 583, 594, 622, 626, 657, 680, 704, 707). Yes, I have and this line, this line.......... terrifies me. » 19 months ago, # |   +100 » 19 months ago, # |   +5 • » » 19 months ago, # ^ |   +10 After reading this comment , I fear Weak Pretests .Since only red coders are testers so they most probably had used the right approach to solve the problems and wouldn't had thought like pupil or specialist or expert ( various greedy approaches or so) • » » » 19 months ago, # ^ |   0 There's a simple solution, just think of the correct/intended approach from the get-go. » 19 months ago, # |   +76 The scariest rounds on CF. • » » 19 months ago, # ^ |   -57 Why? 
I got 44th place in the last round of them • » » » 19 months ago, # ^ |   +52 That explains it all :) • » » » » 19 months ago, # ^ |   0 I understood that Russian mid-grade Olympiad problems are tough for the majority :) » 19 months ago, # | ← Rev. 2 →   +35 Don't complain about #707 any more everyone...Maybe cf is thinking about a no-pretest contest at that time » 19 months ago, # |   +5 Aireu from osu must see this contest. » 19 months ago, # |   +155 Meme » 19 months ago, # |   +14 WYSI » 19 months ago, # |   +4 I am new here can anyone tell what these rounds are like • » » 19 months ago, # ^ |   +43 » 19 months ago, # |   +142 As a setter I hope you will enjoy our problems! • » » 19 months ago, # ^ |   -26 • » » 19 months ago, # ^ |   +2 I am hoping for linear increase in difficulty of problem. In past contests like this we have seen drastic increase in difficulty (like proble B — 900 difficulty to C-1800 ). • » » » 19 months ago, # ^ |   +2 707 C passed with n2 algo(which wasnt actually n2 after doing hard analysis, but I didnt know that and got lucky), but still, it was a bad question,not a hard one:) • » » » » 19 months ago, # ^ |   +4 it was a good question that reminded you to check constraints • » » » » » 19 months ago, # ^ |   0 now that i have seen the question again and that I understand the beauty of its solution, I second your comment. • » » » » 19 months ago, # ^ |   +1 It was a Good Problem with pigeonhole principle. • » » 19 months ago, # ^ |   +7 Don't get Dijkstracted. :) • » » 19 months ago, # ^ |   +31 TeaTime orz » 19 months ago, # |   +59 Relatable af » 19 months ago, # |   +3 is this gonna be tough?? • » » 19 months ago, # ^ |   +20 But after solving Problem C. » 19 months ago, # |   +154 meme"Huee Huee Hueee" • » » 19 months ago, # ^ |   +14 LoL how to break this loop bruuhh !! • » » » 19 months ago, # ^ |   +4 Solve Problem C, Because you are grey because you can't. • » » » 19 months ago, # ^ | ← Rev. 2 →   +2 Try to practice more of C and D questions. In total, more than 80% C questions, but 10-20% D questions too, to get more experience in that difficulty rating.Even if you can't solve them, try to spend 10-20 minutes trying to observe different details about the problems, that might help in finding its solution.Then try to read the editorial 3-4 times, and see if you can solve it. If you can't, try to see the code solution 3-4 times, and see if you understand it. If you don't, go to youtube, and learn how to solve it.Try to see the editorial, editorial solution, and multiple youtube solutions, even if you get it right. It's good to learn new tricks and new approaches. » 19 months ago, # |   +17 • » » 19 months ago, # ^ |   +10 Back Story? » 19 months ago, # | ← Rev. 2 →   +9 I remember the last round they organized. It was extremely hard and rating jump from b to c was huge. But i remember for another reason. I became specialist for first time in that contest. Kinda emotional >.< » 19 months ago, # | ← Rev. 2 →   +31 The last time I saw this much red with numbers was my maths answer sheet in high school.:Danger: • » » 19 months ago, # ^ | ← Rev. 2 →   +7 I hope I am wrong though and everyone except pupils and newbies get +ive delta.An irrelevant meme. • » » » 19 months ago, # ^ |   +4 2+(2*5)!=12????? do they different in Japan???? • » » » » 19 months ago, # ^ |   0 First, answer this. 2!=2True or False? » 19 months ago, # |   +13 This comment section is surely one of the most funniest ones. Lots of fear, confusion and memes before the contest itself. 
» 19 months ago, # |   +12 Round 657 was arguably the hardest round of last year. » 19 months ago, # |   +4 How many question will be there?? » 19 months ago, # |   -48 Please HELP!!https://codeforces.com/contest/1534/submission/120000620 why is it showing out of bonds when it is perfectly working in XCODE • » » 19 months ago, # ^ |   +11 This isn't really the place to post that You swapped the indices around when initializing your array. It should be string** arr = new string*[a]; and then arr[aa] = new string[b]; » 19 months ago, # | ← Rev. 3 →   0 Hope history doesn't repeat itself ಠ_ಠ Looking forward for interesting but moderate round. • » » 19 months ago, # ^ |   +6 problems were hard, but interesting. » 19 months ago, # |   0 When you see it!! » 19 months ago, # | ← Rev. 3 →   0 Does anyone else's latoken rating reduced today after rating returned ? This round https://codeforces.com/contest/1537/standings » 19 months ago, # |   0 hard div 2 :V • » » 19 months ago, # ^ |   0 will this round be harder than normal div 2?? • » » » 19 months ago, # ^ | ← Rev. 4 →   0 In Russia, it's for grades 5 to 8 :V • » » » » 19 months ago, # ^ |   +3 All-Russian does't mean its for all students 5-8. Its a final, of course problems gonna be tough » 19 months ago, # |   +1 Notice the unusual timing » 19 months ago, # |   -8 Give me some positives here, looks like in the contest I'm not getting it! » 19 months ago, # | ← Rev. 3 →   +152 » 19 months ago, # |   +17 Do russian kids know about video games? • » » 19 months ago, # ^ |   +26 Some of them are video games creators. » 19 months ago, # |   +19 I still have nightmares from 657 » 19 months ago, # |   +14 No scoring distribution for this round!? » 19 months ago, # |   0 Score distribution? • » » 19 months ago, # ^ |   +55 There will be six problems with following scoring distribution. 1500 2000 2500 3000 3500 ${\displaystyle \infty }$ » 19 months ago, # |   +4 RIP to my ratings in Advance , I got green in last round. » 19 months ago, # |   +6 What about score distribution? » 19 months ago, # | ← Rev. 2 →   0 ch_egor there are less then 25minutes to start. score distribution did not update yet. • » » 19 months ago, # ^ |   +19 1500,3500,3500,3500,3500. • » » 19 months ago, # ^ | ← Rev. 2 →   -17 Why it's so important? For real, what you use that information for? • » » » 19 months ago, # ^ |   +12 Because many people create code files in advance (like me) and for that we need to know the number of problems • » » » » 19 months ago, # ^ |   +6 Just create more files! It's free, nothing bad gonna if you create more files than problems • » » » » » 19 months ago, # ^ |   +5 There can be subtasks tooobviously I am not gonna create every letter number combination • » » » 19 months ago, # ^ |   -10 To estimate difficulties of problems. • » » » » 19 months ago, # ^ |   -8 Why would you need that? • » » » » » 19 months ago, # ^ |   0 For cheating purposes. • » » » » » 19 months ago, # ^ |   0 To understand — how many problems I can solve and do I need to solve them faster or normal speed is enough? • » » » » » » 19 months ago, # ^ |   0 Just always solve as many as you can as fast as you can » 19 months ago, # |   0 Wish it be easy:( » 19 months ago, # |   +3 Here we go again. » 19 months ago, # |   0 hope for no googleforces • » » 19 months ago, # ^ |   +8 how about cheatforces? • » » » 19 months ago, # ^ |   0 yes too many cheaters » 19 months ago, # |   +5 left 3 minutes, hope this time I can be green! 
• » » 19 months ago, # ^ |   0 this time I just solved 3 problems. I am not sure whether I could go up or down. • » » » 19 months ago, # ^ |   +8 you can use CF predictor! » 19 months ago, # |   +5 WYSI » 19 months ago, # |   0 I want an extra registration for this contest sir. Please I didn't know about this timing. I thought it is at 8PM. • » » 19 months ago, # ^ | ← Rev. 2 →   -8 Nightmare Round • » » » 19 months ago, # ^ |   0 Is it possible to get register now? • » » » » 19 months ago, # ^ |   0 Participate virtually. Anyways Only 40 mins are left Now • » » » » » 19 months ago, # ^ |   0 Completed three problems offline that's why asking for registration. • » » » » » » 19 months ago, # ^ |   -8 Nothing cant be done now » 19 months ago, # |   +36 Huuuuuge gap between D and E,F » 19 months ago, # |   +3 Again a contest with great problems and a hell lot of cheaters. » 19 months ago, # |   +3 tourist giving div 2 round very rare • » » 19 months ago, # ^ |   0 tourist missing Russian Olympiad based contests is even rarer (check past rounds) » 19 months ago, # |   0 Bruh, They either come up with div1 round or div3 round everytime. Give us a round with div2 difficulty variant. :weary: » 19 months ago, # |   +3 First time solved four questions in any div2 contest :) • » » 19 months ago, # ^ |   0 Hope you solve four questions next round too » 19 months ago, # |   -8 Hoping the pretests for D are strong. » 19 months ago, # |   +4 problem A was very annoying , I solved BCD but not A • » » 19 months ago, # ^ | ← Rev. 2 →   0 give you a test: 2 1 2000000000. I hack myself use this test. » 19 months ago, # |   +21 That was educational contest, which is no suprise given the target audience.For me A was much harder than B, and even harder than C. Also gap from D to E/F felt huge.But nice problems anyway, thanks a lot. • » » 19 months ago, # ^ |   +5 And to me A was much harder than D xD • » » 19 months ago, # ^ |   +1 I was able to solve A easily but got struck in D :). Can you give some ideas for solving D? • » » » 19 months ago, # ^ |   +4 If we can buy products for price 1 we do. Else we buy some items of the product with max(b[i]). • » » » 19 months ago, # ^ | ← Rev. 2 →   +4 Buy the products with higher bi until you reach level of lowest bi • » » » » 19 months ago, # ^ |   0 I tried similar solution. Could you please tell my mistake. Submission • » » » 19 months ago, # ^ |   0 • » » 19 months ago, # ^ |   +3 Yes i took only 10-10 mins for B and C but A took me 1 hour just to find out that t/x can be > n-1 so we have to make it n-1. • » » 19 months ago, # ^ |   +3 What was your mistake in pretest 7 of problem D ? • » » » 19 months ago, # ^ | ← Rev. 2 →   0 Actually I am not sure.Looking at the diff of the two submission, they look like doing the same. So, maybe overflow? » 19 months ago, # |   +28 Thanks for the round. I think the problems were good, except A. The sad thing is that the gap between D and E was large. » 19 months ago, # |   +1 After participating in this round, I QUIT :( » 19 months ago, # |   +41 speedforces » 19 months ago, # |   +1 payed back more than what i got in last round. » 19 months ago, # |   +7 A is the hardest problem among A,B,C,D :)) » 19 months ago, # |   +21 Goddamn A • » » 19 months ago, # ^ |   +1 I spent 35 mins on A, 9 mins on B and 15 mins on C... amazing round, what can I say. • » » 19 months ago, # ^ |   0 *Goddamn A,C,D :( • » » » 19 months ago, # ^ |   +3 Well, C&D were simple, just not codeforces-pypy-friendly » 19 months ago, # | ← Rev. 
2 →   +139 MEET AN EXPRIENCED & SHAMELESS CHEATER This is how Master_Jiraya bypasses Plagiarism testing.Master_Jiraya does cheating from starting and i reported about it to MikeMirzayanov and he got plag in last round , he abused me in private chat becz i reported him https://ibb.co/JmhSwKL .guys show your support and again upvote my comment so he again got punished by MikeMirzayanovPeople like Master_Jiraya are spoiling the sport. I don't understand where would cheating take them in life. They will never get anywhere in life but always remain what they are i.e cheater. He should be banned from the platform as soon as possible . MikeMirzayanov sir pls ban him and skip his solutions .his todays contest submission 120093195 120088691 , saw his submission timing and also see this dummy variables snippet;knock++;knock++;tera++;baap++;aaya++;knock++;knock++;tera++;baap++;aaya++;knock++;knock++;tera++;baap++;aaya++;knock++;knock++;tera++;baap++;aaya++; cur+=to_take;knock++;knock++;tera++;baap++;aaya++;knock++;knock++;tera++;baap++;aaya++;knock++;knock++;tera++;baap++;aaya++;knock++;knock++;tera++;baap++;aaya++; cur+=arr[j].first;knock++;knock++;tera++;baap++;aaya++;knock++;knock++;tera++;baap++;aaya++;knock++;knock++;tera++; » 19 months ago, # |   0 Was D easy? I had no idea how to solve D. Can anyone tell what they did • » » 19 months ago, # ^ |   +24 every question is easy when you have answers floating around on youtube and telegram • » » 19 months ago, # ^ |   +4 Sort the pairs by the number of purchsed objects required to get a discount in non decreasing order. Then proceed with two pointers, one at the beginning and one at the end of the sorted sequence. Buy from the end at full price, until there are enough objects to get a discount at the beginning. Then buy from the beginning at discounted price, until the condition for discount is not satisfied anymore. Alternate until the pointers meet in the middle and the object are exhausted. • » » 19 months ago, # ^ |   0 D solution: https://www.youtube.com/watch?v=kJsyu1MfwOc » 19 months ago, # |   +42 Dear problem setters if solution is short doesn't mean the problem is easy and should be A » 19 months ago, # |   +23 feels like educational round :< • » » 19 months ago, # ^ |   +9 It was made for 5 to 8 graders, so no suprise it feels educational. » 19 months ago, # |   +13 Oh dear, I had a stupid mistake in Problem C. For $x=1$ I treated students with equal levels wrongly (my code increased $k$ then, since it thought it needs $-1$ additional students) and I just couldn't find this case. That cost me so many points, it's infuriating! And the points-ranking curve felt quite flat so it hurt even more. :DBut still, I liked the tasks! Looking forward for Editorial E and F. » 19 months ago, # |   0 I lost the round because of the unusual time :(((( » 19 months ago, # | ← Rev. 2 →   0 How to solve D? Thanks. • » » 19 months ago, # ^ | ← Rev. 3 →   0 sort the whole pair according to $b_i$ .then from i = 1 to n -> if already taken no of element >= $b_i$ take $a_i$ otherwise take $b_j$ from the last untaken until taken reached $b_i$. (for this can use two pointer idea.) 
Spoiler

ll n; cin >> n;
pair<ll, ll> a[n];
for (ll i = 0; i < n; i++) { cin >> a[i].second >> a[i].first; }
sort(a, a + n);
ll ans = 0;
for (ll i = 0, j = n - 1, c = 0; i <= j;) {
    if (c >= a[i].first) {
        ans += a[i].second; c += a[i].second; a[i].second = 0; i++;
    } else {
        if (a[i].first - c >= a[j].second) {
            ans += (2 * a[j].second); c += a[j].second; a[j].second = 0; j--;
        } else {
            ll d = a[i].first - c;
            ans += (2 * d); c += d; a[j].second -= d;
        }
    }
}
cout << ans;

• » » 19 months ago, # ^ |   0 » 19 months ago, # | ← Rev. 2 →   +3 Can we solve F by finding the maximum values and (n(i)-n(j)) where a[i]>=a[cur] and a[j] » 19 months ago, # |   +10 For problem D :- AC submission in 1hour 57 min ---> 1800 AC submission in last 3 min ---> 300+ All due to legends like this shivam.utube23 He is one of many others .
Now just mark all $(y, j)$ as good for all $i + 1 \leq y \leq x$ using difference sums. I'm still not sure if this is correct as I ran out of time implementing. » 19 months ago, # |   +6 A was tougher than B. • » » 19 months ago, # ^ |   +11 And tougher than C & D, I think. » 19 months ago, # | ← Rev. 2 →   +85 I think in Russia they don't like int they only like long long. » 19 months ago, # |   0 Can someone explain what is wrong in my approach for problem D? I am using greedy with prefix and suffix sums. Sort the given 2D array according to products required for discount and then traverse from top and see how many products you can get for a discount.120081648 • » » 19 months ago, # ^ |   +5 The values were long long, not int. • » » » 19 months ago, # ^ |   0 I have used long long only • » » » » 19 months ago, # ^ |   +5 k,x can also be long long • » » » » » 19 months ago, # ^ |   +5 Oh yes. This was a terrible mistake :( » 19 months ago, # |   0 For problem c, What's wrong in this code? vi v;input(v,n); sort(v.begin(),v.end()); ll ans = n; for(ll i=1;i0){ if((v[i]-v[i-1])<=2*x){ ans--; k--; } } } cout< • » » 19 months ago, # ^ | ← Rev. 2 →   0 The gap can be greater than 2*x which you haven't considered. Also, there is the case of priority of which gap to close first, which hasn't been taken into account. • » » 19 months ago, # ^ | ← Rev. 3 →   0 You can add multiple students between elements. 2 2 2 1 7 Answer is 1, add 3 and 5 to it.Even if you remove $\leq 2 \times x$ condition from your code another problem will appear 3 1 4 1 100 108 Here its optimal to add the one person between $100$ and $108$, whereas your code (with the fix) would try to insert it between $1$ and $100$. You need apply the operations in non decreasing order of diffs ($v_i - v_{i - 1}$) for it to be optimal. • » » 19 months ago, # ^ |   +3 3 2 21 2 8 -->ans : 1 try this • » » 19 months ago, # ^ |   0 Even when the difference between the two consecutive number is greater than 2*x, in some cases, you can put 2 or more extra numbers between them so that they are connected. However, you have only accounted for the difference lower or equal to 2*x which is incorrect. » 19 months ago, # |   +1 Also got 4 TLE seemingly just because codeforces doesn't use the latest pypyIt always hurts getting TLE with correct asymptotic and it is even more painful when you know that the problem is not in the language per se or in the code but just in the version the judge uses • » » 19 months ago, # ^ |   +3 And now I have FST in C for the same reason » 19 months ago, # |   0 Anyone tried Top-Down approach for D? • » » 19 months ago, # ^ |   0 its greedy, not dp » 19 months ago, # |   0 Can anyone please help why I am getting a TLE in [problem:727 (Div. 2)-B Love song] 120120431,even though I am having two loops and time complexity is O(n^2) which justifies the constraints as it comes out be 10^12 and we can perform 10^8 operations in one second??If I am calculating time complexity wrong please tell how to calculate it properly as I am facing this problem in many questions • » » 19 months ago, # ^ |   +5 I have not gone through your code but assuming it is O(n^2), how can 10^12 operations be performed if in one second you can perform 10^8. 10^12 / 10^8 = 10^4 second. • » » 19 months ago, # ^ | ← Rev. 2 →   0 your complexity is n * q, that is 10^10, and that is hundred times 10^8 » 19 months ago, # |   +1 Russian Olympiad rounds and Weak Pretests are really Synonyms. 
Thanks For FST in C , I was accessing Garbage value , I know it's my mistake but then why Pretests passed . :/ . » 19 months ago, # |   +8 Contest was amazing but Why There is A problem very Annoying :( killed me » 19 months ago, # | ← Rev. 6 →   0 Why I am getting WA on problem A? Used Approach reverse from second last participant ans = 1 , 2 , 3 .... , (t/x-1) , t/x , t/x , t/x ........Edit: Got it. t/x can be greater then n. My Solution ll n,x,t; cin>>n>>x>>t; ll y=t/x; n--; ll ans=1; if(y%2==0) ans= (y/2)*(y+1); else ans= ((y+1)/2)*y; ll z=ans+ (n-y)*y; cout< • » » 19 months ago, # ^ |   +5 Check for testcases with t/x > n • » » 19 months ago, # ^ |   0 The number of participants can be smaller than t/x. • » » 19 months ago, # ^ |   0 use this test n=5 x=2 t=100 » 19 months ago, # |   0 How to solve F? Any hint?? • » » 19 months ago, # ^ |   +16 in problem F, we notice one mathematical fact that helps us solve the problem: the distance of the median and the element of any array depends only on $nS-nG$, where $nS$ is the number of elements smaller than that element and $nG$ is the number of elements larger than that. To find the exact expression you should break it into two cases: $nS \geq nG$ and $nS < nG$. For $nS \geq nG$, it comes out to be $val = floor((nS-nG)/2)$ and for $nS < nG$ it comes out to be $val = floor((nG-nS+1)/2)$. So you can find subarrays with the maximum value of $val$ for each of those two cases. For this I implemented two lazy segment trees which output max/min and store prefix sum.Then you iterate in descending order and you can maintain in the array that if the value is greater than curr, it is $+1$ otherwise it is $-1$. Then prefix sum gives the value of $nG-nS$ for a prefix.120142177 » 19 months ago, # |   +61 Just ban cheaters accounts. It's getting out of control. • » » 19 months ago, # ^ |   +29 Honestly, still surprised 2k+ people were able to solve D • » » » 19 months ago, # ^ |   +3 And here I am, getting WA in D cause I mistakenly wrote i>0 instead of i>=0 in a for loop. » 19 months ago, # |   0 After sys failing C; Don't believe floating-point arithmetic. • » » 19 months ago, # ^ |   +3 Yeah, just never use floating point numbers unless you absolutely have to • » » 19 months ago, # ^ | ← Rev. 2 →   0 why Don't you just use (x + y — 1) / y for ceil. >.< » 19 months ago, # |   0 So I tried to find the ordering of the products in problem D with exchange argument. However I couldn't is there any proof for the order of products using exchange argument ? • » » 19 months ago, # ^ |   0 Here's what I can think of:Let's say I have to decide the order between two products A and B (where B has a higher required-number-of-prior-purchases-to-unlock-discount value)Now I know that no matter what order I go with, I won't be able to purchase B-type products at the discounted price.But there CAN exist some order where I can obtain A-type products at a discount. This order will be when I purchase A-type products after purchasing the minimum required B-products to unlock A's discount.After that I can buy all A-type products and then buy any remaining B-types.To summarize: I will ALWAYS have to spend full price on the objects whose discount-cutoff is high, but there's a chance I can unlock a discount on lower discount-cutoff products. Therefore I should make any full-price purchases on higher discount-cutoff products so I can unlock discounts on lower discount-cutoff products simultaneously. » 19 months ago, # |   +3 A was really interesting.Mind negetive numbers. 
And it's my first turn solve many problems in div2 thanks a lot! » 19 months ago, # |   +5 Why did C not have multiple testcases in each pretest? It has so many FSTs. » 19 months ago, # | ← Rev. 2 →   +4 Most of the code using pypy C is 0.99x seconds, isn't it unreasonable? I still don't know why my code is TLE codeN, K, X = map(int, r().split()) L = list(map(int, r().split())) L.sort() ans = [] for i in range(N-1): if L[i+1]- L[i] > X: ans.append((L[i+1]- L[i]-1)//X) ans.sort() cnt = 0 for i in ans: if K >= i: K -= i cnt += 1 else: break print(len(ans)+1-cnt) » 19 months ago, # | ← Rev. 2 →   +5 I think TL on problem C is too tight for Pypy3. Or I was wrong?(Edit: I got TLE while system testing.) coden, k, x = mip() a = lmip() a.sort() p = [] g = 1 for i in range(1, n): di = a[i] - a[i - 1] if di > x: p.append((di - 1) // x) g += 1 p.sort() for i in p: if i > k: break k -= i g -= 1 print(g) • » » 19 months ago, # ^ | ← Rev. 4 →   +4 It is not so much pypyIt is that the specific version of pypy codeforces uses is bad with numbers above int32, it isfixed in later versionsAnd both C&D (at least in my implementation) use such numbers a lotHere are more detail https://codeforces.com/blog/entry/90184 • » » » 19 months ago, # ^ |   0 Thanks! • » » » 19 months ago, # ^ |   +1 I think it is pypy in this case.The same code for C: 1. PyPy3 got TLE: https://codeforces.com/contest/1539/submission/120095637 2. Python3 got AC: https://codeforces.com/contest/1539/submission/120126977 » 19 months ago, # |   0 I thought tourist will be streaming for today's contest when he registered. » 19 months ago, # |   0 can someone give counter case for pretest 7 of problem D, I tried similar to solution mentioned by others above. Submission » 19 months ago, # |   +33 Just Python Things :) • » » 19 months ago, # ^ |   +4 Lol FSTs. What just happened there? I see a lot ppl getting FST on C and D. XD • » » » 19 months ago, # ^ |   +2 In case of python int64 numbers which are really slow in codeforces version of pypy on windowsD should be fine if you solve it with two pointers, but gets the same problem if you use binary search like I did • » » » » 19 months ago, # ^ |   +2 Feels like the pythonistas are in the middle of an appocalypse. I have read that blog mentioning int64's time limit problem by pajenegod earlier, but I've seen it in action for the first time. Really sorry for all those who FST'ed because of this problem. I love Python for every reason possible but I just don't use it in CP because people don't care about us and our time limits :( • » » » » » 19 months ago, # ^ | ← Rev. 3 →   +4 The thing that sucks the most is that it is not some fundamental performance issue of python, It is just the version/instance installed at codeforces sucks, Which makes it more annoying to get fst on that • » » » 19 months ago, # ^ |   +1 TLE on test 18 ╥﹏╥ • » » » » 19 months ago, # ^ |   +1 me too • » » 19 months ago, # ^ |   0 Does Russian Olympiad allow Python? This might be part of the issue? • » » 19 months ago, # ^ |   0 What extension do you use ? » 19 months ago, # | ← Rev. 2 →   -16 Why this gives WA on test 37 in C I didn't expect that :( cin>>n>>k>>x; for(int i=0;i>a[i]; } sort(a,a+n); vectortmp; for(int i=1;ix) tmp.push_back(a[i]-a[i-1]); } sort(tmp.begin(),tmp.end()); int si=tmp.size(); si++; //cout< • » » 19 months ago, # ^ | ← Rev. 2 →   +11 Strange formula. Suppose you have tmp = {0, 10}, x = 2. So k = 4 needed to have one group. Your solution thinks it can make one group with k = 3 only. 
• » » » 19 months ago, # ^ |   +20 actually how the hell this passed 36 test cases » 19 months ago, # | ← Rev. 3 →   +4 It's sad and funny how pretests and 1 sec limit killed most of python solutions. During contest I was happy my C passed and sad cause D didn't. Now, I'm happy D failed, since I have rewritten D in c++ and sad that C passed pretests, since C failed tests :) :( Mood roller-coaster round :) » 19 months ago, # |   +4 Please give some relief in tightness of time bounds for python users. My O(n) solution for Q. C gave TLE during system testing. • » » 19 months ago, # ^ |   0 Hey, a couple of things. Your solution is actually nlogn, since you are sorting. My pypy submission also failed on system tests, and it's really sad. Turns out the pypy version that codeforces uses is just really bad with large numbers. So much so, in fact, that the same solution passed comfortably using Python 3.9. I really hope they do something about this. » 19 months ago, # |   +85 I could not understand the statement of problem F during the round. It has many ambiguous points.I first read maximized value as |i-(center's value)|, |a[i]-(center's value)|, not |(position of a[i] in a subsegment) — (position of center)|.The statement said "the center" is not the position but the element itself, so the distance compares the position and element's value. I messed up. First sample which has explanation is pretty weak to resolve them. • » » 19 months ago, # ^ |   +16 Exactly my thoughts as well. Surprisingly, each explanation bullet for the first sample also agrees with the other interpretation of the problem xD » 19 months ago, # |   0 Why do I have TLE? I don't understand.120075328 • » » 19 months ago, # ^ |   0 Too many sum operations. Google "prefix array". • » » » 19 months ago, # ^ |   0 I got AC with same code 120078694 • » » » » 19 months ago, # ^ |   0 You can see your solution runs for 1.8s ,very close to 2s.Actually we can let every question' l=1 and r=n ,then your algorithm turns into O(n^2) » 19 months ago, # |   +121 Feel my pain • » » 19 months ago, # ^ |   0 Rough day for many of us, bad contest $\implies$ good upcoming contest :) • » » 19 months ago, # ^ | ← Rev. 2 →   +8 YES ! » 19 months ago, # |   -32 Short Solutions Solution Avoid solve() { ll n,i,j,k,m,x,t; cin>>n>>x>>t; m=t/x; k=(n-1)*x; ll nn=min(n,m); n-=nn; ll ans=(n*m)+(nn*(nn-1))/2; cout<>n>>q; string s; cin>>s; vector> v(n+1,vector(26)); for(i=1;i<=n;i++) { v[i] = v[i-1]; v[i][s[i-1]-'a']++; } while(q--) { ll l,r; cin>>l>>r; ll ans=0; vector v1=v[r]; vector v2=v[l-1]; for(int i=0;i<26;i++) { ans+=(i+1)*(v1[i]-v2[i]); } cout<>n>>k>>x; vector v(n); for(i=0;i>v[i]; } sov(v); priority_queue,greater> pq; ll grp=1; for(i=1;ix) { grp++; pq.push((v[i]-v[i-1]-1)/x); } } while(!pq.empty() && k>0) { auto it=pq.top(); if(k>n; vector> v(n,vector(2)); for(ll i=0;i>v[i][1]>>v[i][0]; sov(v); i=0;j=n-1; ll ans=0; ll tot=0; while(i<=j) { if(tot>=v[i][0]) { ans+=v[i][1]; tot+=v[i][1]; i++; } else { ll diff=v[i][0]-tot; ll z=min(v[j][1],diff); ans+=z*2; tot+=z; v[j][1]-=z; if(v[j][1]==0) {j--;} } } cout< » 19 months ago, # |   +4 Why did python O(N) TLE in C;-;Time to reject python embrace c++ • » » 19 months ago, # ^ |   0 They should rejudge the submissions. O(n) is definitely sufficient enough for 1s Time-limit. 
• » » » 19 months ago, # ^ |   +1 python TLE on test 18 https://codeforces.com/contest/1539/submission/120107086same code C++ code AC with 93 ms https://codeforces.com/contest/1539/submission/120127276 • » » » » 19 months ago, # ^ |   0 same, my python solution TLEd for test case 18 and it passed after adding the fast IO class. AC with Fast IO • » » » » » 19 months ago, # ^ |   +1 I can't belive what I just did The same code that TLE'd in a contest in pypy 3 passed in python 3python 3https://codeforces.com/contest/1539/submission/120128958pypy 3https://codeforces.com/contest/1539/submission/120107086 • » » » » » » 19 months ago, # ^ |   0 pypy3 is suck at big integers https://codeforces.com/blog/entry/91905?#comment-806750 » 19 months ago, # |   -49 Extremely Sorry for posting this question here, Regarding yesterday's atcoder beginnner contest abc206 F. (https://atcoder.jp/contests/abc206/tasks/abc206_f). I tried to solve it by DP. This is what I tried. My understanding for intersecting is, Two intervals A, B are said to intersect if there is atleast one real number x such that x belongs to both A and B. (i.e. intervals which lie completely within each other do intersect).With this definition, I tried the following logic. First for each position from 1 to 100 note the end positions of the interval starting from that position. Similarly find the minimum end position of all the intervals starting at or after the current position. Then do a reverse dp (states 0, 1 : 0 -> considering all the intervals starting from >= current position, whether the 1st player can win. 1 -> considering all the intervals starting from current position (only this current position), whether the 1st player can win).The transition is for all the intervals starting from current position, dp[i][1] = dp[i][1] | !dp[end_position][0]. Then For j from i until the minimum end position -1, dp[i][0] = dp[i][0] | dp[j][1]https://atcoder.jp/contests/abc206/submissions/23636854But it does not work. I could not think of a case where it fails. Could somone please suggest a case where it does not work ? » 19 months ago, # |   +35 Ideone Link: D solution Link Matching submissions which I have found till now: https://codeforces.com/contest/1539/submission/120121120 by Harsh18064, https://codeforces.com/contest/1539/submission/120121447 by dv.jakhar, https://codeforces.com/contest/1539/submission/120121675 by Abhijeet007There might be many more such submissions which would have copied from the same source. MikeMirzayanov and ch_egor Please have a look at this. I did not want to pollute the CF blog but this type of behaviour must be penalised. » 19 months ago, # |   0 greedy forces » 19 months ago, # |   +9 What is disappointment? spoilerWhen you are 2 lines of code away from the correct solution and time up! PS — gree-D » 19 months ago, # |   +1 For 1539C - Stable Groups, 120068517 (PyPy 3) and 120127418 (Python 3) are exactly the same, however, the PyPy one got TLE while the Python one got AC. Why did this happen? • » » 19 months ago, # ^ | ← Rev. 2 →   +11 I presume this is the reasonhttps://codeforces.com/blog/entry/90184 • » » » 19 months ago, # ^ |   0 Thanks. Fortunately, this is not a rated competition for me. » 19 months ago, # |   +1 Please try to make pretests stronger. T~T. My Global rank fell from 63 to 1807, because C failed on main tests! • » » 19 months ago, # ^ |   0 that's the point of the pretests, they are not supposed to cover ALL testcases. They are giving meaning to hacks. 
» 19 months ago, # |   -10 Pretests for C are too weak. There are no tests with max K. My code passed pretests but failed main tests because I forgot to use long long for k :( » 19 months ago, # |   +62 To not keep you waiting, the ratings updated preliminarily. We will remove cheaters and update the ratings again soon! » 19 months ago, # |   0 I can't believe what I just did The same code that TLE'd in a contest in pypy 3 passed in python 3python 3https://codeforces.com/contest/1539/submission/120128958pypy 3https://codeforces.com/contest/1539/submission/120107086CAN ANYONE PLEASE EXPLAIN THIS TIA • » » 19 months ago, # ^ |   0 Maybe you can see this post: When is PyPy slower than Python? » 19 months ago, # |   -17 The pretests of C is soo weak.Even in the test cases if you just add 1 participant in every place which has diff > x then it will pass upto test case 24. • » » 19 months ago, # ^ | ← Rev. 2 →   0 I don't think so.I did that in my first submission and It showed wrong answer on pretest 3 • » » » 19 months ago, # ^ |   0 Compare solution 1 and solution 2Only thing I changed here is k-- to k-=co[i] in the last loop. » 19 months ago, # |   +4 editorial please » 19 months ago, # | ← Rev. 2 →   0 In problem D,Can anyone please explain as to why it would not be optimal to take more than required ai's for any i? Can it not be the case that it would profit us later? • » » 19 months ago, # ^ |   +4 It is not optimal to buy any product more than it is actually required. Consider if you buy an extra item at a discounted price and then by doing this step your total number of bought items reaches a level that you get a discount for buying another item. In this scenario, your total incurred cost would be 1+1=2. Instead of buying this extra item, you can buy the second item at its original price i.e. 2. » 19 months ago, # |   +9 Pretest of problem C seems very weak (╥╯^╰╥) » 19 months ago, # | ← Rev. 2 →   +13 How to solve E? » 19 months ago, # |   0 Where is editorial? » 19 months ago, # |   0 For problem C, can someone please explain why the difference between two students' level is supposed to be (a[i+1]-a[i]-1) instead of straightforwardly taking (a[i+1]-a[i]). I can find some of the examples where the formula without minus 1 fails, but I can't find a reason that explains why :( • » » 19 months ago, # ^ |   0 There is -1 because if you won't put it then suppose a[I+1]-a[I] is 8 and X is 4 , so 8/4=2 whereas you need only one person if a[I] is 1 and a[I+1] is 9 then putting 5 in between will be sufficient. • » » 19 months ago, # ^ |   0 made same mistake that costed me 2 WA's and 40 mins to figure • » » 19 months ago, # ^ | ← Rev. 2 →   0 If we have 2 students with 1 and 7 values and x=2 the difference =6 and it's divisible by x in this case you need two students to fill the gap it becomes 1 3 5 7 so without this -1 answer will be 3 students which is wrong » 19 months ago, # |   0 How to solve D using binary search?The premise was to binary search the number of "Twos" I want such that I try to get as minimum Twos as possible but my checking algorithm isn't correct, any ideas? • » » 19 months ago, # ^ |   +1 I solved with binary search, you can look at my submissionWell, it got TLE FST because of the pypy issues, but that's a different story • » » 19 months ago, # ^ |   +9 I solved it using binary search as you explained. You do search on number of items, that you will buy without discount (so for price of $2$). 
This bases on a simple fact: If you can buy all needed items using only $k$ purchases without discount, when you always can buy all needed items using $k_1 > k$ purchases without discount. So, the minimal $k$, such as you can buy all, buying only $k$ items without discount will yield the solution.To check inside binsearch you should firstly sort all items by their $b_i$, before search. Logic behind it is pretty intuitive, you do not want to spend $2$ bucks on items, that potentially could be unlocked with discount later, so inside binary search you firstly simulate buying $mid$ items without discount from the end of sorted list (so, buying the most hard-to-unlick items). Then you basically go from the beggining of this list, and check that for items you need and did not buy yet for 2 bucks, you have discount. If you can buy all needed items with only $mid$ undiscounted purchases, then do $r := mid$, searching for more optimal solution, else do $l := mid + 1$. After binsearch answer is $2 * k + ((\sum a_i) - k)$. Submission: 120089991 » 19 months ago, # |   +20 https://www.youtube.com/channel/UCm-7dkk1fHId1hy5vY5VVCQ this guy is spreading live solutions.plz take serious actions » 19 months ago, # |   -13 I solved D in O(n) using MITM technique. You can check it out: https://codeforces.com/contest/1539/submission/120088638 » 19 months ago, # | ← Rev. 2 →   0 Can someone please explain this to me this is the solution I had submitted for problem D during the contest and it gave TLE but after the contest I submitted the same solution with a small change in the compare function and it worked Same solution for D with change in compare function • » » 19 months ago, # ^ |   +11 I guess this blog should clarify the issuehttps://codeforces.com/blog/entry/70237 » 19 months ago, # |   +3 PROBLEM A LOOKING AT QUESTION B,C,D TODAY (JUST FOR FUN)... » 19 months ago, # |   0 CF-DIV2D shinigami676 kriborz Mnltrix vineet_02 akhilesh.k gsamarth882 Adarsh_29 Harsh18064 there are some users 100% cheat in this contest and almost are India. Pls check and give them heavy penalty. » 19 months ago, # |   +2 Its my humble request to the cheaters please don't ruin the Codeforces contests by cheating , if you really want to improve do the hard work and trust me efforts never go in vain someday or the other you gonna get what you have worked for. » 19 months ago, # | ← Rev. 5 →   +18 Can someone help me out with F? I tried using segment trees. The idea basically is that in a given subsegement for a particular x what matters is the number: No. of elements greater than x — No. of elements less than x. this will be the sum over that subsegment if I represent a number greater than x as 1 and -1 otherwise(it get's a little more complicated when you consider equal elements but the idea is same). The code seems to be working correctly, except the fact that I am getting TLE. And given that I just do 2*n updates and 4*n queries in total I don't understand why it is so. Here's my code • » » 19 months ago, # ^ | ← Rev. 3 →   +30 Hey there! Try using ar instead of vt • » » » 19 months ago, # ^ |   +24 Oh Thanks! It worked!!!!!! • » » » » 19 months ago, # ^ |   +18 Did you forgot to shift to alt while answering the question yourself, or you really did that ? • » » » » » 19 months ago, # ^ |   +17 identity crisis ig • » » » » » 19 months ago, # ^ |   +38 I thought it was funny. lol » 19 months ago, # |   0 Why you have not published the editorial till now ch_egor? • » » 19 months ago, # ^ |   +6 Chill bro. 
Every contest doesn't get an editorial this fast. • » » 19 months ago, # ^ |   0 ch_egor ping » 19 months ago, # |   0 Waiting for editorial. Have been waiting for 5 hours already. Where. is. editorial. » 19 months ago, # |   0 Me, waiting for editorial........ » 19 months ago, # |   -12 Contest was div-3 type and my performance was div-4 type » 19 months ago, # |   -22 Speedforces. • » » 19 months ago, # ^ |   -23 Also fstforces. » 19 months ago, # | ← Rev. 3 →   -22 Test: 5 3 10 2 12 1 11 3 10 1 8 Submission: Your text to link here...Output : 20 Correct Output: 19This submission is getting accepted but however the above test case shows wrong output. Correct Output should be 19, instead of 20._tryhard • » » 19 months ago, # ^ |   0 you are right bro this is my submission LOL. and I find my mistake. Thanks » 19 months ago, # | ← Rev. 2 →   0 Why I am getting runtime error?Anyone please tell.My submission. • » » 19 months ago, # ^ |   0 See the problem constraints, n can be 10^9, which is too large, results in a runtime error • » » » 19 months ago, # ^ |   0 That's why I have used long long int.It can take upto 10^18 as an input • » » » » 19 months ago, # ^ |   0 I mean size of array is arr[n] which is too large. You can see this error by declaring array of size 10^9. size can be upto 10^8 » 19 months ago, # |   +3 Anyone has any idea if the contest has been officially deemed unrated or there's an issue with the rating? I don't see any posts about this round going unrated, and it's the first time I'd touched the Specialist tag so I really really want the round to stay rated. • » » 19 months ago, # ^ |   0 I guess they made it unrated because there were too many cheaters » 19 months ago, # |   0 Harsh18064 this user was cheated why he still rated ? MikeMirzayanov check this • » » 19 months ago, # ^ |   0 he 100% cheat in problem D pls skip him » 19 months ago, # |   0 getting tle in 'C' in testcase 61 using java, the same code works well in c++
2. (16 pts.) Consider the following relations on Z.

R1 = {(x, y) | y = x + 1}
R2 = {(x, y) | y = x - 1}
R3 = {(x, y) | y = 2x + 1}
R4 = {(x, y) | y = 2x - 1}

Describe each of the following composite relations in set-builder notation.

R1 ∘ R1    R1 ∘ R2    R1 ∘ R3    R1 ∘ R4
R2 ∘ R1    R2 ∘ R2    R2 ∘ R3    R2 ∘ R4
R3 ∘ R1    R3 ∘ R2    R3 ∘ R3    R3 ∘ R4
R4 ∘ R1    R4 ∘ R2    R4 ∘ R3    R4 ∘ R4
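For orientation (not part of the original exercise), here is how two of these composites work out, assuming the convention that (x, z) ∈ S ∘ R means there is some y with (x, y) ∈ R and (y, z) ∈ S, i.e. R is applied first; some texts compose in the opposite order, which changes the answers.

```latex
% R1 then R1:  y = x + 1 and z = y + 1, hence z = x + 2
R_1 \circ R_1 = \{(x, z) \mid z = x + 2\}
% R1 then R3:  y = x + 1 and z = 2y + 1, hence z = 2x + 3
R_3 \circ R_1 = \{(x, z) \mid z = 2x + 3\}
```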
# calculus

Use integration by parts to evaluate the integral of x*sec^2(3x).

My answer: ([x*tan(3x)]/3) - [ln(sec(3x))/9], but it's marked incorrect.

u = x, dv = sec^2(3x) dx
du = dx, v = (1/3)tan(3x)

[x*tan(3x)]/3 - integral of (1/3)tan(3x) dx
= [x*tan(3x)]/3 - (1/3)[ln(sec(3x))/3]
= [x*tan(3x)]/3 - [ln(sec(3x))/9]

What am I doing wrong?

1. integral of (1/3)tan(3x) dx = (1/3) integral of (sin 3x / cos 3x) dx = -(1/9)ln|cos 3x| + C = (1/9)ln|sec 3x| + C

so the antiderivative is (x tan 3x)/3 - (1/9)ln|sec 3x| + C. Your working is essentially right; make sure you include the absolute value and the constant of integration in the submitted answer.

posted by Mohamed
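For completeness, the full integration-by-parts computation written out as a standard derivation (included only for clarity):

```latex
% Integration by parts with u = x, dv = \sec^2(3x)\,dx, so du = dx, v = \tfrac{1}{3}\tan(3x):
\begin{aligned}
\int x\sec^{2}(3x)\,dx
  &= \frac{x\tan(3x)}{3} - \frac{1}{3}\int \tan(3x)\,dx \\
  &= \frac{x\tan(3x)}{3} - \frac{1}{3}\cdot\frac{1}{3}\ln\lvert\sec(3x)\rvert + C \\
  &= \frac{x\tan(3x)}{3} - \frac{\ln\lvert\sec(3x)\rvert}{9} + C .
\end{aligned}
```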
A- A+ Alt. Display # Sources of variation in simulated ecosystem carbon storage capacity from the 5th Climate Model Intercomparison Project (CMIP5) ## Abstract Ecosystem carbon (C) storage strongly regulates climate-C cycle feedback and is largely determined by both C residence time and C input from net primary productivity (NPP). However, spatial patterns of ecosystem C storage and its variation have not been well quantified in earth system models (ESMs), which is essential to predict future climate change and guide model development. We intended to evaluate spatial patterns of ecosystem C storage capacity simulated by ESMs as part of the 5th Climate Model Intercomparison Project (CMIP5) and explore the sources of multi-model variation from mean residence time (MRT) and/or C inputs. Five ESMs were evaluated, including C inputs (NPP and [gross primary productivity] GPP), outputs (autotrophic/heterotrophic respiration) and pools (vegetation, litter and soil C). ESMs reasonably simulated the NPP and NPP/GPP ratio compared with Moderate Resolution Imaging Spectroradiometer (MODIS) estimates except NorESM. However, all of the models significantly underestimated ecosystem MRT, resulting in underestimation of ecosystem C storage capacity. CCSM predicted the lowest ecosystem C storage capacity (~10 kg C m−2) with the lowest MRT values (14 yr), while MIROC-ESM estimated the highest ecosystem C storage capacity (~36 kg C m−2) with the longest MRT (44 yr). Ecosystem C storage capacity varied considerably among models, with larger variation at high latitudes and in Australia, mainly resulting from the differences in the MRTs across models. Our results indicate that additional research is needed to improve post-photosynthesis C-cycle modelling, especially at high latitudes, so that ecosystem C residence time and storage capacity can be appropriately simulated. Keywords: How to Cite: Yan, Y., Luo, Y., Zhou, X. and Chen, J., 2014. Sources of variation in simulated ecosystem carbon storage capacity from the 5th Climate Model Intercomparison Project (CMIP5). Tellus B: Chemical and Physical Meteorology, 66(1), p.22568. DOI: http://doi.org/10.3402/tellusb.v66.22568 Published on 01 Jan 2014 Accepted on 20 Mar 2014            Submitted on 8 Aug 2013 ## 1. Introduction The rising atmospheric CO2 concentration and resultant climate warming may substantially impact the global carbon (C) budget (Solomon et al., 2007), leading to positive or negative feedback to global climate change (Friedlingstein et al., 2006; Heimann and Reichstein, 2008). Terrestrial ecosystems are estimated to have sequestered nearly 30% of the C released by anthropogenic activities from 1960 to 2008, during which fossil fuel CO2 emissions increased from 2.4 to 8.7 Pg C yr−1 (Canadell et al., 2007; Le Quere et al., 2009). However, whether the natural sink will be sustainable into the future is under debate due to the complexity of terrestrial ecosystem responses to global change, such as forest dieback (Cox et al., 2004), land use change (Strassmann et al., 2008), and storms reducing canopy photosynthesis and transferring C from plant to litter pools (Chambers et al., 2007). Therefore, it is imperative to assess the sustainability of terrestrial C storage for guiding international efforts to stabilise CO2 concentration. 
Terrestrial ecosystem C storage has been studied in the past decades using experimental (Johnston et al., 1996; Lales et al., 2001; Tang et al., 2012) and modelling approaches (Emanuel et al., 1984; Tian et al., 2012) at a biome or regional scale. For example, global climate change experiments, such as open-top chambers, free-air CO2 enrichment (FACE) and infrared heating techniques, have been conducted to quantify responses of terrestrial C storage to elevated CO2 (Mooney et al., 1999) and climate change (Kane and Vogel, 2009). These experimental results have advanced global model development to predict terrestrial C storage in response to climate change (Friedlingstein et al., 2006; Tian et al., 2012). Earth system models (ESMs) have often coupled atmosphere–ocean general circulation models (GCMs) with the Dynamic Global Vegetation Models (DGVMs) or Terrestrial Biogeochemistry Models (TBMs, e.g. Krinner et al., 2005; Prentice et al., 2007). The different coupled models could result in diverse results (Ahlström et al., 2013) with considerable uncertainty in magnitude and even in direction (Friedlingstein et al., 2006). The accuracy of these ESMs in simulating ecosystem C storage remains unclear, considerably affecting our confidence in predicting C storage in terrestrial ecosystem under future climate conditions. The C storage of an ecosystem under given environmental conditions will ultimately approach its steady state (referred to as ecosystem C storage capacity, Xia et al., 2013). Ecosystem C storage capacity is often determined by C influx and mean residence time (MRT; Luo et al., 2001, 2003), as adopted in most biogeochemical models (Parton et al., 1988). Since biogeochemical models are usually first initialised to the steady state before being used for further analysis, the steady-state ecosystem C storage and its determinants are good indicators for model performance at a given C-cycle model structure. In ESMs, net primary productivity (NPP) is often estimated by canopy-absorbed photosynthetically active radiation (PAR, Cramer and Field, 1999), while MRT is calculated with photosynthate allocation or C transfer coefficients among various C pools and environmental forcing (Xia et al., 2013). The large variations on NPP and MRT among the models may result from differences in simplifying assumptions and the environmental variables used, leading to various results for terrestrial C storage capacity. A recent analysis of the 5th Climate Model Intercomparison Project (CMIP5) from Todd-Brown et al. (2013) indicated that the estimates of the global soil C pool varied 5.9-fold among 12 models, with 2.6-fold variation in NPP and 3.6-fold variation in MRT. However, spatial variations in ecosystem C storage capacity determined by NPP and MRT and multi-model variations at a global scale have not yet been well quantified. Variations associated with regional and global NPP have been widely evaluated via comparison with data sets and among models (Kicklighter et al., 1999; Pinsonneault et al., 2011; Wang et al., 2011). For example, comparison among 17 uncoupled terrestrial biogeochemical models showed similar estimates of NPP over large areas (Cramer et al., 1999). Researchers have also analysed the sources of uncertainty in NPP via direct comparison of model structure (Adams et al., 2004) or analysis of the relationship between NPP and climate variables (Wang et al., 2011). The results showed general agreement on average among models but exhibited significant differences in spatial patterns. 
However, the sources of these variations in the spatial distribution of NPP remain unclear. Mean MRT has been estimated at a global scale through soil respiration measurement (Raich and Schlesinger, 1992) and C isotope tracing (Ciais et al., 1999; Randerson et al., 1999), as well as by inverse methods at a regional scale (Barrett, 2002; Zhou and Luo, 2008; Zhao and Running, 2010). However, spatial pattern of MRT at a global scale is still unknown, limiting accurate evaluation of the terrestrial C balance and model prediction of future global C cycling in response to climate change. If uncertainties in the MRT estimation are not adequately addressed at the global scale, ecosystem C storage capacity cannot be fully understood. For example, Zhou and Luo (2008) and Zhou et al. (2012) calculated ecosystem C uptake with increased NPP and MRT in the USA and found that MRT was the key source of uncertainty in the results. Therefore, quantifying variation in NPP and MRT at a global scale is necessary for better understanding of terrestrial ecosystem C storage. To date, no studies have been conducted to examine the spatial patterns in modelled and observed ecosystem C storage capacity and their variations at a global scale. In this study, we examined spatial patterns of ecosystem C storage capacity simulated by the ESMs included in CMIP5, evaluated the multi-model variations and explored their potential sources such as MRT, C inputs, or both. We aimed to (1) quantify spatial patterns of ecosystem C storage capacity and their multi-model variations and (2) examine the sources of variations from NPP and MRT estimates by ESMs. Here, the simulated results from five models were used to estimate MRT using C pools and influx or efflux. We mainly focused on assessing spatial variability across the models through model intercomparison at grid and global scales. ## 2. Materials and methods ### 2.1. Model description To calculate ecosystem C storage capacity and MRT, the simulated results of ESMs from the 5th CMIP5 were used, including C influx [gross primary productivity (GPP) and NPP], respiration [autotrophic respiration (Ra) and heterotrophic respiration (Rh)] and C pools (soil, litter and plant C, http://pcmdi9.llnl.gov/esgf-web-fe/). Eight models from five institutes were available in CMIP5 (Table 1). Models from the same climate centre showed more than 90% relative similarity (e.g. MIROC-ESM base model and CHEM, NorESM1-M and ME). Therefore, the modelled results from the same centre were pooled together prior to further analysis. However, IPSL models were still retained because of the different Re/GPP ratio between IPSL-CM5B and IPSL-CM5A with the values of 1 and 1.96, respectively. These ESMs combine climate models, atmospheric and oceanic process models, and terrestrial ecosystem models to examine the responses of earth system to global climate change. In this study, we focused on the terrestrial ecosystem models. Ecosystem MRT cannot directly be obtained from ESMs’ results and is calculated by the C residence times and C allocation coefficients for individual C pools in plants and soils (Barrett, 2002). Carbon enters into the terrestrial ecosystem through plant photosynthesis, which is partitioned into various plant pools (i.e. leaf, root and woody biomass). Plant materials then die to form litter pools (i.e. metabolic, structural and coarse woody debris). 
The litter C is partially decomposed by microbes to release CO2 and partially converted to soil organic matter (SOM) in fast, slow and passive pools. Most of the models share similar structures for C input, its partitioning into plant and soil pools, and terrestrial decomposition, but differences in the C transfer coefficients between pools, as well as in their responses to environmental variables, could result in different MRTs across the models (Xia et al., 2013). Carbon input (GPP) is simulated similarly across models with a leaf-level photosynthesis model for sunlit and shaded leaves, scaled up to the region or globe using leaf area index (LAI). The five models define different numbers of plant functional types (PFTs), characteristic of different climate zones or biomes: 9 PFTs for CanESM, 13 for IPSL and MIROC, and 15 for CCSM/NorESM. However, only the MIROC model considers dynamic vegetation (Watanabe et al., 2011). Autotrophic respiration (Ra), which includes maintenance respiration (MR) and growth respiration (GR), is critical for estimating NPP (GPP − Ra). MIROC and CanESM simulated Ra from the respiration rate, the chemical composition of each plant tissue, and air temperature with a Q10 function (Arora et al., 2011; Watanabe et al., 2011), whereas CCSM and NorESM estimated MR as a function of temperature and live-tissue N concentration, and GR as 0.3 times the total C in new growth (woody and non-woody tissues) for a given time step (Lawrence et al., 2011). IPSL simulated MR as a linear function of biomass and temperature, and GR as a fixed fraction (30%) of allocated photosynthates (Piao et al., 2010). Heterotrophic respiration, or terrestrial decomposition, is relatively uniform across the ESMs, represented as a first-order decay process through 1–9 dead C pools (Todd-Brown et al., 2013). Decomposition in most models depends on a Q10 function or Arrhenius-type equations, which are functionally similar (Davidson and Janssens, 2006). The decomposition rate is modified as a function of temperature (T) relative to a baseline (T0), such that F(T) = Q10^((T − T0)/10), with different Q10 values across models. All models account for land use change, but only CCSM and NorESM have the nitrogen cycle coupled with the C cycle.

### 2.2. Methods

MRT is the average time that a C atom remains in a compartment of the system (Luo et al., 2003). Ecosystem MRT can be aggregated from the C residence times of the individual plant and soil pools in different ways (Zhou et al., 2012). Zhou and Luo (2008) used an inverse model to estimate the MRT in the USA, defining the MRT of individual pools as the inverse of their C transfer coefficients. Friedlingstein et al. (2006) directly estimated the MRT of dead C (litter plus soil C pools) as the ratio of total dead C to Rh. Different mean C residence times in the individual C pools probably cause regional discrepancies in ecosystem MRT. Here, we estimated ecosystem MRT using the C balance method, as the ratio of the C pool to the C outflow. For an ecosystem, the C pool (Cpool) has three components (vegetation, litter and soil), and the C loss is ecosystem respiration (Re), which includes Ra and Rh. Although wildfires account for a large C efflux (about 2–4 Pg C yr−1, or about 3–6% of soil respiration; Bowman et al., 2009; van der Werf et al., 2010), it is difficult to quantify fire effects on MRT, and hence on ecosystem C storage, by either modelling or experiments. Moreover, the MIROC models did not consider fire.
We thus did not take fire effects into account and calculated ecosystem MRT as follows:

(1) $\mathrm{MRT} = C_{\mathrm{pool}} / R_{\mathrm{e}}$

At the steady state, Re is equal to GPP. Except in IPSL-CM5A, the Re/GPP ratios for all of the models range from 0.99 to 1.1 for the years 1850–1860, during which most of the models can be considered to be at steady state. Here, we used only IPSL-CM5B to estimate the MRTs and the resultant ecosystem C storage for IPSL, because IPSL-CM5A was not at steady state. In addition, Thompson and Randerson (1999) indicated that there are two types of MRT for terrestrial ecosystems: the GPP-based and the NPP-based MRT; the latter does not include autotrophic respiration. If not specified, ecosystem MRT refers to the GPP-based MRT in this study. To allow better comparison, we also estimated the NPP-based MRT. The NPP-based MRT (MRTcor) was obtained by correcting the ecosystem MRT with the NPP/GPP ratio (i.e. dividing MRT by NPP/GPP, which is equivalent to Cpool/NPP). NPP and ecosystem respiration have significant seasonal and inter-annual variability. To decrease the effects of inter-annual variability on the MRT, monthly means for all variables from 1850 to 1860 were determined for each grid cell to generate an overall mean for calculating MRT at the steady state. All of the data were regridded using R software to a common projection (WGS 84) and a 1°×1° spatial resolution. Latitudinal patterns were extracted by moving averages over 1° latitudinal bands. The regridding approach assumed conservation of mass and that, for nearby grid cells, a degree of latitude is proportional to distance (Todd-Brown et al., 2013). Multi-model variability in NPP, MRT and terrestrial C storage capacity was measured using the standard deviation (SD) and the coefficient of variation (CV = SD/mean). The SD and CV were calculated from the five models' results for each grid cell at a spatial resolution of 1°×1°.

### 2.3. Data sets

Four data sets were used to evaluate model performance: the NPP and NPP/GPP ratio derived from Moderate Resolution Imaging Spectroradiometer (MODIS) data (Zhang et al., 2009; Zhao and Running, 2010), the MRT for the USA estimated using inverse models (Zhou and Luo, 2008; Zhou et al., 2012) and soil C storage from the Harmonized World Soil Database (HWSD). We used the 0.008°×0.008° gridded MODIS product MOD17A3 from 2000 to 2009 (Zhao and Running, 2010). Here, the GPP was calculated as GPP = ɛ×FPAR×PAR, where ɛ is the radiation use efficiency of the vegetation, determined by the maximum ɛ in each biome (ɛmax), temperature (T) and soil moisture (M) as ɛ = ɛmax×f(T)×f(M), and FPAR is the fraction of incident PAR absorbed by the canopy. The annual NPP was calculated as NPP = Σ(PsnNet) − Rmo − Rg, where PsnNet = GPP − Rml − Rmr; Rml, Rmr and Rmo are the MR of leaves, fine roots and other living parts, respectively, and Rg is GR. All the respiration data were obtained from the C4 MOD17 Biome Parameter Look-Up Table (BPLUT). The NPP/GPP ratio was also used to assess model performance due to its greater stability compared with NPP or GPP alone (Zhang et al., 2009). The simulated NPPs and NPP/GPP ratios in 1995–2005 were used for model-data comparison. Spatial patterns of directly observed MRTs are not available at the global scale for evaluating the models. Currently, regional MRTs have been estimated using inverse analysis only for the USA and Australia (Barrett, 2002; Zhou and Luo, 2008; Zhou et al., 2012).
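As a concrete illustration of eq. (1) and the MRTcor correction described in Section 2.2, the following minimal sketch (not the authors' code) computes the grid-cell quantities, assuming the CMIP5 fields have already been averaged over 1850–1860 and regridded to a common 1°×1° grid; the array names (`c_veg`, `c_litter`, `c_soil`, `ra`, `rh`, `npp`, `gpp`) are illustrative.

```python
import numpy as np

def ecosystem_mrt(c_veg, c_litter, c_soil, ra, rh, npp, gpp):
    """Grid-cell MRT, NPP-based MRT and storage capacity (illustrative).

    All inputs are 2-D arrays on a common grid: pools in kg C m-2,
    fluxes in kg C m-2 yr-1, averaged over a period near steady state.
    """
    c_pool = c_veg + c_litter + c_soil           # total ecosystem C pool
    re = ra + rh                                 # ecosystem respiration
    mrt = np.where(re > 0, c_pool / re, np.nan)  # eq. (1): GPP-based MRT (yr)
    ratio = np.where(gpp > 0, npp / gpp, np.nan) # NPP/GPP ratio
    mrt_cor = mrt / ratio                        # NPP-based MRT = C_pool / NPP
    capacity = npp * mrt_cor                     # ecosystem C storage capacity
    return mrt, mrt_cor, capacity
```

Because Re ≈ GPP near steady state, `capacity` is approximately the total ecosystem C pool, which is the sense in which NPP×MRTcor represents the ecosystem C storage capacity.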
In the conterminous USA, the estimated MRTs were estimated by genetic algorithm (Zhou and Luo, 2008) and Markov Chain Monte Carlo (MCMC, Zhou et al., 2012) with values of 46 and 56.8 yr, respectively. However, MRTs estimated by Zhou et al. (2012) and Zhou and Luo (2008) were NPP-based values, so the modelled MRTcor in the USA from 1850 to 1860 were used for data-model comparison at the grid scale. Mean MRT in Australia (Barrett, 2002) was larger than global C turnover estimates (26–60 yr), and therefore spatial patterns were not discussed. Ecosystem C storage is composed of C pools in vegetation, litter and soil. As the largest terrestrial C pool, soil C storage was used to assess model performance for ecosystem C storage with the HWSD (FAO/IIASA/ISRIC/ISSCAS/JRC, 2012). For the HWSD, the major sources of uncertainty are related to analytical measurement of soil carbon, variation in carbon content within a soil type and assumption that soil types can be used to extrapolate soil C data. Analytical measurements of soil C concentrations are generally accurate, but measurements of soil bulk density are more uncertain (Todd-Brown et al. 2013). Therefore, we used SOC from HWSD by the amendments of typological data and a bulk density (Hiederer and Köchy, 2011) to conduct data-model comparison, with the global total of 1417 Pg C at the 30 arc second grid (http://eusoils.jrc.ec.europa.eu). One limitation of the above datasets is that their uncertainties are poorly quantified. Here, 50 000 and 500 simulations calculated the global or regional means, respectively, through MCMC sampling with size of 5000 and 500 in R software. For each variable, the confidence interval was estimated as the 2.5 and 97.5 percentile of mean values of the 5000 (or 500) simulations. Therefore, the global mean was 0.55 (0.54–0.56) kg C m−2 yr−1 for MODIS NPP, 12.42 (12.37–12.48) kg C m−2 for HWSD soil C, 0.53 (0.52–0.54) for the NPP/GPP ratio and 52.1 (51.6–52.6) years for MRT in the United States. This method was also used to calculate the global mean of each variable in each model. ## 3. Results ### 3.1. Ecosystem carbon storage capacity The ecosystem C storage capacity was calculated from the sums of C storage in plant, litter and soil pools at the steady state, represented by NPP×MRTcor. The average ecosystem C storage capacity for all five models was about 20 kg C m−2, with a maximum of nearly 36 kg C m−2 for MIROC and a minimum of nearly 10 kg C m−2 for CCSM (Figs. 1 and 2). The ecosystem C storage capacity for CanESM was higher than that for IPSL due to a longer MRTcor, although NPP in IPSL was larger than that in CanESM. The largest ecosystem C storage capacity (MIROC) was associated with the longest MRTcor (88 yr) and a mid-range NPP (0.45 kg C m−2 yr−1). However, the lowest NPP and longest MRTcor (NorESM) resulted in the larger ecosystem C storage capacity than that in CCSM. Fig. 1 The spatial pattern of ecosystem C storage capacity (NPP*MRTcor, kg C m−2) for the five models (modelled time: 1850–1860). Coefficient of variation (CV) was calculated for each grid cell using five models’ results. Fig. 2 The relationship between mean residence time (MRT, years) and net primary production (NPP, kg C m−2 yr−1) and the latitude pattern of ecosystem C storage capacity (NPP*MRTcor) for five models (modelled time: 1850–1860). The spatial and latitudinal patterns of NPP and MRT substantially affected the patterns of ecosystem C storage capacity (Fig. 2). 
Between 30°S and 30°N, all five models simulated ecosystem C storage capacity closely, with similar values of NPP and MRT (Fig. 2). However, the ecosystem C storage capacity for MIROC was higher than that for the other models at other latitudes, reaching a maximum at around 70°N (~55 kg C m−2) (Fig. 2b). The ecosystem C storage capacities for CCSM and NorESM were much lower than those for the other models, particularly at 30–50°S and 30–70°N, but relatively high at 5°S–5°N. The ecosystem C storage capacity for NorESM was higher than that for CCSM due to a higher MRT, although the two share the same terrestrial ecosystem model (CLM). Soil C storage accounted for a large fraction of ecosystem C storage (40% for CCSM and NorESM and 70% for CanESM, IPSL and MIROC, Fig. 3) and explained much of the spatial variation in ecosystem C storage across models (R2 > 0.7), especially for CanESM and MIROC. However, not all of the models accurately predicted soil C storage (Fig. 3f). Across all common grid cells, the Pearson correlation coefficients between soil C from the models and the HWSD ranged from 0.06 to 0.49, and the root mean square errors (RMSE) were from 10 to 15 kg C m−2.

Fig. 3 The relationship between ecosystem C storage and soil C across models (a, b, c, d, e) and Taylor diagram for soil C storage at the grid scale (f).

### 3.2. NPP and NPP/GPP ratio

The average NPP among the five models was 0.36 kg C m−2 yr−1 from 1850 to 1860 and 0.41 kg C m−2 yr−1 from 1996 to 2005 (Table 3). Apart from NorESM, the predicted global average NPPs were close to the MODIS-based estimates, with similar latitudinal patterns (Fig. 7a), but there was regional variability among models (Figs. 4, 9a, b). For example, NorESM and CCSM4 underestimated NPP for all grid cells except in certain tropical regions (Fig. 4d and h). CanESM and IPSL greatly underestimated NPP for northern North America but overestimated NPP for northern Africa. Thus, high spatial variability across models led to high CVs in most areas, with values larger than 0.5 (Fig. 9b). The highest CVs occurred in high-latitude and sparsely vegetated regions. The SD was larger in high-NPP areas and smaller where the NPP was low. The highest SDs occurred in the tropical zones, where the coefficient of variation was nonetheless lower than 0.1.

Fig. 4 Spatial pattern of net primary production (NPP, kg C m−2 yr−1) estimated from MODIS and the difference between the five models and MODIS (modelled time: 1995–2005).

Apart from NorESM, all of the models estimated an average NPP/GPP ratio near 0.5 (Table 3), although there was poor agreement between the modelled ratios and the MODIS estimates at the grid scale (R2 < 0.25, Fig. 5). In most areas, CanESM, IPSL and MIROC estimated the NPP/GPP ratios closely, differing by −0.05 to 0.05 from MODIS, while CCSM and NorESM greatly underestimated the NPP/GPP ratio. The latitudinal patterns of the NPP/GPP ratios were not consistent across models and were highly complex (Fig. 7b). For example, the NPP/GPP ratio predicted by CanESM had a series of peaks within 10°S–10°N, with the highest nearly at the equator, while the NPP/GPP ratios for the other models had lower values at these latitudes.

Fig. 5 Spatial pattern of the NPP/GPP ratio estimated from MODIS and the difference between the five models and MODIS (modelled time: 1995–2005).

### 3.3. Mean residence time

Ecosystem MRTs at the grid scale were highly heterogeneous, ranging from 0 to thousands of years.
MRT values for the majority of the grid cells were 5–40 yr, with the lowest values in the tropics and the highest values at high latitudes (Fig. 6). The mean MRT across all models was ~28 yr, ranging from 15 yr for CCSM to 45 yr for MIROC (Table 2). Large MRT differences occurred among the models (Fig. 6). Compared with the mean value across all models, CCSM and NorESM underestimated MRTs, with values <30 yr in most areas and only 5% of the grid cells >50 yr. MIROC overestimated MRTs for all grid cells, with a majority of the MRTs <70 yr and a maximum value >568 yr.

Fig. 6 Spatial pattern of the average ecosystem mean residence time (MRT, years) for the five models and the difference between each model and the model average (modelled time: 1850–1865).

We sampled the mean MRT for each latitudinal zone at 1° intervals between 50°S and 70°N to explore latitudinal patterns in the MRT (Fig. 7). The MRTs predicted by the five models at 25°S–10°N were relatively low, with low variability. The models predicted high MRTs at high latitudes because of the relatively low temperatures and lower rates of decomposition. The MRTs for CCSM and NorESM were lower than those of the other models but had similar latitudinal patterns. The MRTs for MIROC were higher than those for the other models at most latitudes, particularly at 12°N–30°N. Thus, there was high spatial variability in the MRT across models. The CV of the MRTs estimated by the five models mainly ranged between 0.2 and 0.8, with the lowest values at 10°S–10°N and the largest values in high-latitude and sparsely vegetated regions.

Fig. 7 The latitudinal pattern of NPP, the NPP/GPP ratio and mean residence time (MRT) for the five models (each point representing the average over one latitudinal zone). The MODIS data were taken from Zhang et al. (2009) and represent the central tendency averaged over 5° latitudinal bands.

We extracted MRTs for the USA and Australia and calculated MRTcor using the NPP/GPP ratio to test model performance (Table 3). The relative errors ranged from −47% for CCSM to 2.2% for MIROC. The differences between the simulated and inverse results showed that all five models underestimated the MRT in the southwestern USA (Fig. 8). Among the models, CanESM, CCSM and IPSL underestimated the MRTs in the USA for most grid cells by up to 70%, while NorESM greatly overestimated MRTs in the eastern and central USA because it has the lowest NPP/GPP ratio. The MRT values for MIROC were more similar to the inverse-model estimates than those of the other models.

Fig. 8 Spatial pattern of the ecosystem mean residence time (MRT) in the USA from inverse models (Zhou and Luo, 2008; Zhou et al., 2012) and the difference between the five models and the inverse-model estimates.

## 4. Discussion

For ESMs, the ability to accurately represent the spatial distribution of NPP and MRT is a prerequisite for predicting ecosystem C storage capacity and future carbon-climate feedback. Our results showed that most models accurately predicted the global average NPP and NPP/GPP ratio compared with the MODIS estimates, although regional variability was relatively large among models (Table 3). However, all five models estimated MRTs poorly at regional and global scales, resulting in poor estimates of ecosystem C storage capacity. Thus, understanding the variations in ecosystem C storage, NPP and MRT is important for improving model predictions of the global terrestrial C balance.

### 4.1.
Variation in simulated ecosystem carbon storage capacity Ecosystem C storage capacity can be a function of NPP and MRT and is composed of vegetation, litter and soil C pools. Currently, there is no feasible method to directly validate ecosystem C storage capacity due to the lack of the gridded observation-based data. Therefore, we indirectly assessed ecosystem C storage capacity through soil C storage and through validation of NPP and MRT. On average, none of the models accurately simulated the grid-scale distributions of ecosystem C storage capacity (Fig. 1) or soil C stocks, which were consistent with the results in Todd-Brown et al. (2013). Models may continue to amplify variation in predicting climate C-cycle feedback in the future (i.e. Friedlingstein et al., 2006). It is evident that large variations remain in modelled estimates of ecosystem C storage capacity at the regional and global scales, with a high CV in most areas, particularly at high latitudes and in sparsely vegetated regions (Figs. 1 and 9). Large variation in ecosystem C storage capacity can result from high spatial variability in the NPP, MRT or both (Figs. 4 and 6). At 10° S–10° N, low variability in both the NPP and MRT resulted in little difference in ecosystem C storage capacity among models. At high latitudes and in sparsely vegetated regions, high CVs in NPP and MRT led to large variability in ecosystem C storage capacity. However, at lower latitudes, low variability in NPP and high variability of MRT also produced a high CV for ecosystem C storage capacity. In addition, the high spatial variability of NPP and MRT among individual models may induce large variability in C storage capacity. For example, if NorESM were not included, the CV for ecosystem C storage capacity would decrease by 27% on average, particularly at high latitudes. Our results suggest that the main source of variation in ecosystem C storage is spatial variability in C residence times, which was consistent with previous research at a regional scale (Zhou and Luo, 2008; Zhou et al., 2012). The inverse analysis indicated that the sensitivity of the C storage capacity to disturbance is determined by the residence time of C pool (Weng et al., 2012). Similarly, the results of Todd-Brown et al. (2013) showed indirectly that soil C turnover time was more important than NPP in determining differences of simulated soil C across ESMs at a global scale. Another source of multi-model variation in ecosystem C storage capacity is likely the use of different methods to simulate the ecosystem C cycle. Most models are only effective for the specific processes in which they have been designed and their parameter ranges were validated (Dungait et al., 2012). For example, the soil models currently embedded in the ESMs are structured around 3–5 pools, with transformation rates modified by empirical correlations with soil temperature, water and clay content (Schmidt et al., 2011). However, mechanisms of permafrost melting over the long term are not embedded in the current ESMs, resulting in large uncertainties in prediction of ecosystem C storage at the steady state. Additionally, most ESMs ignored deep mineral soils or sparsely vegetated regions because of the lack of field data and ecosystem biogeochemistry. These omissions contribute to the large variation in ecosystem C storage at high latitudes and for Australia. 
For example, the current ESMs simulated C processes with 9–13 PFTs comprising forests and grasses, which largely omitted the property of permafrost vegetation. Models for permafrost soil C have only recently been integrated into ESMs (Koven et al., 2011) and further improvements in modelling C loss and accumulation would reduce uncertainties related to ecosystem C feedback cycles at high latitudes (Krishan et al., 2009; Schuur et al., 2009). ### 4.2. Variation in NPP and NPP/GPP ratio Although most models reasonably predicted global NPP that was fairly consistent with latitude pattern (Fig. 7), none were able to reproduce grid-scale distributions of NPP (Fig. 9). Better performance at the latitude level may be due to aggregation of environmental variations that affect the C cycle at the grid scale. At the grid scale, land surface parameters may be the main factors contributing to poor agreement between the model predictions and empirical data. Most of the models share a similar structure in which photosynthetically fixed C is based on a leaf-level function. In the most models, PFT patches are directly linked to leaf-level ecophysiological measurements, while community composition (i.e. the PFTs and their areal extent) and vegetation structure (e.g. height, LAI) are directly inputted to each grid cell for each PFT. Thus, different inputs of land surface parameters among the models may cause spatial heterogeneity across models. For example, the land surface parameters in CLM were developed from several MODIS land surface products at a grid cell resolution of 0.05° (Lawrence and Chase, 2007), resulting in a good agreement between the simulated NPP for CCSM4 and NorESM with the estimated NPP from MODIS (Pearson correlation coefficient >0.7). In addition, the fixed vegetation cover in most ESMs would neglect the effect of climate change on vegetation. Among the five models, only MIROC models include dynamic vegetation through PFT distributional shifts and demographic stand process (Watanabe et al., 2011), which could directly produce the variability of vegetation cover over time and predict the C-climate feedback. Fig. 9 Spatial pattern of coefficient of variation (CV) and standard deviation (SD) of NPP and MRT for all five models (modelled time: 1850–1860). Although NPP is calculated from GPP and autotrophic respiration (Ra), little is known about the Ra and its response to environmental change, especially for long-term acclimation, which largely determine the NPP/GPP ratio. Plant Ra is not parameterised very well in current biogeochemical models (Atkin et al., 2008), further limiting the ability to accurately estimate NPP and its response to climate change. In most models, the sensitivity of Ra to temperature is represented by a Q10 function or a modified Arrhenius equation (similar function), but different models have different Q10 (Ruimy et al., 1996), ranging from 1.9 to 2.5 based on estimates inferred from global forest database (Piao et al., 2010). Moreover, the long-term experiments suggested that the sensitivity of plant respiration to temperature often declined with temperature due to the long-term acclimation (Luo, 2007; Atkin et al., 2008). Most ESMs defined a single temperature response function and failed to take into account for acclimation of plant respiration (Atkin et al., 2008). ### 4.3. Variation in MRT In contrast to NPP, information on how ecosystem MRT varies among ecosystems and its responses to global climate change is extremely limited. 
Current research showed that soil warming experiments were compatible with the long-term sensitivity of SOC residence time (Knorr et al., 2005), with the slow soil C pool being more sensitive to temperature than the fast soil C pool, but it is still a topic of intense debate (Hopkins et al., 2012). Since future changes in the MRT could strongly affect the ability of the ecosystem to serve as a sink for atmospheric C, it is critical to evaluate model performance in estimating MRT against observed data. There are a number of factors that may contribute to poor agreement between model predictions and empirical data, such as variation in the observed data and model structure. MRTs for various C pools are mainly estimated from simple isotope mixing models. Model estimates can only produce a composite MRT of the various SOC constituents with short and long residence times (Randerson et al., 1999), so different fractionation techniques and model structures may affect the calculation of MRT (Derrien and Amelung, 2011). As a result, global MRTs ranged from 29 to 60 yr (Table 2), which clearly indicates that there is room to improve the empirical estimates. Inverse models could be a valid approach to produce spatial information on MRTs at a global scale for assessing model performance. Although parameterisation of inverse models is constrained by experimental data, there have been large uncertainties reported, likely due to lack of experimental data (e.g. microbial biomass, respiration), a mismatch in timescales between the available data and the parameters to be estimated, or differences in the inverse methods (Xu et al., 2006; Zhou & Luo, 2008; Zhou et al., 2012). For example, the MRT estimated by Zhou and Luo (2008) was 10 yr shorter than the estimate by Zhou et al. (2012) using different inverse methods with nearly the same experimental data. Improving empirical estimates, however, will not fully resolve the differences in MRT predictions across models, because they do not all agree with one another in their representation of the ecosystem. MRTs are estimated from C transfer coefficients among the various C pools and environmental forcing (Xia et al., 2013). The former determines how long C may remain in plant or soil pool. The C allocated to plant tissues (e.g. stems, leaves and roots) could determine ecosystem MRT, commonly described by plant biomass or PFTs (Carbone and Trumbore, 2007). For example, the low allocation of C to the longer-lived stem pools resulted in the short MRT for biomass C in Arid region (Barrett, 2002). The cropland and grassland only allocate C to leaves and roots with turnover times of months to a few years, leading to the lower MRT than other PFTs (Zhou et al., 2012). However, among five models, only MIROC simulates the temporal dynamics of PFTs. Another source of the large variation is to determine C transfer coefficients between pools. For example, in the MIROC models, the transfer coefficients from the leaves and fine roots into the litter are constant (Sato et al., 2007), while in the CanESM models, they are calculated as a function of normal turnover, drought and cold stress (Arora and Boer, 2005). Highly variable C residence times in difference pools would lead to the difference of ecosystem MRT among models globally and regionally. For example, grassland and cropland have the relatively fast MRT due to the lack of long-residence wood tissues and coarse litter (Zhou et al., 2012). 
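To make the aggregation idea concrete (an ecosystem MRT assembled from pool-level residence times and the fraction of the C flux passing through each pool, in the spirit of the framework cited above, e.g. Xia et al., 2013), here is a minimal sketch; all pool names, fractions and residence times are invented for illustration and do not come from any of the five ESMs. The full framework also tracks transfer coefficients between pools, which are collapsed here into simple pass-through fractions.

```python
# Illustrative aggregation of pool-level residence times into an
# ecosystem-level MRT (flux-weighted sum); all numbers are made up.
pools = {
    # pool: (fraction of total C input that passes through the pool,
    #        pool residence time in years)
    "leaf":   (0.30, 1.0),
    "root":   (0.30, 2.0),
    "wood":   (0.40, 50.0),
    "litter": (0.90, 3.0),    # most plant C eventually reaches litter
    "soil":   (0.40, 100.0),  # part of litter C is stabilised as SOM
}

# Each unit of C entering the ecosystem spends, on average,
# (pass-through fraction) x (pool residence time) years in each pool,
# so the ecosystem MRT is the sum over pools. Fractions can sum to more
# than 1 because C passes through several pools in series.
mrt_ecosystem = sum(frac * tau for frac, tau in pools.values())
print(f"Illustrative ecosystem MRT: {mrt_ecosystem:.1f} yr")
```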
Differences in MRT across models could also result from the simulated ecosystem response to driving variables, which is determined by model parameterisation. Here, we defined MRT as a function of soil temperature (Ts), MRT = k × Q10^(−Ts/10), where the parameters k and Q10 for each model were calculated using the ecosystem MRT and climate factors across all grid cells (a simple least-squares illustration of this fit is sketched below). This simple model showed that temperature could explain up to 65% of the spatial variation in ecosystem MRT (Table 4), suggesting that both the baseline residence time (k) and the temperature sensitivity (Q10) shape the differences in ecosystem MRT among models. The Q10 values in MIROC, CanESM and IPSL were within the range 1.5–2.5, which is the range often inferred from ecosystem flux measurements (Mahecha et al., 2010), while the Q10 in CCSM and NorESM was much higher, resulting in their short MRTs. In addition, soil moisture did not significantly improve the estimate of MRT when it was incorporated into the temperature function (data not shown).

### 4.4. Implications for land surface models

Our model intercomparison indicates that both NPP and MRT may contribute to multi-model variation in ecosystem C storage capacity at the regional scale, but that MRT has greater effects than NPP, especially at high latitudes. More research on carbon MRT is thus needed to improve the performance of land surface models toward a predictive understanding of ecosystem responses to future climate change. Our study therefore offers several suggestions for future experimental and modelling research aimed at improving estimates of ecosystem C storage capacity. First, our results showed that some ESMs such as CCSM4 simulated fast C turnover, whereas other models such as MIROC simulated slow C turnover (Fig. 6). Thus, experimental data on C residence times in the various C pools should be used to constrain the rates of the C cycle. In addition, there are no benchmarking data for the spatial patterns of MRT at regional or global scales to assess model performance; inverse models would be a reasonable approach to produce a map of ecosystem MRTs. Collection of experimental data on the various C pools among biomes globally is therefore the first step toward improving model parameters. In particular, models could use forest inventory data to constrain C residence times in living biomass pools. Second, although most models share a similar structure for carbon partitioning among three or more C pools and its response to climate change, the models have different definitions of the C pools and different equations, involving environmental variables, that control the C flows among the pools, resulting in large variation in the simulated ecosystem MRT. For example, IPSL defines 15 C pools (eight biomass pools, four litter pools and three soil C pools), while CLM4 simulates six C pools (three biomass pools and three dead C pools). Thus, assessing and improving C partitioning and transfer coefficients among C pools at the global scale is key to improving model performance for ecosystem MRTs. Third, this study demonstrated that the largest uncertainties in the spatial variability of ecosystem C storage and MRTs occur at high latitudes and in sparsely vegetated regions (Figs. 1 and 6). The current soil models embedded in ESMs are mainly based on molecular structures and kinetic theory (Schmidt et al., 2011). Such model structures largely ignore deep mineral and permafrost soils.
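The temperature-function fit referred to in Section 4.3 (MRT = k × Q10^(−Ts/10)) can be illustrated by a log-linear regression across grid cells. The sketch below is only an assumed implementation (the paper does not describe the fitting procedure in detail), and the array names `mrt` and `ts` (grid-cell ecosystem MRT and mean soil temperature) are placeholders.

```python
import numpy as np

def fit_mrt_temperature(mrt, ts):
    """Fit MRT = k * Q10**(-Ts/10) by least squares on log(MRT).

    mrt : 1-D array of grid-cell ecosystem MRT (yr)
    ts  : 1-D array of grid-cell mean soil temperature (deg C)
    Returns (k, Q10, r_squared). Illustrative sketch only.
    """
    valid = np.isfinite(mrt) & np.isfinite(ts) & (mrt > 0)
    y = np.log(mrt[valid])
    x = -ts[valid] / 10.0          # ln(MRT) = ln(k) + (-Ts/10) * ln(Q10)
    slope, intercept = np.polyfit(x, y, 1)
    k, q10 = np.exp(intercept), np.exp(slope)
    resid = y - (intercept + slope * x)
    r2 = 1.0 - resid.var() / y.var()
    return k, q10, r2
```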
In addition, vegetation growth in sparsely vegetated regions is often close to temperature or water-stress thresholds, which is difficult to model accurately. Thus, improving post-photosynthesis C-cycle modelling at high latitudes, and its response to climate change, is imperative for the development of ESMs.

## 5. Conclusions

We aimed to evaluate spatial variation in ecosystem C storage capacity simulated by the ESMs included in CMIP5 and to examine the sources of multi-model variability associated with MRT and/or C inputs. Model intercomparison indicated that NPP was simulated relatively well by most models on average, but MRT was substantially underestimated by most of the models. Underestimation of MRT resulted in lower estimates of ecosystem C storage capacity. Among the five models, MIROC predicted the largest ecosystem C storage capacity (about 40 kg C m−2) and the longest MRT (50 yr). The spatial patterns of ecosystem C storage capacity predicted by CCSM4 and NorESM were similar, as they include the same land C model (CLM). The C storage capacity estimated by NorESM was higher than that for CCSM due to differences in the NPP/GPP ratio between the two models. Nonetheless, large spatial variations in MRT and NPP resulted in large variations in ecosystem C storage capacity (CV = 0.4–1.2), particularly at high latitudes and in sparsely vegetated regions. Our results indicate that more research should be conducted to estimate C partitioning and transfer coefficients among C pools so that ecosystem C residence time and storage capacity can be accurately simulated.

## 6. Acknowledgements

This study was funded by the Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning, the 2012 Shanghai Pujiang Program (12PJ1401400), the Thousand Young Talents Program in China, a Changjiang Scholarship to Yiqi Luo at Fudan University, and the US Department of Energy Terrestrial Ecosystem Sciences Grant DE SC0008270 and US National Science Foundation (NSF) Grants DEB 0444518, DEB 0743778, DEB 0840964, DBI 0850290 and EPS 0919466 to YL at the University of Oklahoma. We acknowledge the World Climate Research Programme's Working Group on Coupled Modelling, which is responsible for CMIP, and we thank the climate modelling groups (Table 1) for producing and making available their model output. Additional thanks to CMIP, the US Department of Energy's Program for Climate Model Diagnosis and Intercomparison, for providing coordinating support and leading the development of software infrastructure in partnership with the Global Organization for Earth System Science Portals.

## References

1. Ahlström A , Smith B , Lindström J , Rummukainen M , Uvo C. B . GCM characteristics explain the majority of uncertainty in projected 21st century terrestrial ecosystem carbon balance . Biogeosciences . 2013 ; 10 : 1517 – 1528 . 2. Arora V. K , Boer G. J . A parameterization of leaf phenology for the terrestrial ecosystem component of climate models . Glob. Chang. Biol . 2005 ; 11 : 39 – 59 . 3. Arora V. K , Scinocca J. F , Boer G. J , Christian J. R , Denman K. L , co-authors . Carbon emission limits required to satisfy future representative concentration pathways of greenhouse gases . Geophys. Res. Lett . 2011 ; 38 : 05805 . 4. Atkin O. K , Atkinson L. J , Fisher R. A , Campbell C. D , Zaragoza-Castells J , co-authors .
Using temperature-dependent changes in leaf scaling relationships to quantitatively account for thermal acclimation of respiration in a coupled global climate–vegetation model . Glob. Chang. Biol . 2008 ; 14 : 2709 – 2726 . 5. Barrett D. J . Steady state turnover time of carbon in the Australian terrestrial biosphere . Glob. Biogeochem. Cycles . 2002 ; 16 ( 4 ): 1108 . 6. Bowman D. M , Balch J. K , Artaxo P , Bond W. J , Carlson J. M , co-authors . Fire in the Earth system . Science . 2009 ; 324 : 481 – 484 . [PubMed Abstract] . 7. Canadell J. G , Le Quere C , Raupach M. R , Field C. B , Buitenhuis E. T , co-authors . Contributions to accelerating atmospheric CO2 growth from economic activity, carbon intensity, and efficiency of natural sinks . Proc. Natl. Acad. Sci. U. S. A . 2007 ; 104 : 18866 – 18870 . [PubMed Abstract] [PubMed CentralFull Text] . 8. Carbone M. S , Trumbore S. E . Contribution of new photosynthetic assimilates to respiration by perennial grasses and shrubs: residence times and allocation patterns . New Phytol . 2007 ; 176 : 124 – 135 . [PubMed Abstract] . 9. Chambers J. Q , Fisher J. I , Zeng H , Chapman E. L , Baker D. B , co-authors . Hurricane Katrina's carbon footprint on U.S. Gulf Coast forests . Science . 2007 ; 318 : 1107 . [PubMed Abstract] . 10. Ciais P , Friedlingstein P , Schimel D. S , Tans P. P . A global calculation of the delta C-13 of soil respired carbon: implications for the biospheric uptake of anthropogenic CO2 . Glob. Biogeochem. Cycles . 1999 ; 13 : 519 – 530 . 11. Cox P. M , Betts R. A , Collins M , Harris P. P , Huntingford C , co-authors . Amazonian forest dieback under climate–carbon cycle projections for the 21st century . Theor. Appl. Climatol . 2004 ; 78 : 137 – 156 . 12. Cramer W , Field C. B . Comparing global models of terrestrial net primary productivity (NPP): introduction . Glob. Chang. Biol . 1999 ; 5 : Iii – Iv . 13. Cramer W , Kicklighter D. W , Bondeau A , Moore B , Churkina G , co-authors . Comparing global models of terrestrial net primary productivity (NPP): overview and key results . Glob. Chang. Biol . 1999 ; 5 : 1 – 15 . 14. Davidson E. A , Janssens I. A . Temperature sensitivity of soil carbon decomposition and feedbacks to climate change . Nature . 2006 ; 440 : 165 – 173 . [PubMed Abstract] . 15. Derrien D , Amelung W . Computing the mean residence time of soil carbon fractions using stable isotopes: impacts of the model framework . Eur. J. Soil Sci . 2011 ; 62 : 237 – 252 . 16. Dungait J. A. J , Hopkins D. W , Gregory A. S , Whitmore A. P . Soil organic matter turnover is governed by accessibility not recalcitrance . Glob. Chang. Biol . 2012 ; 18 : 1781 – 1796 . 17. Emanuel W. R , Killough G. G , Post W. M , Shugart H. H . Modeling terrestrial ecosystems in the global carbon-cycle with shifts in carbon storage capacity by land-use change . Ecology . 1984 ; 65 : 970 – 983 . 18. FAO/IIASA/ISRIC/ISSCAS/JRC . Harmonized World Soil Database (version 1.10) . 2012 ; Rome, Italy and IIASA, Laxenburg, Austria : FAO . 19. Friedlingstein P , Cox P , Betts R , Bopp L , Von Bloh W , co-authors . Climate–carbon cycle feedback analysis: results from the C4MIP model intercomparison . J. Clim . 2006 ; 19 : 3337 – 3353 . 20. Heimann M , Reichstein M . Terrestrial ecosystem carbon dynamics and climate feedbacks . Nature . 2008 ; 451 : 289 – 292 . [PubMed Abstract] . 21. Hiederer R , Köchy M . Global Soil Organic Carbon Estimates and the Harmonized World Soil Database. EUR 25225 EN . 2011 ; Publications Office of the European Union . 79 pp . 
22. Hopkins F. M , Torn M. S , Trumbore S. E . Warming accelerates decomposition of decades-old carbon in forest soils . Proc. Natl. Acad. Sci. U. S. A . 2012 ; 109 : E1753 – 1761 . [PubMed Abstract] [PubMed CentralFull Text] . 23. Johnston M. H , Homann P. S , Engstrom J. K , Grigal D. F . Changes in ecosystem carbon storage over 40 years on an old-field forest landscape in east-central Minnesota . Forest Ecol. Manag . 1996 ; 83 : 17 – 26 . 24. Kane E. S , Vogel J. G . Patterns of total ecosystem carbon storage with changes in soil temperature in Boreal Black Spruce forests . Ecosystems . 2009 ; 12 : 322 – 335 . 25. Kicklighter D. W , Bondeau A , Schloss A. L , Kaduk J , McGuire A. D , co-authors . Comparing global models of terrestrial net primary productivity (NPP): global pattern and differentiation by major biomes . Glob. Chang. Biol . 1999 ; 5 : 16 – 24 . 26. Knorr W , Prentice I. C , House J. I , Holland E. A . Long-term sensitivity of soil carbon turnover to warming . Nature . 2005 ; 433 : 298 – 301 . [PubMed Abstract] . 27. Koven C. D , Ringeval B , Friedlingstein P , Ciais P , Cadule P , co-authors . Permafrost carbon–climate feedbacks accelerate global warming . Proc. Natl. Acad. Sci. U. S. A . 2011 ; 108 : 14769 – 14774 . [PubMed Abstract] [PubMed CentralFull Text] . 28. Krinner G , Viovy N , de Noblet-Ducoudré N , Ogée J , Polcher J , co-authors . A dynamic global vegetation model for studies of the coupled atmosphere-biosphere system . Glob. Biogeochem. Cycles . 2005 ; 19 : GB1015 . 29. Krishan G , Srivastav S. K , Kumar S , Saha S. K , Dadhwal V. K . Quantifying the underestimation of soil organic carbon by the Walkley and Black—examples from Himalayan and Central Indian soils . Curr. Sci. India . 2009 ; 96 : 1133 – 1136 . 30. Lales J. S , Lasco R. D , Geronimo I. Q . Carbon storage capacity of agricultural and grassland ecosystems in a geothermal block . Philippine Agr. Sci . 2001 ; 84 : 8 – 18 . 31. Lawrence D. M , Oleson K. W , Flanner M. G , Thornton P. E , Swenson S. C , co-authors . Parameterization improvements and functional and structural advances in version 4 of the Community Land Model . J. Adv. Model. Earth Syst . 2011 ; 3 : 1942 – 2466 . 32. Lawrence P. J , Chase T. N . Representing a new MODIS consistent land surface in the Community Land Model (CLM 3.0) . Journal of Geophysical Research: Biogeosciences . 2007 ; 112 : G01023 . 33. Le Quere C , Raupach M. R , Canadell J. G , Marland G , Bopp L , co-authors . Trends in the sources and sinks of carbon dioxide . Nat. Geosci . 2009 ; 2 : 831 – 836 . 34. Luo Y . Terrestrial carbon–cycle feedback to climate warming . Ann. Rev. Ecol. Evol. Syst . 2007 ; 38 : 683 – 712 . 35. Luo Y. Q , White L. W , Canadell J. G , DeLucia E. H , Ellsworth D. S , co-authors . Sustainability of terrestrial carbon sequestration: a case study in Duke Forest with inversion approach . Glob. Biogeochem. Cycles . 2003 ; 17 ( 1 ): 1021 . 36. Luo Y. Q , Wu L. H , Andrews J. A , White L , Matamala R , co-authors . Elevated CO2 differentiates ecosystem carbon processes: deconvolution analysis of Duke Forest FACE data . Ecol. Monogr . 2001 ; 71 : 357 – 376 . 37. Mahecha M. D , Reichstein M , Carvalhais N , Lasslop G , Lange H , co-authors . Global convergence in the temperature sensitivity of respiration at ecosystem level . Science . 2010 ; 329 : 838 – 840 . [PubMed Abstract] . 38. Meyer R , Joos F , Esser G , Heimann M , Hooss G , co-authors . 
The substitution of high-resolution terrestrial biosphere models and carbon sequestration in response to changing CO2 and climate . Global Biogeochem. Cycles . 1999 ; 13 : 785 – 802 . 39. Mooney H , Canadell J , Chapin F., III , Ehleringer J , Körner C , co-authors . Ecosystem Physiology Responses to Global Change . 1999 ; Cambridge : Cambridge University Press . 141 – 189 . 40. Parton W. J , Stewart J. W , Cole C. V . Dynamics of C, N, P and S in grassland soils: a model . Biogeochemistry . 1988 ; 5 : 109 – 131 . 41. Piao S , Luyssaert S , Ciais P , Janssens I. A , Chen A , co-authors . Forest annual carbon cost: a global-scale analysis of autotrophic respiration . Ecology . 2010 ; 91 : 652 – 661 . [PubMed Abstract] . 42. Pinsonneault A. J , Matthews H. D , Kothavala Z . Benchmarking climate–carbon model simulations against forest FACE data . Atmos. Ocean . 2011 ; 49 : 41 – 50 . 43. Prentice I. C , Bondeau A , Cramer W , Harrison S. P , Hickler T , co-authors . Dynamic global vegetation modeling: quantifying terrestrial ecosystem responses to large-scale environmental change . Terrestrial Ecosystems in a Changing World . 2007 ; Springer, Berlin, : Heidelberg . 175 – 192 . (eds. J. G. Canadell, D. E. Pataki and L. . 44. Raich J. W , Schlesinger W. H . The global carbondioxide flux in soil respiration and its relationship to vegetation and climate . Tellus B . 1992 ; 44 : 81 – 99 . 45. Randerson J. T , Thompson M. V , Field C. B . Linking C-13-based estimates of land and ocean sinks with predictions of carbon storage from CO2 fertilization of plant growth . Tellus. B . 1999 ; 51 : 668 – 678 . 46. Ruimy A , Dedieu G , Saugier B . TURC: a diagnostic model of continental gross primary productivity and net primary productivity . Glob. Biogeochem. Cycles . 1996 ; 10 : 269 – 285 . 47. Sato H , Itoh A , Kohyama T . SEIB–DGVM: a new dynamic global vegetation model using a spatially explicit individual-based approach . Ecol. Model . 2007 ; 200 : 279 – 307 . 48. Schmidt M. W. I , Torn M. S , Abiven S , Dittmar T , Guggenberger G , co-authors . Persistence of soil organic matter as an ecosystem property . Nature . 2011 ; 478 : 49 – 56 . [PubMed Abstract] . 49. Schuur E. A. G , Vogel J. G , Crummer K. G , Lee H , Sickman J. O , co-authors . The effect of permafrost thaw on old carbon release and net carbon exchange from tundra . Nature . 2009 ; 459 : 556 – 559 . [PubMed Abstract] . 50. Solomon S , Qin D , Manning M , Marquis M , Averyt K , co-authors . Climate change 2007: the physical science basis . Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change . 2007 ; Cambridge : Cambridge University Press . 51. Strassmann K. M , Joos F , Fischer G . Simulating effects of land use changes on carbon fluxes: past contributions to atmospheric CO2 increases and future commitments due to losses of terrestrial sink capacity . Tellus B . 2008 ; 60 : 583 – 603 . 52. Tang J. W , Yin J. X , Qi J. F , Jepsen M. R , Lu X. T . Ecosystem carbon storage of tropical forests over limestone in Xishuangbanna, SW China . J. Trop. Forest Sci . 2012 ; 24 : 399 – 407 . 53. Thompson M. V , Randerson J. T . Impulse response functions of terrestrial carbon cycle models: method and application . Glob. Chang. Biol . 1999 ; 5 : 371 – 394 . 54. Tian H. Q , Chen G. S , Zhang C , Liu M. L , Sun G , co-authors . Century-scale responses of ecosystem carbon storage and flux to multiple environmental changes in the southern united states . Ecosystems . 2012 ; 15 : 674 – 694 . 55. 
Todd-Brown K. E. O , Randerson J. T , Post W. M , Hoffman F. M , Tarnocai C , co-authors . Causes of variation in soil carbon simulations from CMIP5 Earth system models and comparison with observations . Biogeosciences . 2013 ; 10 : 1717 – 1736 . 56. van der Werf G. R , Randerson J. T , Giglio L , Collatz G , Mu M , co-authors . Global fire emissions and the contribution of deforestation, savanna, forest, agricultural, and peat fires (1997–2009) . Atmos. Chem. Phys . 2010 ; 10 : 11707 – 11735 . 57. Wang W , Dungan J , Hashimoto H , Michaelis A. R , Milesi C , co-authors . Diagnosing and assessing uncertainties of terrestrial ecosystem models in a multimodel ensemble experiment: 1. primary production . Glob. Chang. Biol . 2011 ; 17 : 1350 – 1366 . 58. Weng E. S , Luo Y. Q , Wang W , Wang H , Hayes D. J , co-authors . Ecosystem carbon storage capacity as affected by disturbance regimes: A general theoretical model . Journal of Geophysical Research-Biogeosciences . 2012 ; 117 : G3 . 59. Watanabe S , Hajima T , Sudo K , Nagashima T , Takemura T , co-authors . MIROC-ESM 2010: model description and basic results of CMIP5-20c3m experiments . Geosci. Model Dev . 2011 ; 4 : 845 – 872 . 60. Xia J , Luo Y , Wang Y. P , Hararuk O . Traceable components of terrestrial carbon storage capacity in biogeochemical models . Glob. Chang. Biol . 2013 ; 19 : 2104 – 2116 . [PubMed Abstract] . 61. Xu T , White L , Hui D. F , Luo Y. Q . Probabilistic inversion of a terrestrial ecosystem model: analysis of uncertainty in parameter estimation and model prediction . Glob. Biogeochem. Cycles . 2006 ; 20 : GB2007 . 62. Zhang Y , Xu M , Chen H , Adams J . Global pattern of NPP to GPP ratio derived from MODIS data: effects of ecosystem type, geographical location and climate . Glob. Ecol. Biogeogr . 2009 ; 18 : 280 – 290 . 63. Zhao M , Running S. W . Drought-induced reduction in global terrestrial net primary production from 2000 through 2009 . Science . 2010 ; 329 : 940 – 943 . [PubMed Abstract] . 64. Zhou T , Luo Y. Q . Spatial patterns of ecosystem carbon residence time and NPP-driven carbon uptake in the conterminous United States . Glob. Biogeochem. Cycles . 2008 ; 22 : GB3032 . 65. Zhou X , Zhou T , Luo Y . Uncertainties in carbon residence time and NPP-driven carbon uptake in terrestrial ecosystems of the conterminous USA: a Bayesian approach . Tellus B . 2012 ; 64 : 17223 .
# pvlib.solarposition.declination_spencer71

pvlib.solarposition.declination_spencer71(dayofyear)

Solar declination from Duffie & Beckman and attributed to Spencer (1971) and Iqbal (1983). See [1] for details.

Warning: Return units are radians, not degrees.

Parameters: dayofyear (numeric)

Returns: declination (radians, numeric) – Angular position of the sun at solar noon relative to the plane of the equator, approximately between +/-23.45 degrees.

References

1. J. A. Duffie and W. A. Beckman, "Solar Engineering of Thermal Processes, 3rd Edition", pp. 13-14, J. Wiley and Sons, New York (2006)
2. J. W. Spencer, "Fourier series representation of the position of the sun" in Search 2 (5), p. 172 (1971)
3. Daryl R. Myers, "Solar Radiation: Practical Modeling for Renewable Energy Applications", p. 4, CRC Press (2013)
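A small usage sketch (not part of the original documentation page): calling the function for a few days of the year and converting the result from radians to degrees, since the return units are radians. Array input is assumed to work because the parameter is documented as numeric.

```python
import numpy as np
from pvlib import solarposition

# Declination for a few days of the year; the function returns radians.
days = np.array([1, 81, 172, 265, 355])   # roughly: winter, equinoxes, solstices
decl_rad = solarposition.declination_spencer71(days)
decl_deg = np.degrees(decl_rad)

for d, dec in zip(days, decl_deg):
    print(f"day {d:3d}: declination {dec:6.2f} deg")
```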
# On eigenvalues of a high-dimensional spatial-sign covariance matrix

The sample spatial-sign covariance matrix is a much-valued alternative to the sample covariance matrix in robust statistics, used to mitigate the influence of outliers. Although this matrix is widely studied in the literature, almost nothing is known about its properties when the number of variables becomes large compared to the sample size. This paper investigates, for the first time, the large-dimensional limits of the eigenvalues of a sample spatial-sign covariance matrix when both the dimension and the sample size tend to infinity. A first result of the paper establishes that the distribution of the eigenvalues converges to a deterministic limit that belongs to the family of celebrated generalized Marčenko-Pastur distributions. Using tools from random matrix theory, we further establish a new central limit theorem for a general class of linear statistics of these sample eigenvalues. In particular, asymptotic normality is established for sample spectral moments under mild conditions. This theory is established when the population is elliptically distributed. As applications, we first develop two new tools: one for estimating the population eigenvalue distribution of a large spatial-sign covariance matrix, and another for testing the order of such a population eigenvalue distribution when these distributions are finite mixtures. Using these inference tools and considering the problem of blind source separation, we show by simulation experiments that in high-dimensional situations, the sample spatial-sign covariance matrix remains a valid and much better alternative to the sample covariance matrix when samples contain outliers.

## 1 Introduction

When a multivariate data set is potentially contaminated by outliers, the sample covariance matrix (SCM) becomes less reliable. A wide range of robust alternatives has been proposed in the literature, starting from the early M-estimators (Maronna, 1976; Huber, 1977), the minimum volume ellipsoid and minimum determinant estimators (Rousseeuw, 1985), the Stahel-Donoho estimators (Hampel et al., 1986; Donoho and Gasko, 1992) and Tyler's scatter matrix (Tyler, 1987).
These estimators enjoy a high breakdown point and most of them are desirably affine equivariant. For book-length discussions of these classical estimators, we refer to Maronna et al. (2006) and Oja (2010); see also Magyar and Tyler (2014) for a sensible review. However, many of these estimators are only implicitly defined, and this lack of an analytically tractable form leads to difficulties in their computation and theoretical analysis. Such difficulty is even more pronounced when the number of variables is large. This has motivated growing recent research on more tractable robust scatter estimators that might not be affine equivariant. A particularly studied estimator is the spatial sign matrix, first introduced in Locantore et al. (1999) and Visuri et al. (2000). The former paper introduces an influential robust principal component analysis based on the spatial sign matrix. Two striking examples in that paper, shown in Figures 14-16 and Figures 15-17, respectively, demonstrate how the SCM leads to much distorted principal components (PCs) in the presence of a single outlier, and how, at the same time, the spatial sign matrix is able to mitigate the impact of such highly influential outliers. A number of papers have followed since then, especially within the groups around H. Oja and D.E. Tyler; see Gervini (2008), Sirkia et al. (2009), Taskinen et al. (2010, 2012), Dürre et al. (2014, 2015, 2017) and Dürre and Vogel (2016).

Let us define the sample spatial sign covariance matrix (SSCM). The spatial sign of a nonnull $p$-dimensional vector $x$ is $s(x)=x/\|x\|$, i.e. its projection on the $p$-dimensional unit sphere. For completeness, we set $s(0)=0$. This is called a sign because in the univariate case with $p=1$, the values $s(x)$ are ordinary signs. Given a sample $x_1,\ldots,x_n$ from a $p$-variate population $x$, the sample SSCM is
$$C_n=\frac{1}{n}\sum_{j=1}^{n}s(x_j-\hat{\mu})\,s(x_j-\hat{\mu})'. \tag{1.1}$$
Here $\hat{\mu}$ is an estimate of the spatial median $\mu$ of the population, which is determined by the equation $E\,s(x-\mu)=0$ (zero of the mean spatial sign function). The population SSCM is $E\,s(x-\mu)s(x-\mu)'$. If the data are already centered, one may assume $\mu=0$ and consider the sample SSCM
$$C_n=\frac{1}{n}\sum_{j=1}^{n}s(x_j)\,s(x_j)'. \tag{1.2}$$
Despite its simplicity, $C_n$ is a rich scatter statistic for a multivariate population. It is indeed the exact counterpart of the usual SCM when one shifts from the Euclidean (or $L^2$) distance to the Manhattan block (or $L^1$) distance in $\mathbb{R}^p$.

When the number of variables $p$ is large compared to the sample size $n$, the sample SSCM will likely deviate from the population SSCM due to the high-dimensional effect. Indeed, for the usual covariance matrix $\Sigma$, such high-dimensional distortion between $\Sigma$ and the sample covariance matrix $S_n$ is now well understood with the aid of random matrix theory; see Johnstone (2007) and Paul and Aue (2014). Typically, sample eigenvalues from $S_n$ have a much wider spread than the population eigenvalues of $\Sigma$, and this deformation is precisely described by the famous Marčenko-Pastur law. A main result of the current paper shows that for the spatial sign matrix, such high-dimensional distortion again happens when $p$ is large compared to $n$. Such high-dimensional distortion is particularly critical to the robust PCA and robust covariance matrix estimation proposed in Locantore et al. (1999) and Visuri et al. (2000). For example, the procedure (C1)-(C2)-(C3) on page 566 of Visuri et al. (2000) for a robust estimator of the population covariance matrix works as follows when applied with spatial signs.
Given a centered data sample from a -dimensional population: • Construct eigenvector estimates using the eigenvectors of the sample SSCM , say, matrix . • Estimate the marginal variances (eigenvalues, principal values) of , , using any univariate robust scale estimate (MAD, etc.). Write for the estimates. • The estimate for is  . In the high-dimensional context, estimates found in the steps (C1) and (C2) become seriously biaised. For (C1), the eigenvectors in can be far away from the their population counterpart (of the population SSCM ), so a fortiori far away from the eigenvectors of the population covariance matrix . For (C2), the marginal variances of the projections could be very different of the eigenvalues of . As for the robust PCA proposed in Locantore et al. (1999), it will suffer from the same high-dimensional distortion because the procedure also uses the eigenvectors of the sample spatial sign matrix to estimate the eigenvectors of the population covariance matrix, exactly as in Step (C1) above. Therefore it is necessary to correct such high-dimensional distortion appeared in the sample SSCM in order to preserve its long established attractiveness such as robustness. In this paper, using tools of random matrix theory, we investigate asymptotic spectral behaviors of the sample SSCM in high-dimensional frameworks where both the dimension and the sample size tend to infinity. We restrict ourselves to the family of elliptical distributions for the population for two reasons. Firstly, if is elliptically distributed, the population SSCM and the population covariance matrix share same eigenvectors while their respective eigenvalues are in an one-to-one correspondence through a well-known map (Boente and Fraiman, 1999). Secondly, high-dimensional study as the one developed in this paper but for a more general population seems out of reach at the moment. The first main result of the paper (Theorem 3.1) is an analogue of Marčenko-Pastur law for the limiting distribution of eigenvalues of . This law has been so far known for sample covariance matrices and sample correlation matrices only. The second main result of the paper provides a central limit theorem (CLT) for linear spectral statistics of (Theorem 3.2). This CLT is the corner-stone for all subsequent applications we developed in the paper. These applications are designed to demonstrate the effectiveness of the theory we develop here for the SSCM . There have been a few very recent works in the literature that deal with the high-dimensional SSCM (or its variants), namely Zou et al. (2014), Feng and Sun (2016), Li et al. (2016), and Chakraborty and Chaudhuri (2017) . A common feature in these papers is that given a specific null hypothesis on the population location or scatter, in a one-sample or two-sample design, the authors have in their disposal a specific test statistic which is an explicit function of (or its variants). They thus directly study the statistic using traditional asymptotic methods such as projections (as in a U-statistic) or a martingale decomposition. None of these papers studied the distribution of the eigenvalues of as done in this paper using random matrix theory. Meanwhile, some of these test statistics are indeed linear spectral statistics of . Therefore in these cases, the CLT developed in this paper leads to an independent and new proof for these existing results. However this comparison will not be pursued here but in a later separated work. The remaining of the paper is as follows. 
Section 2 summarizes some preliminary results from elliptical distributions and related random matrix theory. Section 3 establishes the two main theoretical results of the paper (Theorems 3.1 and 3.2). Application to spectral moments statistics is fully addressed with explicit limiting mean and covariance functions in the corresponding central limit theorem. Then in Section 4, relying on these results, we develop two statistical applications on the spectrum of , the population SSCM, under a setting where the spectrum forms a discrete distribution with finite support. In one application, the spectrum of is estimated using the method of moments, and in the other application, we test the hypothesis that there are no more than distinct eigenvalues in the spectrum of . In Section5, we develop two applications of the general theory of Section 3 to robust statistics in the high-dimensional context. Technical proofs of the main theorems are gathered in Section 6. Some useful lemmas and their proofs are postponed to the last section. ## 2 Preliminaries The family of elliptical distributions is an important extension to the multivariate normal distribution and has been broadly used in data analysis in various fields (Gupta et al., 2013). A random vector with zero mean is elliptically distributed if it has a stochastic representation: x=wAu, (2.1) where is a deterministic and invertible matrix, a scalar random variable representing the scale of , and is the random direction, independent of and uniformly distributed on the unit sphere in . Besides the normal distribution, this family includes many other celebrated distributions, such as multivariate -distribution, Kotz-type distributions, and Gaussian scale mixtures. Clearly the population covariance matrix is where is in fact the shape matrix of the population. In order to resolve the indeterminacy between the scales of and , we will use throughout the paper the normalization . Let be a sequence of independent and identically distributed (i.i.d.) random vectors from the elliptical population (2.1). We consider the sample SSCM defined in (1.2) and scale it as Bn=pCn=1nn∑j=1yjy′j,yj=√ps(xj). (2.2) The reason for this scaling is that now and the eigenvalues of are of order in average. In this paper, using tools of random matrix theory, we investigate limiting properties of the eigenvalues of in a high-dimensional setting. Precisely, both the dimension and the sample size tend to infinity with their ratio , a positive constant in . Let be a matrix with eigenvalues . Its empirical spectral distribution (ESD) is by definition the probability measure FMp=1pp∑j=1δλj, where denotes the Dirac mass at . If this sequence has a limit when , the limit is referred as a limiting spectral distribution, or LSD, of the sequence. Our aim is to study the limiting properties of and CLT for linear spectral statistics (LSS) of the form for a class of smooth test functions . These properties may become powerful tools to recover spectral features of the scaled population SSCM, i.e. , and then those of the shape matrix since the matrices and share the same eigenvectors and their eigenvalues have a one-to-one correspondence (Boente and Fraiman, 1999). Moreover, as , the two matrices coincide in the sense that the spectral norm , as long as (or ) is uniformly bounded, see Lemma 7.1. Spectral properties of a standard high-dimensional SCM have been extensively studied in random matrix theory since the pioneer work of Marčenko and Pastur (1967). 
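To make the classical picture just described concrete, the following small simulation is a sketch of our own (not code from the paper): it computes the spatial signs, forms $B_n = pC_n$ as in (2.2) for an elliptical sample with identity shape matrix $A=I$ (the radius $w$ cancels in the spatial signs), and compares the spread of the eigenvalues with the support of the standard Marčenko-Pastur law.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 1000, 500
c = p / n                                            # dimension-to-sample-size ratio

# Elliptical sample x = w A u with A = I: only the directions matter for C_n.
X = rng.standard_normal((n, p))
U = X / np.linalg.norm(X, axis=1, keepdims=True)     # spatial signs s(x_j), unit rows
Bn = p * (U.T @ U) / n                               # B_n = p * C_n, cf. (2.2); trace(C_n) = 1

eigs = np.linalg.eigvalsh(Bn)

# Support edges of the standard Marchenko-Pastur law with ratio c.
a, b = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
print("empirical eigenvalue range:", round(float(eigs.min()), 3), round(float(eigs.max()), 3))
print("Marchenko-Pastur support:  ", round(a, 3), round(b, 3))
```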
The standard model in this literature has the form ˜x=σAz, (2.3) where is as before, is a constant, and is a set of i.i.d. random variables satisfying E, E, and E. Let be i.i.d. copies of and be the corresponding SCM. It is well known that the ESD of converges to the celebrated Marčenko-Pastur (MP) law when , and to a generalized MP law for general matrix , as with (Marčenko and Pastur, 1967; Silverstein, 1995). The CLT for LSS of was first studied in Jonsson (1982) by assuming the population is a standard multivariate normal. One breakthrough on the CLT was obtained by Bai and Silverstein (2004), where the population is allowed to be general with E. This fourth moment condition was then weakened to E in Pan and Zhou (2008). For more references, one can refer to Bai and Silverstein (2010), Bai et al. (2015), Gao et al. (2016), and references therein. However, these results do not apply to general elliptical populations since the two models in (2.1) and (2.3) have little in common, except for normal populations. In fact, for general elliptical populations, it has been reported that the ESD of the SCM converges to a deterministic distribution that is not a generalized MP law, but has to be characterized by both the distribution of and the limiting spectrum of through a system of implicit equations (El Karoui, 2009; Li and Yao, 2018). The involvement of seriously interferes with our understanding of the spectrum of from the ESD of . This again motivates us to shift our attention to the SSCM which discards the random radius and focus only on the directions . ## 3 High-dimensional theory for eigenvalues of Bn ### 3.1 Limiting spectral distribution of Bn In this section we derive a LSD for the sequence under the assumptions below. Assumption (a).  Both the sample size and population dimension tend to infinity in such a way that and . Assumption (b).  Sample observations are , where is a deterministic and invertible matrix with and are i.i.d. random vectors, uniformly distributed on the unit sphere in . Assumption (c).  The spectral norm of is bounded and its spectral distribution converges weakly to a probability distribution , called population spectral distribution (PSD). Moreover, the spectral moments of are denoted by and their limits by . From Lemma 7.1, it is clear that the spectral distributions of and are asymptotically identical. So one can certainly replace with in Assumption (c), which does not affect the LSD of . However we keep because it is easy to describe the CLT for LSS using the spectral distribution of . For the characterization of the LSD of , we need to introduce the Stieltjes transform of a measure on the real line, which is defined as mG(z)=∫1x−zdG(x),z∈C+, where . ###### Theorem 3.1. Suppose that Assumptions (a)-(c) hold. Then, almost surely, the empirical spectral distribution converges weakly to a probability distribution , whose Stieltjes transform is the unique solution to the equation m=∫1t(1−c−czm)−zdH(t) ,z∈C+, (3.1) in the set . The LSD defined in (3.1) is a generalized MP law already appeared in the seminal paper Marčenko and Pastur (1967). Let denote the Stieltjes transform of . Then (3.1) can also be represented as (Silverstein, 1995) z=−1m––+c∫t1+tm––dH(t) ,z∈C+. (3.2) For procedures on finding the density function and the support set of from (3.1) and (3.2), one is referred to Bai and Silverstein (2010). ### 3.2 CLT for linear spectral statistics of Bn Let be the LSD as defined in (3.1) with the parameters replaced by . Let . 
We now study the fluctuation of so-called LSS of the form Gn(f):=∫f(x)dGn(x)=∫f(x)d[FBn(x)−Fcn,Hp(x)], where is some given measurable function. Define also the interval Ic:=[liminfp→∞λΣminδ(0,1)(c)(1−√c)2,limsupp→∞λΣmax(1+√c)2]. (3.3) ###### Theorem 3.2. Suppose that Assumptions (a)-(c) hold. Let be functions analytic on an open set that includes the interval (3.3). Then the random vector converges weakly to a Gaussian vector with mean function EXf =−12πi∮C1f(z)∫c(m––′(z)t)2dH(t)m––(z)(1+m––(z)t)3dz−cm––(z)m––′(z)πi∮C1f(z)× [∫(γ2t−t2)dH(t)1+m––(z)t∫tdH(t)(1+m––(z)t)2−∫tdH(t)1+m––(z)t∫t2dH(t)(1+m––(z)t)2]dz, and covariance function Cov(Xf,Xg)= −12π2∮C1∮C2f(z)g(~z)m––′(z)m––′(~z)(m––(z)−m––(~z))2dzd~z +2γ2c∫xf′(x)dF(x)∫xg′(x)dFc,H(x) −1πi∮C1f(z)m––′(z)m––2(z)dz∫xg′(x)dFc,H(x) −1πi∮C1g(z)m––′(z)m––2(z)dz∫xf′(x)dFc,H(x). Here and the contours and are non-overlapping, closed, counter-clockwise orientated in the complex plane and enclosing the support of the LSD . A special case of interest is a multivariate normal population that satisfies both the elliptical model (2.1 ) and the linear transformation model ( 2.3). In this case, it is interesting to compare the limiting distribution in Theorem 3.2 based on the sample SSCM with the classical CLT in Bai and Silverstein (2004) based on the SCM . One finds that some additional and new terms appear in Theorem 3.2, namely the second contour integral in the mean function and the second to fourth terms in the covariance function above do not exist in the classical CLT in Bai and Silverstein (2004). Another closely related work is Hu et al. (2019), where the authors study elliptical population by assuming being independent of . Though sharing the same form, our model violates their independent assumption. Specifically, we takes which is correlated with . It will be shown that such correlation is not (asymptotically) negligible for the distribution of LSS. ### 3.3 Asymptotic distributions of spectral moments Among all LSS, the following spectral moments of are particularly important: ^βnj=1p\rm tr(Bjn)=∫xjdFBn(x),j=1,2,…. The first moment is 1 since . All other moments , , are random. Define the moments of the related MP laws βnj=∫xjdFcn,Hp(x)andβj=∫xjdFc,H(x),j≥1. From Nica and Speicher (2006), the quantities and (moments of ) are related through the recursive formulae: βnj=∑ci1+⋯+ij−1n(γn2)i2⋯(γnj)ijϕ(i1,…,ij),j≥2, (3.4) and , where the sum runs over the following partitions of : (i1,…,ij):j=i1+2i2+⋯+jij,il∈N, and The joint limiting distribution of moments can be derived from Theorem 3.2 by considering the moment functions . For this particular case, the mean and covariance functions in the limiting distribution can be explicitly calculated. ###### Corollary 3.1. Suppose that Assumptions (a)-(c) hold. Then the random vector p(^βn2−βn2,…,^βnk−βnk)D−→Nk−1(v,Ψ). The mean vector is given by vj=[cPj(j−2)!(P2,31−cz2P2,2+2γ2P1,1P1,2−2P2,1P1,2−2P1,1P2,2)](j−2)∣∣∣z=0, where , , and denotes the th derivative of with respect to . The covariance matrix has entries ψij=2i−1∑ℓ=0(i−ℓ)ui,ℓuj,i+j−ℓ+2cγ2ijβiβj+2jβjui,i+1+2iβiuj,j+1, where . ## 4 Applications to spectral inference A natural question on spatial signs is how to infer the population SSCM from the sample SSCM when the dimension is large. If the question was for the pair of population and sample covariance matrices , this falls in the widely studied problem of estimating a large covariance matrix. 
Noting the fundamental difference between an SSCM and a standard covariance matrix, we indeed found nothing in the literature for properties of a high-dimensional SSCM. (to our best knowledge). In this section, we consider a scenario where the PSD of can be modeled as a finite mixture of point masses. Using the theory of Section 3, we propose two new inference tools for the PSD . First an asymptotic normal estimator is found for such a finite-mixture PSD . This estimator is particularly interesting for an elliptical population because the eigenvalues of and are then in a well-known one-to-one correspondence. This will finally lead to a robust estimator for much better than some existing proposals, for example, the estimator from the procedure (C1)-(C2)-(C3) of Visuri et al. (2000). The second inference tool we develop treats the question of determination of the order of the finite mixture in . Precisely, the family of PSDs under study is a class of parameterized discrete distributions with finite support on , that is, where Here the restriction is due to the normalization condition . Note that the model (4.1) depends on an integer parameter , referred as the order of . Such finite mixtures have already been employed for the standard large covariance matrix , see El Karoui (2008), Rao et al. (2008), Bai et al. (2010) and Li and Yao (2014). Similar to El Karoui (2008), we adopt the setting of fixed PSDs in this section, i.e. for all large. ### 4.1 Estimation of a PSD For the model in (4.1), we follow the moment method in Bai et al. (2010) for the PSD estimation. Given a known order , the method first estimates the moments of through the recursive formulae in (3.4), and then solve a system of moment equations, to get a consistent estimator of . In our situation, with notation and for , we denote g1:γ2d−1→θ% andg2,j:βj→γj as the mappings between the corresponding vectors. These mappings are all one-to-one and the determinants of their Jacobian matrices are all nonzero, see Bai et al. (2010). Therefore, applying Theorem 3.1, which implies that , as . However, as shown by the CLT in Corollary 3.1, the estimator has a bias of the order . So it’s natural to modify by subtracting its limiting mean in the CLT to obtain a better estimator of . Beyond this correction, the CLT can also provide confidence regions for the parameter . Denote the modified estimators of , , and by ^β∗j=^βj−1p(^v2,…,^vj)′,^γ∗j=g2,j(^β∗j),and^θ∗n=g1(^γ∗2d−1), (4.2) respectively, where with defined in Corollary 3.1 for From Theorem 3.1, Corollary 3.1, and a standard application of the Delta method, one may easily get asymptotic properties of these estimators. ###### Theorem 4.1. Suppose that Assumptions (a)-(c) hold and the true value is an inner point of . Then we have , , , and moreover p(^γ∗j−γj) D−→Nj−1(0,J2,jΨjJ′2,j), (4.3) p(^θ∗n−θ) D−→N2k−2(0,J1J2,2d−1Ψ2d−1J′2,2d−1J′1), where and represent the Jacobian matrices and , respectively, and is defined in Corollary 3.1 with . ### 4.2 Test for the order of a PSD The aforementioned estimation procedure requires that the order of the PSD be pre-specified. In general, this prior knowledge should be testified in advance. To deal with this problem, we consider the hypotheses H0:d≤d0v.s.H1:d>d0, (4.4) where is a given positive integer. These hypotheses can also be regarded as a generalization of the well-known sphericity hypotheses on covariance matrices, i.e. the case . 
In Qin and Li (2016), a test procedure was outlined based on a moment matrix and its estimator which can be formulated as Here we set and , as defined in (4.2), for . It has been proved that the determinant of is zero if the null hypothesis in (4.4) holds, otherwise is strictly positive (Li and Yao, 2014). Therefore, the determinant can serve as a test statistic for (4.4) and the null hypothesis shall be rejected if the statistic is large. Applying Theorem 4.1 and the main theorem in Qin and Li (2016), the asymptotic distribution of is obtained immediately. ###### Theorem 4.2. Suppose that Assumptions (a)-(c) hold. Then the statistic is asymptotically normal, i.e. p(det(ˆΓ)−det(Γ))D−→N(0,σ2), (4.5) where with , the vectorization of the adjoint matrix of . The first two rows and columns of the matrix consist of zero and the remaining sub-matrix is defined in (4.3). The matrix is a 0-1 matrix with only , , , where denotes the greatest integer not exceeding . From Theorem 4.1, the limiting variance in (4.5) is a continuous function of . While, under the null hypothesis, this variance is a function of , denoted by . Let . Then it is a strongly consistent estimator of . ###### Corollary 4.1. Suppose that Assumptions (a)-(c) hold. Then, under the null hypothesis, Tn:=pdet(ˆΓ)^σH0D−→N(0,1), as . In addition, the asymptotic power of tends to 1. Corollary 4.1 follows directly from Theorem 4.2 and its proof is thus omitted. This corollary includes as a particular case the sphericity test. For this case, the test statistic reduces to and its null distribution is consistent with that in Paindaveine and Verdebout (2016) which is obtained by a direct and completely different method. ### 4.3 Simulation experiments Simulations are carried out to evaluate the performance of the proposed estimation and test for discrete PSDs in (4.1). Samples are drawn from and all empirical statistics are calculated from 10,000 independent replications. The estimation procedure is tested for the following two PSDs. • Model 1: and ; • Model 2: and . The sample sizes are for Model 1 and for Model 2, respectively. In addition to empirical means and standard deviations of all estimators, we also calculate 95% confidence intervals for all parameters and report their coverage probabilities. Results are collected in Tables 1 and 2. The consistency of all estimators is clearly demonstrated. Next we examine the test procedure for the order of a PSD. The following two models are employed in this experiment: • Model 3: ; • Model 4: . Here the parameter represents the distance between the null and alternative hypotheses. In particular, Model 3 is used for testing (sphericity test) with ranging from 0 to 0.2 by step 0.18 and Model 4 is for testing with ranging from 0 to 0.45 by step 0.05. The sample size is , the dimension-sample size ratios are , and the significance level is fixed at . Results summarized in Table 3 show that the proposed test has accurate empirical size and its power tends to 1 as the parameter increases under the two models. ## 5 Application to robust statistics In this section we develop a few applications of the general theory of Section 3 to robust statistics using the sample SSCM . ### 5.1 Robustness We examine the robustness of several estimators for the shape matrix when sample data include outliers. Four estimators derived from and are considered in this comparison. 
Let be the spectral decomposition of and : Sn=UsΛsU′sandBn=UbΛbU′b, where the ’s are diagonal matrices of eigenvalues, sorted in ascending order, and the ’s are matrices of corresponding eigenvectors, respectively. In addition, we define a regularization function as r(A)=pA\rm tr(A), for any matrix with non-zero trace. Obviously, this function normalizes such that . With the above notations, the four estimators of we examine are as follows: • Regularized SCM , ˆT1=r(Sn); • Spectrum-corrected SCM , ˆT2=r(UsΛ2U′s), where is a collection of ascendingly sorted estimators of population eigenvalues using a moment method developed in Li and Yao (2014); • Robust sample SSCM constructed from the procedures (C1)-(C2)-(C3) of Visuri et al. (2000), ˆT3=r(UbΛ3U′b), where with being the square of the MAD of the th row of for . • Spectrum-corrected sample SSCM , ˆT4=r(UbΛ4U′b), where the correction is obtained following three steps: • Step 1: Estimate the PSD of from the ESD through the procedure in Section 4.1 to get, say, ; • Step 2: Estimate the eigenvalues of from using the correspondence between the eigenvalues of and as given in Lemma 7.1; • Step 3: Sort the obtained estimates of the eigenvalues in ascending order to obtain . The performance of the four estimators are tested under two models below. Model 1: Contaminated normal distribution of elliptical form: (1−ε)N(0,T)+εN(0,16T), where the population shape matrix is a diagonal matrix, T=diag(0.5,…,0.5p/2,1.5,…,1.5p/2). This model implies there are about 100% outlying observations with large amplitude. The mixing parameter takes two values (uncontaminated) and (contaminated by 1% outliers). Model 2: Contaminated normal distribution of non-elliptical form: (1−ε)N(0,T)+εN(0,16˜T) where the population shape matrix is the same as in Model 1 and the mixing parameter takes values and . For outliers, their shape matrix is ˜T=Diag(1.5,…,1.5p/2,0.5,…,0.5p/2). The population dimension is , and the sample size is . All statistics are averaged from 1000 independent replications. The number of outliers is fixed at for a given . For each estimator , we calculate three distances from to its target matrix , including the Frobenius distance, the Kullback-Leibler (KL) distance and the spectral distance. The KL distance is not applicable to and for cases with because their determinants are thus zero. Figures 1, 2 and 3 summarize the results. They show that, when there is no outlier, and are comparable and both suffers from large biases caused by the high-dimensional effect. Such bias can be much alleviated by means of spectral correction as demonstrated by and which have almost the same accuracy. Note that the remaining biais is clearly present and it is a pity that there is no effective way at present to remove this remaining bias entirely. Therefore, the performance of and with can serve as a benchmark for the four estimators when comparing their robustness against outliers. As shown in the figures, in the presence of outliers, the estimator is more robust than , but both of them are still heavily biased for large . This is explained by the fact that both of them are not adapted to high dimensions. For the two other estimators and with high-dimensional correction, the estimator
# Birth Rates and Life Expectancy

It is bad enough that we have to read and hear the latest failures of thought by right-wing populists (article in German only) and the many relativizations (comments in German only). It seems like 70 years of history classes did not help to stop utter racism in public debate. What sparked my interest, however, was the question of which variables correlate with birth rates. My intuitive expectation was that higher life expectancy is linked to lower birth rates, which might also be explained from an evolutionary perspective. Now, I am neither an anthropologist nor familiar with the current state of research, so I can only use openly available statistics. Luckily, the World Bank has a large database with various indicators for all countries and regions of the world. Setting up an R script for reading the data was easy. I was able to quickly check my hypothesis visually, and $r = -0.86$ hints at a strong link between life expectancy and birth rate[1] in the expected direction.

However, it is well known that life expectancy is also closely linked to the standard of living, or a country's Gross Domestic Product (GDP). If you plot life expectancy against GDP, you find a correlation of $r = 0.58$ (GDP per capita in purchasing power parity was chosen for the analyses; it is log-scaled in the diagrams, while correlations are calculated with raw values). The same is true for birth rates and GDP, with $r = -0.56$.

This is the classic situation in which high correlations per se do not indicate a causal effect: additional variables can explain a statistical correlation. A very common example is the correlation between shoe size and vocabulary, which is "created" only through the strong effect of age on both; controlling for age, the link disappears. Thus, the interesting question is how the partial correlation between life expectancy and birth rates looks if we control for a country's GDP.[2]

Interestingly, you still find a substantial correlation of $r_{partial} = -0.79$. Of course, this is not ultimate evidence for the hypothesis that life expectancy affects birth rates (see above), but it is interesting to see that the link still holds. In my interpretation, the graphs show that effects other than some "reproduction types" (a term originating in a completely different context) are at work. Using such terms only shows the true nature of those statements: fanning fear and stirring up hatred in pseudo-scientific language.

Update (04.01.2016): I have uploaded the R script for generating the plots on GitHub. You find it here.

1. Life Expectancy at Birth and the Crude Birth Rate, i.e. births per 1,000 of a population, for 2013 were chosen, as this was the most complete data set available. Countries with missing data were excluded from the analyses.
2. Partial correlations are calculated on the basis of residuals from a linear regression, leading to the values on the axes. Thus, strictly speaking, the labels of the axes are not correct.
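For readers who want to reproduce the residual-on-residual idea from footnote 2 without the original R script, here is a minimal Python sketch; the column names and the tiny data frame are purely hypothetical stand-ins for the World Bank indicators.

```python
import numpy as np
import pandas as pd

def partial_corr(df, x, y, control):
    """Correlation of x and y after regressing the control variable out of each."""
    def residuals(col):
        X = np.column_stack([np.ones(len(df)), df[control]])
        beta, *_ = np.linalg.lstsq(X, df[col], rcond=None)
        return df[col] - X @ beta
    return np.corrcoef(residuals(x), residuals(y))[0, 1]

# Hypothetical data frame standing in for the World Bank indicators.
df = pd.DataFrame({
    "life_expectancy": [55.0, 60.0, 70.0, 80.0, 82.0],
    "birth_rate":      [40.0, 35.0, 20.0, 12.0, 10.0],
    "gdp_ppp":         [2000.0, 4000.0, 12000.0, 40000.0, 50000.0],
})
print(partial_corr(df, "life_expectancy", "birth_rate", "gdp_ppp"))
```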
# 8.11 Unbalanced force system (Page 3/3)

As a matter of fact, this velocity profile is typical of any vehicle, which is first accelerated, then run with little or no acceleration, and finally brought to rest with constant deceleration. We work out an example of motion under a variable force, similar to the earlier example except that the force now depends on time.

Problem : A block of mass “m” is pulled by a string on a smooth horizontal surface with force F = kt, where “k” is a constant. The string maintains an angle “θ” with the horizontal. Find the time when the block breaks off from the surface.

Solution : Here, the important thing is to understand the meaning of "breaking off". It means that physical contact between the two surfaces is lost. In that condition, the normal force vanishes, as there is no contact between block and surface. Since the normal force is directed vertically up, we only need to analyze forces in the y-direction in order to apply the condition corresponding to "breaking off".

$\text{Free body diagram of the block}$

$\sum {F}_{y}=N+kt\sin \theta -mg=0 \;\Rightarrow\; N=mg-kt\sin \theta$

Now, let $t={t}_{B}$ (break-off time) when N = 0 (breaking-off condition):

$0=mg-k{t}_{B}\sin \theta \;\Rightarrow\; k{t}_{B}\sin \theta =mg \;\Rightarrow\; {t}_{B}=\frac{mg}{k\sin \theta }$

## Motional mechanism of animals

According to the laws of motion, a body cannot be set in motion by internal forces alone. On the other hand, animals move around using internal muscular force. This is not a contradiction, but intelligent maneuvering on the part of animals, which use internal muscular force to generate an external force on themselves. Let us take the case of our own movement. As long as we stand upright, applying our weight on the ground in the vertical direction, there is no motion. To move forward (say), we need to press back on the surface at an angle. The ground applies an equal and opposite force (the reaction of the ground). The reaction of the surface is an external force for our body. The horizontal component of the reaction force moves our body forward, whereas the vertical component balances our weight.

## Elements of body system

The underlying framework of the analysis of force systems in an inertial frame of reference is now almost complete. There is nothing new as far as the application of the laws of motion is concerned. But there is a big “but” with respect to the details of the various elements of the body systems that we consider during the study of motion. These elements typically are blocks, strings, inclines, pulleys and springs. The whole gamut of analysis in dynamics requires a systematic approach to answer the following questions :

• what are the forces ?
• which of them are external forces ?
• is friction part of the external force system ?
• are the forces collinear, coplanar or three-dimensional ?
• are the forces balanced or unbalanced ?
• are the forces time dependent ?
• is the motion taking place in an inertial frame or an accelerated frame ?
• what would be the appropriate coordinate system for the analysis ?
• what are the characteristics of the system elements ?

It is not very difficult to realize that our job is half done if we are able to classify the system at hand based on the answers to the above questions.
Though we have listed the system elements at the end of the list, we shall soon realize that a great deal of our effort in getting answers to questions 1, 2, 4 and 8 is largely determined by the elements involved, while the rest are situation specific. We have briefly described system elements like blocks, strings and pulleys. In subsequent modules, we shall examine these and other elements in detail.
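As a small cross-check of the break-off time derived in the worked problem above ($t_B = \frac{mg}{k\sin\theta}$), here is a short symbolic computation; it assumes sympy is available and is only an illustration, not part of the original module.

```python
import sympy as sp

m, g, k, theta, t = sp.symbols("m g k theta t", positive=True)

# Vertical force balance while the block is still on the surface.
N = m * g - k * t * sp.sin(theta)

# Break-off: the normal force vanishes.
t_B = sp.solve(sp.Eq(N, 0), t)[0]
print(t_B)   # -> g*m/(k*sin(theta))
```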
# $\int\limits_{0}^{\dfrac{\pi }{2}}{\dfrac{d\theta }{1+\tan \theta }}$ is equal to:

A. $\pi$  B. $\dfrac{\pi }{2}$  C. $\dfrac{\pi }{3}$  D. $\dfrac{\pi }{4}$

Hint: For $\int\limits_{0}^{\dfrac{\pi }{2}}{\dfrac{d\theta }{1+\tan \theta }}$, use $\tan \theta =\dfrac{\sin \theta }{\cos \theta }$, then multiply and divide by $2$ and simplify. After that, split the term, substitute $\sin \theta +\cos \theta =u$ and apply the limits. Simplifying gives the answer.

Complete step by step solution:
We have to integrate $\int\limits_{0}^{\dfrac{\pi }{2}}{\dfrac{d\theta }{1+\tan \theta }}$.
Using the identity $\tan \theta =\dfrac{\sin \theta }{\cos \theta }$ we get
$\int\limits_{0}^{\dfrac{\pi }{2}}{\dfrac{d\theta }{1+\tan \theta }}=\int\limits_{0}^{\dfrac{\pi }{2}}{\dfrac{d\theta }{1+\dfrac{\sin \theta }{\cos \theta }}}$
Multiplying numerator and denominator by $\cos \theta$, and then multiplying and dividing by $2$, we get
$\int\limits_{0}^{\dfrac{\pi }{2}}{\dfrac{d\theta }{1+\tan \theta }}=\int\limits_{0}^{\dfrac{\pi }{2}}{\dfrac{\cos \theta \,d\theta }{\cos \theta +\sin \theta }}=\dfrac{1}{2}\int\limits_{0}^{\dfrac{\pi }{2}}{\dfrac{2\cos \theta \,d\theta }{\cos \theta +\sin \theta }}$
So we can write
$\dfrac{1}{2}\int\limits_{0}^{\dfrac{\pi }{2}}{\dfrac{2\cos \theta \,d\theta }{\cos \theta +\sin \theta }}=\dfrac{1}{2}\int\limits_{0}^{\dfrac{\pi }{2}}{\dfrac{(\cos \theta +\sin \theta +\cos \theta -\sin \theta )\,d\theta }{\cos \theta +\sin \theta }}$
Now, splitting the integrand,
$\dfrac{1}{2}\int\limits_{0}^{\dfrac{\pi }{2}}{\dfrac{2\cos \theta \,d\theta }{\cos \theta +\sin \theta }}=\dfrac{1}{2}\int\limits_{0}^{\dfrac{\pi }{2}}{d\theta }+\dfrac{1}{2}\int\limits_{0}^{\dfrac{\pi }{2}}{\dfrac{(\cos \theta -\sin \theta )\,d\theta }{\cos \theta +\sin \theta }}$
Let $\sin \theta +\cos \theta =u$. Differentiating both sides gives $(\cos \theta -\sin \theta )\,d\theta =du$.
For $\theta =\dfrac{\pi }{2}$, $u=1$, and for $\theta =0$, $u=1$, so
$\dfrac{1}{2}\int\limits_{0}^{\dfrac{\pi }{2}}{\dfrac{2\cos \theta \,d\theta }{\cos \theta +\sin \theta }}=\dfrac{1}{2}\int\limits_{0}^{\dfrac{\pi }{2}}{d\theta }+\dfrac{1}{2}\int\limits_{1}^{1}{\dfrac{du}{u}}$
We know that $\int{\dfrac{du}{u}}=\log u+c$, so
$\dfrac{1}{2}\int\limits_{0}^{\dfrac{\pi }{2}}{\dfrac{2\cos \theta \,d\theta }{\cos \theta +\sin \theta }}=\dfrac{1}{2}\left[ \theta \right]_{0}^{\dfrac{\pi }{2}}+\dfrac{1}{2}\left[ \log u \right]_{1}^{1}$
Applying the limits,
$\dfrac{1}{2}\int\limits_{0}^{\dfrac{\pi }{2}}{\dfrac{2\cos \theta \,d\theta }{\cos \theta +\sin \theta }}=\dfrac{1}{2}\left( \dfrac{\pi }{2}-0 \right)+\dfrac{1}{2}\left( \log 1-\log 1 \right)=\dfrac{\pi }{4}+0=\dfrac{\pi }{4}$
Hence $\int\limits_{0}^{\dfrac{\pi }{2}}{\dfrac{d\theta }{1+\tan \theta }}=\dfrac{\pi }{4}$, which is option (D).

Note: Read the question carefully. You must be familiar with the concept of integration. While simplifying, take care that no term is missing and watch the signs; most mistakes occur during simplification.
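A quick numerical cross-check of the result (not part of the original solution), assuming scipy is available:

```python
import numpy as np
from scipy.integrate import quad

value, _ = quad(lambda th: 1.0 / (1.0 + np.tan(th)), 0.0, np.pi / 2)
print(value, np.pi / 4)    # both are approximately 0.7853981634
```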
# Straight lines divide the circumference of the circle $x^2+y^2=100$ into two arcs whose lengths are in the ratio $3:1$ Find the equation of straight lines which pass through $(7,1)$,and divide the circumference of the circle $x^2+y^2=100$ into two arcs whose lengths are in the ratio $3:1$ My attempt: As the required line is dividing the circumference in the ratio of $3:1$.Therefore,angle subtended by the required line on the center is $\frac{\pi} {2}$ .But i could not find the equation of the lines. I let the equation of line as $ax+by+c=0$ and it passes through $(7,1)$.So $7a+b+c=0$ • First of all you can set $a=1$ and eliminate $c$, so that your line equation depends on $b$ only. Then you can find the points of intersections $A$ and $B$ between line and circle and fix $b$ so that $AB^2=10^2+10^2$ (Pythagoras' theorem). – Aretino Sep 11 '15 at 16:34 HINT.....Any line passing through $(7, 1)$ can be written as $$y-1=m(x-7)\rightarrow y-mx+7m-1=0$$ We require that the distance from the origin (the centre of the circle) to this line is $5\sqrt{2}$, so we can use the formula for the distance from a point to a line to set up an equation for $m$. Can you take it from there? • Sir,@David Quinn,there are two lines $x-2y-5=0$ and $7x+y-50=0$ given in the answer.But using this method,i am getting only $7x+y-50=0$. – Vinod Kumar Punia Sep 12 '15 at 2:24 • You should get a quadratic equation in m and hence two answers – David Quinn Sep 12 '15 at 3:56 • The quadratic equation $m^2+14m+49=0$ is giving me only one value of $m$ @David Quinn – Vinod Kumar Punia Sep 12 '15 at 3:59 • Are you sure the other answer is correct? The distance from the origin to it is $\sqrt{5}$ – David Quinn Sep 12 '15 at 9:22 HINT: Let the equation of the line be $y=mx+c$ passing through the point $(7, 1)$ then we have $$1=m(7)+c$$ $$7m+c=1\tag 1$$ Substituting $y=mx+c$ in the equation of circle $x^2+y^2=100$, we get $$x^2+(mx+c)^2=100$$ $$(1+m^2)x^2+2mc x+c^2-100=0\tag 2$$ Let, the roots of the above equation be $x_1$ & $x_2$ then $$x_1+x_2=-\frac{-2mc}{1+m^2}=\frac{2mc}{1+m^2}$$ $$x_1x_2=\frac{c^2-100}{1+m^2}$$ the points of intersection are $(x_1, y_1)$ & $(x_2, y_2)$ Now, the circumference $=2\pi\times 10=20\pi$ is divided in a ratio $3:1$ then the angle subtended by the small arc at the center $$=\frac{\text{arc length}}{\text{radius}}=\frac{5\pi}{10}=\frac{\pi}{2}$$ hence, the lines joining the points $(x_1, y_1)$ & $(x_2, y_2)$ to the center $(0, 0)$ will be normal to each other hence, we have $$m_1\times m_2=-1$$ $$\frac{y_1-0}{x_1-0}\times \frac{y_2-0}{x_2-0}=-1$$ $$x_1x_2+y_1y_2=0$$ $$x_1x_2+(mx_1+c)(mx_2+c)=0$$ $$(1+m^2)x_1x_2+2mc(x_1+x_2)+c^2=0$$ I hope you can take it from here to solve for the values of $m$ & $c$ And, by the way, check the distance from the origin (the centre of the circle) to the point (7, 1),- whether it >, < or = $5\sqrt{2}$ :) Then write the equation of the line through the point (7, 1) and the origin, and then the line perpendicular to it. Probably, this helps. First its present equation is connecting ( 7,1) to (-1,7) due to requirements of arc division, subtending angle at origin should be $90^0,$ by rotation with $90^0$ angle. $$\dfrac{1-y}{7-x}=\dfrac{6}{-8}$$ Next, distance to origin is $5 \sqrt 2$ ,so you have build a similar triangle $\sqrt 2$ times zoomed with resp to origin, multiplying its intercepts or normal length from origin. You can take the last step. 
Let the slope of the line be $$m$$. The equation of the line becomes $$mx-y=7m-1,$$ i.e. $$\frac{mx-y}{7m-1}=1.\tag{1}$$ Now homogenize the equation $$x^2+y^2=100$$ by multiplying the $$100$$ by the square of eq. (1): $$x^2+y^2=100\left\{\frac{mx-y}{7m-1}\right\}^2.\tag{2}$$ Now you can impose the condition for a $$90^\circ$$ angle subtended by the pair of lines at the centre, namely $$a+b=0$$, where $$a$$ and $$b$$ are the coefficients of $$x^2$$ and $$y^2$$ in equation (2). The values of $$a$$ and $$b$$ can be read off from equation (2). Hope it helps.
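A small numeric check of the distance condition used in this thread (a chord cutting off a quarter of the circumference must lie at distance $5\sqrt{2}$ from the centre); the two candidate lines quoted earlier are taken as given, and the output is consistent with the doubt raised in the comments about one of them.

```python
import numpy as np

def dist_from_origin(a, b, c):
    """Distance from the origin to the line a*x + b*y + c = 0."""
    return abs(c) / np.hypot(a, b)

target = 5 * np.sqrt(2)        # required centre-to-chord distance for a 90 degree arc
candidates = {"x - 2y - 5 = 0": (1, -2, -5), "7x + y - 50 = 0": (7, 1, -50)}

for name, (a, b, c) in candidates.items():
    through_point = np.isclose(a * 7 + b * 1 + c, 0)      # passes through (7, 1)?
    print(f"{name}: through (7,1) = {through_point}, "
          f"distance = {dist_from_origin(a, b, c):.4f}, target = {target:.4f}")
```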
### Home > CC2 > Chapter 6 > Lesson 6.1.3 > Problem6-30 6-30. Write an algebraic expression for each situation. For example, $5$ less than a number can be expressed as $n-5$. 1. $7$ more than a number This is similar to the given example. What operation is used to make a number larger? 1. Twice a number What operation can you use to double a number? $2·x$
# Math Insight ### Solving single autonomous differential equations using graphical methods #### Video introduction A graphical approach to solving an autonomous differential equation. #### Overview One can understand an autonomous differential equation of the form \begin{align} \diff{x}{t} &= f(x)\\ x(t_0) &= x_0\notag \end{align} by using a purely graphical approach. We can determine the essential behavior of the solution $x(t)$ without doing any analytic calculations. A graph of the function $f(x)$ will tell us all we need to know to estimate what the solution $x(t)$ will do for any initial condition $x_0$. Since the derivative $\diff{x}{t}$ is the rate of change of $x(t)$, a glance at the graph of $f(x)$ will tell us where $x(t)$ is increasing or decreasing and how fast it is changing. The state variable $x(t)$ moves to larger values when $f(x)$ is positive, and it moves to smaller values when $f(x)$ is negative. The velocity of $x(t)$ drops to zero when $f(x)$ reaches zero. The points where $f(x)=0$ are the equilibria where $x(t)$ does not move. #### An example For example, we'll look at the differential equation \begin{align*} \diff{x}{t} = x^2-4. \end{align*} We can determine the dynamics of the solution $x(t)$ by looking at the graph of $f(x)=x^2-4$. From the graph we see that $f(x) < 0$ for $x \in (-2,2)$. The rate of change $\diff{x}{t}$ must be negative for $-2 \lt x(t) \lt 2$, so the solution decreases in that range. If the initial condition $x_0$ were in the range $x_0 \in (-2,2)$, then the solution would start out decreasing. The speed of this negative change would be greatest when $x(t)=0$, and then the trajectory would slow down as it got closer to $x(t)=-2$. The function $f(x)$ is zero at $x=2$ and $x=-2$. Therefore, these two values of $x(t)$ are equilbria. If the initial condition $x_0$ were $x_0=-2$, then the solution would be a constant $x(t)=-2$ for all time. Similarly, if the initial condition were $x_0=2$, then the solution would be the constant $x(t)=2$ for all time. For the case mentioned above, with initial condition $x_0 \in (-2,2)$, the trajectory would get closer and closer to $-2$. It could never cross $x=-2$ because we know the velocity is zero at that point. One solution to an initial condition just below 2 is graphed below. In the graph, $x(t)$ is plotted as a function of $t$. In this plot, the $x$-axis has moved to the vertical axis, and the $t$-axis is the horizontal axis. Notice that $x(t)$ decreases the whole time, decreases most quickly around $x=0$, then slows down, getting closer and closer to $x=-2$ as the time $t$ increases. What changes if the initial condition is below $x=-2$. In that case, the function $f(x)$ is positive, so the rate of change $\diff{x}{t}$ is positive. If the initial condition $x_0 \lt -2$, then the solution $x(t)$ starts out increasing. If $x_0$ is much smaller than $-2$, then $x(t)$ increases very quickly, but its velocity slows down as $x(t)$ approaches the equilibrium $x=-2$. It continues to increase the whole time, as it can't cross the equilibrium, but its velocity goes to zero as it gets closer to the equilibrium. The graph of $f(x)$ is symmetric across the vertical axis. So, it might seem that something similar will happen for initial conditions above the upper equilibrium $x=2$. However, the behavior is entirely different. Just as in the case for $x_0 \lt -2$, for initial condition $x_0 \gt 2$, the trajectory starts off with a positive rate of change $\diff{x}{t}$. 
In this case, though, as $x(t)$ increases, its velocity increases even more. The trajectory quickly blows up to very large values. In the following applet, you can explore the behavior of solutions to $\diff{x}{t} = x^2-4$ for many different initial conditions. You can observe how initial conditions below $x=2$ lead to solutions $x(t)$ that converge toward $x=-2$. You can also see how quickly solutions with initial conditions $x_0 \gt 2$ blow up. This applet will also help you solidify the relationship between the graph of $f(x)$ (where $x$ is the horizontal axis) and the graph of the trajectories versus time (where $x$ is the vertical axis). Exploring autonomous differential equations. The second applet lets you see the simultaneous behavior of six solutions with different initial conditions. Before you hit play or move the $t$ slider, see if you can predict what the solutions will look like. Exploring autonomous differential equations, multiple trajectories.
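The qualitative behaviour described above is easy to reproduce numerically; the following sketch (assuming scipy is installed) integrates $dx/dt = x^2 - 4$ for several initial conditions below the upper equilibrium.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    return x**2 - 4

for x0 in [-4.0, -1.0, 0.0, 1.9]:
    sol = solve_ivp(f, (0.0, 3.0), [x0], max_step=0.01)
    print(f"x0 = {x0:+.1f}  ->  x(3) = {sol.y[0, -1]:+.3f}")   # all approach -2

# Initial conditions above 2 blow up in finite time; with x0 = 2.1 the solver
# stops early as x(t) diverges.
```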
# Spring Mass System ## What is Spring Mass System? Consider a spring with mass m with spring constant k, in a closed environment spring demonstrates a simple harmonic motion. T = 2π √m/k From the above equation, it is clear that the period of oscillation is free from both gravitational acceleration and amplitude. Also, a constant force cannot alter the period of oscillation. ## Parallel Combination of Springs Fig (a), (b) and (c) – are the parallel combination of springs. Displacement on each spring = is same But restoring force = is different $F={{F}_{1}}+{{F}_{2}}$ $-{{k}_{p}}x=-{{k}_{1}}x-{{k}_{2}}x$ $-x{{k}_{p}}=-x\left( {{k}_{1}}+{{k}_{2}} \right)$ ${{k}_{p}}={{k}_{1}}+{{k}_{2}}$ ### Time Period in Parallel Combination $T=\frac{2\pi }{\omega }=2\pi \sqrt{\frac{m}{{{k}_{p}}}}=2\pi \sqrt{\frac{m}{{{k}_{1}}+{{k}_{2}}}}$ ## Springs in Series Combination Force on each string is same but displacement on each string is different $x={{x}_{1}}+{{x}_{2}}$ ${{F}_{1}}=-{{k}_{1}}{{x}_{1}}$ $T=2\pi \sqrt{\frac{L}{{{g}_{eff}}}}=2\pi \sqrt{\frac{L}{g+a}}$ ${{F}_{2}}=-{{k}_{2}}{{x}_{2}}$ $\frac{1}{{{k}_{s}}}=\frac{1}{{{k}_{1}}}+\frac{1}{{{k}_{2}}}$ $\frac{F}{{{k}_{1}}}={{x}_{1}}$ ${{k}_{s}}=\frac{{{k}_{1}}{{k}_{2}}}{{{k}_{1}}+{{k}_{2}}}$ $\frac{F}{{{k}_{2}}}={{x}_{2}}$ ### Time Period in Series Combination $T=2\pi \sqrt{\frac{m}{{{k}_{s}}}}=2\pi \sqrt{\frac{m\left( {{k}_{1}}+{{k}_{2}} \right)}{{{k}_{1}}{{k}_{2}}}}$ ### Spring Constant $K=\frac{YA}{L}$ Y = youngs modulus of elasticity From Hooke’s law $Y=\frac{Stress}{Strain}=\frac{\frac{F}{A}}{\frac{DL}{L}}$ $\frac{YDL}{L}=\frac{F}{A}$ $F=\frac{YA}{L}\left( DL \right)$ $\left[ Since, \;K=\frac{YA}{L} \right]$ $F=K\,x$ $K=\frac{YA}{L}$ $K\propto \frac{1}{L}$ If a spring of spring constant (K) and length (L) cutted into $\frac{L}{2}$ size two pieces, then magnitude of spring constant of the new pieces will be? $K\propto \frac{1}{L}\Rightarrow$ then K becomes = 2K for the new pieces. ### How to Find the Time period of a Spring Mass System? Steps: 1. Find the mean position of the SHM (point at which Fnet = 0) in horizontal spring-mass system The natural length of the spring = is the position of equilibrium point. 2. Displace the object by a small distance (x) from its equilibrium position (or) mean position other than mean position, restoring force will act on the body $\overrightarrow{{{F}_{net}}}=-k\overrightarrow{x}$ $\overrightarrow{a}=\frac{-k}{m}\overrightarrow{x}$ 3. Acceleration of the particle is calculated and the calculated value of $\overrightarrow{a}$. $\overrightarrow{a}\propto -\overrightarrow{x}$ then only it is SHM Then equate the $\overrightarrow{a}=-{{\omega }^{2}}\overrightarrow{x}$ $\frac{-k}{m}\overrightarrow{x}=-{{\omega }^{2}}\overrightarrow{x}$ $\omega =\sqrt{\frac{k}{m}}$ 4. Substitute ω value in standard time period expression of SHM $T=\frac{2\pi }{\omega }=\frac{2\pi }{\sqrt{\frac{k}{m}}}=2\pi \sqrt{\frac{m}{k}}$ $T=2\sqrt{\frac{Inertia}{Force\,constant}}$ ## Problems on Spring Mass System Q.1: A particle is executing linear SHM what are its velocity and displacement when its acceleration is half the maximum possible? 
Solution: $\overrightarrow{a}=-A{{\omega }^{2}}\sin \left( \omega t+\phi \right)$ $\overrightarrow{{{a}_{\max }}}=-A{{\omega }^{2}}$ $\frac{{{a}_{\max }}}{2}=-\frac{A{{\omega }^{2}}}{2}=-A{{\omega }^{2}}\sin \left( \frac{\pi }{6} \right)$ Phase $\left( \omega t+\phi \right)=\frac{\pi }{6}$ $v=A\omega \cos \left( \frac{\pi }{6} \right)=A\omega \frac{\sqrt{3}}{2}$ $x=A\sin \left( \frac{\pi }{6} \right)=\frac{A}{2}$ $\left( v=A\omega \frac{\sqrt{3}}{2},\,and\,\,x=\frac{A}{2} \right)$ Q.2: A particle executing linear SHM has speeds v1 and v2 at distances y1 and y2 from the equilibrium position. What is the frequency of the oscillation of the particle? Solution: $v=\omega \sqrt{{{A}^{2}}-{{y}^{2}}}$ ${{v}^{2}}={{\omega }^{2}}\left( {{A}^{2}}-{{y}^{2}} \right)$ $\frac{{{v}^{2}}}{{{\omega }^{2}}}=\left( {{A}^{2}}-{{y}^{2}} \right)$ $\frac{{{v}^{2}}}{{{\omega }^{2}}}+{{y}^{2}}={{A}^{2}}$ … (1) ${{A}^{2}}=\frac{v_{1}^{2}}{{{\omega }^{2}}}+y_{1}^{2}=\frac{v_{2}^{2}}{{{\omega }^{2}}}+y_{2}^{2}$ $\frac{v_{1}^{2}-v_{2}^{2}}{{{\omega }^{2}}}=y_{2}^{2}-y_{1}^{2}$ ${{\omega }^{2}}=\frac{v_{1}^{2}-v_{2}^{2}}{y_{2}^{2}-y_{1}^{2}}$ $\omega =2\pi f$ $f=\frac{\omega }{2\pi }=\frac{1}{2\pi }{{\left[ \frac{v_{1}^{2}-v_{2}^{2}}{y_{2}^{2}-y_{1}^{2}} \right]}^{\frac{1}{2}}}$ Q.3: A particle is executing SHM of amplitude A. (a) What fraction of the total energy is kinetic when displacement is quarter of the amplitude? (b) At what displacement is the energy are half kinetic and half potential? (a) $90%,\,\,\frac{A}{\sqrt{2}}$ (b) $94%,\,\,\frac{A}{\sqrt{3}}$ (c) $9%,\,\,\frac{A}{\sqrt{4}}$ (d) $93%,\,\,\frac{A}{\sqrt{2}}$ Solution: $KE=\frac{1}{2}m{{\omega }^{2}}\left( {{A}^{2}}-{{y}^{2}} \right)$ $PE=\frac{1}{2}m{{\omega }^{2}}{{y}^{2}}$ $E=\frac{1}{2}m{{\omega }^{2}}{{A}^{2}}$ (a) at $y=\frac{A}{4},$ KE becomes $KE=\frac{1}{2}m{{\omega }^{2}}\left( {{A}^{2}}-{{\left( \frac{A}{4} \right)}^{2}} \right)$ = $\frac{15}{16}\frac{1}{2}m{{\omega }^{2}}{{A}^{2}}$ = 93% of total energy is KE (b) KE = PE $\frac{1}{2}m{{\omega }^{2}}\left( {{A}^{2}}-{{y}^{2}} \right)=\frac{1}{2}m{{\omega }^{2}}{{y}^{2}}$ $y=\frac{A}{\sqrt{2}}$ Q.4: Three springs each of force constant k are connected at equal angles with respect to each other to a common mass. If the mass is pulled by anyone of the spring then the time period of its oscillation? (a) $2\pi \sqrt{\frac{M}{K}}$ (b) $2\pi \sqrt{\frac{M}{2K}}$ (c) $2\pi \sqrt{\frac{2M}{3K}}$ (d) $2\pi \sqrt{\frac{2M}{K}}$ Solution: It is pulled by an upper spring each are making equal angles. $\cos 60{}^\circ =\frac{\Delta x}{x}$ $x\cos 60{}^\circ =\Delta \,x$ $\frac{x}{2}=\Delta \,x$ Fnet ${{F}_{net}}=Kx+2\frac{Kx}{2}\cos 60{}^\circ$ = $Kx+\frac{Kx}{2}=\frac{3Kx}{2}$ ${{K}_{eqn}}x=\frac{3Kx}{2}$ ${{K}_{eqn}}=\frac{3K}{2}$ $T=2\pi \sqrt{\frac{M}{K}}=2\pi \sqrt{\frac{2M}{3K}}$ Q.5: A particle of mass 0.2 kg is executing SHM of amplitude 0.2 m. When the particle passes through the mean position. It mechanical energy is $4\times {{10}^{-3}}J$ find the equation of motion of the particle if the initial phase of oscillation is 60°. 
(a) $0.1\sin \left( 2t+\frac{\pi }{4} \right)$ (b) $0.2\sin \left( \frac{1}{2}t+\frac{\pi }{3} \right)$ (c) $0.2\sin \left( t+\frac{\pi }{3} \right)$ (d) $0.1\cos \left( 2t+\frac{\pi }{4} \right)$ Solution: Equation of motion of particle is $y=A\sin \left( \omega t+\phi \right)$ A = 0.2 m, $\omega =?,\,\,\phi =60{}^\circ ,\,\,ME=4\times {{10}^{-3}}J$ From energy $E=\frac{1}{2}m{{\omega }^{2}}{{A}^{2}}$, $4\times {{10}^{-3}}=\frac{1}{2}\left( 0.2 \right){{\omega }^{2}}{{\left( 0.2 \right)}^{2}}$, ${{\omega }^{2}}=\frac{4\times {{10}^{-3}}\times 2}{\left( 0.2 \right){{\left( 0.2 \right)}^{2}}}=\frac{8\times {{10}^{-3}}}{0.008}=1\,rad\,{{s}^{-1}}$, $y=0.2\sin \left( t+\frac{\pi }{3} \right)$ Q.6: A block of mass 0.1 kg which slides without friction on a 30° incline is connected to the top of the incline by a massless spring of force constant 40 Nm-1. If the block is pulled slightly from its mean position what is the period of oscillation? (a) $\pi s$ (b) $\frac{\pi }{10}s$ (c) $\frac{2\pi }{5}s$ (d) $\frac{\pi }{2}s$ Solution: $T=2\pi \sqrt{\frac{M}{K}}=2\pi \sqrt{\frac{0.1}{40}}$ = $\frac{\pi }{10}s$ ### You might also be interested in: Test your Knowledge on Spring mass system
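As a numerical illustration of the series and parallel rules given earlier in this section (the values below are arbitrary examples, not taken from the problems above):

```python
import numpy as np

def k_parallel(k1, k2):
    return k1 + k2                      # k_p = k1 + k2

def k_series(k1, k2):
    return k1 * k2 / (k1 + k2)          # k_s = k1*k2 / (k1 + k2)

def period(m, k):
    return 2 * np.pi * np.sqrt(m / k)   # T = 2*pi*sqrt(m/k)

m, k1, k2 = 0.5, 100.0, 300.0           # kg, N/m, N/m
print(f"parallel: k = {k_parallel(k1, k2):6.1f} N/m, T = {period(m, k_parallel(k1, k2)):.3f} s")
print(f"series:   k = {k_series(k1, k2):6.1f} N/m, T = {period(m, k_series(k1, k2)):.3f} s")
```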
# Why do physicists use plane waves so much? When looking at solutions of the Dirac equation people tend to give solutions as $$\psi^{(1)} = e^{\frac{-imc^2t}{\hbar}}\begin{pmatrix}1\\0\\0\\0\\\end{pmatrix},\psi^{(2)} = e^{\frac{-imc^2t}{\hbar}}\begin{pmatrix}0\\1\\0\\0\\\end{pmatrix},\psi^{(3)} = e^{\frac{imc^2t}{\hbar}}\begin{pmatrix}0\\0\\1\\0\\\end{pmatrix},\psi^{(4)} = e^{\frac{imc^2t}{\hbar}}\begin{pmatrix}0\\0\\0\\1\\\end{pmatrix}$$ To me this seems useless because you cannot normalize it. Doesn't it represent a situation where there is infinite uncertainty in position and zero uncertainty in momentum? How is that useful? Surely it would be more useful to give a wave packet solution to the Dirac equation? It's the same problem when looking at solutions of the Schrodinger equation for a free particle. There, the given solution is the plane wave $$e^{i(kx-\omega t)}$$, which you cannot normalize. I understand that the equation is linear and that you can represent the solution as a sum of these stationary states, but wouldn't it be more logical to give the general solution, the Gaussian wave packet? $$\psi=\frac{1}{\sqrt{\pi+\frac{i\hbar t}{m}}}e^{\frac{-x^2}{2(\pi+\frac{i\hbar t}{m})}}$$ You can also construct solutions with sums of this and it makes much more sense because you can actually normalize it. You can add them, see how the particles interfere with each other, understand the role of complex numbers, etc. I feel like there is some concept that I am missing because otherwise, I wouldn't see this plane wave solution so much. • It's simply easier for a general understanding to fix the uncertainty for one variable (momentum) and have the wave become infinite in space than to calculate with a mixture of both being uncertain. Think about explaining the vibrations of a guitar string: are you picturing standing waves with certain frequencies or do you think of a dispersing wave packet? Also: simply because we are taught plane waves does not mean we are obsessed with them. There will always be more accurate (more complex) solutions to specific physical problems but you have to start abstracting somewhere... – Asmus Nov 20 '20 at 8:05 • In my case the premise is wrong. The solution to the free particle was found to be/given as an infinite sum (integral) of plane waves, a wave packet that one could normalize. We saw that it "decayed" with time and that the expectation value of the position moved as the CoM of a classical particle, etc. So yeah, this question is opinion based. Not a question suited for this website. – AccidentalBismuthTransform Nov 20 '20 at 9:07 • As a side point, you can have normalizable momentum eigenstates in compact spaces. As others have pointed out, the cash value of momentum eigenstates is not necessarily as physically realizable states (i.e., normalizable states) but rather as a useful basis for the Hilbert space. Useful because they are generators of translations in space and consequently diagonalize the free Hamiltonian, etc. – Dvij D.C. Nov 21 '20 at 1:12 • Have you ever tried doing a practical calculation using a wave-packet basis? – tparker Nov 22 '20 at 4:39 • @tparker I keep hearing this word basis and I'm somewhat ashamed to admit that I have no clue what people mean when they say it. Why would you need to worry about basis vectors for a complex vector space that is at most four dimensions? – Ryan Parikh Nov 22 '20 at 15:34 Plane waves in quantum mechanics are usually the eigenstates of the momentum operator, which is what makes them very useful. 
Momentum conservation is the manifestation of translational invariance in space, which is arguably what makes plane waves also very useful in classical contexts, whenever one deals with a homogeneous medium. Mathematically, plane waves correspond to the Fourier expansion, which is also a very convenient mathematical tool. On a more general level: expanding in terms of the appropriate orthogonal basis is often a good idea. • Also note their use as the underlying fields on which creation and annihilation operators work in quantum field theory; wave packets would be an unnecessary complication in the very successful calculations of Feynman diagrams. – anna v Nov 19 '20 at 13:25 • +1: To spell out the connection between translations and momentum more explicitly, momentum eigenstates are generators of translation in space. So even if the theory is not translationally invariant, you would end up needing them one way or the other. – Dvij D.C. Nov 21 '20 at 1:08 • @annav: I misread that as "very stressful calculations" ... – alexarvanitakis Nov 21 '20 at 2:42 I understand that the equation is linear and that you can represent the solution as a sum of these stationary states That's the whole story right there. Plane-wave solutions are useful because every other solution can be built up as a decomposition of plane-wave contributions. However, wouldn't it be more logical to give the general solution, the Gaussian wave packet? There is no meaningful or useful sense in which the Gaussian wavepacket is a "general" solution. You can also construct solutions with sums of this and it makes much more sense because you can actually normalize it. You can add them, see how the particles interfere with each other, understand the role of complex numbers, etc. This is indeed true, and Gaussian-wavepacket solutions are very useful in understanding the dynamics, but they are of very limited usefulness in studying the behaviour of an arbitrary initial condition. Gaussian wavepackets are not a basis, because they are not mutually orthogonal. Moreover, while they do span the space in the sense that $$\frac1\pi\int |\alpha⟩⟨\alpha|\mathrm d^2\alpha = \mathbb I$$ using coherent-state notation, they are overcomplete, and this completeness relationship is not particularly useful, basically because the Segal-Bargmann transform is not a particularly convenient tool, especially when compared with the Fourier transform. • Slight technicality: being orthogonal is not a requirement for being a basis. – Javier Nov 21 '20 at 1:45 • @Javier Generally in inner product spaces we are interested not in bases but orthonormal bases, because that's the notion that respects the structure of the space. – Mario Carneiro Nov 21 '20 at 5:05 • @MarioCarneiro I know, but that's an orthogonal basis, not just a basis. I did say it was a slight technicality! – Javier Nov 21 '20 at 13:50 • That's the whole story right there. Plane-wave solutions are useful because every other solution can be built up as a decomposition of plane-wave contributions. But that is not a peculiar property at all; bases are a dime a dozen and not particularly meaningful. It's their other properties that make them useful. – Federico Poloni Nov 21 '20 at 18:31 • @all If you think you can provide a clearer presentation then you're obviously welcome to write your own. – Emilio Pisanty Nov 21 '20 at 20:41 Obsession: Plane waves diagonalise the free Hamiltonian and are useful as a basis for perturbation expansions of scattering problems or periodic systems.
For atomic physics they are not useful. Fiasco: Since the plane waves are periodic, you can think of these as solutions in a box, normalised by $$1/\sqrt{V}$$. Since the normalisation factor does not add anything it is often dropped. In classical fields, the solution of the wave differential equation describes the real stuff, after adding the specific boundary conditions. So, a sinusoidal plane wave (SPW) can be a real solution. But other plane waves, also solutions of the differential equation, are not necessarily sinusoidal. In this case the SPWs change their status from a real solution to a basis for the real solution. I understand that in QM that change of status is complete. SPWs are no longer real solutions; they only define a basis for them. It seems more precise to call them eigenfunctions instead of solutions, to avoid taking them as physical entities. But as they are solutions of the differential equations, the wording ambiguity is here to stay. There, the given solution is the plane wave $$e^{i(kx-\omega t)}$$, which you cannot normalize. You can't normalize it in isolation, but you don't need to normalize the basis vectors, you just need to normalize the actual vectors you're working with. Do you have the same qualms with waves given in position space? Position space and momentum space are dual. If we give a wave as a function that assigns a complex amplitude to each position in physical space, then we're using the states with zero uncertainty in position and infinite uncertainty in momentum as basis vectors, and those states can't be normalized either. When you write $$\psi=\frac{1}{\sqrt{\pi+\frac{i\hbar t}{m}}}e^{\frac{-x^2}{2(\pi+\frac{i\hbar t}{m})}}$$, strictly speaking, that's a function. To make it a vector, we have to treat that function as giving the coefficients of an uncountable number of vectors: $$\psi = \sum_{x \in X}f(x) \delta_x$$ where $$\delta_x$$ is a state with definite position $$x$$. Also, wouldn't we need more parameters to have a general solution, such as $$\psi=\frac{1}{\sqrt{\pi+\frac{i\hbar t}{m}}}e^{\frac{-(x-x_0)^2}{2(\pi+\frac{i\hbar t}{m})}}$$? As other answers have said, plane waves are eigenstates of the momentum operator, which means that any operator based on the momentum operator will be diagonal in terms of this basis. If you have $$\hat {\mathcal H}\psi = \lambda \psi$$, then $$e^{-\frac i {\hbar}\hat {\mathcal H}t}\psi$$ is just $$e^{-\frac i {\hbar}\lambda t}\psi$$. That means that each plane-wave state evolves independently. If you had a Gaussian basis, the time evolution of one time-independent basis state would have to involve other states. And if you have a medium with frequency-dependent propagation speeds, any state that doesn't have a fixed frequency is going to exhibit dispersion. There can also be frequency-dependent damping.
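To make the "each plane-wave mode just picks up a phase" point above concrete, here is a small NumPy sketch (my own illustration, not from any of the answers): it evolves a normalized Gaussian packet under the free Schrödinger equation by Fourier-transforming to the plane-wave basis, multiplying each mode by its phase $e^{-i\hbar k^2 t/2m}$, and transforming back.

```python
import numpy as np

hbar = m = 1.0                              # work in units where hbar = m = 1
N, L = 2048, 200.0                          # grid points and box size (a large box stands in for the line)
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)     # wavenumbers of the discrete plane-wave basis

# normalized Gaussian packet with width sigma and mean momentum k0
sigma, k0 = 1.0, 2.0
psi0 = (np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (2 * sigma**2) + 1j * k0 * x)

def evolve(psi, t):
    """Free evolution: each plane-wave mode only acquires a phase."""
    phases = np.exp(-1j * hbar * k**2 * t / (2 * m))
    return np.fft.ifft(phases * np.fft.fft(psi))

psi_t = evolve(psi0, t=5.0)
print("norm stays 1:", np.sum(np.abs(psi_t)**2) * dx)
print("<x> drifts to ~ k0*t/m =", np.sum(x * np.abs(psi_t)**2) * dx)
```

The packet stays normalized and its mean position drifts at the group velocity $k_0/m$ while it spreads, exactly the wave-packet behaviour described in the question, but obtained by working entirely in the plane-wave basis.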
# Jetson Nano + IMX327 (Low Light Camera) doesn't work. Dear all, Having acquired the Sony IMX327 (MIPI, 2-lane) camera, I am trying to get it working on the Nano. But I have a problem using the gstreamer command:

```
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3820, height=2464, framerate=21/1, format=NV12' ! nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=616' ! nvvidconv ! nvegltransform ! nveglglessink -e
```

Please, do you have any feedback regarding this problem? I think Nvidia has some difficulties with this kind of camera, but I am not sure. Do I need to install a specific driver, or compile the kernel? Thanks for your help and support. Best regards, Chris. Did you integrate the IMX327 driver into your system? If not, you need a driver for this sensor. Please check the document below. https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%2520Linux%2520Driver%2520Package%2520Development%2520Guide%2Fcamera_sensor_prog.html%23 Thank you ShaneCCC, I will check that.
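For what it's worth, once a working IMX327 driver is in place, one common way to sanity-check the capture path is to open the nvarguscamerasrc pipeline from OpenCV. The sketch below is an assumption, not an NVIDIA-provided recipe: the 1920x1080@30 mode is a guess at the sensor's native mode (the 3820x2464 resolution in the original command exceeds what a 2 MP IMX327 can deliver, which may itself be part of the problem), and the exact caps should be taken from whatever modes the installed driver actually registers.

```python
import cv2  # OpenCV built with GStreamer support is assumed

# Hypothetical pipeline for an IMX327 in a 1920x1080 mode; adjust width/height/framerate
# to the modes your driver reports (e.g. via `v4l2-ctl --list-formats-ext`).
pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1, format=NV12 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("Pipeline failed to open - check that an IMX327 driver is installed")

ok, frame = cap.read()
print("Got a frame:", ok, frame.shape if ok else None)
cap.release()
```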
affine transformations, strategy for finding invariant straight lines At first let us introduce some notation. $\mathcal{A}^n$ is an $n$-dimensional affine space and $V$ is its associated vector space. For any affine subspace $\mathcal{M}$, its associated vector space will be denoted $V_{\mathcal{M}}$. For the points of the affine space I will use bold capital letters such as $\mathbf{Q},\mathbf{P},\mathbf{R},\mathbf{A},..$ and for the vectors of $V$ bold small ones. The vector $\mathbf{v} \in V$ produced by $\mathbf{Q}$ and $\mathbf{P}$ will be denoted $\mathbf{QP}$. Finally, if $S=\left\{\mathbf{O},\mathbf{A_1},\mathbf{A_2},\cdots,\mathbf{A_n}\right\}$ is a coordinate system with origin $\mathbf{O}$, then its associated basis of $V$ will be $\mathcal{B}_s = \{\mathbf{a_i} = \mathbf{OA_i}: i=1,\cdots,n\}$, and we write $\mathbf{OX}=\mathbf{x}$ for $\mathbf{X} \in \mathcal{A}^n$. Consider an affine transformation $f:\mathcal{A}^n \to \mathcal{A}^n$, with $f(\mathbf{x})=A\mathbf{x}+\mathbf{p}$. What is the general strategy we follow to find the invariant straight lines under the transformation $f$? Consider a line $\varepsilon$ parallel to $\mathbf{v}$ passing through the point $\mathbf{Q}$, so $$\varepsilon=\left\{ t\mathbf{v}+\mathbf{q}:t \in \mathbb{R} \right\}$$ We have that $$f(\varepsilon)=\left\{ tA\mathbf{v}+A\mathbf{q}+\mathbf{p}:t\in\mathbb{R} \right\}$$ In order to have $\varepsilon=f(\varepsilon)$ there should exist a $\lambda \in \mathbb{R}$ such that $$A\mathbf{v}=\lambda \mathbf{v} \text{ and }A\mathbf{q}=\lambda \mathbf{q}+\mathbf{p}$$ So the problem is to find the eigenvalues of $A$ and their corresponding eigenvectors and then solve the equation $A\mathbf{q}=\lambda \mathbf{q}+\mathbf{p}$. Theoretically, is everything alright? Here is a basic example: Consider an affine transformation $f:\mathcal{A}^2 \to \mathcal{A}^2$, with $f(\mathbf{x})=\begin{bmatrix}2&3\\3 &10\end{bmatrix}\mathbf{x}$; which are the invariant lines of the plane? Using the above method we find that the eigenvalues of $A$ are $\lambda_1=1$ and $\lambda_2=11$ and their corresponding eigenvectors are $\mathbf{v}_1= \begin{bmatrix} -3\\1 \end{bmatrix}$ and $\mathbf{v}_2= \begin{bmatrix} 1\\3 \end{bmatrix}$. So the invariant lines of the transformation $f$ should have the form $$t\mathbf{v}_i+\mathbf{q}_i \text{ with } i=1,2$$ but how will I find the points $\mathbf{q}_i$? _______________________ My first try above is wrong, but I believe I have made some progress, so let me share it with you; I am not deleting the former text so that the reader can follow my thoughts. Consider an affine transformation $f:\mathcal{A}^n \to \mathcal{A}^n$, with $f(\mathbf{x})=A\mathbf{x}+\mathbf{p}$, and a line $\varepsilon$ parallel to $\mathbf{v}$ passing through the point $\mathbf{Q}$, so $$\varepsilon=\left\{ t\mathbf{v}+\mathbf{q}:t \in \mathbb{R} \right\}$$ What is the general strategy we follow to find the invariant straight lines under the transformation $f$? We have that $$f(\varepsilon)=\left\{ tA\mathbf{v}+A\mathbf{q}+\mathbf{p}:t\in\mathbb{R} \right\}$$ (Here is where my reasoning was wrong.) In order to have $f(\varepsilon)=\varepsilon$ we need 1. $f(\varepsilon)\parallel\varepsilon$ 2. and $f(\varepsilon) \cap \varepsilon \neq \varnothing$ So at first we should find for which $\mathbf{v}$ we have that $f(\varepsilon)\parallel\varepsilon$.
In other words, we should find a $\lambda \in \mathbb{R}$ such that $$A\mathbf{v}=\lambda \mathbf{v}$$ Once we have found the eigenvalues $\lambda_i$ of $A$ and their corresponding eigenvectors $\mathbf{v_i}$, $i \in I$, we know that every line of the form $$\varepsilon=\left\{ t \mathbf{v_i}+\mathbf{q}:t \in \mathbb{R} \right\}$$ is mapped through $f$ to a parallel one. Now for every $i\in I$ we should solve the following equation with unknowns $t_1, t_2$ and $\mathbf{q}$: $$t_1 \mathbf{v_i} + \mathbf{q} = t_2 A \mathbf{v_i} + A \mathbf{q} + \mathbf{p}$$ Even though this is a fairly general method, the problem seems far from solved, at least in general. The general case seems to be too complicated, but we may extract interesting results for the cases $n=2,3$. Many new questions have come to my mind. For example, what happens when the eigenvalues are complex numbers? Or when the $\mathbf{p}$ from the last equation belongs to a line parallel to $f(\varepsilon)$? Or... I will soon also add the solution to the basic example. Feel free to add your thoughts; I still don't know if my reasoning is correct. Moreover, I am sure there is a great resource somewhere explaining this problem, so if someone knows it, please share your reference. • Nice question. ${}{}$ – copper.hat Feb 11 '16 at 16:55 • I edited the post with some new thoughts – karhas Feb 12 '16 at 3:17 • If you work in homogeneous coordinates, your problem becomes that of finding the two-dimensional invariant subspaces of $M=\small{\begin{bmatrix}A&\mathbf p\\\mathbf 0^T&1\end{bmatrix}}.$ This is fairly easy if $M$ is diagonalizable (or its minimal polynomial factors into linear and irreducible quadratic factors), but rather challenging when there are defective eigenvalues. – amd Apr 20 at 7:21
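Here is a small SymPy sketch (my own check, not part of the original post) for the worked example $f(\mathbf{x})=A\mathbf{x}$ above. Writing the intersection requirement with $\mathbf{p}=\mathbf{0}$ as $(A-I)\mathbf{q}+\mathbf{p}\in\operatorname{span}(\mathbf{v})$, it loops over the eigen-directions and prints the condition a base point $\mathbf{q}$ must satisfy.

```python
import sympy as sp

A = sp.Matrix([[2, 3], [3, 10]])
p = sp.Matrix([0, 0])            # this example has no translation part
I2 = sp.eye(2)

q1, q2 = sp.symbols('q1 q2')
q = sp.Matrix([q1, q2])

for lam, mult, vecs in A.eigenvects():
    v = vecs[0]                                  # direction of a candidate invariant line
    w = (A - I2) * q + p                         # A q + p must land back on the line through q
    condition = sp.expand(sp.Matrix.hstack(w, v).det())   # w parallel to v  <=>  det([w | v]) = 0
    print(f"lambda = {lam}: eigenvector {list(v)}, base-point condition: {condition} = 0")
```

For this matrix it reports $10q_1+30q_2=0$ for $\lambda=1$ (only the eigenline through the origin is invariant in the direction $(-3,1)$) and an identically zero condition for $\lambda=11$ (every line with direction $(1,3)$ is invariant), which answers the "how do I find the $\mathbf{q}_i$" question for this example.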
# Probability, Expectation, Cov • August 14th 2009, 04:28 AM Maccaman Probability, Expectation, Cov Let X and Y be 2 random variables with $|\mathbb{E}[X]|, |\mathbb{E}[Y]|,$ and $|\mathbb{E}[\frac{X}{Y}]|$ all finite, and with $\mathbb{P}(Y = 0) = 0$ and $\mathbb{E}[Y] \ne 0$. Prove that $\mathbb{E}[\frac{X}{Y}] = \frac{\mathbb{E}[X]}{\mathbb{E}[Y]}$ if and only if Cov $(Y,\frac{X}{Y}) = 0$ • August 14th 2009, 04:35 AM kobylkinks solution Use Cov$(Y, X/Y) = \mathbb{E}[Y\cdot X/Y] - \mathbb{E}[Y]\,\mathbb{E}[X/Y]$. Because the random variable $Y\cdot X/Y$ equals $X$ at all points except those where $Y=0$, the condition $\mathbb{P}(Y=0)=0$ gives $\mathbb{E}[X]=\mathbb{E}[Y\cdot X/Y]$. Hence Cov$(Y, X/Y) = \mathbb{E}[X] - \mathbb{E}[Y]\,\mathbb{E}[X/Y]$, which is zero if and only if $\mathbb{E}[X/Y] = \mathbb{E}[X]/\mathbb{E}[Y]$, since $\mathbb{E}[Y]\neq 0$. • August 15th 2009, 01:01 AM Maccaman wow, that was surprisingly easy. Thanks for your help.
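A quick numerical sanity check of the identity used in the solution (my own sketch, not part of the thread): build $X$ and $Y$ so that $Y$ and $X/Y$ are independent, hence Cov$(Y, X/Y)=0$, and compare $\mathbb{E}[X/Y]$ with $\mathbb{E}[X]/\mathbb{E}[Y]$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6

Y = rng.uniform(1.0, 2.0, n)    # P(Y = 0) = 0 and E[Y] != 0
R = rng.exponential(3.0, n)     # R = X/Y, drawn independently of Y
X = Y * R                       # so Cov(Y, X/Y) = Cov(Y, R) = 0

print("Cov(Y, X/Y) ~", np.cov(Y, X / Y)[0, 1])        # close to 0
print("E[X/Y]      ~", np.mean(X / Y))                # close to 3
print("E[X]/E[Y]   ~", np.mean(X) / np.mean(Y))       # close to 3 as well
```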
Question (a) An MRI technician moves his hand from a region of very low magnetic field strength into an MRI scanner’s 2.00 T field with his fingers pointing in the direction of the field. Find the average emf induced in his wedding ring, given its diameter is 2.20 cm and assuming it takes 0.250 s to move it into the field. (b) Discuss whether this current would significantly change the temperature of the ring. Answers: (a) $3.04 \textrm{ mV}$ (b) A temperature change of $5.8 \times 10^{-4} \textrm{ C}^\circ$ is insignificant.
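The arithmetic behind answer (a) is just emf $=\Delta\Phi/\Delta t = B\pi r^2/\Delta t$; this little sketch (mine, not from the solution page) reproduces it.

```python
import math

B = 2.00        # magnetic field in tesla
d = 2.20e-2     # ring diameter in metres
dt = 0.250      # seconds taken to move the hand into the field

area = math.pi * (d / 2) ** 2       # ring area, about 3.80e-4 m^2
emf = B * area / dt                 # average induced emf
print(f"emf = {emf * 1e3:.2f} mV")  # -> 3.04 mV
```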
# $\omega$-consistency and related terms We know that a theory $T$ is $\omega$-inconsistent if there is a formula $\psi$ such that $T$ proves $(\exists x)\psi(x)$, and $T$ also proves $\lnot \psi(n)$ separately for each standard natural number $n$. So $T$ is $\omega$-consistent if it is not $\omega$-inconsistent. Is there a name for the following property: for every formula $\psi$, if $\psi(0), \psi(1), \psi(2),...$ can all be proven in $T$, then $\forall x \psi(x)$ can be proven in $T$? And what is the connection between this property and $\omega$-inconsistency / consistency? Thank you. - In my Gödel book §21.6, I give the following definition, which I took/take to be pretty standard terminology: An arithmetic theory $T$ is $\omega$-incomplete iff, for some open wff $\varphi\mathsf{(x)}$, $T$ can prove each $\varphi\mathsf{(\overline{m})}$ but $T$ can't go on to prove $\forall \mathsf{x}\varphi\mathsf{(x)}$. So if $T$ is able to prove $\forall \mathsf{x}\varphi\mathsf{(x)}$ when $T$ can prove each $\varphi\mathsf{(\overline{m})}$, for any $\varphi\mathsf{(x)}$, then $T$ would (as Luca says) naturally be called $\omega$-complete. What is the connection between $\omega$-incompleteness and $\omega$-inconsistency? Well, we can say this: $\omega$-incompleteness in a theory of arithmetic is a regrettable weakness; if $T$ can prove each $\varphi\mathsf{(\overline{m})}$ it would be very nice if $T$ were always able to prove $\forall \mathsf{x}\varphi\mathsf{(x)}$ too. Sadly, Gödel's incompleteness theorem tells us that, surprisingly, nice enough theories $T$ can't be this nice! By contrast $\omega$-inconsistency is not just a regrettable weakness but a Very Bad Thing indeed (not quite as bad as outright inconsistency, maybe, but still bad). For evidently, a theory that can prove each of $\varphi\mathsf{(\overline{m})}$ and yet also prove $\neg\forall \mathsf{x}\varphi\mathsf{(x)}$ is just not going to be interpretable as being about the natural numbers. - thanks to everyone! –  user75221 Aug 14 '13 at 9:33 The rule you're referring to is called the $\omega$-rule, and if you add it to the axioms and rules of inference of first-order logic you get the so-called $\omega$-logic. Thus I would say that the name you're looking for is consistent with the $\omega$-rule, or consistent in $\omega$-logic. A shorter name is $\omega$-complete. A theory $T$ has an $\omega$-model if and only if it is consistent in $\omega$-logic (see Proposition 2.2.13 in C. C. Chang and H. Jerome Keisler's Model Theory). It follows that consistency with the $\omega$-rule implies $\omega$-consistency, but it is actually a stronger condition. - I believe the property in the original question is that the theory is closed under the $\omega$-rule. –  Carl Mummert Aug 13 '13 at 11:34
# How to be a good cop on the problem-solving beat Guy Blelloch is best known for his seminal work on parallel computing; if you need advice on parallelism he is the one to ask. Not surprisingly, he is a strong advocate of parallel programming. See his essay, “Is Parallel Programming Hard?” Spoiler alert: he argues that it is not. Today we wish to talk about complexity theory as an enabler of problem solving, rather than a disabler when a problem is proved hard for ${\mathsf{NP}}$ or some other complexity class of difficult problems. Blelloch’s approach emphasizes how parallel structure emerges from specific problems, rather than focusing on control in parallel machine systems. The algorithms realizing this structure are informed by computational complexity theory. For instance, the “scan programs” in the book of his Ph.D. thesis are influenced by theorems relating vector machine models to the circuit classes of the NC hierarchy. Blelloch has taught a course at Carnegie Mellon titled “Algorithms in the Real World” for many years. This covers classic algorithms in text compression, string searching, computational biology, high-dimensional geometry, linear versus integer programming, cryptography, and others. Some of these areas are informed by complexity, such as ${\mathsf{NP}}$-hardness, reduction to and from factoring, and hardness of approximation. However, even with immediately practical tasks like computing Nash equilibria of games, which as we noted in the last post is complete for a class called ${\mathsf{PPAD}}$, or pricing financial derivatives as discussed and shown hard here, complexity is playing the “bad cop,” policing attempts to find feasible exact algorithms where ostensibly there aren’t any. Ken’s colleagues Hung Ngo and Atri Rudra are developing a seminar to showcase complexity as the “good cop,” enabling applications. The rest of this post is by Atri Rudra. ## An Early Example The earliest example I, Atri, know is the now famous pattern matching algorithm by Donald Knuth, James Morris, and Vaughan Pratt, which is included in Blelloch’s course. The point is that automata theory was used to design a linear time pattern matching algorithm. The following quote is from the “Historical Remarks” section of the original paper: “This was the first time in Knuth’s experience that automata theory had taught him how to solve a real programming problem better than he could solve it before.” This whole section is worth reading to see how Knuth’s version of the algorithm was implicit in Stephen Cook’s linear time simulation of deterministic two-way pushdown automata (see this reference), which is clearly a complexity result. What could a “deterministic two-way pushdown automaton” have to do with a practical algorithm? Plenty. Automata theory in itself is a treasure-trove of practical applications. For instance, the ubiquitous regular expressions and parsers in compilers, as they exist today, would not be around without automata theory. ## Making Lower Bounds Your Friend Probably the best known use of complexity as an enabler is to change the rules of the game: use the hardness results to foil the adversary. Cryptography has done this with great effect. There are other similar works, e.g., recently there has been work on designing elections that cannot be manipulated under complexity assumptions: see the survey by Piotr Faliszewski, Edith Hemaspaandra and Lane Hemaspaandra. For the rest of the post, I’ll go back to the good cop role of complexity.
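Since the Knuth–Morris–Pratt algorithm was the opening example of this post, a compact Python sketch of it may be useful; this is a textbook-style rendering of the failure-function idea, not code from the original paper or from Blelloch's course.

```python
def kmp_search(text, pattern):
    """Return all start indices where pattern occurs in text, in O(len(text) + len(pattern)) time."""
    if not pattern:
        return list(range(len(text) + 1))

    # failure[i] = length of the longest proper prefix of pattern[:i+1] that is also a suffix
    failure = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = failure[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        failure[i] = k

    matches, k = [], 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = failure[k - 1]          # fall back in the pattern instead of rescanning the text
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            matches.append(i - k + 1)
            k = failure[k - 1]
    return matches

print(kmp_search("abababca", "abab"))   # -> [0, 2]
```

The failure function is exactly the automaton-flavoured part: it records, for each prefix of the pattern, where the matcher should fall back on a mismatch, so the text pointer never moves backwards and the whole search runs in linear time.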
A small bit of warning though: when I talk about practical applications below, I will not be talking about results that can find themselves in the complexity equivalent of the Algorithms in The Field Workshop—rather I will be content to talk about applications where one uses complexity to solve problems that have a clear and direct practical motivation. Given that I have ruled out cryptography, I hope you will allow me this luxury. ## Applications of Complexity-Based Tools Complexity theory has developed many tools that can and should find applications in practical areas where on the surface the two do not seem to have anything in common. Below the surface are beautiful connections that are being exploited and can be developed further. Let me try to substantiate this with the following examples: ${\bullet}$ Unbalanced bipartite expanders. An expander is a sparse graph in which any subset of vertices that isn’t too big has many edges leaving it, or alternatively has many neighboring vertices. These objects in general have many applications both in complexity theory and in other practical areas—see for instance this great survey by Shlomo Hoory, Nathan Linial, and Avi Wigderson. I picked unbalanced bipartite expanders for my seminar, as they have been developed almost solely by complexity theorists. Unbalanced means that one side of the bipartite graph has many more vertices than the other. These objects have been used to construct compressed sensing matrices, as noted in this survey by Anna Gilbert and Piotr Indyk. The particular version of the compressed sensing problem is to design a matrix ${\Phi}$ with many fewer rows than columns such that, for any real-valued vector ${x}$, given ${\Phi x}$ one can recover a vector ${\hat{x}}$ such that $\displaystyle \|x-\hat{x}\|_1 \le C\cdot\|x-x_k\|_1.$ Here ${x_k}$ is the vector in which all but the largest ${k}$ components of ${x}$ have been zeroed out, and ${C\ge 1}$ is the approximation factor. When ${\Phi}$ is the adjacency matrix of a lossless unbalanced bipartite expander, one can obtain ${C}$ arbitrarily close to ${1}$, and one can efficiently reconstruct ${\hat{x}}$ from ${\Phi x}$. A general resource page on compressed sensing shows other applications. ${\bullet}$ Randomness extractors. Pseudo-randomness leads to another great place to look for applications besides expanders, namely extractors, which enable refining the output of partially random sources down to pure uniform bits. Indeed, the best known construction of extractors, which is due to Venkat Guruswami, Chris Umans, and Salil Vadhan, uses unbalanced bipartite expanders. Michael Mitzenmacher and Vadhan used these techniques also to demonstrate why limited-wise independent hash functions seem to work as well as what theory predicts for fully independent hash functions in practical applications, owing to the fact that data often has enough inherent partial randomness. ${\bullet}$ List decoding. Oded Goldreich and Leonid Levin found the first applications of list decoding in complexity theory itself, and this opened the way to much algorithmic progress in list decoding being made by complexity theorists, as surveyed here. List decoding has the potential to correct more errors in practice than traditional decoding, as hailed in this August 2004 NSF item. List decoding has also been applied to IP traceback of denial-of-service attacks, to games of guessing secrets, and to verifying truthful storage of client data.
The last paper also applies Kolmogorov complexity—disclaimer: this paper is joint work with (my colleague Ram Sridhar’s student) Mohammad “Ifte” Husain, my colleague Steve Ko, and my student Steve Uurtamo. ${\bullet}$ Locally decodable codes. These are codes where one can compute an arbitrary message symbol with high probability by querying a very small number of (random) positions in a corrupted codeword. Complexity theorists can take complete credit for this coding primitive. Along with locally testable codes, these coding objects came out of the work on probabilistically checkable proofs (PCPs). Unlike list decoding, the theoretical limits on locally decodable codes are still not known, though there has been some recent progress starting with this breakthrough work of Sergey Yekhanin. Typically these codes are studied in the regime of a constant fraction of errors, where the amount of redundancy needed in the code has to be super-linear. Recently, locally decodable codes for a single error have been studied by Yekhanin with Parikshit Gopalan, Cheng Huang, and Huseyin Simitci. This work is tailored for distributed remote storage systems where the number of errors is small and local decoding translates to lower communication between servers needed to recreate the server that is down. ${\bullet}$ Superconcentrators. Superconcentrators are sparse graphs where certain designated sets of input and output nodes are connected by many node-disjoint paths. (One can construct superconcentrators using expanders.) These objects were originally studied by Leslie Valiant and others in the context of complexity lower bounds. This was also inspired by connections to switching networks, namely routing boxes that switch traffic from one fiber to another; for work showing the connections go both ways, see here and here and here. See also this post on a problem about routing network traffic on expanders. ## Models Complexity theory studies computation through the lens of various machine and program models. I believe that these models themselves can be a valuable export to practice. Probably the most widely exported model is that of communication complexity, which has found applications in diverse areas such as data stream algorithms and game theory. However, these are again used to prove lower bounds, on the bad-cop’s beat. Below are some examples I am aware of that prove positive results: ${\bullet}$ Natural Computational Models. The idea here is to model natural phenomena using computational ideas, and then to use complexity tools to prove interesting results. One recent example is Valiant’s new research on quantitative aspects of evolution. This and follow-up work bring in tools developed in computational learning theory. Another example is this work by Nikhil Devanur and Lance Fortnow developing a computational model of awareness and decision making, which uses Leonid Levin’s foundational complexity concept of universal enumeration. ${\bullet}$ Restricted Models. The idea is to take restricted computational models that have already been studied in complexity, and adapt them to model practical applications where upper bounds have practical value. For one of probably many examples, James Aspnes, Eric Blais, and Ryan O’Donnell have a project with me, my colleague Murat Demirbas, and my student Uurtamo that adapts decision trees to model delicate issues in single-hop wireless sensor networks. The conventional wisdom in wireless networks is that packet collisions are bad.
We created a model in which collisions become a cost akin to decision-tree queries as wireless devices detect them, thus measuring the efficiency of computing certain functions of data distributed over sensor nodes. Our first paper characterized the complexity of different functions in this new model. Then we ran experiments and showed in a follow-up paper that our algorithms improve over existing ones. So collisions are bad, but not always; sometimes you can make “collision-ade” out of them. ${\bullet}$ PCP was a Model. The PCP Theorem has been so important in hardness results that we seem to have forgotten that it originated in the positive-minded model of probabilistically checkable proofs. Madhu Sudan’s recent long survey restores attention to the PCP Theorem as a potential enabler of applications. Until the challenge of fully realizing them is met, however, complexity theorists may have to regard this as a proverbial “fish that got away” with its own success. There is some encouraging recent news on this front. Eli Ben-Sasson and Eran Tromer are leading a project with Ohad Barta, Alessandro Chiesa, Daniel Genkin and Arnon Yogev funded by the European Community that is aimed at implementing PCPs for practical applications in security and verifiable computation. The project is still in its early stages, but hopefully it will allow us to think of PCPs as the fish that almost got away. ## Open Problems What other categories of complexity as an enabler can we collect? What are other applications of the above examples? Ken and I (Dick) also wonder whether complexity theory got carried away with emphasis on classes—per our last post—and did not give enough to problems. Note that this parallels Guy Blelloch’s point insofar as most complexity classes were originally defined in terms of machines, rather than emerging directly from features of problems. Can complexity profit from such a shift in focus? Ken and I, finally, thank Atri for this contribution. We invite others who have something they wish to say to contact us for a potential “guest” post. We do not pay much to the author of the post—actually about \$2.56 less than Knuth would—but we hope that some of you may wish to do one. 1. September 9, 2011 9:30 am Parallel computing turns out to be a natural way of constructing fast serial algorithms for parametric search algorithms, as was discovered by Megiddo. (In parametric search you take a problem like shortest path but instead of constant lengths you have lengths which are affine in some parameter T.) Somehow each parallel step enables the serial algorithm to make a “batch” out of several steps. Here is a link to a more detailed description: http://daveagp.wordpress.com/2011/07/01/parametric-search/ 2. September 10, 2011 7:04 pm Didn’t complexity theory give us SAT, the queen of NP-complete problems? While NP-completeness seems like a “bad cop” aspect of complexity at first, today many NP problems are solved in practice by reducing them to SAT. Integer Linear Programming plays a similar role. September 11, 2011 9:13 pm Hi Amir, Great examples. When I listed the examples, I was thinking about my planned seminar with Hung, where we’re looking for specific applications, preferably with theorems, so I’m probably missing other relevant examples like yours. BTW integer linear programming is covered in the Algorithms in the Real World course. 3.
September 12, 2011 11:42 am When we view thermal noise as algorithmically incompressible complexity, then we appreciate that one of the many great lessons owed to quantum information theory is the intuition that thermal noise is an enabler of efficient dynamical simulation. That thermal noise (and its accompanying dissipation) makes simulation exponentially easier is true both classically (e.g., von Neumann viscosity) and quantum mechanically (see e.g. Plenio and Virmani’s arXiv:0810.4340). Moreover this principle is robust, in the sense that it is both formally provable for idealized dynamical systems and works well empirically for dynamical systems having real-world complexity. Systems engineers especially are exceedingly fond of this ubiquitous complexity-enabling principle, since real-world systems invariably are bathed in thermal noise. As David Deutsch writes entertainingly in his new book The Beginning of Infinity: Explanations That Transform the World: It may well be that the interiors of refrigerators constructed by physicists are by far the coldest and darkest places in the universe. [These refrigerators are] Far from typical. Broadly conceived, systems engineering seeks to predict and control the noisy universe that is outside of the physicists’ ultra-cold ultra-dark refrigerators … and recent advances in QIT now have given us fundamental reason to hope that the predictive aspect of this goal is efficiently achievable. A key point is that the complexity-theoretic noise-enabling principle can be exploited both forwards and backwards, in the sense that separative transport mechanisms that increase entropy globally can be reversed to create separative transport engines that diminish entropy locally; a classic textbook that entertainingly surveys this key branch of engineering is J. Calvin Giddings’ Unified Separation Science (1991).
# Speed of light

The speed of light in vacuum, commonly denoted ''c'', is a universal physical constant that is important in many areas of physics. The speed of light is exactly equal to 299,792,458 metres per second (approximately 300,000 km/s, or 186,000 mi/s). According to the special theory of relativity, ''c'' is the upper limit for the speed at which conventional matter or energy (and thus any signal carrying information) can travel through space. All forms of electromagnetic radiation, including visible light, travel at the speed of light. For many practical purposes, light and other electromagnetic waves will appear to propagate instantaneously, but for long distances and very sensitive measurements, their finite speed has noticeable effects. Starlight viewed on Earth left the stars many years ago, allowing humans to study the history of the universe by viewing distant objects.
When communicating with distant space probes, it can take minutes to hours for signals to travel from Earth to the spacecraft and vice versa. In computing, the speed of light fixes the ultimate minimum communication delay between computers, to computer memory, and within a CPU. The speed of light can be used in time-of-flight measurements to measure large distances to extremely high precision. Ole Rømer first demonstrated in 1676 that light travels at a finite speed (non-instantaneously) by studying the apparent motion of Jupiter's moon Io. Progressively more accurate measurements of its speed came over the following centuries. In a paper published in 1865, James Clerk Maxwell proposed that light was an electromagnetic wave and, therefore, travelled at speed ''c''. In 1905, Albert Einstein postulated that the speed of light with respect to any inertial frame of reference is a constant and is independent of the motion of the light source.
He explored the consequences of that postulate by deriving the theory of relativity and, in doing so, showed that the parameter ''c'' had relevance outside of the context of light and electromagnetism. Massless particles and field perturbations, such as gravitational waves, also travel at speed ''c'' in a vacuum. Such particles and waves travel at ''c'' regardless of the motion of the source or the inertial reference frame of the observer. Particles with nonzero rest mass can be accelerated to approach ''c'' but can never reach it, regardless of the frame of reference in which their speed is measured. In the special and general theories of relativity, ''c'' interrelates space and time and also appears in the famous equation of mass–energy equivalence, ''E'' = ''mc''². In some cases, objects or waves may appear to travel faster than light (e.g., phase velocities of waves, the appearance of certain high-speed astronomical objects, and particular quantum effects). The expansion of the universe is understood to exceed the speed of light beyond a certain boundary. The speed at which light propagates through transparent materials, such as glass or air, is less than ''c''; similarly, the speed of electromagnetic waves in wire cables is slower than ''c''. The ratio between ''c'' and the speed ''v'' at which light travels in a material is called the refractive index ''n'' of the material (''n'' = ''c''/''v'').
For example, for visible light, the refractive index of glass is typically around 1.5, meaning that light in glass travels at about 200,000 km/s; the refractive index of air for visible light is about 1.0003, so the speed of light in air is about 90 km/s slower than ''c''.

# Numerical value, notation, and units

The speed of light in vacuum is usually denoted by a lowercase ''c'', for "constant" or the Latin ''celeritas'' (meaning 'swiftness, celerity'). In 1856, Wilhelm Eduard Weber and Rudolf Kohlrausch had used ''c'' for a different constant that was later shown to equal √2 times the speed of light in vacuum. Historically, the symbol ''V'' was used as an alternative symbol for the speed of light, introduced by James Clerk Maxwell in 1865. In 1894, Paul Drude redefined ''c'' with its modern meaning. Einstein used ''V'' in his original German-language papers on special relativity in 1905, but in 1907 he switched to ''c'', which by then had become the standard symbol for the speed of light. ("The origins of the letter c being used for the speed of light can be traced back to a paper of 1856 by Weber and Kohlrausch ... Weber apparently meant c to stand for 'constant' in his force law, but there is evidence that physicists such as Lorentz and Einstein were accustomed to a common convention that c could be used as a variable for velocity. This usage can be traced back to the classic Latin texts in which c stood for 'celeritas', meaning 'speed'.") Sometimes ''c'' is used for the speed of waves in any material medium, and ''c''0 for the speed of light in vacuum. This subscripted notation, which is endorsed in official SI literature, has the same form as related electromagnetic constants: namely, ''μ''0 for the vacuum permeability or magnetic constant, ''ε''0 for the vacuum permittivity
or electric constant, and ''Z''0 for the impedance of free space. This article uses ''c'' exclusively for the speed of light in vacuum.

## Use in unit systems

Since 1983, the constant has been defined in the International System of Units (SI) as ''exactly'' 299,792,458 metres per second; this relationship is used to define the metre as exactly the distance that light travels in a vacuum in 1/299,792,458 of a second. By using the value of ''c'', as well as an accurate measurement of the second, one can thus establish a standard for the metre. As a dimensional physical constant, the numerical value of ''c'' is different for different unit systems. For example, in imperial units, the speed of light is approximately 186,282 miles per second, or roughly 1 foot per nanosecond. In branches of physics in which ''c'' appears often, such as in relativity, it is common to use systems of natural units of measurement or the geometrized unit system where ''c'' = 1. Using these units, ''c'' does not appear explicitly because multiplication or division by 1 does not affect the result. Its unit of light-second per second is still relevant, even if omitted.
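As a quick check of the unit conversions quoted in this section (this snippet is illustrative only and is not part of the article):

```python
c = 299_792_458          # m/s, exact by the SI definition of the metre

mile = 1609.344          # metres in a statute mile
foot = 0.3048            # metres in a foot

print(c / mile)          # ~ 186282 miles per second
print(c * 1e-9 / foot)   # ~ 0.98 feet per nanosecond, i.e. "roughly 1 ft/ns"
```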
# Fundamental role in physics

The speed at which light waves propagate in vacuum is independent both of the motion of the wave source and of the inertial frame of reference of the observer. (However, the frequency of light can depend on the motion of the source relative to the observer, due to the Doppler effect.) This invariance of the speed of light was postulated by Einstein in 1905, after being motivated by Maxwell's theory of electromagnetism and the lack of evidence for the luminiferous aether; it has since been consistently confirmed by many experiments (see the Michelson–Morley experiment and the Kennedy–Thorndike experiment, for example). It is only possible to verify experimentally that the two-way speed of light (for example, from a source to a mirror and back again) is frame-independent, because it is impossible to measure the one-way speed of light (for example, from a source to a distant detector) without some convention as to how clocks at the source and at the detector should be synchronized. However, by adopting Einstein synchronization for the clocks, the one-way speed of light becomes equal to the two-way speed of light by definition. The special theory of relativity explores the consequences of this invariance of ''c'' with the assumption that the laws of physics are the same in all inertial frames of reference. One consequence is that ''c'' is the speed at which all massless particles and waves, including light, must travel in vacuum. Special relativity has many counterintuitive and experimentally verified implications. These include the equivalence of mass and energy, length contraction (moving objects shorten), and time dilation (moving clocks run more slowly). The factor ''γ'' by which lengths contract and times dilate is known as the Lorentz factor and is given by ''γ'' = 1/√(1 − ''v''²/''c''²), where ''v'' is the speed of the object. The difference of ''γ'' from 1 is negligible for speeds much slower than ''c'', such as most everyday speeds (in which case special relativity is closely approximated by Galilean relativity),
but it increases at relativistic speeds and diverges to infinity as ''v'' approaches ''c''. For example, a time dilation factor of ''γ'' = 2 occurs at a relative velocity of 86.6% of the speed of light (''v'' = 0.866 ''c''). Similarly, a time dilation factor of ''γ'' = 10 occurs at 99.5% the speed of light (''v'' = 0.995 ''c''). The results of special relativity can be summarized by treating space and time as a unified structure known as spacetime (with ''c'' relating the units of space and time), and requiring that physical theories satisfy a special symmetry called Lorentz invariance, whose mathematical formulation contains the parameter ''c''. Lorentz invariance is an almost universal assumption for modern physical theories, such as quantum electrodynamics, quantum chromodynamics, the Standard Model of particle physics, and general relativity. As such, the parameter ''c'' is ubiquitous in modern physics, appearing in many contexts that are unrelated to light. For example, general relativity predicts that ''c'' is also the speed of gravity and of gravitational waves, and observations of gravitational waves have been consistent with this prediction. In non-inertial frames of reference (gravitationally curved spacetime or accelerated reference frames), the ''local'' speed of light is constant and equal to ''c'', but the speed of light along a trajectory of finite length can differ from ''c'', depending on how distances and times are defined.
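The time-dilation figures quoted above (γ = 2 at v = 0.866c and γ = 10 at v = 0.995c) are easy to reproduce from the Lorentz factor; this short snippet is illustrative only:

```python
from math import sqrt

def lorentz_factor(beta):
    """gamma = 1 / sqrt(1 - v^2/c^2), with beta = v/c."""
    return 1.0 / sqrt(1.0 - beta**2)

for beta in (0.1, 0.866, 0.995):
    print(f"v = {beta:.3f} c  ->  gamma = {lorentz_factor(beta):.3f}")
# v = 0.100 c -> gamma ~ 1.005 (everyday speeds barely differ from 1)
# v = 0.866 c -> gamma ~ 2.00
# v = 0.995 c -> gamma ~ 10.0
```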
It is generally assumed that fundamental constants such as ''c'' have the same value throughout spacetime, meaning that they do not depend on location and do not vary with time. However, it has been suggested in various theories that the speed of light may have changed over time. No conclusive evidence for such changes has been found, but they remain the subject of ongoing research. It also is generally assumed that the speed of light is isotropic, meaning that it has the same value regardless of the direction in which it is measured. Observations of the emissions from nuclear energy levels as a function of the orientation of the emitting nuclei in a magnetic field (see Hughes–Drever experiment), and of rotating optical resonators (see Resonator experiments) have put stringent limits on the possible two-way anisotropy.

## Upper limit on speeds

According to special relativity, the energy of an object with rest mass ''m'' and speed ''v'' is given by ''E'' = ''γmc''², where ''γ'' is the Lorentz factor defined above. When ''v'' is zero, ''γ'' is equal to one, giving rise to the famous ''E'' = ''mc''² formula for mass–energy equivalence. The ''γ'' factor approaches infinity as ''v'' approaches ''c'', and it would take an infinite amount of energy to accelerate an object with mass to the speed of light. The speed of light is the upper limit for the speeds of objects with positive rest mass, and individual photons cannot travel faster than the speed of light. This is experimentally established in many tests of relativistic energy and momentum. More generally, it is impossible for signals or energy to travel faster than ''c''. One argument for this follows from the counter-intuitive implication of special relativity known as the relativity of simultaneity. If the spatial distance between two events A and B is greater than the time interval between them multiplied by ''c'' then there are frames of reference in which A precedes B, others in which B precedes A, and others in which they are simultaneous.
As a result, if something were travelling faster than ''c'' relative to an inertial frame of reference, it would be travelling backwards in time relative to another frame, and causality would be violated. In such a frame of reference, an "effect" could be observed before its "cause". Such a violation of causality has never been recorded, and would lead to paradoxes such as the tachyonic antitelephone.

# Faster-than-light observations and experiments

There are situations in which it may seem that matter, energy, or information-carrying signals travel at speeds greater than ''c'', but they do not. For example, as is discussed in the propagation of light in a medium section below, many wave velocities can exceed ''c''. The phase velocity of X-rays through most glasses can routinely exceed ''c'', but phase velocity does not determine the velocity at which waves convey information. If a laser beam is swept quickly across a distant object, the spot of light can move faster than ''c'', although the initial movement of the spot is delayed because of the time it takes light to get to the distant object at the speed ''c''. However, the only physical entities that are moving are the laser and its emitted light, which travels at the speed ''c'' from the laser to the various positions of the spot. Similarly, a shadow projected onto a distant object can be made to move faster than ''c'', after a delay in time. In neither case does any matter, energy, or information travel faster than light. The rate of change in the distance between two objects in a frame of reference with respect to which both are moving (their closing speed) may have a value in excess of ''c''. However, this does not represent the speed of any single object as measured in a single inertial frame. Certain quantum effects appear to be transmitted instantaneously and therefore faster than ''c'', as in the EPR paradox. An example involves the quantum states of two particles that can be entangled. Until either of the particles is observed, they exist in a superposition of two quantum states. If the particles are separated and one particle's quantum state is observed, the other particle's quantum state is determined instantaneously.
However, it is impossible to control which quantum state the first particle will take on when it is observed, so information cannot be transmitted in this manner. Another quantum effect that predicts the occurrence of faster-than-light speeds is called the Hartman effect: under certain conditions the time needed for a virtual particle to tunnel through a barrier is constant, regardless of the thickness of the barrier. This could result in a virtual particle crossing a large gap faster than light. However, no information can be sent using this effect. So-called superluminal motion is seen in certain astronomical objects, such as the relativistic jets of radio galaxies and quasars. However, these jets are not moving at speeds in excess of the speed of light: the apparent superluminal motion is a projection effect caused by objects moving near the speed of light and approaching Earth at a small angle to the line of sight. Since the light that was emitted when the jet was farther away took longer to reach the Earth, the time between two successive observations corresponds to a longer time between the instants at which the light rays were emitted. A 2011 experiment in which neutrinos appeared to travel faster than light turned out to be due to experimental error. In models of the expanding universe, the farther galaxies are from each other, the faster they drift apart. This recession is not due to motion through space, but rather to the expansion of space itself. For example, galaxies far away from Earth appear to be moving away from the Earth with a speed proportional to their distances. Beyond a boundary called the Hubble sphere, the rate at which their distance from Earth increases becomes greater than the speed of light.
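The projection effect behind apparent superluminal motion described above can be illustrated with a short calculation (added here; not part of the original article) using the standard expression for the apparent transverse speed, ''β''apparent = ''β'' sin ''θ'' / (1 − ''β'' cos ''θ''), for a source moving at speed ''βc'' at angle ''θ'' to the line of sight:

```python
import math

def beta_apparent(beta, theta_deg):
    """Apparent transverse speed (in units of c) of a source moving at beta*c
    at an angle theta_deg to the line of sight."""
    theta = math.radians(theta_deg)
    return beta * math.sin(theta) / (1.0 - beta * math.cos(theta))

# A jet moving at 99% of c, seen 10 degrees from the line of sight,
# appears to move across the sky at several times the speed of light.
print(beta_apparent(0.99, 10.0))  # ~6.9
```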
# Propagation of light

In classical physics, light is described as a type of electromagnetic wave. The classical behaviour of the electromagnetic field is described by Maxwell's equations, which predict that the speed ''c'' with which electromagnetic waves (such as light) propagate in vacuum is related to the distributed capacitance and inductance of vacuum, otherwise respectively known as the electric constant ''ε''0 and the magnetic constant ''μ''0, by the equation

:$c = \frac{1}{\sqrt{\varepsilon_0 \mu_0}}\ .$

In modern quantum physics, the electromagnetic field is described by the theory of quantum electrodynamics (QED). In this theory, light is described by the fundamental excitations (or quanta) of the electromagnetic field, called photons. In QED, photons are massless particles and thus, according to special relativity, they travel at the speed of light in vacuum. Extensions of QED in which the photon has a mass have been considered. In such a theory, its speed would depend on its frequency, and the invariant speed ''c'' of special relativity would then be the upper limit of the speed of light in vacuum. No variation of the speed of light with frequency has been observed in rigorous testing, putting stringent limits on the mass of the photon.
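As a quick numerical check of the Maxwell relation above (an illustration added here; the constants are the CODATA recommended values, not figures from the original article):

```python
import math

epsilon_0 = 8.8541878128e-12  # electric constant, F/m
mu_0 = 1.25663706212e-6       # magnetic constant, N/A^2

print(1.0 / math.sqrt(epsilon_0 * mu_0))  # ~2.99792458e8 m/s
```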
The limit obtained depends on the model used: if the massive photon is described by Proca theory, the experimental upper bound for its mass is about 10−57 grams; if photon mass is generated by a Higgs mechanism, the experimental upper limit is less sharp (roughly 2 × 10−47 g). Another reason for the speed of light to vary with its frequency would be the failure of special relativity to apply to arbitrarily small scales, as predicted by some proposed theories of quantum gravity. In 2009, the observation of gamma-ray burst GRB 090510 found no evidence for a dependence of photon speed on energy, supporting tight constraints in specific models of spacetime quantization on how this speed is affected by photon energy for energies approaching the Planck scale.

## In a medium

In a medium, light usually does not propagate at a speed equal to ''c''; further, different types of light wave will travel at different speeds. The speed at which the individual crests and troughs of a plane wave (a wave filling the whole space, with only one frequency) propagate is called the phase velocity ''v''p. A physical signal with a finite extent (a pulse of light) travels at a different speed. The overall envelope of the pulse travels at the group velocity ''v''g, and its earliest part travels at the front velocity ''v''f. The phase velocity is important in determining how a light wave travels through a material or from one material to another. It is often represented in terms of a ''refractive index''. The refractive index of a material is defined as the ratio of ''c'' to the phase velocity ''v''p in the material: larger indices of refraction indicate lower speeds.
The refractive index of a material may depend on the light's frequency, intensity, polarization, or direction of propagation; in many cases, though, it can be treated as a material-dependent constant. The refractive index of air is approximately 1.0003. Denser media, such as water, glass, and diamond, have refractive indices of around 1.3, 1.5 and 2.4, respectively, for visible light. In exotic materials like Bose–Einstein condensates near absolute zero, the effective speed of light may be only a few metres per second. However, this represents absorption and re-radiation delay between atoms, as do all slower-than-''c'' speeds in material substances. As an extreme example of light "slowing" in matter, two independent teams of physicists claimed to bring light to a "complete standstill" by passing it through a Bose–Einstein condensate of the element rubidium. However, the popular description of light being "stopped" in these experiments refers only to light being stored in the excited states of atoms, then re-emitted at an arbitrarily later time, as stimulated by a second laser pulse. During the time it had "stopped", it had ceased to be light. This type of behaviour is generally microscopically true of all transparent media which "slow" the speed of light. In transparent materials, the refractive index generally is greater than 1, meaning that the phase velocity is less than ''c''. In other materials, it is possible for the refractive index to become smaller than 1 for some frequencies; in some exotic materials it is even possible for the index of refraction to become negative. The requirement that causality is not violated implies that the real and imaginary parts of the dielectric constant of any material, corresponding respectively to the index of refraction and to the attenuation coefficient, are linked by the Kramers–Kronig relations. In practical terms, this means that in a material with refractive index less than 1, the wave will be absorbed quickly. A pulse with different group and phase velocities (which occurs if the phase velocity is not the same for all the frequencies of the pulse) smears out over time, a process known as dispersion. Certain materials have an exceptionally low (or even zero) group velocity for light waves, a phenomenon called slow light.
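A minimal sketch (added here for illustration; the indices are the approximate values quoted above) of the relation between refractive index and phase velocity, ''v''p = ''c''/''n'':

```python
c = 299_792_458.0  # m/s

# Approximate refractive indices for visible light, as quoted above.
indices = {"air": 1.0003, "water": 1.3, "glass": 1.5, "diamond": 2.4}

for material, n in indices.items():
    v_phase = c / n  # phase velocity in the material
    print(f"{material:8s} n = {n:<6}  v_p ~ {v_phase / 1e8:.2f} x 1e8 m/s")
```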
The opposite, a group velocity exceeding ''c'', was proposed theoretically in 1993 and achieved experimentally in 2000. It should even be possible for the group velocity to become infinite or negative, with pulses travelling instantaneously or backwards in time. None of these options, however, allow information to be transmitted faster than ''c''. It is impossible to transmit information with a light pulse any faster than the speed of the earliest part of the pulse (the front velocity). It can be shown that this is (under certain assumptions) always equal to ''c''. It is possible for a particle to travel through a medium faster than the phase velocity of light in that medium (but still slower than ''c''). When a charged particle does that in a dielectric material, the electromagnetic equivalent of a shock wave, known as Cherenkov radiation, is emitted.

# Practical effects of finiteness

The speed of light is of relevance to communications: the one-way and round-trip delay times are greater than zero. This applies from small to astronomical scales. On the other hand, some techniques depend on the finite speed of light, for example in distance measurements.

## Small scales

In computers, the speed of light imposes a limit on how quickly data can be sent between processors. If a processor operates at 1 gigahertz, a signal can travel only a maximum of about 30 centimetres in a single clock cycle; in practice, this distance is even shorter, since the printed circuit board itself has a refractive index and slows down signals.
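A back-of-the-envelope check of that per-cycle distance (added here; the 30-centimetre figure above follows directly from it):

```python
c = 299_792_458.0        # m/s
clock_hz = 1e9           # 1 GHz processor clock
cycle_time = 1.0 / clock_hz

print(c * cycle_time)    # ~0.30 m travelled in vacuum during one clock cycle
```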
Processors must therefore be placed close to each other, as well as to memory chips, to minimize communication latencies, and care must be exercised when routing wires between them to ensure signal integrity. If clock frequencies continue to increase, the speed of light may eventually become a limiting factor for the internal design of single chips.

## Large distances on Earth

Given that the equatorial circumference of the Earth is about 40,075 km and that ''c'' is about 300,000 km/s, the theoretical shortest time for a piece of information to travel half the globe along the surface is about 67 milliseconds. When light is travelling in optical fibre (a transparent material) the actual transit time is longer, in part because the speed of light is slower by about 35% in optical fibre, depending on its refractive index ''n''. Furthermore, straight lines are rare in global communications and the travel time increases when signals pass through electronic switches or signal regenerators. Although such latencies are insignificant for most applications, they become important in fields such as high-frequency trading, where traders use microwave communications between trading hubs because of the advantage that radio waves, travelling at close to the speed of light through air, have over comparatively slower fibre-optic signals.

## Spaceflight and astronomy

Similarly, communications between the Earth and spacecraft are not instantaneous. There is a brief delay from the source to the receiver, which becomes more noticeable as distances increase. This delay was significant for communications between ground control and Apollo 8 when it became the first crewed spacecraft to orbit the Moon: for every question, the ground control station had to wait at least three seconds for the answer to arrive.
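Two small checks of those latency figures (added here as an illustration; the Earth–Moon distance is the usual average value, an assumption not stated in the article):

```python
c = 299_792_458.0                 # m/s

earth_circumference = 40_075e3    # m, equatorial circumference
print(earth_circumference / 2 / c * 1e3)   # ~66.8 ms to cross half the globe at c

earth_moon = 384_400e3            # m, average Earth-Moon distance (assumed value)
print(2 * earth_moon / c)         # ~2.6 s round trip, consistent with Apollo-era delays
```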
The communications delay between Earth and Mars can vary between five and twenty minutes depending upon the relative positions of the two planets. As a consequence of this, if a robot on the surface of Mars were to encounter a problem, its human controllers would not be aware of it until five to twenty minutes later. It would then take a further five to twenty minutes for commands to travel from Earth to Mars. Receiving light and other signals from distant astronomical sources takes much longer. For example, it takes 13 billion years for light to travel to Earth from the faraway galaxies viewed in the Hubble Ultra Deep Field images. Those photographs, taken today, capture images of the galaxies as they appeared 13 billion years ago, when the universe was less than a billion years old. The fact that more distant objects appear to be younger, due to the finite speed of light, allows astronomers to infer the evolution of stars, of galaxies, and of the universe itself. Astronomical distances are sometimes expressed in light-years, especially in popular science publications and media. A light-year is the distance light travels in one Julian year, around 9461 billion kilometres, 5879 billion miles, or 0.3066 parsecs. In round figures, a light-year is nearly 10 trillion kilometres or nearly 6 trillion miles. Proxima Centauri, the closest star to Earth after the Sun, is around 4.2 light-years away.

## Distance measurement

Radar systems measure the distance to a target by the time it takes a radio-wave pulse to return to the radar antenna after being reflected by the target: the distance to the target is half the round-trip transit time multiplied by the speed of light. A Global Positioning System (GPS) receiver measures its distance to GPS satellites based on how long it takes for a radio signal to arrive from each satellite, and from these distances calculates the receiver's position. Because light travels about 300,000 kilometres in one second, these measurements of small fractions of a second must be very precise. The Lunar Laser Ranging Experiment, radar astronomy
and the Deep Space Network determine distances to the Moon, planets and spacecraft, respectively, by measuring round-trip transit times.

# Measurement

There are different ways to determine the value of ''c''. One way is to measure the actual speed at which light waves propagate, which can be done in various astronomical and Earth-based setups. However, it is also possible to determine ''c'' from other physical laws where it appears, for example, by determining the values of the electromagnetic constants ''ε''0 and ''μ''0 and using their relation to ''c''. Historically, the most accurate results have been obtained by separately determining the frequency and wavelength of a light beam, with their product equalling ''c''. This is described in more detail in the "Interferometry" section below. In 1983 the metre was defined as "the length of the path travelled by light in vacuum during a time interval of 1/299,792,458 of a second", fixing the value of the speed of light at 299,792,458 m/s by definition, as described below. Consequently, accurate measurements of the speed of light yield an accurate realization of the metre rather than an accurate value of ''c''.

## Astronomical measurements

Outer space is a convenient setting for measuring the speed of light because of its large scale and nearly perfect vacuum. Typically, one measures the time needed for light to traverse some reference distance in the Solar System, such as the radius of the Earth's orbit. Historically, such measurements could be made fairly accurately, compared to how accurately the length of the reference distance is known in Earth-based units. Ole Christensen Rømer used an astronomical measurement to make the first quantitative estimate of the speed of light in the year 1676. The account published in ''Journal des sçavans'' was based on a report that Rømer read to the French Academy of Sciences in November 1676 (Cohen, 1940, p. 346). When measured from Earth, the periods of moons orbiting a distant planet are shorter when the Earth is approaching the planet than when the Earth is receding from it. The distance travelled by light from the planet (or its moon) to Earth is shorter when the Earth is at the point in its orbit that is closest to its planet than when the Earth is at the farthest point in its orbit, the difference in distance being the diameter of the Earth's orbit around the Sun. The observed change in the moon's orbital period is caused by the difference in the time it takes light to traverse the shorter or longer distance. Rømer observed this effect for Jupiter's innermost major moon Io and deduced that light takes 22 minutes to cross the diameter of the Earth's orbit.
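A rough check of the kind of speed this implies (added here as an illustration; it combines Rømer's 22-minute figure with the modern value of the astronomical unit, whereas Huygens used his own, smaller estimate of the orbital diameter, hence the 27% shortfall mentioned later in the History section):

```python
au = 1.495978707e11          # m, modern astronomical unit
light_time = 22 * 60         # s, Roemer's estimate for light to cross the orbit's diameter

print(2 * au / light_time)   # ~2.27e8 m/s, noticeably below the true 2.998e8 m/s
```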
Another method is to use the aberration of light, discovered and explained by James Bradley in the 18th century. This effect results from the vector addition of the velocity of light arriving from a distant source (such as a star) and the velocity of its observer. A moving observer thus sees the light coming from a slightly different direction and consequently sees the source at a position shifted from its original position. Since the direction of the Earth's velocity changes continuously as the Earth orbits the Sun, this effect causes the apparent position of stars to move around. From the angular difference in the position of stars (maximally 20.5 arcseconds) it is possible to express the speed of light in terms of the Earth's velocity around the Sun, which with the known length of a year can be converted to the time needed to travel from the Sun to the Earth. In 1729, Bradley used this method to derive that light travelled 10,210 times faster than the Earth in its orbit (the modern figure is 10,066 times faster) or, equivalently, that it would take light 8 minutes 12 seconds to travel from the Sun to the Earth.

### Astronomical unit

An astronomical unit (AU) is approximately the average distance between the Earth and Sun. It was redefined in 2012 as exactly 149,597,870,700 m. Previously the AU was based not on the International System of Units but on the gravitational force exerted by the Sun in the framework of classical mechanics. The current definition uses the recommended value in metres for the previous definition of the astronomical unit, which was determined by measurement. This redefinition is analogous to that of the metre and likewise has the effect of fixing the speed of light to an exact value in astronomical units per second (via the exact speed of light in metres per second). Previously, the inverse of ''c'' expressed in seconds per astronomical unit was measured by comparing the time for radio signals to reach different spacecraft in the Solar System, with their position calculated from the gravitational effects of the Sun and various planets. By combining many such measurements, a best fit value for the light time per unit distance could be obtained. For example, in 2009, the best estimate, as approved by the International Astronomical Union
(IAU), was expressed as a light time for unit distance, ''t''au, together with the corresponding value of ''c'' in astronomical units per day. The relative uncertainty in these measurements is 0.02 parts per billion, equivalent to the uncertainty in Earth-based measurements of length by interferometry. Since the metre is defined to be the length travelled by light in a certain time interval, the measurement of the light time in terms of the previous definition of the astronomical unit can also be interpreted as measuring the length of an AU (old definition) in metres.

## Time of flight techniques

A method of measuring the speed of light is to measure the time needed for light to travel to a mirror at a known distance and back. This is the working principle behind the Fizeau–Foucault apparatus developed by Hippolyte Fizeau and Léon Foucault, based on a suggestion by François Arago. The setup as used by Fizeau consists of a beam of light directed at a mirror about 8 kilometres away. On the way from the source to the mirror, the beam passes through a rotating cogwheel. At a certain rate of rotation, the beam passes through one gap on the way out and another on the way back, but at slightly higher or lower rates, the beam strikes a tooth and does not pass through the wheel. Knowing the distance between the wheel and the mirror, the number of teeth on the wheel, and the rate of rotation, the speed of light can be calculated. The method of Foucault replaces the cogwheel with a rotating mirror. Because the mirror keeps rotating while the light travels to the distant mirror and back, the light is reflected from the rotating mirror at a different angle on its way out than it is on its way back. From this difference in angle, the known speed of rotation and the distance to the distant mirror, the speed of light may be calculated. Today, using oscilloscopes with time resolutions of less than one nanosecond, the speed of light can be directly measured by timing the delay of a light pulse from a laser or an LED reflected from a mirror. This method is less precise (with errors of the order of 1%) than other modern techniques, but it is sometimes used as a laboratory experiment in college physics classes.
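A small sketch of the cogwheel calculation described above (added here; the distance, tooth count and rotation rate are illustrative values close to those usually quoted for Fizeau's 1849 setup, not figures from the original article). At the slowest rotation rate at which the returning beam is blocked, the wheel has advanced by half a tooth spacing during the round trip, giving ''c'' ≈ 4''dNf'':

```python
# Illustrative values, roughly those reported for Fizeau's 1849 experiment.
d = 8_633.0   # m, distance from cogwheel to mirror
N = 720       # number of teeth on the wheel
f = 12.6      # rev/s, slowest rotation rate at which the return beam is blocked

# Round-trip time equals the time for the wheel to turn half a tooth spacing:
#   2*d/c = 1/(2*N*f)  =>  c = 4*d*N*f
print(4 * d * N * f)   # ~3.1e8 m/s, a few per cent above the modern value
```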
## Electromagnetic constants

An option for deriving ''c'' that does not directly depend on a measurement of the propagation of electromagnetic waves is to use the relation between ''c'' and the vacuum permittivity ''ε''0 and vacuum permeability ''μ''0 established by Maxwell's theory: ''c''² = 1/(''ε''0''μ''0). The vacuum permittivity may be determined by measuring the capacitance and dimensions of a capacitor, whereas the value of the vacuum permeability was historically fixed at exactly 4π × 10−7 H/m through the definition of the ampere. Rosa and Dorsey used this method in 1907 to determine a value for the speed of light. Their method depended upon having a standard unit of electrical resistance, the "international ohm", and so its accuracy was limited by how this standard was defined.

## Cavity resonance

Another way to measure the speed of light is to independently measure the frequency ''f'' and wavelength ''λ'' of an electromagnetic wave in vacuum. The value of ''c'' can then be found by using the relation ''c'' = ''fλ''. One option is to measure the resonance frequency of a cavity resonator. If the dimensions of the resonance cavity are also known, these can be used to determine the wavelength of the wave. In 1946, Louis Essen and A. C. Gordon-Smith established the frequency for a variety of normal modes of microwaves of a microwave cavity of precisely known dimensions. The dimensions were established to an accuracy of about ±0.8 μm using gauges calibrated by interferometry. As the wavelength of the modes was known from the geometry of the cavity and from electromagnetic theory, knowledge of the associated frequencies enabled a calculation of the speed of light. The Essen–Gordon-Smith result was substantially more precise than those found by optical techniques.
By 1950, repeated measurements by Essen had refined this result further. A household demonstration of this technique is possible, using a microwave oven and food such as marshmallows or margarine: if the turntable is removed so that the food does not move, it will cook the fastest at the antinodes (the points at which the wave amplitude is the greatest), where it will begin to melt. The distance between two such spots is half the wavelength of the microwaves; by measuring this distance and multiplying the wavelength by the microwave frequency (usually displayed on the back of the oven, typically 2450 MHz), the value of ''c'' can be calculated, "often with less than 5% error".

## Interferometry

Interferometry is another method to find the wavelength of electromagnetic radiation for determining the speed of light. A coherent beam of light (e.g. from a laser), with a known frequency (''f''), is split to follow two paths and then recombined. By adjusting the path length while observing the interference pattern and carefully measuring the change in path length, the wavelength of the light (''λ'') can be determined. The speed of light is then calculated using the equation ''c'' = ''λf''. Before the advent of laser technology, coherent radio sources were used for interferometry measurements of the speed of light. However, interferometric determination of wavelength becomes less precise as the wavelength grows, and the experiments were thus limited in precision by the long wavelength of the radio waves. The precision can be improved by using light with a shorter wavelength, but then it becomes difficult to measure the frequency of the light directly. One way around this problem is to start with a low-frequency signal whose frequency can be precisely measured, and from this signal progressively synthesize higher-frequency signals whose frequencies can then be linked to the original signal. A laser can then be locked to the frequency, and its wavelength can be determined using interferometry. This technique was due to a group at the National Bureau of Standards (which later became the National Institute of Standards and Technology). They used it in 1972 to measure the speed of light in vacuum with a fractional uncertainty roughly 100 times smaller than that of the previously accepted value.
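A simple numerical illustration of the ''c'' = ''λf'' relation used above (added here; the helium–neon laser figures are typical textbook values, not numbers from the original article):

```python
wavelength = 632.8e-9   # m, typical helium-neon laser line
frequency = 4.738e14    # Hz, the corresponding optical frequency

print(wavelength * frequency)   # ~2.998e8 m/s
```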
# History

Until the early modern period, it was not known whether light travelled instantaneously or at a very fast finite speed. The first extant recorded examination of this subject was in ancient Greece. The ancient Greeks, Arabic scholars, and classical European scientists long debated this until Rømer provided the first calculation of the speed of light. Einstein's theory of special relativity concluded that the speed of light is constant regardless of one's frame of reference. Since then, scientists have provided increasingly accurate measurements.

## Early history

Empedocles (c. 490–430 BCE) was the first to propose a theory of light and claimed that light has a finite speed. He maintained that light was something in motion, and therefore must take some time to travel. Aristotle argued, to the contrary, that "light is due to the presence of something, but it is not a movement". Euclid advanced the emission theory of vision, in which light is emitted from the eye, thus enabling sight. Based on that theory, Heron of Alexandria argued that the speed of light must be infinite because distant objects such as stars appear immediately upon opening the eyes. Early Islamic philosophers initially agreed with the Aristotelian view that light had no speed of travel. In 1021, Alhazen (Ibn al-Haytham) published the ''Book of Optics'',
in which he presented a series of arguments dismissing the emission theory of vision in favour of the now accepted intromission theory, in which light moves from an object into the eye. This led Alhazen to propose that light must have a finite speed, and that the speed of light is variable, decreasing in denser bodies. He argued that light is substantial matter, the propagation of which requires time, even if this is hidden from the senses. Also in the 11th century, Abū Rayhān al-Bīrūnī agreed that light has a finite speed, and observed that the speed of light is much faster than the speed of sound. In the 13th century, Roger Bacon argued that the speed of light in air was not infinite, using philosophical arguments backed by the writing of Alhazen and Aristotle. In the 1270s, Witelo considered the possibility of light travelling at infinite speed in vacuum, but slowing down in denser bodies. In the early 17th century, Johannes Kepler believed that the speed of light was infinite since empty space presents no obstacle to it. René Descartes argued that if the speed of light were to be finite, the Sun, Earth, and Moon would be noticeably out of alignment during a lunar eclipse. (Although this argument fails when aberration of light is taken into account, the latter was not recognized until the following century.) Since such misalignment had not been observed, Descartes concluded the speed of light was infinite. Descartes speculated that if the speed of light were found to be finite, his whole system of philosophy might be demolished.
Despite this, in his derivation of Snell's law, Descartes assumed that some kind of motion associated with light was faster in denser media. Pierre de Fermat derived Snell's law using the opposing assumption: the denser the medium, the slower light travelled. Fermat also argued in support of a finite speed of light.

## First measurement attempts

In 1629, Isaac Beeckman proposed an experiment in which a person observes the flash of a cannon reflecting off a mirror about one mile (1.6 km) away. In 1638, Galileo Galilei proposed an experiment, with an apparent claim to having performed it some years earlier, to measure the speed of light by observing the delay between uncovering a lantern and its perception some distance away. He was unable to distinguish whether light travel was instantaneous or not, but concluded that if it were not, it must nevertheless be extraordinarily rapid. In 1667, the Accademia del Cimento of Florence reported that it had performed Galileo's experiment, with the lanterns separated by about one mile, but no delay was observed. The actual delay in this experiment would have been about 11 microseconds. The first quantitative estimate of the speed of light was made in 1676 by Ole Rømer. From the observation that the periods of Jupiter's innermost moon Io appeared to be shorter when the Earth was approaching Jupiter than when receding from it, he concluded that light travels at a finite speed, and estimated that it takes light 22 minutes to cross the diameter of Earth's orbit. Christiaan Huygens combined this estimate with an estimate for the diameter of the Earth's orbit to obtain an estimate of the speed of light of about 220,000 km/s, which is 27% lower than the actual value. In his 1704 book ''Opticks'',
Isaac Newton reported Rømer's calculations of the finite speed of light and gave a value of "seven or eight minutes" for the time taken for light to travel from the Sun to the Earth (the modern value is 8 minutes 19 seconds). Newton queried whether Rømer's eclipse shadows were coloured; hearing that they were not, he concluded that the different colours travelled at the same speed. In 1729, James Bradley discovered stellar aberration. From this effect he determined that light must travel 10,210 times faster than the Earth in its orbit (the modern figure is 10,066 times faster) or, equivalently, that it would take light 8 minutes 12 seconds to travel from the Sun to the Earth.

## Connections with electromagnetism

In the 19th century Hippolyte Fizeau developed a method to determine the speed of light based on time-of-flight measurements on Earth and reported a value somewhat higher than the modern figure. His method was improved upon by Léon Foucault, who obtained a more accurate value in 1862. In the year 1856, Wilhelm Eduard Weber and Rudolf Kohlrausch measured the ratio of the electromagnetic and electrostatic units of charge, 1/√(''ε''0''μ''0), by discharging a Leyden jar, and found that its numerical value was very close to the speed of light as measured directly by Fizeau. The following year Gustav Kirchhoff calculated that an electric signal in a resistanceless wire travels along the wire at this speed.
In the early 1860s, Maxwell showed that, according to the theory of electromagnetism he was working on, electromagnetic waves propagate in empty space at a speed equal to the above Weber/Kohlrausch ratio, and drawing attention to the numerical proximity of this value to the speed of light as measured by Fizeau, he proposed that light is in fact an electromagnetic wave.

## "Luminiferous aether"

It was thought at the time that empty space was filled with a background medium called the luminiferous aether in which the electromagnetic field existed. Some physicists thought that this aether acted as a preferred frame of reference for the propagation of light and therefore it should be possible to measure the motion of the Earth with respect to this medium, by measuring the isotropy of the speed of light. Beginning in the 1880s several experiments were performed to try to detect this motion, the most famous of which is the experiment performed by Albert A. Michelson and Edward W. Morley in 1887. The detected motion was always less than the observational error. Modern experiments indicate that the two-way speed of light is isotropic (the same in every direction) to within 6 nanometres per second.

Because of this experiment Hendrik Lorentz proposed that the motion of the apparatus through the aether may cause the apparatus to contract along its length in the direction of motion, and he further assumed that the time variable for moving systems must also be changed accordingly ("local time"), which led to the formulation of the Lorentz transformation. Based on Lorentz's aether theory, Henri Poincaré (1900) showed that this local time (to first order in ''v''/''c'') is indicated by clocks moving in the aether, which are synchronized under the assumption of constant light speed. In 1904, he speculated that the speed of light could be a limiting velocity in dynamics, provided that the assumptions of Lorentz's theory are all confirmed. In 1905, Poincaré brought Lorentz's aether theory into full observational agreement with the principle of relativity.
## Special relativity

In 1905 Einstein postulated from the outset that the speed of light in vacuum, measured by a non-accelerating observer, is independent of the motion of the source or observer. Using this and the principle of relativity as a basis he derived the special theory of relativity, in which the speed of light in vacuum ''c'' featured as a fundamental constant, also appearing in contexts unrelated to light. This made the concept of the stationary aether (to which Lorentz and Poincaré still adhered) useless and revolutionized the concepts of space and time.

## Increased accuracy of ''c'' and redefinition of the metre and second

In the second half of the 20th century, much progress was made in increasing the accuracy of measurements of the speed of light, first by cavity resonance techniques and later by laser interferometer techniques. These were aided by new, more precise, definitions of the metre and second. In 1950, Louis Essen determined the speed as , using cavity resonance. This value was adopted by the 12th General Assembly of the Radio-Scientific Union in 1957. In 1960, the metre was redefined in terms of the wavelength of a particular spectral line of krypton-86, and, in 1967, the second was redefined in terms of the hyperfine transition frequency of the ground state of caesium-133. In 1972, using the laser interferometer method and the new definitions, a group at the US National Bureau of Standards in Boulder, Colorado determined the speed of light in vacuum to be ''c'' = . This was 100 times less uncertain than the previously accepted value. The remaining uncertainty was mainly related to the definition of the metre. As similar experiments found comparable results for ''c'', the 15th General Conference on Weights and Measures in 1975 recommended using the value 299,792,458 m/s for the speed of light.
## Defined as an explicit constant

In 1983 the 17th meeting of the General Conference on Weights and Measures (CGPM) found that wavelengths from frequency measurements and a given value for the speed of light are more reproducible than the previous standard. They kept the 1967 definition of the second, so the caesium hyperfine frequency would now determine both the second and the metre. To do this, they redefined the metre as "the length of the path traveled by light in vacuum during a time interval of 1/299 792 458 of a second." As a result of this definition, the value of the speed of light in vacuum is exactly 299 792 458 m/s and has become a defined constant in the SI system of units. Improved experimental techniques that, prior to 1983, would have measured the speed of light no longer affect the known value of the speed of light in SI units, but instead allow a more precise realization of the metre by more accurately measuring the wavelength of krypton-86 and other light sources.

In 2011, the CGPM stated its intention to redefine all seven SI base units using what it calls "the explicit-constant formulation", where each "unit is defined indirectly by specifying explicitly an exact value for a well-recognized fundamental constant", as was done for the speed of light. It proposed a new, but completely equivalent, wording of the metre's definition: "The metre, symbol m, is the unit of length; its magnitude is set by fixing the numerical value of the speed of light in vacuum to be equal to exactly 299 792 458 when it is expressed in the SI unit m s⁻¹." This was one of the changes that was incorporated in the 2019 redefinition of the SI base units, also termed the ''New SI''.

## See also

* Light-second
* Speed of electricity
* Speed of gravity
* Speed of sound
* Velocity factor
* Warp factor (fictional)
# The Explore-Exploit Dilemma in Media Consumption

How much should we rewatch our favorite movies (media) vs keep trying new movies? Most people spend most of their viewing time on new movies, which is unlikely to be optimal. I suggest an explicit Bayesian model of imprecise ratings + enjoyment recovering over time for Thompson sampling over movie watch choices. (statistics, decision theory, psychology, Bayes)

created: 24 Dec 2016; modified: 30 Sep 2017; status: notes; confidence: possible;

When you decide to watch a movie, it can be tough to pick. Do you pick a new movie or a classic you watched before & liked? If the former, how do you pick from all the thousands of plausible unwatched candidate movies; and if the latter, how soon is too soon to rewatch? I tend to default to a new movie, reasoning that I might really like it and discover a new classic to add to my library. Once in a while, I rewatch some movie I really liked, and I like it almost as much as the first time, and I think to myself, why did I wait 15 years to rewatch this, why didn't I watch this last week instead of movie X which was mediocre, or Y before that which was crap? I'd forgotten most of the details, and it wasn't boring at all! I should rewatch movies more often. (Then of course I don't because I think I should watch Z to see if I like it…)

Maybe many other people do this too, judging from how often I see people mentioning watching a new movie and how rare it is for someone to mention rewatching a movie; it seems like people predominantly (maybe 80%+ of the time) watch new movies rather than rewatch a favorite. (Some, like Pauline Kael, refuse to ever rewatch movies, and people who rewatch a film more than 2 or 3 times come off as eccentric or true fans.) In other areas of media, we do seem to balance exploration and exploitation more - people often reread a favorite novel like a Harry Potter novel and everyone relistens to their favorite music countless times (perhaps too many times) - so perhaps there is something about movies & TV series which biases us away from rewatches which we ought to counteract with a more mindful approach to our choices. In general, I'm not confident I come near the optimal balance, whether it be exploring movies or music or anime or tea.

The tricky thing is that each watch of a movie decreases the value of another watch (diminishing marginal value), but in a time-dependent way: 1 day is usually much too short and the value may even be negative, but 1 decade may be too long - the movie's entertainment value recovers slowly and smoothly over time, like an exponential curve. This sounds like a classic reinforcement learning (RL) exploration-exploitation tradeoff problem: we don't want to watch only new movies, because the average new movie is mediocre, but if we watch only known-good movies, then we miss out on all the good movies we haven't seen and fatigue may make watching the known-good ones downright unpleasant. One could imagine some simple heuristics, such as setting a cutoff for good movies and then alternating between watching whatever new movie sounds the best (and adding it to the good list if it is better than the cutoff) and watching the least-recently-watched good movie.
This seems suboptimal because in a typical RL problem, exploration will decrease over time as most of the good decisions become known and it becomes more important to benefit from them than to keep trying new options, hoping to find better ones; one might explore using 100% of one's decisions at the beginning but steadily decrease the exploration rate down to a fraction of a percent towards the end - in few problems is it optimal to keep eternally exploring on, say, 80% of one's decisions. Eternally exploring on the majority of decisions would only make sense in an extremely unstable environment where the best decision constantly rapidly changes; this, however, doesn't seem like the movie-watching problem, where typically if one really enjoyed a movie 1 year ago, one will almost always enjoy it now too. At the extreme, one might explore a negligible amount: if someone has accumulated a library of, say, 5000 great movies they enjoy, and they watch one movie every other night, then it would take them 27 years to cycle through their library once, and of course, after 27 years and 4999 other engrossing movies, they will have forgotten almost everything about the first movie…

Better RL algorithms exist, assuming one has a good model of the problem/environment, such as Thompson sampling. This minimizes our regret in the long run, by estimating the probability of being able to find an improvement, and decreasing its exploration as the probability of improvements decreases, because the data increasingly nails down the shape of the recovery curve and the true ratings of top movies, and because enough top movies have been accumulated.

The real question is the modelling of ratings over time. The basic framework here is a longitudinal growth model. Movies are individuals who are measured at various times on ratings variables (our personal rating, and perhaps additional ratings from sources like IMDB) and are impacted by events (viewings), and we would like to infer the posterior distributions for each movie of a hypothetical event today (to decide what to watch); movies which have been watched already can be predicted quite precisely based on their rating + recovery curve, but new movies are highly uncertain (and not affected by a recovery curve yet).

I would start here with movie ratings. A movie gets rated 1-10, and we want to maximize the sum of ratings over time; we can't do this simply by picking the highest-ever rated movie, because once we watch it, it suddenly stops being so enjoyable; so we need to model some sort of drop. A simple parametric model would be to treat it as something like an exponential curve over time: gradually increasing and approaching the original rating but never reaching it (the magic of the first viewing can never be recaptured). (Why an exponential exactly, instead of a spline or something else? Well, there could be a hyperbolic aspect to the recovery where over the first few hours/days/weeks enjoyment resets faster than later on; but if the recovery curve is monotonic and smooth, then an exponential is going to fit it pretty well regardless of the exact shape of the spline or hyperbola, and one would probably require data from hundreds of people or rewatches to fit a more complex curve which can outpredict an exponential.
Indeed, to the extent that enjoyment rests on memory, we might further predict that the recovery curve would be the inverse of the forgetting curve, and our movie selection problem becomes, in part, anti-spaced repetition - selecting datapoints to review to maximize forgetting.)

So each viewing might drop the rating by a certain number v, and then the rating recovers by roughly r units per day. Intuitively, I would say that on a 10-point scale, a viewing drops an immediate rewatch by at least 2 points, and then it takes ~5 years to almost fully recover, to within ±0.10 points (I would guess it takes less than 5 years to recover rather than more, so this estimate would bias towards new movies/exploration). So we would initially assign priors centered on

v = 2
r = (2 - 0.10) / (365*5) ≈ 0.001 rating points per day

that is, the unrecovered penalty x(t) starts at x(0) = v = 2 immediately after a viewing and has shrunk to x(365*5) ≈ 0.10 after about 5 years; our model should then finetune those rough estimates based on the data.

• not standard SEM latent growth curve model - varying measurement times
• not Hidden Markov - categorical, stateless
• not simple Kalman filter, equivalent to AR(1)
• state-space model of some sort - dynamic linear model? AR(2)? dlm, TMB, Biips? State Space Models in R https://arxiv.org/pdf/1412.3779v1.pdf

https://en.wikipedia.org/wiki/Radioactive_decay#Half-life
https://en.wikipedia.org/wiki/Kalman_filter

TODO:

• write a simple demo of the 5k films example ignoring uncertainty to see what the exploit pattern looks like (see the sketch below)
• use the rating resorter to convert my MAL ratings into a more informative uniform distribution
• MAL average ratings for unwatched anime should be standardized based on MAL mean/SD (in part because the averages aren't discretized, and in part because they are not comparable with my uniformized ratings)
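A minimal sketch of what such a Thompson-sampling chooser could look like, ignoring most of the uncertainty modelling; the Normal(5, 2) prior for unseen movies, the ±0.5 rating noise for watched ones, and the exact exponential decay constant are assumptions layered on the v ≈ 2 / five-year numbers above:

```python
import numpy as np

rng = np.random.default_rng(0)
DAYS = 5 * 365          # assumed ~5-year recovery horizon
V = 2.0                 # assumed immediate rewatch penalty (rating points)

def sampled_value(movie, today):
    """One posterior sample of tonight's enjoyment of `movie` (toy model)."""
    if movie["last_watch"] is None:                  # unwatched: wide prior
        return rng.normal(5.0, 2.0)
    elapsed = today - movie["last_watch"]
    penalty = V * np.exp(-3.0 * elapsed / DAYS)      # decays to ~0.1 by 5 years
    return rng.normal(movie["rating"] - penalty, 0.5)

def choose(library, today):
    """Thompson sampling: pick the movie whose sampled value is highest."""
    samples = [sampled_value(m, today) for m in library]
    return int(np.argmax(samples))

# Tiny example library: one known favourite, one mediocre known movie, one unknown.
library = [
    {"title": "favourite", "rating": 9.0, "last_watch": 0},
    {"title": "mediocre",  "rating": 6.0, "last_watch": 0},
    {"title": "unknown",   "rating": None, "last_watch": None},
]
for day in (1, 30, 365, 5 * 365):
    print(day, library[choose(library, day)]["title"])
```

Early on, the unknown movie's wide prior wins a fair share of draws (exploration); as the favourite's penalty decays, its samples dominate and the chooser shifts toward exploitation.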
# Ngô Quốc Anh ## August 17, 2010 ### Evaluate complex integral via the Fourier transform Filed under: Giải tích 7 (MA4247) — Tags: — Ngô Quốc Anh @ 5:56 As suggested from this topic, we are interested in evaluating the following complex integral $\displaystyle G(t)=\mathop {\lim }\limits_{A \to \infty } \int\limits_{ - A}^A {{{\left( {\frac{{\sin x}} {x}} \right)}^2}{e^{itx}}dx}$. The trick here is to use the Fourier transform. Thanks to ZY for teaching me this interesting technique. In $\mathbb R$, the Fourier transform of function $f$, denoted by $\mathcal F[f]$, is defined to be $\displaystyle \mathcal F[f](y) = \int_{ - \infty }^\infty {f(x){e^{ - 2\pi ixy}}dx}$. If we apply the Fourier transform twice to a function, we get a spatially reversed version of the function. Precisely, $\displaystyle\begin{gathered} \mathcal{F}\left[ {\mathcal{F}[f]} \right](z) = \int_{ - \infty }^\infty {\mathcal{F}[f](y){e^{ - 2\pi iyz}}dy} \hfill \\ \qquad\qquad= \int_{ - \infty }^\infty {\mathcal{F}[f](y){e^{2\pi iy( - z)}}dy} \hfill \\ \qquad\qquad= {\mathcal{F}^{ - 1}}\left[ {\mathcal{F}[f]} \right]( - z) \hfill \\ \qquad\qquad= f( - z) \hfill \\ \end{gathered}$ where $\mathcal F^{-1}$ denotes the inverse Fourier transform. ## July 30, 2009 ### A couple of complex integrals involving exp(itx) for a real parameter t In this turn, I will consider a couple of examples of complex contour integrals with respect to variable $x$ involving the following factor $e^{itx}$ where $t$ a real parameter. Problem 1. Evaluate the integral $\displaystyle I\left( t \right) = \int\limits_{ - \infty }^\infty {\frac{{{e^{itx}}}} {{{{\left( {x + i} \right)}^2}}}dx}$ where $-\infty < t<\infty$. Solution. Let $\displaystyle {f_t}(z) = \frac{{{e^{itz}}}}{{{{(z + i)}^2}}}$ and consider first the case $t>0$. Then $|f_t(z)|$ is bounded in the upper half-plane by $\displaystyle\frac{1}{|z+i|^2}$. For $R>1$ let $\displaystyle C_R=\Gamma_R \cup [-R, R]$, where $\Gamma_R$ is the semicircle centered at the origin joining $R$ and $-R$, oriented counterclockwise. ## July 17, 2009 ### 3 indefinite integral problems involving sinx/x via residue Problem 1. Compute $\displaystyle\int\limits_{ - \infty }^\infty {\frac{{\sin x}} {x}dx}$ via complex variable methods. Problem 2. Compute $\displaystyle\int\limits_{ - \infty }^\infty {\frac{{\sin^2 x}} {x^2}dx}$ via complex variable methods. Problem 3. Compute $\displaystyle\int\limits_{ - \infty }^\infty {\frac{{\sin^3 x}} {x^3}dx}$ via complex variable methods.
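A quick numerical sanity check of where this is heading: assuming the standard sinc–rectangle Fourier transform pair and the convolution theorem, G(t) should come out to π(1 − |t|/2) for |t| ≤ 2 and 0 otherwise, which direct numerical integration supports.

```python
import numpy as np
from scipy.integrate import quad

def G_numeric(t, cutoff=10_000.0):
    # (sin x / x)^2 is even, so only the cosine part of e^{itx} contributes.
    f = lambda x: np.sinc(x / np.pi) ** 2          # np.sinc(y) = sin(pi y)/(pi y)
    val, _ = quad(f, 0.0, cutoff, weight="cos", wvar=t, limit=1000)
    return 2.0 * val

def G_expected(t):
    # Triangle function suggested by the double-Fourier-transform argument.
    return np.pi * max(0.0, 1.0 - abs(t) / 2.0)

for t in (0.0, 0.5, 1.0, 1.5, 2.5):
    print(f"t={t}: numeric={G_numeric(t):.3f}  expected={G_expected(t):.3f}")
```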
# Tag Info

35 Actually, it doesn't have the same mass, it has significantly less mass than its precursor star. Something like 90% of the star is blown off in the supernova event (Type II) that causes the black holes. The Schwarzschild radius is the radius at which, if an object's mass were compressed to a sphere of that size, the escape velocity at the surface would be ...

30 When you watch a pop-sci TV show, you need to take everything you see with a very healthy grain of salt. This is particularly the case if the show's host isn't a scientist, but even when a scientist is the host, you need to be suspicious. Stellar black holes do not turn into monsters that reach out and pluck objects from the heavens. From far away, a black ...

21 If you look closely at the crocodiles' tails you'll see that they wave their tails from side to side to provide propulsion for the jump. Compare this to a fish swimming: The side to side motion of the fish's tail propels it forward, and the crocodiles are using exactly the same sort of side to side motion to propel themselves upwards.

12 I've recently started trying to swim the butterfly. Unlike other swimming strokes, there doesn't seem to be any way to "go easy" and do a relaxing length of the pool: if I want to get my face out of the water to breathe, I essentially have to use the water to do a push-up. Now if a scrawny guy like me can lift his chin a few inches out of the water using ...

9 It actually goes the other way around: when a star collapses to form a black hole, its planets (if it has any) will become unbound and fly away to infinity. Simple reason: when the star explodes to form a compact object (neutron star or black hole), it releases most of its mass in the form of a SuperNova explosion, so that the central object around which ...

6 The upwards force comes from the rather violent tail movement. When the rest of the body is out of the water, the tail still acts sort of like a hydrofoil pushing the crocodile upwards, only not with a linear but oscillating motion, and obviously it's rather unstable but enough to get the whole animal up in the air for a short while.

6 I believe the explanation can be found in Manual of Harmonic Analysis and Prediction of Tides: In deriving mathematical expressions for the tide-producing forces of the moon and sun, the principal factors to be taken into consideration are the rotation of the earth, the revolution of the moon around the earth, the revolution of the earth around ...

5 The acceleration of the expansion is currently observed to be happening. This observation is one of the pieces of data we use to infer the amount of dark matter. It tells us that there can't be more than a certain amount of dark matter, because that would be incompatible with the observed acceleration.

4 The short answer is yes, the presence of dark matter would act to counter the expansion of the universe. And in fact it does--but not enough to stop the expansion. Dark matter has gravity just like normal matter. In fact, that's pretty much the only reason we know dark matter exists at all: we can observe dark matter's gravitational effects in the rotation ...

4 If you measure the large-distance strength of the gravitational acceleration $g\approx \frac{GM}{r^2}$ of a star / black hole with the assumption that your distance $r$ is much further out than the various mass parts, shock wave, and ejected material; then $g\approx \frac{GM}{r^2}$ is (within a percent or so) the same before and after the supernova. This is ...
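To put a number on the Schwarzschild radius mentioned in the first excerpt (the excerpt is cut off before giving the formula), a quick sketch using the textbook expression r_s = 2GM/c²:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

def schwarzschild_radius(mass_kg):
    """r_s = 2 G M / c^2, in metres."""
    return 2 * G * mass_kg / c**2

for solar_masses in (1, 3, 10):
    r = schwarzschild_radius(solar_masses * M_sun)
    print(f"{solar_masses} solar masses -> r_s ~ {r / 1000:.1f} km")
```

For one solar mass this gives roughly 3 km, which is why only very compact remnants can sit inside their own Schwarzschild radius.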
4 I'll take the question to be referring to solid rock. In reality, I think small asteroids are loose jumbles of rubble with a lot of vacuum between the rocks, and larger bodies like Ceres may have been liquid when they formed. Googling turned up [Scheuer 1981], which can be found online for free by googling. S/he estimates the maximum height of a mountain to ...

4 The masses can't repel each other because gravity is mediated by a spin 2 field, and for spin 2 the force between charges of equal signs is attractive. See the question Why is gravitation force always attractive? for an explanation of this. But it's impossible to say why the force can't be zero. Experiment shows that masses do attract each other, and ...

4 I posted a link to a summary paper on tides in a comment yesterday. That paper is Agnew, D. C. (2007), "Earth Tides", pp. 163-195 in Treatise on Geophysics: Geodesy, T. A. Herring, ed., Elsevier. That paper contains the answer to your question. I don't know how long that link will last, so I'll summarize some of what Agnew described. This is a summary ...

4 General relativity is only conformally invariant in two dimensions. This can be proven by making the transformation $g_{ab} \rightarrow \phi g_{ab}$, and seeing what transformation Einstein's equation${}^{1}$ makes. What you will find is that Einstein's equation will MOSTLY transform, but you will get terms proportional to $(d-2)(d-1)$ and derivatives of ...

3 g is telling you the constant rate at which the velocity is changing. So initially the velocity is 0 and after 1 second the velocity is 10 m/s. The average velocity during that first second is 5 m/sec so the mass has fallen 5 m. In the second second the initial velocity is 10 m/s and at the end of that second it is 20 m/s. The average velocity over that ...

3 You are not making a fundamental error, and your approach is in principle correct. What you get is a discretization error. What you are doing is evaluating the integral $$d=\int_{t_0}^{t_1}v(t)dt=\int_{t_0}^{t_1} gt\, dt,$$ which you approximated with a Riemann sum (maybe unintentionally?), i.e. $$d=\sum_{i=1}^N g t_i \Delta t.$$ You ...

3 A few thoughts to help you on your way. When an elevator is moving, you have to do work against gravity. You are changing the potential energy of the system. The faster the elevator moves, the more work per unit time is needed (because power = force times velocity). If you are changing the velocity of an object, you are changing its kinetic energy: if it's ...

3 Correct. For a sphere of uniform density, the acceleration drops off linearly. $$g = g_{surface} \frac{r}{R}$$ where $r$ is the location under consideration, $R$ is the radius of the sphere and $r < R$. Under such a scheme, gravity would be one half that at the surface. The earth is not a uniform sphere though. The outer crust is much less dense than ...

3 Your figure for being below water is not correct. As you descend in the ocean the ambient pressure increases by about 1 atm/10 meters. In a uniform sphere, the gravitational field is linear in the radius, zero at the center. Making the incorrect assumption that the earth is a uniform sphere, being down 1 km would decrease the gravitational acceleration by ...

3 Gravity may be treated as a quantum field theory. In this kind of theory, interactions are represented by field correlations, better known as "virtual particles", "virtual gravitons" in the case of gravity.
The fact that two charges (more precisely, in the case of gravitation, $2$ positive energy densities) attract each other is due to the sign ...

3 Newton's law does predict the bending of light. However it predicts a value that is a factor of two smaller than actually observed. The Newtonian equation for gravity produces a force: $$F = \frac{GMm}{r^2}$$ so the acceleration of the smaller mass, $m$, is: $$a = \frac{F}{m} = \frac{GM}{r^2}\frac{m}{m}$$ If the particle is massless then $m/m = 0/0$ ...

3 UPDATED: See below. Your NDSolve inputs seem to be doing what I would expect for a mass around a gravitational center. Using: a = 0; b = 0; traj = Table[ s = NDSolve[{x''[t] == -x[t]/((x[t] - a)^2 + (y[t] - b)^2)^(3/2), y''[t] == -y[t]/((x[t] - a)^2 + (y[t] - b)^2)^(3/2), x[0] == 1, y[0] == 0, x'[0] == 0, y'[0] == v}, {x, y}, {t, -20, ...

3 I am rather surprised that neither link posted above gives a simple discussion of the effect, so here it goes. Let us consider many asteroids of cubic shape, of constant density $\rho$, and of varying side $l$. We ask when, roughly, self-gravity will be able to perturb this shape into a spherical one. A cube of side $l$ has the same volume as a sphere of ...

2 Some simple scaling relations suffice to determine the size beyond which gravity prevents non-spherical rocks from forming: A molecule of mass $m$ is bound to a mass $M$ of linear size $R$ with gravitational binding energy approximately equal to $G M m / R$. If this gravitational binding energy far exceeds the molecular binding energy $E_b$, gravity will ...

2 You may define a conserved total stress-energy tensor (matter + gravitation). The main problem is that a conserved total stress-energy tensor is not covariant, and that a covariant stress-energy tensor is not conserved. Said differently, $\nabla^\mu T_{\mu\nu}=0$, which is a covariant equation, does not represent a conservation law, while $\partial^\mu( ...$

2 If your potential is $\propto 1/r$, you're effectively simulating gravity. If it gives you better intuition, imagine it as the earth around the sun. As long as your numerical solver is doing a decent job, you shouldn't expect the ball to spiral in, the correct solution would be a conic section, that is it would orbit the origin in an elliptical path if the ...

2 There is a difference between "feeling the force" and "being stretched". If you imagine two balls connected by a spring, and falling towards a massive object, then the closer ball will experience a greater force and therefore "accelerate away" from the ball that is further away - the spring between them will stretch, and thus provide a force balance. A ...

2 Yes, it is an extremely small effect but it exists in Einstein's general relativity. There is one case of a double star where their rotation around each other seems to lose energy at a rate that this phenomenon should give according to general relativity.

2 There are more spins than just 2. There are particles with spin zero (Higgs particle). Spin half (electrons, positrons, neutrinos, quarks, muons, etc.), spin 1 (photons, gauge bosons of weak interaction), spin 3/2, spin 2 (hypothetical gravitons). During attraction / repulsion there are 2 things that come into play: 1) The particles that get attracted / ...

2 The mathematical transformation from particle to antiparticle reverses the sign of the charge and the sign of the intrinsic parity. An antiparticle has the same (positive) mass and same spin as the corresponding particle.
In the theory of supersymmetry, the known particles have "superpartners" with different spin. Supersymmetry is a very ... Only top voted, non community-wiki answers of a minimum length are eligible
# Simplifying F# Type Provider Development

Posted by Dave Fancher on Feb 13, 2015. Estimated reading time: 9 minutes.

Type providers are one of the most interesting and empowering features of the F# 3.0 release. Properly written type providers make data access virtually frictionless in F# applications as they eliminate the need for manually developing and maintaining the types which correspond to the underlying data structures. This aspect is particularly important for data exploration tasks where many competing data access technologies require a fair amount of configuration before they're useful.

For all their strengths, type providers tend to be a bit of a black box; once referenced, they usually just work. Not being the type of developer that settles for magical incantations, I recently spent some time delving into their depths. Getting started with creating a new type provider wasn't nearly as difficult as I expected. Once I found the FSharp.TypeProviders.StarterPack NuGet package which conveniently wraps up the provided types from the F# 3.0 Sample Pack to simplify type provider creation, I was able to write a type provider rather quickly.

Despite the convenience afforded by the provided types, something that has really irked me about them is that despite being distributed as F# source files (.fs) they were designed in a highly imperative, object-oriented manner and don't really follow any F# idioms. This results in code that looks out-of-place among the surrounding F# code. Consider the following code from an ID3 type provider which defines a new ProvidedProperty instance and attaches some XML documentation before attaching the property to a provided type identified as ty:

let prop =
    ProvidedProperty(
        "AlbumTitle",
        typeof<string>,
        GetterCode = fun [tags] -> <@@ (((%%tags:obj) :?> Dictionary<string, ID3Frame>).["TALB"]).GetContent() |> unbox @@>)
prop.AddXmlDocDelayed (fun () -> "Gets the album title. Corresponds to the TALB tag.")

A similar pattern applies to creating provided methods as shown here:

let method =
    ProvidedMethod(
        "GetTag",
        [ ProvidedParameter("tag", typeof<string>) ],
        typeof<ID3Frame option>,
        InvokeCode = (fun [ tags; tag ] ->
            <@@ let tagDict = ((%%tags:obj) :?> Dictionary<string, ID3Frame>)
                if tagDict.ContainsKey(%%tag:string) then Some tagDict.[(%%tag:string)] else None @@>))
method.AddXmlDocDelayed (fun () -> "Returns an ID3Frame object representing the specific tag")

The syntax for both examples is straightforward — particularly for those with an object-oriented background. But to the F# programmer it's tedious; it requires intermediate bindings, doesn't play nicely with pipelining or function composition, and generally doesn't fit the spirit of the language. By writing a few simple functions that take advantage of F#'s statically resolved type parameters we can greatly improve the type provider development experience.

Although each of the provided type classes has its place within a type provider, it seems that the most commonly used are the ProvidedConstructor, ProvidedProperty, ProvidedMethod, and ProvidedParameter classes, so the remainder of this article will focus specifically on those types.
We begin by defining some simple factory functions to wrap calls to the respective constructors as follows. (These functions should be placed in a separate module. Consider decorating the module with the AutoOpen attribute for convenience.)

let inline makeProvidedConstructor parameters invokeCode =
    ProvidedConstructor(parameters, InvokeCode = invokeCode)

let inline makeReadOnlyProvidedProperty< ^T> getterCode propName =
    ProvidedProperty(propName, typeof< ^T>, GetterCode = getterCode)

let inline makeProvidedMethod< ^T> parameters invokeCode methodName =
    ProvidedMethod(methodName, parameters, typeof< ^T>, InvokeCode = invokeCode)

let inline makeProvidedParameter< ^T> paramName =
    ProvidedParameter(paramName, typeof< ^T>)

There isn't much to these functions but there are a few things to note. Foremost is that by writing these as curried functions it's trivial to compose specialized functions that better convey the intent of the provided members via partial application. For example, in our hypothetical ID3 tag type provider example, we could expose individual properties for each ID3 tag. Rather than repeating the code for each tag, changing only the property and tag names, we could compose a new makeTagProperty function that sets the property type to string, accepts the tag and property name, and automatically builds the getter code expression.

Next, each of the functions includes the inline modifier. This instructs the compiler to insert the method body at the call site in place of the function call, thus eliminating the associated overhead. The inline modifier is often used in conjunction with operators but can be useful for this type of function as well.

The most interesting aspect of each of the functions (except the makeProvidedConstructor function) is found in their use of generics. In each case generics are used to partially abstract away how the provided member's return type is specified. I prefer this approach as it provides consistency with other parts of the .NET Framework and isolates the calls to typeof to these functions. More important than the abstraction the generics provide is that rather than relying on regular type parameters the generics here use statically resolved type parameters, as indicated by the ^ prefix rather than the standard apostrophe. This distinction is of particular importance because it affects how the type provider is compiled. With regular generics the type parameters are resolved at run time as is always the case in C# and Visual Basic. With F#'s statically resolved type parameters, the parameter types are resolved at compile time which generally results in more efficient code. Furthermore, statically resolved type parameters also allow a variety of additional constraint types which are not allowed on regular type parameters. One such constraint type is the member constraint which will be highlighted shortly.

Thus far all we've really achieved with our helper functions is wrapping some constructors. While these provide some extra convenience, they do little to advance us toward our goal of writing more idiomatic F# code. To that end, let's turn our focus to the AddXmlDocDelayed method from the initial code sample. It would be nice to attach the XML documentation to the members as part of a pipeline or composition chain.
Despite the fact that the AddXmlDocDelayed method exists on each of the provided types we've discussed so far (except ProvidedParameter), in each case it stands alone — there is no single, unifying interface which we can reference in a new function. In fact, each of the provided member types simply derives from a corresponding MemberInfo implementation and doesn't reference any other interfaces. This leaves us with but a few options:

• Create individual functions for each provided member type,
• Use reflection,
• Use dynamic type-test patterns to get the appropriate type and corresponding method, or
• Channel some black magic from F#'s statically resolved type parameters to write a function constrained to executing against only those types that include an AddXmlDocDelayed method.

None of the first three options are appealing. Writing separate functions for each supported provided type is likely to cause future maintenance issues. Reflection is viable but requires additional error handling and won't provide any compile time support. Dynamic type-test patterns are more idiomatic F# and will certainly provide us with compile time checking but we must provide match cases for each allowable type. Only by leveraging a member-constrained statically resolved type parameter can we get compile time checking without being explicit about the types we want to work with. Here is the function in its entirety:

let inline addDelayedXmlComment comment providedMember =
    (^a : (member AddXmlDocDelayed : (unit -> string) -> unit) providedMember, (fun () -> comment))
    providedMember

Excluding the signature, the addDelayedXmlComment function is a mere two lines of code. Unlike the previous factory functions, we've allowed the compiler to infer more about the function by omitting the explicit type parameter from the function's signature, instead placing those details in the function body. The function body's first line can be likened to using reflection to obtain a reference to a MethodInfo instance representing the type's AddXmlDocDelayed method and calling its Invoke method, except here the resolution is occurring at compile time. Here, the type we're "reflecting" upon is denoted as ^a, which indicates that it's a statically resolved type parameter. Next is the member constraint indicating that ^a must have a member named AddXmlDocDelayed which accepts a function (unit -> string) and returns unit. Finally we pass in the arguments in tupled form with the first argument, providedMember, being akin to the object parameter supplied to MethodInfo.Invoke and the second argument being the function that AddXmlDocDelayed will use to generate the comment. The second line of the function simply returns the provided member. This allows the provided member to continue being passed along the function chain.

With these new helper functions in place, the original imperative provided property code can be rewritten as follows:

"AlbumTitle"
|> makeReadOnlyProvidedProperty<string>
    (fun [tags] -> <@@ (((%%tags:obj) :?> Dictionary<string, ID3Frame>).["TALB"]).GetContent() |> unbox @@>)
|> addDelayedXmlComment "Gets the album title. Corresponds to the TALB tag."
|> ty.AddMember
And the provided method can be rewritten like this:

"GetTag"
|> makeProvidedMethod<ID3Frame option>
    [ makeProvidedParameter<string> "tag" ]
    (fun [ tags; tag ] ->
        <@@ let tagDict = ((%%tags:obj) :?> Dictionary<string, ID3Frame>)
            if tagDict.ContainsKey(%%tag:string) then Some tagDict.[(%%tag:string)] else None @@>)
|> addDelayedXmlComment "Returns an ID3Frame object representing the specific tag"
|> ty.AddMember

Here it is apparent through pipelining that we're defining a read only string property or method, attaching some XML documentation, and adding the member to the provided type.

Given that most MP3 files have multiple string tags, it seems likely that the provided property code would be repeated for each of those tags. Rather than duplicating the code, changing only the tag and comment text, we can further leverage partial application of our helper functions to compose a specialized factory function:

let inline makeTagPropertyWithComment tag comment =
    let expr = fun [tags] -> <@@ (((%%tags:obj) :?> Dictionary<string, ID3Frame>).[tag]).GetContent() |> unbox @@>
    makeReadOnlyProvidedProperty<string> expr
    >> addDelayedXmlComment comment

The makeTagPropertyWithComment function uses the forward composition operator to compose a new function that first creates the provided property then adds the delayed XML comment. The function's return value is the provided property as evidenced by its signature:

string -> string -> (string -> ProvidedProperty)

As a result, we're free to pass the resulting provided property on to another function. By using this function, our tag property can be further reduced to:

"AlbumTitle"
|> makeTagPropertyWithComment "TALB" "Gets the album title. Corresponds to the TALB tag."
|> ty.AddMember

The difference between this version and the original, imperative version is astounding. Through some simple inline wrapper functions with statically resolved type parameters and member constraints, we've managed to reduce the code necessary for creating a provided property, adding delayed XML documentation, and attaching the property to a type by approximately 60%. In the process the code has become more idiomatic F# by eliminating the intermediate variable (prop) and replacing most direct method invocations with pipelined functions. What's more, extending these examples to provide additional functionality from the various provided types follows the same patterns we've just covered. It would be nice to see some of these techniques make it into future versions of F# or even the type provider starter pack. Perhaps it's time to submit a pull request!

Dave Fancher has been developing software with the .NET Framework for more than a decade. He is a familiar face in the Indiana development community as both a speaker and participant in user groups around the state. In July 2013, Dave was recognized as a Microsoft MVP (Most Valuable Professional) for Visual F#. When not writing code or writing about code at davefancher.com, he can often be found watching a movie or gaming on his Xbox One.
NZ Level 7 (NZC) Level 2 (NCEA)

Domain and Range of Absolute Value Functions II

## Interactive practice questions

Consider the function that has been graphed.

a) What is the domain of the function?

A. all real $x$
B. $x<3$
C. $x\ge0$
D. $x>0$

b) What is the range of the function? Give your answer as an inequality.

What is the domain of the function $f\left(x\right)=\left|6-x\right|$?

What is the range of the function $f\left(x\right)=\left|4-x\right|$?

Consider the function defined as $y=\left|4x+12\right|$.

### Outcomes

#### M7-2

Display the graphs of linear and non-linear functions and connect the structure of the functions with their graphs

#### 91257

Apply graphical methods in solving problems
I am reading this paragraph and I have a doubt.

"An adversary to PKC $\Pi$ is given by two probabilistic polynomial time algorithms, $A = (A1; A2)$. In the first stage, the "find" stage, the adversary analyzes the public key and tries to determine which plaintexts, when encrypted, are vulnerable to attack. This is the job of $A1$. In the second stage, the "guess" stage, the adversary $(A2)$ will be presented with a challenge ciphertext $y$, an encryption of one of the plaintexts he found in stage 1."

What is "new" about being given the challenge ciphertext $y$ if I already know its plaintext? How would I be able to mount an attack?

- Could you please provide a bit more context to your question? In particular, where did you see this paragraph, and what security property is it supposed to define? I can make some guesses based on what you wrote, but it would be nice to be able to tell for sure. – Ilmari Karonen Apr 30 '13 at 4:22

- What "new"? The word "new" never appears in the quote you provided, so I'm not sure where you got that from or why you are asking. Perhaps you might want to spell out any assumptions you are making in more detail. Here's a hint: Ask yourself, why would you expect the ciphertext or its corresponding plaintext to be new? Answer: The plaintext that's encrypted isn't necessarily new; it's not known to the attacker (it might be one of multiple possibilities, and the attacker doesn't know which), but that doesn't mean it's new. It can be unknown without being new. – D.W. Apr 30 '13 at 6:59

It's hard to be sure without seeing a bit more context, but the paragraph you quoted looks like it's part of a definition of IND-CPA security (ciphertext indistinguishability under a chosen-plaintext attack) for public-key ciphers. Here's the corresponding definition from the Wikipedia article I linked to above:

"For a probabilistic asymmetric key encryption algorithm, indistinguishability under chosen plaintext attack (IND-CPA) is defined by the following game between an adversary and a challenger. For schemes based on computational security, the adversary is modeled by a probabilistic polynomial time Turing machine, meaning that it must complete the game and output a guess within a polynomial number of time steps. In this definition $E(PK, M)$ represents the encryption of a message $M$ under the key $PK$:

1. The challenger generates a key pair $PK, SK$ based on some security parameter $k$ (e.g., a key size in bits), and publishes $PK$ to the adversary. The challenger retains $SK$.
2. The adversary may perform a polynomially bounded number of encryptions or other operations.
3. Eventually, the adversary submits two distinct chosen plaintexts $M_0, M_1$ to the challenger.
4. The challenger selects a bit $b \in \{0, 1\}$ uniformly at random, and sends the challenge ciphertext $C = E(PK, M_b)$ back to the adversary.
5. The adversary is free to perform any number of additional computations or encryptions. Finally, it outputs a guess for the value of $b$.

A cryptosystem is indistinguishable under chosen plaintext attack if every probabilistic polynomial time adversary has only a negligible "advantage" over random guessing. An adversary is said to have a negligible "advantage" if it wins the above game with probability $\tfrac12 + \epsilon(k)$, where $\epsilon(k)$ is a negligible function in the security parameter $k$, that is for every (nonzero) polynomial function $\mathrm{poly}()$ there exists $k_0$ such that $|\epsilon(k)| < \left|\tfrac{1}{\mathrm{poly}(k)}\right|$ for all $k > k_0$."
Unlike the definition you quoted, the Wikipedia version doesn't explicitly represent the adversary as two polynomial-time algorithms $A1$ and $A2$, but it does note that the total computational time used by the adversary must be polynomial, which amounts to the same thing. What the Wikipedia version does note explicitly, however, is that the adversary is supposed to choose (at least) two potentially vulnerable plaintexts, of which the challenger then encrypts one and sends it back to the adversary, who then tries to guess which of the plaintexts it corresponds to. The Wikipedia article also notes that the reason this isn't trivial is because the same plaintext can encrypt to many different ciphertexts: "Although the adversary knows $M_0, M_1$ and $PK$, the probabilistic nature of $E$ means that the encryption of $M_b$ will be only one of many valid ciphertexts, and therefore encrypting $M_0, M_1$ and comparing the resulting ciphertexts with the challenge ciphertext does not afford any non-negligible advantage to the adversary." -
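To make the game concrete, here is a toy sketch of the IND-CPA experiment (the function names and the deterministic example scheme are illustrative assumptions, not from the quoted definitions). It also shows why a deterministic public-key scheme cannot be IND-CPA secure: the adversary simply re-encrypts both chosen plaintexts and compares against the challenge.

```python
import hashlib
import secrets

def ind_cpa_game(keygen, encrypt, adversary_find, adversary_guess):
    """One round of the IND-CPA game; returns True if the adversary guesses b."""
    pk, _sk = keygen()
    m0, m1, state = adversary_find(pk)               # "find" stage (A1)
    b = secrets.randbits(1)                          # challenger's hidden bit
    challenge = encrypt(pk, (m0, m1)[b])             # challenge ciphertext y
    guess = adversary_guess(pk, challenge, state)    # "guess" stage (A2)
    return guess == b

def estimate_advantage(keygen, encrypt, a_find, a_guess, trials=10_000):
    wins = sum(ind_cpa_game(keygen, encrypt, a_find, a_guess) for _ in range(trials))
    return wins / trials - 0.5

# Toy deterministic "scheme" (NOT a real cipher): same plaintext -> same ciphertext.
def keygen():
    return secrets.token_bytes(16), None

def encrypt_det(pk, m):
    return hashlib.sha256(pk + m).digest()

def a_find(pk):
    return b"attack at dawn", b"retreat at dusk", None

def a_guess(pk, c, _state):
    # Re-encrypt one candidate and compare; works only because encryption is deterministic.
    return 0 if c == encrypt_det(pk, b"attack at dawn") else 1

print(estimate_advantage(keygen, encrypt_det, a_find, a_guess))  # ~0.5, i.e. total break
```

With a properly probabilistic scheme, re-encrypting M0 and M1 yields fresh ciphertexts that do not match the challenge, and the adversary's advantage collapses to something negligible, which is exactly the point of the quoted note.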
# How do you graph y-x=0? Jan 17, 2017 see explanation. #### Explanation: The equation $y - x = 0$ may be written as follows. add x to both sides of the equation. $y \cancel{- x} \cancel{+ x} = 0 + x$ $\Rightarrow y = x \text{ is the equation}$ This tells us that the x and y coordinates are equal. To graph, choose values of x $x = - 2 \to y = - 2 \Rightarrow \left(- 2 , - 2\right) \text{ is a point on graph}$ $x = 0 \to y = 0 \Rightarrow \left(0 , 0\right) \text{ is a point on graph}$ $x = 3 \to y = 3 \Rightarrow \left(3 , 3\right) \text{ is also a point on graph}$ Plot these 3 coordinate points and draw a straight line through them. Thus you have the graph of y - x = 0 graph{x [-10, 10, -5, 5]}
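If you want to check the picture with software (a quick sketch assuming matplotlib is available), plot the three points from the explanation together with the line y = x:

```python
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-10, 10, 100)
plt.plot(x, x, label="y = x  (i.e. y - x = 0)")              # the line itself
plt.scatter([-2, 0, 3], [-2, 0, 3], color="red", zorder=3)   # the three plotted points
plt.axhline(0, color="gray", linewidth=0.5)
plt.axvline(0, color="gray", linewidth=0.5)
plt.legend()
plt.show()
```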
Elias Koutsoupias
Professor of Computer Science
University of Athens
Panepistimiopolis, Ilissia
Athens 15784
Greece
phone: +30 210 7275122
fax: +30 210 7275114

G. Christodoulou and E. Koutsoupias. The price of anarchy of finite congestion games. STOC, pages 67--73, Baltimore, MD, USA, 22--24 May, 2005.

### Abstract

We consider the price of anarchy of pure Nash equilibria in congestion games with linear latency functions. For asymmetric games, the price of anarchy of maximum social cost is \Theta(\sqrt{N}), where N is the number of players. For all other cases of symmetric or asymmetric games and for both maximum and average social cost, the price of anarchy is 5/2. We extend the results to latency functions that are polynomials of bounded degree. We also extend some of the results to mixed Nash equilibria.

### Bib

@string{STOC05 = {37th ACM Symposium on Theory of Computing}}

@InProceedings{CK05,
  author    = {G. Christodoulou and E. Koutsoupias},
  title     = {The price of anarchy of finite congestion games},
  booktitle = STOC05,
  pages     = {67--73},
  year      = 2005,
  month     = {22--24 } # may,
  address   = {Baltimore, MD, USA}
}
## Preprint ims99-2 V. Kaloshin Generic Diffeomorphisms with Superexponential Growth of Number of Periodic Orbits Abstract: Consider a compact manifold M of dimension at least 2 and the space of $C^r$-smooth diffeomorphisms Diff$^r(M)$. The classical Artin-Mazur theorem says that for a dense subset D of Diff$^r(M)$ the number of isolated periodic points grows at most exponentially fast (call it the A-M property). We extend this result and prove that diffeomorphisms having only hyperbolic periodic points with the A-M property are dense in Diff$^r(M)$. Our proof of this result is much simpler than the original proof of Artin-Mazur. The second main result is that the A-M property is not (Baire) generic. Moreover, in a Newhouse domain ${\cal N} \subset \textup{Diff}^r(M)$, an arbitrary quick growth of the number of periodic points holds on a residual set. This result follows from a theorem of Gonchenko-Shilnikov-Turaev, a detailed proof of which is also presented. View ims99-2 (PDF format)
Article | Volume 100, Issue 6, P1568–1577, March 16, 2011

# Identifying Molecular Dynamics in Single-Molecule FRET Experiments with Burst Variance Analysis

Open Archive

## Abstract

Histograms of single-molecule Förster resonance energy transfer (FRET) efficiency are often used to study the structures of biomolecules and relate these structures to function. Methods like probability distribution analysis analyze FRET histograms to detect heterogeneities in molecular structure, but they cannot determine whether this heterogeneity arises from dynamic processes or from the coexistence of several static structures. To this end, we introduce burst variance analysis (BVA), a method that detects dynamics by comparing the standard deviation of FRET from individual molecules over time to that expected from theory. Both simulations and experiments on DNA hairpins show that BVA can distinguish between static and dynamic sources of heterogeneity in single-molecule FRET histograms and can test models of dynamics against the observed standard deviation information. Using BVA, we analyzed the fingers-closing transition in the Klenow fragment of Escherichia coli DNA polymerase I and identified substantial dynamics in polymerase complexes formed prior to nucleotide incorporation; these dynamics may be important for the fidelity of DNA synthesis. We expect BVA to be broadly applicable to single-molecule FRET studies of molecular structure and to complement approaches such as probability distribution analysis and fluorescence correlation spectroscopy in studying molecular dynamics.

## Introduction

Single-molecule Förster resonance energy transfer (FRET) is an important tool for studying the dynamics of biological molecules and has contributed to fields such as protein folding (Deniz et al.; Schuler and Eaton), nucleic acid structure and dynamics (Karymov et al.; Zhao and Rueda), and the function of polymerases (Kapanidis et al.; Santoso et al.; Liu et al.; Coban et al.). A common method for analyzing single-pair FRET data is through histograms, which report on the distribution of FRET efficiencies and corresponding donor-acceptor distances for a given molecular species (Kapanidis et al.; Santoso et al.; Liu et al.; Majumdar et al.; Gansen et al.).
Slide into action: dynamic shuttling of HIV reverse transcriptase on nucleic acid substrates. , • Majumdar D.S. • Smirnova I. • Kaback H.R. • et al. Single-molecule FRET reveals sugar-induced conformational dynamics in LacY. , • Gansen A. • Valeri A. • Seidel C.A. • et al. Nucleosome disassembly intermediates characterized by single-molecule FRET. ). Typically, FRET experiments focus on interpreting changes in mean FRET efficiency, which reflect structural changes in the molecules of interest. Besides mean FRET efficiency, the widths and shapes of these distributions also contain information ( • Majumdar D.S. • Smirnova I. • Kaback H.R. • et al. Single-molecule FRET reveals sugar-induced conformational dynamics in LacY. , • Gopich I.V. • Szabo A. Single-molecule FRET with diffusion and conformational dynamics. , • Nir E. • Michalet X. • Weiss S. • et al. Shot-noise limited single-molecule FRET histograms: comparison between theory and experiments. , • Antonik M. • Felekyan S. • Seidel C.A. • et al. Separating structural heterogeneities from stochastic variations in fluorescence resonance energy transfer distributions via photon distribution analysis. , • Kalinin S. • Felekyan S. • Seidel C.A. • et al. Characterizing multiple molecular states in single-molecule multiparameter fluorescence detection by probability distribution analysis. , • Laurence T.A. • Kong X.X. • Weiss S. • et al. Probing structural heterogeneities and fluctuations of nucleic acids and denatured proteins. , • Deniz A.A. • Dahan M. • Schultz P.G. • et al. Single-pair fluorescence resonance energy transfer on freely diffusing molecules: observation of Förster distance dependence and subpopulations. , • Watkins L.P. • Chang H.Y. • Yang H. Quantitative single-molecule conformational distributions: a case study with poly-(L-proline). , • Hanson J.A. • Yang H. • et al. Illuminating the mechanistic roles of enzyme conformational dynamics. ). Broad FRET distributions may indicate the presence of static heterogeneity, dynamic heterogeneity, or a combination of the two. Static heterogeneity is due to the coexistence of multiple species with static but distinct FRET efficiencies in the same sample, whereas dynamic heterogeneity is due to a single molecular species that fluctuates between multiple distinct FRET states. Dynamic heterogeneity is of special interest, since it can report on the relationship between the conformational states of a biomolecule and its mechanism of action ( • Henzler-Wildman K. • Kern D. Dynamic personalities of proteins. , • Henzler-Wildman K.A. • Thai V. • Kern D. • et al. Intrinsic motions along an enzymatic reaction trajectory. ). Recent methods such as probability distribution analysis (PDA) ( • Antonik M. • Felekyan S. • Seidel C.A. • et al. Separating structural heterogeneities from stochastic variations in fluorescence resonance energy transfer distributions via photon distribution analysis. , • Kalinin S. • Felekyan S. • Seidel C.A. • et al. Characterizing multiple molecular states in single-molecule multiparameter fluorescence detection by probability distribution analysis. , • Kalinin S. • Felekyan S. • Seidel C.A. • et al. Probability distribution analysis of single-molecule fluorescence anisotropy and resonance energy transfer. , • Santoso Y. • Torella J.P. • Kapanidis A.N. Characterizing single-molecule FRET dynamics with probability distribution analysis. , • Kalinin S. • Valeri A. • Seidel C.A. • et al. 
Detection of structural dynamics by FRET: a photon distribution and fluorescence lifetime analysis of systems with multiple states. ) and proximity ratio histogram analysis (PRH) ( • Nir E. • Michalet X. • Weiss S. • et al. Shot-noise limited single-molecule FRET histograms: comparison between theory and experiments. ) have helped interpret the widths of FRET distributions. These methods use the experimental distribution of photon counts from either single fluorescence bursts in solution-phase experiments (PRH) or equally sized time windows (PDA), to calculate the shot-noise-limited distribution of FRET values, i.e., the distribution corresponding to a single, static FRET value, broadened only by photon statistics (shot noise). Whereas a shot-noise-limited distribution is consistent with a single donor-acceptor distance and structural homogeneity, additional broadening indicates the presence of heterogeneity. Recent PDA extensions have made it possible to fit models of static or dynamic heterogeneity to these broad distributions ( • Nir E. • Michalet X. • Weiss S. • et al. Shot-noise limited single-molecule FRET histograms: comparison between theory and experiments. , • Kalinin S. • Felekyan S. • Seidel C.A. • et al. Characterizing multiple molecular states in single-molecule multiparameter fluorescence detection by probability distribution analysis. , • Santoso Y. • Torella J.P. • Kapanidis A.N. Characterizing single-molecule FRET dynamics with probability distribution analysis. , • Kalinin S. • Valeri A. • Seidel C.A. • et al. Detection of structural dynamics by FRET: a photon distribution and fluorescence lifetime analysis of systems with multiple states. ). However, it is difficult for either PDA or PRH to determine the exact origin of the broadening. Fluorescence correlation spectroscopy (FCS) methods have been used extensively to identify molecular dynamics ( • Gurunathan K. • Levitus M. Applications of fluorescence correlation spectroscopy to the study of nucleic acid conformational dynamics. ); however, it is difficult to resolve dynamics on the diffusion timescale with these methods, and they operate at the small-ensemble level, hindering the study of samples with multiple molecular subpopulations or imperfect fluorescent labeling. Although recent FCS-based methods have broadened the range of detectable dynamic timescales ( • Torres T. • Levitus M. Measuring conformational dynamics: a new FCS-FRET approach. , • Hohlbein J. • Steinhart M. • Hübner C.G. • et al. Confined diffusion in ordered nanoporous alumina membranes. ), they require additional experimental controls and are better suited for small-ensemble data. Moreover, although several studies have used correlation-based methods to resolve dynamics in single-molecule subpopulations ( • Eggeling C. • Fries J.R. • Seidel C.A. • et al. Monitoring conformational dynamics of a single molecule by selective fluorescence spectroscopy. , • Laurence T.A. • Kwon Y. • Barsky D. • et al. Correlation spectroscopy of minor fluorescent species: signal purification and distribution analysis. , • Laurence T.A. • Kwon Y. • Barsky D. • et al. Motion of a DNA sliding clamp observed by single molecule fluorescence spectroscopy. ), these suffer from the diffusion-timescale insensitivity of ensemble-based correlation methods. There is therefore a need for methods that can detect diffusion-timescale dynamics in single-molecule FRET experiments. 
Here, we introduce burst variance analysis (BVA), which directly detects dynamics in single-molecule FRET data by examining how FRET efficiency fluctuates over time in individual molecules. Whereas the standard deviation of FRET for a static molecule is a simple analytical function of its mean FRET, molecules with dynamic fluctuations in FRET are characterized by an increased standard deviation. BVA compares the static and experimentally observed standard deviations, using a strict statistical criterion to determine whether a given sample exhibits dynamic FRET fluctuations. We demonstrate the ability of BVA to distinguish between static and dynamic heterogeneity using simulations and experiments on both static and dynamic DNA standards. We also show that BVA can be used to analyze the shot-noise predictions generated by PDA, providing a second dimension along which to test models of biomolecular dynamics against experimental data. Finally, we apply BVA to study fingers-closing dynamics in the Klenow fragment (KF) of Escherichia coli DNA polymerase I (Pol I). This conformational change precedes nucleotide incorporation and is thought to contribute to the polymerase's impressive fidelity. We found evidence for previously unidentified fingers-closing dynamics in both KF-DNA (binary) and KF-DNA-deoxynucleotide triphosphate (dNTP) (ternary) complexes, which may be functionally important for the fidelity of DNA synthesis. ## Materials and Methods ### Single-molecule fluorescence Solution-phase single-molecule fluorescence experiments were performed using alternating laser excitation as described in previous studies ( • Kapanidis A.N. • Margeat E. • Ebright R.H. • et al. Initial transcription by RNA polymerase proceeds through a DNA-scrunching mechanism. , • Kapanidis A.N. • Lee N.K. • Weiss S. • et al. Fluorescence-aided molecule sorting: analysis of structure and interactions by alternating-laser excitation of single molecules. ). The excitation powers measured in continuous-wave mode at 532 and 638 nm were 200 μW and 80 μW, respectively, for DNA samples, and 400 μW and 60 μW, respectively, for KF samples. Samples were analyzed at a concentration of 10–50 pM to minimize multimolecule bursts. DNA samples were measured in 400 mM NaCl, 10 mM Tris-HCl, pH 8.0, 1 mM EDTA, and 100 μg/mL BSA; KF samples were measured in 40 mM HEPES-NaOH, pH 7.3, 10 mM MgCl2, 1 mM DTT, 100 μg/ml BSA, 5% glycerol, and 1 mM β-mercaptoethylamine. Fluorescent labeling and purification of DNA and KF samples is described in the Supporting Material. ### Data analysis and simulations Analysis software was written in MATLAB (The MathWorks, Natick, MA) or C++. Fluorescent bursts were detected as described previously ( • Santoso Y. • Kapanidis A.N. Probing biomolecular structures and dynamics of single molecules using in-gel alternating-laser excitation. ) and analyzed to determine the proximity ratio, E∗, the donor-excitation photon count, N, the burst duration, T, and the donor/acceptor stoichiometry, S. Unless otherwise noted, all data were thresholded using S ≥ 0.45, eliminating acceptor-only fluorescent species from our analysis. BVA was implemented as described (see Theory), and PDA was carried out as described ( • Santoso Y. • Torella J.P. • Kapanidis A.N. Characterizing single-molecule FRET dynamics with probability distribution analysis. ). BVA histograms were normalized such that the darkest shade represented the densest point on the histogram; white represented zero density. 
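As a concrete illustration of this burst-selection step, the short Python sketch below (not the MATLAB/C++ analysis code used in this work) applies the S ≥ 0.45 stoichiometry threshold to a handful of hypothetical per-burst quantities; all array names and values are illustrative.

```python
import numpy as np

# Hypothetical per-burst quantities produced by burst detection:
# N (donor-excitation photon count), S (donor/acceptor stoichiometry),
# and the acceptor-channel photon count used later for the proximity ratio.
N = np.array([62, 45, 80, 30])
S = np.array([0.55, 0.30, 0.60, 0.50])
n_acceptor = np.array([40, 10, 55, 25])

# S >= 0.45 removes acceptor-only species (low S) from the analysis
keep = S >= 0.45
print(f"kept {keep.sum()} of {keep.size} bursts")
E_star = n_acceptor[keep] / N[keep]   # proximity ratio E* of the retained bursts
```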
Simulation software was written in C++ and is described in the Supporting Material.

## Results and Discussion

### Theory

#### Proximity ratio

A fluorescence burst contains N donor-excitation photons, each detected in either the donor (D) or acceptor (A) channel. The photons arriving in each channel include contributions from both fluorescence, FD and FA, and background, BD and BA. Fluorescence due to leakage of donor fluorescence into the acceptor channel, and to direct excitation of the acceptor fluorophores by the donor-excitation laser ( • Lee N.K. • Kapanidis A.N. • Weiss S. • et al. Accurate FRET measurements within single diffusing biomolecules using alternating-laser excitation. ), can also be observed; we include these contributions in the acceptor fluorescence term, FA. The experimentally observed FRET, or proximity ratio E∗, therefore includes contributions from background, leakage, and direct excitation and is simply the ratio of photons observed in the acceptor channel to the total number of photons observed: $E^{*}=\dfrac{F_{A}+B_{A}}{F_{D}+B_{D}+F_{A}+B_{A}}=\dfrac{F_{A}+B_{A}}{N}$ (1) The proximity ratio is a standard FRET-based reporter for measuring relative distance changes between two fluorophores ( • Dahan M. • Deniz A.A. • Weiss S. • et al. Ratiometric measurement and identification of single diffusing molecules. ), and we use it throughout this work.

#### Probability distribution analysis

In its simplest form, PDA predicts the distribution of observed FRET efficiencies, P(E), when the true FRET efficiency, 〈E〉, is the same for all molecules, and the only source of P(E) broadening is photon statistics ( • Antonik M. • Felekyan S. • Seidel C.A. • et al. Separating structural heterogeneities from stochastic variations in fluorescence resonance energy transfer distributions via photon distribution analysis. ). For the proximity ratio E∗, this expected distribution is $P(E^{*})=\sum_{\text{all } F_{A},\,B_{D},\,B_{A}\ \text{yielding}\ E^{*}} P(F)\times P(F_{A}\mid\langle E^{*}\rangle,F),$ (2) where P(F) is the distribution of fluorescence photons per burst, and $〈E∗〉$ is the mean proximity ratio. Assuming no background, P(F) = P(N), the experimental distribution of photon counts. Moreover, $P(FA|〈E∗〉,F)$, the distribution of fluorescence photons in the acceptor channel (including leakage and direct excitation), follows a binomial distribution ( • Antonik M. • Felekyan S. • Seidel C.A. • et al. Separating structural heterogeneities from stochastic variations in fluorescence resonance energy transfer distributions via photon distribution analysis. ): $P(F_{A}\mid\langle E^{*}\rangle,F)=\binom{F}{F_{A}}\langle E^{*}\rangle^{F_{A}}\left(1-\langle E^{*}\rangle\right)^{F-F_{A}}$ (3) The mean FRET, $〈E∗〉$, is typically a floating parameter, which we fit by minimizing a reduced chi-square objective function, $χr2$ (see Supporting Material). For simplicity, we ignore the contribution of background fluorescence, which is negligible in our experiments (background counts of ≤6 kHz have a negligible effect on FRET histograms under typical experimental conditions ( • Nir E. • Michalet X. • Weiss S. • et al. Shot-noise limited single-molecule FRET histograms: comparison between theory and experiments. , • Santoso Y. • Torella J.P. • Kapanidis A.N. Characterizing single-molecule FRET dynamics with probability distribution analysis. )), though such contributions can be incorporated (Eq. S1, Eq. S2, and Eq. S3) ( • Antonik M. • Felekyan S. • Seidel C.A. • et al. Separating structural heterogeneities from stochastic variations in fluorescence resonance energy transfer distributions via photon distribution analysis. ).
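To make Eqs. 1–3 concrete, the following Python sketch computes the shot-noise-limited E∗ histogram expected for a single static species by marginalizing the binomial distribution of Eq. 3 over the experimental burst-size distribution, with background neglected as above. It is a minimal illustration under these assumptions, not the PDA implementation used in this work; the function and variable names are ours.

```python
import numpy as np
from scipy.stats import binom

def shot_noise_histogram(burst_sizes, mean_E, bin_edges):
    """Shot-noise-limited P(E*) prediction (Eqs. 2-3), assuming no background.

    burst_sizes : array of per-burst photon counts N (stand-in for P(F) ~ P(N))
    mean_E      : assumed single, static mean proximity ratio <E*>
    bin_edges   : histogram bin edges along the E* axis
    """
    hist = np.zeros(len(bin_edges) - 1)
    for N in burst_sizes:
        fa = np.arange(N + 1)                    # possible acceptor counts F_A
        p_fa = binom.pmf(fa, N, mean_E)          # binomial P(F_A | <E*>, F) (Eq. 3)
        e_star = fa / N                          # E* = F_A / N (Eq. 1, no background)
        idx = np.clip(np.digitize(e_star, bin_edges) - 1, 0, len(hist) - 1)
        np.add.at(hist, idx, p_fa)               # accumulate probability mass per bin
    return hist / hist.sum()

# usage: compare the prediction with the measured E* histogram
sizes = np.random.randint(30, 120, size=2000)    # stand-in for measured burst sizes
pred = shot_noise_histogram(sizes, mean_E=0.5, bin_edges=np.linspace(0, 1, 41))
```

Comparing such a prediction with the measured E∗ histogram is the basis of the shot-noise test described above.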
Extensions of PDA to predict the shot-noise-limited histograms of samples with static or dynamic heterogeneity are presented in Eq. S4, Eq. S5, Eq. S6, Eq. S7, and Eq. S8 in the Supporting Material, and in detail elsewhere ( • Kalinin S. • Felekyan S. • Seidel C.A. • et al. Characterizing multiple molecular states in single-molecule multiparameter fluorescence detection by probability distribution analysis. , • Santoso Y. • Torella J.P. • Kapanidis A.N. Characterizing single-molecule FRET dynamics with probability distribution analysis. , • Kalinin S. • Valeri A. • Seidel C.A. • et al. Detection of structural dynamics by FRET: a photon distribution and fluorescence lifetime analysis of systems with multiple states. ). In a previous publication, we developed a method to predict the shot-noise-limited FRET histograms of molecules with kinetic schemes of arbitrary complexity ( • Santoso Y. • Torella J.P. • Kapanidis A.N. Characterizing single-molecule FRET dynamics with probability distribution analysis. ). Whereas this method relied on the simplifying assumption of a uniform photon arrival-time distribution, we have since modified the method to incorporate the experimental distribution of arrival times, improving the accuracy of our PDA predictions. We refer to this method as arrival-time PDA (explained in detail in the Supporting Material); the arrival-time-PDA method preserves photon arrival-time information, allowing the PDA prediction itself to be analyzed by BVA for the presence of dynamics. For validation of the arrival-time-PDA method, see Fig. S2 in the Supporting Material. #### Burst variance analysis If the observed FRET distribution is broader than the expected shot-noise distribution, PDA can be used to fit for multiple static components, or multiple interconverting states ( • Nir E. • Michalet X. • Weiss S. • et al. Shot-noise limited single-molecule FRET histograms: comparison between theory and experiments. , • Kalinin S. • Felekyan S. • Seidel C.A. • et al. Characterizing multiple molecular states in single-molecule multiparameter fluorescence detection by probability distribution analysis. , • Santoso Y. • Torella J.P. • Kapanidis A.N. Characterizing single-molecule FRET dynamics with probability distribution analysis. , • Kalinin S. • Valeri A. • Seidel C.A. • et al. Detection of structural dynamics by FRET: a photon distribution and fluorescence lifetime analysis of systems with multiple states. ) (also see the Supporting Material); however, it cannot discriminate between these two sources of heterogeneity. In its simplest form, we use BVA to determine whether the observed broadening of the $E∗$ distribution is due to dynamics. Whereas PDA examines the heterogeneity in FRET among all molecules in a sample, BVA analyzes the heterogeneity in the FRET of individual molecules over time (Fig. 1). For static heterogeneity, the width of the E∗ distribution expands beyond shot noise, because different molecules have different originating FRET values; however, the FRET distribution over time for any individual static molecule is consistent with a shot-noise-limited distribution (Fig. 1 A, magenta); in contrast, this single-molecule FRET distribution will be wider than shot noise if the molecule exhibits FRET dynamics (Fig. 1 A, blue). In BVA, we test for dynamics by comparing the expected shot-noise-limited standard deviation for a given mean E∗, σE∗, against the observed standard deviation, sE∗, for individual molecules. 
For a static species, the expected standard deviation due to shot noise, σE∗, depends only on photon statistics. Any set of n consecutive photons will follow a binomial distribution with respect to emission in the donor and acceptor channels. Assuming no background, the expected standard deviation of FA is that of a binomial, $\sqrt{nE^{*}(1-E^{*})}$, with the standard deviation of $E^{*}=F_{A}/n$ being $\sigma_{E^{*}}=\sqrt{\dfrac{E^{*}(1-E^{*})}{n}}.$ (4) To calculate the experimental burstwide standard deviation, $si$, we segment each burst, i, into Mi consecutive (and nonoverlapping) windows of n photons each (where Mi is the maximum number of windows in burst i; Fig. 1 B), and calculate the standard deviation of all windows within the burst: $s_{i}=\sqrt{\dfrac{1}{M_{i}}\sum_{j=1}^{M_{i}}\left(\varepsilon_{ij}-\mu_{i}\right)^{2}},\quad\text{where}\quad\mu_{i}=\dfrac{1}{M_{i}}\sum_{j=1}^{M_{i}}\varepsilon_{ij},$ (5) where ɛij is the proximity ratio of window j in burst i, and μi is the mean FRET of all such windows in burst i. In this work, we set n = 5 (rationale and discussion of window size effects are provided in the Supporting Material). Individual bursts often contain only a few photon windows, resulting in large errors in the calculated si. To increase the statistical power of BVA, we segment the E∗ axis into R bins, each centered on a given value of E∗ and bearing a width w. For each bin, we calculate the standard deviation, sE∗, of all windows belonging to bursts in the interval $L\le E_{i}^{*}<U$, where $L=(E^{*}-w/2)$ and $U=(E^{*}+w/2)$ are lower and upper bounds, respectively, of the bin (Fig. 1 C), $s_{E^{*}}=\sqrt{\dfrac{\sum_{i:\,L\le E_{i}^{*}<U}\ \sum_{j=1}^{M_{i}}\left(\varepsilon_{ij}-\mu\right)^{2}}{\sum_{i:\,L\le E_{i}^{*}<U}M_{i}}},$ (6) where $E_{i}^{*}$ is the proximity ratio of burst $i$, and $\mu$ is the mean FRET of all windows belonging to bursts with $L\le E_{i}^{*}<U$. Unless otherwise indicated, we define R = 20 bins, each with a width of 0.05 along the $E∗$ axis; for instance, the bin centered on $E^{*}=0.5$ includes windows from all bursts with $0.475\le E_{i}^{*}<0.525$. In this work, we consider sE∗ values only from those bins with at least 50 bursts, to ensure that any dynamics detected are representative of the sample. For simplicity, we ignore the contribution of background to BVA; as in PDA, such contributions are usually negligible (see Fig. S3). We note that, like any method of detecting FRET dynamics, BVA may be sensitive to dynamic changes in fluorophore quantum yield or orientation factor that also give rise to dynamic changes in FRET. It is therefore important to ensure that these artifacts are identified and eliminated with proper controls or else occur on timescales distinct from the dynamics of interest ( • Nir E. • Michalet X. • Weiss S. • et al. Shot-noise limited single-molecule FRET histograms: comparison between theory and experiments. , • Kalinin S. • Sisamakis E. • Seidel C.A. • et al. On the origin of broadening of single-molecule FRET efficiency distributions beyond shot noise limits. , • Chung H.S. • Louis J.M. • Eaton W.A. Distinguishing between protein dynamics and dye photophysics in single-molecule FRET experiments. ).

#### Confidence intervals

We calculate upper-limit confidence intervals on $σE∗$ by considering the sampling distribution of standard deviations, P(σ), expected for M windows of n photons. Although this distribution has an approximate analytical solution (see Eq. S9 and accompanying text), we use a computationally expensive, but more precise, Monte Carlo approach to calculate P(σ). To implement the Monte Carlo approach, we simulate the sampling distribution of σ, $\sigma=\sqrt{\dfrac{\sum_{i:\,L\le E_{i}^{*}<U}\ \sum_{j=1}^{M_{i}}\left(F_{A_{ij}}/n-\mu\right)^{2}}{\sum_{i:\,L\le E_{i}^{*}<U}M_{i}}},$ (7) where $F_{A_{ij}}$ are random variables drawn from a binomial distribution with n trials (i.e., the number of photons per window) and success probability E∗.
We define the resulting Monte Carlo distribution as $PMC(σ)$. We use the distribution $PMC(σ)$ to calculate the upper-tail confidence interval on the standard deviation, $sE∗CI$, and test for dynamics by comparing it to the observed sE∗. As we calculate this interval on R = 20 data bins, we employ a Bonferroni correction for multiple hypothesis testing (implementation described in Abidi ( • Abidi H. Bonferroni and Sidak corrections for multiple comparisons. ), and in the Supporting Material). Unless otherwise indicated, we set our per-experiment confidence level to $α=.001$; deviations beyond the value of sE∗ corresponding to this level should reflect the presence of dynamics. ### Simulation results #### BVA can distinguish between static and dynamic heterogeneity We first tested whether BVA can distinguish between static and dynamic heterogeneity as sources of broadening in E∗ distributions. In addition to a series of single, static FRET species (Fig. S5), we simulated an equimolar mixture of three species with distinct but static FRET efficiencies ($〈E1∗〉=0.4$, $〈E2∗〉=0.5$, and $〈E3∗〉=0.6$) and analyzed it via PDA (Fig. 2 A, upper). As expected, the PDA prediction assuming these static species achieved a good fit to the data (black line; $χr2=1.07$); however, the same data could also be fit well by considering a single species with two dynamically interconverting states. To be specific, we performed a two-parameter fit assuming symmetry about E = 0.5 and equal forward and backward kinetic rates (red line; $〈E1∗〉=0.373±0.003$, $〈E2∗〉=0.627±0.003$, $k1→2=k2→1=883±17s−1$; $χr2=1.19$). The ability of both static and dynamic PDA predictions to account for the observed E∗ histogram (Fig. 2 A, upper; $χr2<2$) demonstrates the difficulty of resolving static versus dynamic heterogeneity with PDA alone. We then analyzed the static sample with BVA; as expected, all sE∗ values fell well within the predicted 99.9% confidence interval (Fig. 2 A, lower, triangles in gray region) correctly suggesting that the observed heterogeneity was due to static, rather than dynamic, sources. We next simulated a simple, two-state dynamic sample (Fig. 2 B) using the FRET values and first-order rate constants from the dynamic PDA prediction in Fig. 2 A ($〈E1∗〉=0.373$, $〈E2∗〉=0.627$, $k1→2=k2→1=883s−1$). Again, PDA predictions assuming either the given two-state dynamic model, or a three-species static model ($〈E1∗〉=0.382±0.002$, $〈E2∗〉=0.5$ (fixed), $〈E3∗〉=0.611±0.001$), could account for the observed E∗ distribution (Fig. 2 B, upper; $χr2<2$). BVA, however, showed a clear increase in sE∗ beyond the confidence interval for intermediate values of E∗, indicating dynamics (Fig. 2 B, lower, red triangles). Therefore, despite the similarity of E∗ histograms resulting from static or dynamic heterogeneity, BVA could correctly determine the type of heterogeneity present. For the dynamic sample, the sE∗ values at intermediate E∗ are above the confidence interval, whereas those nearer to the E∗ of each individual state are not. This occurs because the diffusion time of the molecules (∼1 ms) is similar to the timescale of dynamics (in our simulations, molecules fluctuate on a timescale of $1/k1→2=1/k2→1≈1.1ms$); some molecules will therefore sample only one state (e.g., E∗ ∼ 0.3) during their diffusion through the confocal spot, producing an E∗ value characteristic of the sampled state, and giving rise to an sE∗ value consistent with static behavior. 
Molecules with an intermediate E∗ value, however, sample both FRET states, and therefore show an increased sE∗.

#### BVA detects FRET dynamics in a timescale-dependent fashion

Studies on dynamic systems have shown that although FRET histograms broaden in response to dynamics near the diffusion timescale, they appear shot-noise-limited when dynamics are much faster or much slower than diffusion ( • Nir E. • Michalet X. • Weiss S. • et al. Shot-noise limited single-molecule FRET histograms: comparison between theory and experiments. ). In the former case, molecules interconvert so rapidly that the FRET efficiencies of their states average out, and each burst exhibits an apparently constant (intermediate) FRET efficiency; in the latter, molecules interconvert so slowly that every burst is spent in one state or the other, yielding a shot-noise-limited population corresponding to each state. As BVA detects dynamics through intraburst FRET fluctuations, we expected it to show a timescale dependence similar to that observed when analyzing E∗ histogram broadening alone. To test the ability of BVA to detect dynamics on different timescales, we studied the effects of fluctuation timescales on both E∗ histograms and the sE∗ calculated with BVA. We simulated dynamic species fluctuating between two FRET states, $〈E1∗〉=0.3$ and $〈E2∗〉=0.7$, at timescales on the order of diffusion ($k_{1\to2}=k_{2\to1}=10^{3}\,\mathrm{s}^{-1}$), or three orders of magnitude above ($k_{1\to2}=k_{2\to1}=10^{6}\,\mathrm{s}^{-1}$) or below it ($k_{1\to2}=k_{2\to1}=1\,\mathrm{s}^{-1}$). As expected, the molecule fluctuating on the diffusion timescale exhibited broadening by PDA, and an increased sE∗ (Fig. 3 B). In contrast, molecules fluctuating much slower or faster than the diffusion timescale appeared static by both PDA and BVA (Fig. 3, A and C); thus, at timescales >3 orders of magnitude slower or faster than diffusion, BVA could not detect dynamics. To determine the timescales over which BVA can detect dynamics, we simulated the same two-state fluctuation ($〈E1∗〉=0.3$, $〈E2∗〉=0.7$) at timescales from $10^{6}\,\mathrm{s}^{-1}$ to $10^{0}\,\mathrm{s}^{-1}$. To quantify our ability to detect dynamics, we calculated a dynamic score (DS), the sum of squared residuals between the observed standard deviation sE∗ and the upper-tail confidence interval $sE∗CI$ for all significant sE∗ (i.e., those above the confidence interval): $\mathrm{DS}=\sum_{\left(s_{E^{*}}-s_{E^{*}}^{\mathrm{CI}}>0\right)}\left(s_{E^{*}}-s_{E^{*}}^{\mathrm{CI}}\right)^{2}$ (8) The DS is a least-squares-like objective function providing an intuitive measure of dynamics: the DS is zero when all sE∗ are within the confidence interval and the molecule appears static, and nonzero when there is strong evidence for dynamics (i.e., some $s_{E^{*}}-s_{E^{*}}^{\mathrm{CI}}>0$). Using the same two-state dynamic species ($〈E1∗〉=0.3$, $〈E2∗〉=0.7$), we calculated the DS over many timescales, and detected dynamics over four orders of magnitude (Fig. 3 D, black line). BVA was most sensitive to FRET fluctuations near the diffusion timescale, regardless of diffusion coefficient ($D=3.0\times10^{6}$, $3.0\times10^{7}$, or $3.0\times10^{8}\,\mathrm{nm^{2}\,s^{-1}}$); in all cases, the maximal DS coincided with the mean molecular diffusion time (Fig. 3 D, arrow). As diffusion time depends on the dimensions of the confocal spot, we expect features of the experimental setup to affect the sensitivity of BVA to dynamics. We also tested the effects of photon window size, n (Fig. S4), and fluctuation amplitude, $Δ〈E∗〉=|〈E2∗〉−〈E1∗〉|$ (Fig. 3 E), on the ability of BVA to detect dynamics. Fluctuation amplitude had a large impact, with a doubling of fluctuation amplitude yielding a roughly fourfold increase in DS at the diffusion timescale.
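The quantities defined in Eqs. 4–8 can be summarized in a short sketch. The Python code below is an illustration under the same no-background assumption, not the analysis code used in this work; the binning details, window handling, and function names are simplified assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5  # photons per window, as in the main text

def burst_windows(is_acceptor, n=n):
    """Proximity ratios of the non-overlapping n-photon windows of one burst (input to Eq. 5)."""
    M = len(is_acceptor) // n
    return is_acceptor[:M * n].reshape(M, n).mean(axis=1) if M else np.empty(0)

def shot_noise_sd(E, n=n):
    """Expected shot-noise-limited standard deviation sigma_E* for a static species (Eq. 4)."""
    return np.sqrt(E * (1.0 - E) / n)

def binned_sd(burst_E, burst_wins, center, w=0.05):
    """Standard deviation s_E* of all windows from bursts falling in one E* bin (Eq. 6)."""
    pooled = [wins for E, wins in zip(burst_E, burst_wins)
              if wins.size and center - w / 2 <= E < center + w / 2]
    eps = np.concatenate(pooled) if pooled else np.empty(0)
    return (eps.std() if eps.size else np.nan), sum(len(p) for p in pooled)

def mc_upper_limit(center, n_windows, alpha=0.001, n_bins=20, n_sim=2000):
    """Monte Carlo upper confidence limit on the static standard deviation (Eq. 7),
    Bonferroni-corrected for the number of E* bins tested."""
    if n_windows < 2:
        return np.nan
    sims = rng.binomial(n, center, size=(n_sim, n_windows)) / n
    return np.quantile(sims.std(axis=1), 1.0 - alpha / n_bins)

def dynamic_score(s_obs, s_ci):
    """Dynamic score: squared excess of observed s_E* over the confidence limit (Eq. 8)."""
    excess = np.asarray(s_obs) - np.asarray(s_ci)
    return np.nansum(np.where(excess > 0, excess ** 2, 0.0))
```

In this sketch, a bin would be flagged as dynamic when its binned standard deviation exceeds the Monte Carlo limit computed for the same bin, and the dynamic score summarizes these excesses across all bins.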
### Experimental results #### BVA can detect dynamic heterogeneity in DNA samples examined using smFRET For the experimental validation of BVA, we prepared a series of DNAs modeled after a DNA hairpin standard (Fig. S1). The hairpin is a stem-loop structure that interconverts dynamically between an open and a closed conformation on the timescale of diffusion (Figs. 4 A and Fig. S1 B) ( • Santoso Y. • Torella J.P. • Kapanidis A.N. Characterizing single-molecule FRET dynamics with probability distribution analysis. , • Wallace M.I. • Ying L.M. • Klenerman D. • et al. FRET fluctuation spectroscopy: exploring the conformational dynamics of a DNA hairpin loop. , • Ying L.M. • Wallace M.I. • Klenerman D. Two-state model of conformational fluctuation in a DNA hairpin-loop. ). The donor and acceptor fluorophores are on the stem and loop, respectively, such that opening and closing result in large FRET changes. To test whether BVA can distinguish between static and dynamic FRET species, we prepared both a dynamic hairpin and hairpin-like static controls, which remain permanently in either the closed or open conformation (Fig. S1, C and D). We verified that the individual closed and open hairpin conformations are static by testing a mixture of control hairpins (Fig. 4 B); each $E∗$ distribution was broader than expected from shot noise alone (Fig. S6), and consistent with a Gaussian distribution of $〈E∗〉$ in the range of 0.15–0.23 nm, as seen previously ( • Nir E. • Michalet X. • Weiss S. • et al. Shot-noise limited single-molecule FRET histograms: comparison between theory and experiments. , • Antonik M. • Felekyan S. • Seidel C.A. • et al. Separating structural heterogeneities from stochastic variations in fluorescence resonance energy transfer distributions via photon distribution analysis. , • Santoso Y. • Torella J.P. • Kapanidis A.N. Characterizing single-molecule FRET dynamics with probability distribution analysis. , • Kalinin S. • Sisamakis E. • Seidel C.A. • et al. On the origin of broadening of single-molecule FRET efficiency distributions beyond shot noise limits. ). Such heterogeneity has been attributed to either acceptor dye photophysics, or long-lived states in which fluorophores occupy different positions and/or orientations with respect to the DNA ( • Antonik M. • Felekyan S. • Seidel C.A. • et al. Separating structural heterogeneities from stochastic variations in fluorescence resonance energy transfer distributions via photon distribution analysis. , • Kalinin S. • Sisamakis E. • Seidel C.A. • et al. On the origin of broadening of single-molecule FRET efficiency distributions beyond shot noise limits. ). Consistent with this proposed quasistatic heterogeneity, BVA analysis showed no evidence for dynamics in either of the two control DNA populations (Fig. 4 B). We then analyzed the DNA hairpin, which interconverts at the millisecond timescale between the FRET states represented by the two controls, and should yield a large dynamic signal, as we detected previously using a simple form of BVA ( • Santoso Y. • Kapanidis A.N. Probing biomolecular structures and dynamics of single molecules using in-gel alternating-laser excitation. ). As expected, BVA revealed a dramatic increase in sE∗ at intermediate E∗ (Fig. 4 C, red triangles); in these bursts, the hairpin switched between the open and closed conformations during its transit through the confocal volume, yielding an intermediate E∗ and high sE∗. 
Previously, hairpin dynamics were proposed to occur via a simple two-state kinetic model ( • Santoso Y. • Torella J.P. • Kapanidis A.N. Characterizing single-molecule FRET dynamics with probability distribution analysis. , • Ying L.M. • Wallace M.I. • Klenerman D. Two-state model of conformational fluctuation in a DNA hairpin-loop. , • Bonnet G. • Krichevsky O. • Libchaber A. Kinetics of conformational fluctuations in DNA hairpin-loops. ) wherein the hairpin fluctuates between open and closed conformations with first-order rate constants $kclose$ and $kopen$ ; we thus tested whether the observed $E∗$ distribution and sE∗ values were consistent with such a model. We first used PDA to fit the data to a two-state dynamic model while fixing the FRET and Gaussian broadening of each state equal to that of the static controls (Fig. 4 C, upper). We obtained best-fit kinetic values of $kopen=641±9s−1$ and $kclose=463±14s−1$, which yielded a good fit to the data ($χr2=1.53$) and an overall reaction time of $τR=1/(kopen+kclose)=0.91ms$, consistent with the results of previous correlation-based analyses ($τR$ of 0.5–1.0 ms) ( • Santoso Y. • Kapanidis A.N. Probing biomolecular structures and dynamics of single molecules using in-gel alternating-laser excitation. , • Wallace M.I. • Ying L.M. • Klenerman D. • et al. FRET fluctuation spectroscopy: exploring the conformational dynamics of a DNA hairpin loop. ). We then used BVA to determine the sE∗ of this PDA prediction, and found close agreement between the sE∗ of the data and the two-state kinetic model (Fig. 4 C, green triangles); in contrast, the sE∗ data could not be explained by a PDA prediction assuming a distribution of static underlying E∗ values (Fig. 4 C, blue triangles; blue line, upper). This supported earlier PDA-based work suggesting that a two-state dynamic model can account for the dynamics of the hairpin on the timescale of diffusion ( • Torres T. • Levitus M. Measuring conformational dynamics: a new FCS-FRET approach. ). We note, however, that our best-fit solution exhibits small but systematic deviations in sE∗ compared to the data, possibly due to the presence of minor additional states. Indeed, recent work found that a similar hairpin fluctuated with double-exponential kinetics, suggesting an intermediate state between the open and closed forms ( • Jung J.Y. • Van Orden A. A three-state mechanism for DNA hairpin folding characterized by multiparameter fluorescence fluctuation spectroscopy. ). Overall, our analysis demonstrates that BVA can both detect dynamics in experimental samples and test models of molecular heterogeneity against experimental data. #### Dynamics in the Klenow fragment of E. coli DNA Pol I The bacterial DNA Pol I is an essential component of DNA replication and repair, and it exhibits remarkable fidelity in selecting the correct template-directed nucleotide for addition to a growing DNA chain. Substantial effort has been invested in studying how this fidelity is achieved, with many reports pointing to conformational changes in the polymerase prior to nucleotide incorporation, and especially to a “fingers-closing” transition during which the polymerase forms a tight pocket around both its DNA substrate and an incoming nucleotide, positioning them for catalysis ( • Li Y. • Korolev S. • Waksman G. Crystal structures of open and closed forms of binary and ternary complexes of the large fragment of Thermus aquaticus DNA polymerase I: structural basis for nucleotide incorporation. , • Joyce C.M. • Benkovic S.J. 
DNA polymerase fidelity: kinetics, structure, and checkpoints. ) (Fig. 5 A). Previously, we used single-molecule FRET to monitor the fingers-closing transition in the Klenow Fragment (KF) of E. coli DNA Pol I by labeling it with donor and acceptor fluorophores on the fingers and thumb KF subdomains, respectively ( • Coban O. • Lamb D.C. • Nienhaus G.U. • et al. Conformational heterogeneity in RNA polymerase observed by single-pair FRET microscopy. , • Torres T. • Levitus M. Measuring conformational dynamics: a new FCS-FRET approach. ). We showed that fingers-closing and opening occurs dynamically in the absence of a DNA template; this was based on the fact that E∗ distributions of the unliganded KF were too wide to be accounted for by either one or two shot-noise-limited distributions, and that the unliganded KF appeared dynamic via FCS-based methods and an early form of BVA ( • Coban O. • Lamb D.C. • Nienhaus G.U. • et al. Conformational heterogeneity in RNA polymerase observed by single-pair FRET microscopy. ). Recently, we showed that the E∗ histogram for unliganded KF was consistent with a simple two-state kinetic model using PDA ( • Torres T. • Levitus M. Measuring conformational dynamics: a new FCS-FRET approach. ), such that unliganded KF appears to fluctuate between its open and closed states on a millisecond timescale, with rates in agreement with the first report of KF dynamics ( • Coban O. • Lamb D.C. • Nienhaus G.U. • et al. Conformational heterogeneity in RNA polymerase observed by single-pair FRET microscopy. ). For the DNA-Pol binary complex and DNA-Pol-dNTP ternary complex, however, the results were less clear: FCS-based methods failed to reveal clear dynamics in these complexes, despite the apparent existence of both open and closed conformations in equilibrium ( • Santoso Y. • Joyce C.M. • Kapanidis A.N. • et al. Conformational transitions in DNA polymerase I revealed by single-molecule FRET. , • Santoso Y. • Torella J.P. • Kapanidis A.N. Characterizing single-molecule FRET dynamics with probability distribution analysis. ), and stopped-flow data showing diffusion-timescale fingers-closing during ternary complex formation ( • Joyce C.M. • Potapova O. • Grindley N.D. • et al. Fingers-closing and other rapid conformational changes in DNA polymerase I (Klenow fragment) and their role in nucleotide selectivity. ). Moreover, the distributions could be fitted well by either sums of (static) Gaussian distributions ( • Santoso Y. • Joyce C.M. • Kapanidis A.N. • et al. Conformational transitions in DNA polymerase I revealed by single-molecule FRET. ) or by a PDA-based two-state dynamic model ( • Santoso Y. • Torella J.P. • Kapanidis A.N. Characterizing single-molecule FRET dynamics with probability distribution analysis. ). Finally, application of an early form of BVA, which relied on visual comparisons between BVA contour plots of experimental and simulated data, did not clearly identify dynamics, though it lacked the statistical power to do so conclusively ( • Santoso Y. • Joyce C.M. • Kapanidis A.N. • et al. Conformational transitions in DNA polymerase I revealed by single-molecule FRET. ). To determine whether the binary and ternary samples are dynamic, we studied them using BVA. In both samples, KF was bound to a nonextensible hairpin DNA with A as the templating nucleotide (Fig. S1 E); in the ternary complex sample, dTTP was added to form the Pol-DNA-dNTP complex (see Materials and Methods). 
We first used PDA to test whether the observed E∗ distributions for each sample could be accounted for with one of two models: a dynamic two-state model or a static two-species model (Fig. 5, B–D). For the dynamic two-state model, we assumed that the polymerase fluctuates between the open and closed complex with first-order rate constants kclose and kopen. In each of the three KF complexes, we fixed the means and Gaussian widths of each state, fit for kclose and kopen, and obtained a good fit to the observed E∗ distribution (Fig. 5, B–D, upper, red lines; $χr2<2$ in all cases). Furthermore, all fitted kopen and kclose fell in the 100- to 500-s−1 range, consistent with the rate of fingers-closing dynamics expected from previous studies ( • Santoso Y. • Joyce C.M. • Kapanidis A.N. • et al. Conformational transitions in DNA polymerase I revealed by single-molecule FRET. , • Joyce C.M. • Potapova O. • Grindley N.D. • et al. Fingers-closing and other rapid conformational changes in DNA polymerase I (Klenow fragment) and their role in nucleotide selectivity. ). We then fit each E∗ histogram assuming the existence of two static (or slowly interconverting) states, corresponding to the open and closed conformations; as in the dynamic model, we held the mean FRET of each state constant, but fit for the Gaussian widths about these means, $\sigma^{r}_{\mathrm{open}}$ and $\sigma^{r}_{\mathrm{closed}}$. To avoid introducing new parameters, we fixed the relative occupancy of each state according to the apparent equilibrium from the dynamic fit. As in the dynamic model, this static model achieved a reasonable fit for all three polymerase complexes (Fig. 5, B–D, upper, black lines; $χr2<2$ in all cases). Together, these data suggest that PDA analysis of the observed E∗ histogram is consistent with both static and dynamic models of polymerase behavior in all three complexes. We then analyzed each sample and its corresponding PDA predictions with BVA, and compared the resulting sE∗ values (Fig. 5, B–D, lower; full contour plots of si shown in Fig. S7). The sE∗ of the actual data were in qualitative agreement with the sE∗ from the dynamic predictions in all cases; in contrast, the sE∗ of the static predictions were consistently and substantially lower than the sE∗ for either the data or the dynamic prediction. To quantify this difference, we calculated the sum of squared residuals (SSR) between the actual sE∗ and those of each prediction, where a smaller SSR indicates a better fit. In all three KF species, the SSR of the dynamic prediction was about an order of magnitude smaller than that of the static prediction (Fig. 5, B–D), suggesting that these wide E∗ distributions were due to dynamic, rather than static, heterogeneity. To ensure that this result was independent of the model of static heterogeneity employed, we tested a model that includes a third static species with an E∗ between that of the open and closed complexes; this model, too, did not agree with the data (Fig. S7). We also investigated how the specific timescales used in the dynamic prediction affected their agreement with the experimental data. In addition to finding that predictions using the rates extracted with PDA (100–500 s−1, consistent with fingers-closing dynamics) matched the observed sE∗ very well, we found that these rates could be altered by more than an order of magnitude before achieving an SSR comparable to the static prediction (Fig. S8).
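The model comparison described here reduces to a simple residual calculation over the E∗ bins. The sketch below is illustrative only, with hypothetical array names; it is not the analysis code used for Fig. 5.

```python
import numpy as np

def ssr(s_obs, s_pred):
    """Sum of squared residuals between observed and predicted s_E* (smaller = better fit)."""
    d = np.asarray(s_obs, dtype=float) - np.asarray(s_pred, dtype=float)
    return np.nansum(d ** 2)

# e.g. comparing ssr(s_data, s_dynamic) with ssr(s_data, s_static): the smaller SSR
# identifies which PDA prediction better reproduces the burst-wise standard deviations.
```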
In conclusion, the full BVA method identified clear dynamics in both the binary and ternary complexes of KF (beyond a strict 99.9% confidence interval) and was able to reproduce the observed standard deviation data accurately using a two-state kinetic model (Fig. 5). These results raise the intriguing possibility that the fingers-closing transition in KF may be dynamic throughout its reaction trajectory, whether DNA-bound or poised for dNTP incorporation. ## Conclusions We introduced BVA, an analytical method that enables the detection of dynamics in single-molecule FRET experiments; it accomplishes this by comparing the standard deviation of FRET from individual molecules over time to that expected from theory. To characterize BVA, we analyzed static DNA molecules, a dynamic DNA hairpin, and the KF of DNA Pol I and its complexes. BVA analysis showed a lack of dynamics in static DNA standards but clear dynamics in a previously characterized dynamic DNA hairpin. We combined BVA with PDA to test specific models of static or dynamic heterogeneity against smFRET data on the DNA hairpin; whereas both static and dynamic sources of heterogeneity could explain the observed E∗ distribution, only a two-state dynamic model could account for the experimental data when incorporating standard deviation information via BVA. We used BVA to analyze the conformational dynamics of KF binary and ternary complexes, uncovering millisecond-timescale dynamics not previously detected using a correlation-based approach ( • Santoso Y. • Joyce C.M. • Kapanidis A.N. • et al. Conformational transitions in DNA polymerase I revealed by single-molecule FRET. ), and suggested (but not conclusively identified) using PDA ( • Kalinin S. • Felekyan S. • Seidel C.A. • et al. Probability distribution analysis of single-molecule fluorescence anisotropy and resonance energy transfer. ). The presence of dynamics at the millisecond timescale is consistent with recent studies showing that the fingers-closing transition is not rate-limiting for nucleotide addition, supporting a model in which fingers-closing precedes, but does not commit the polymerase to, dNTP incorporation ( • Joyce C.M. • Potapova O. • Grindley N.D. • et al. Fingers-closing and other rapid conformational changes in DNA polymerase I (Klenow fragment) and their role in nucleotide selectivity. , • Rothwell P.J. • Mitaksov V. • Waksman G. Motions of the fingers subdomain of klentaq1 are fast and not rate limiting: implications for the molecular basis of fidelity in DNA polymerases. ). Together with our previous results ( • Santoso Y. • Joyce C.M. • Kapanidis A.N. • et al. Conformational transitions in DNA polymerase I revealed by single-molecule FRET. ), this suggests a model in which fingers-closing, a prechemistry step important for discriminating between matched and mismatched nucleotides, may occur several times prior to successful dNTP incorporation. Our results illustrate the usefulness of BVA in detecting structural dynamics using single-molecule FRET. Due to its diffusion-timescale sensitivity, BVA complements correlation-based methods, as well as time-resolved smFRET measurements ( • Laurence T.A. • Kong X.X. • Weiss S. • et al. Probing structural heterogeneities and fluctuations of nucleic acids and denatured proteins. ), which may have difficulty detecting dynamics near the diffusion timescale. BVA also complements a recent PDA-based method to identify the presence of FRET dynamics ( • Kalinin S. • Valeri A. • Seidel C.A. • et al. 
Detection of structural dynamics by FRET: a photon distribution and fluorescence lifetime analysis of systems with multiple states. ), both by offering model-free detection of dynamics, and by adding a dimension along which to hypothesis-test specific models of dynamics; the latter is useful in rejecting incorrect models that produce $E∗$ distributions consistent with experiments, but show poor agreement between predicted and observed BVA data. Since BVA is performed on single-molecule data, it can also be applied to subpopulations of interest, or imperfectly labeled samples, without producing the artifacts inherent to correlation-based methods. BVA should be broadly useful in single-molecule FRET studies of enzyme structures and dynamics, and of protein and nucleic acid folding. The authors thank C. M. Joyce and N. D. F. Grindley for helpful discussions, and C. M. Joyce and O. Potapova for providing fluorescently labeled Klenow fragment. This work was supported by a European Commission Seventh Framework Programme (FP7/2007–2013) grant (HEALTH-F4-2008-201418, entitled READNA) and a Biotechnology and Biological Sciences Research Council grant (BB/H01795X/1) to A. N. Kapanidis, an Environmental Protection Agency Cephalosporin Scholarship (Linacre College, University of Oxford, UK) to Yusdi Santoso, and a Clarendon Award (Oxford University, Oxford, UK) to J. P. Torella. The authors declare that they have no competing financial interests. ## Supporting Material • Document S1. Equations, references, and figures ## References • Deniz A.A. • Laurence T.A. • Weiss S. • et al. Single-molecule protein folding: diffusion fluorescence resonance energy transfer studies of the denaturation of chymotrypsin inhibitor 2. Proc. Natl. Acad. Sci. USA. 2000; 97: 5179-5184 • Schuler B. • Eaton W.A. Protein folding studied by single-molecule FRET. Curr. Opin. Struct. Biol. 2008; 18: 16-26 • Karymov M.A. • Chinnaraj M. • Lyubchenko Y.L. • et al. Structure, dynamics, and branch migration of a DNA Holliday junction: a single-molecule fluorescence and modeling study. Biophys. J. 2008; 95: 4372-4383 • Zhao R. • Rueda D. RNA folding dynamics by single-molecule fluorescence resonance energy transfer. Methods. 2009; 49: 112-117 • Kapanidis A.N. • Margeat E. • Ebright R.H. • et al. Initial transcription by RNA polymerase proceeds through a DNA-scrunching mechanism. Science. 2006; 314: 1144-1147 • Santoso Y. • Joyce C.M. • Kapanidis A.N. • et al. Conformational transitions in DNA polymerase I revealed by single-molecule FRET. Proc. Natl. Acad. Sci. USA. 2010; 107: 715-720 • Liu S.X. • Abbondanzieri E.A. • Zhuang X. • et al. Slide into action: dynamic shuttling of HIV reverse transcriptase on nucleic acid substrates. Science. 2008; 322: 1092-1097 • Coban O. • Lamb D.C. • Nienhaus G.U. • et al. Conformational heterogeneity in RNA polymerase observed by single-pair FRET microscopy. Biophys. J. 2006; 90: 4605-4617 • Majumdar D.S. • Smirnova I. • Kaback H.R. • et al. Single-molecule FRET reveals sugar-induced conformational dynamics in LacY. Proc. Natl. Acad. Sci. USA. 2007; 104: 12640-12645 • Gansen A. • Valeri A. • Seidel C.A. • et al. Nucleosome disassembly intermediates characterized by single-molecule FRET. Proc. Natl. Acad. Sci. USA. 2009; 106: 15308-15313 • Gopich I.V. • Szabo A. Single-molecule FRET with diffusion and conformational dynamics. J. Phys. Chem. B. 2007; 111: 12925-12932 • Nir E. • Michalet X. • Weiss S. • et al. Shot-noise limited single-molecule FRET histograms: comparison between theory and experiments. 
J. Phys. Chem. B. 2006; 110: 22103-22124 • Antonik M. • Felekyan S. • Seidel C.A. • et al. Separating structural heterogeneities from stochastic variations in fluorescence resonance energy transfer distributions via photon distribution analysis. J. Phys. Chem. B. 2006; 110: 6970-6978 • Kalinin S. • Felekyan S. • Seidel C.A. • et al. Characterizing multiple molecular states in single-molecule multiparameter fluorescence detection by probability distribution analysis. J. Phys. Chem. B. 2008; 112: 8361-8374 • Laurence T.A. • Kong X.X. • Weiss S. • et al. Probing structural heterogeneities and fluctuations of nucleic acids and denatured proteins. Proc. Natl. Acad. Sci. USA. 2005; 102: 17348-17353 • Deniz A.A. • Dahan M. • Schultz P.G. • et al. Single-pair fluorescence resonance energy transfer on freely diffusing molecules: observation of Förster distance dependence and subpopulations. Proc. Natl. Acad. Sci. USA. 1999; 96: 3670-3675 • Watkins L.P. • Chang H.Y. • Yang H. Quantitative single-molecule conformational distributions: a case study with poly-(L-proline). J. Phys. Chem. A. 2006; 110: 5191-5203 • Hanson J.A. • Yang H. • et al. Illuminating the mechanistic roles of enzyme conformational dynamics. Proc. Natl. Acad. Sci. USA. 2007; 104: 18055-18060 • Henzler-Wildman K. • Kern D. Dynamic personalities of proteins. Nature. 2007; 450: 964-972 • Henzler-Wildman K.A. • Thai V. • Kern D. • et al. Intrinsic motions along an enzymatic reaction trajectory. Nature. 2007; 450: 838-844 • Kalinin S. • Felekyan S. • Seidel C.A. • et al. Probability distribution analysis of single-molecule fluorescence anisotropy and resonance energy transfer. J. Phys. Chem. B. 2007; 111: 10253-10262 • Santoso Y. • Torella J.P. • Kapanidis A.N. Characterizing single-molecule FRET dynamics with probability distribution analysis. ChemPhysChem. 2010; 11: 2209-2219 • Kalinin S. • Valeri A. • Seidel C.A. • et al. Detection of structural dynamics by FRET: a photon distribution and fluorescence lifetime analysis of systems with multiple states. J. Phys. Chem. B. 2010; 114: 7983-7995 • Gurunathan K. • Levitus M. Applications of fluorescence correlation spectroscopy to the study of nucleic acid conformational dynamics. Prog. Nucleic Acid Res. Mol. Biol. 2008; 82: 33-69 • Torres T. • Levitus M. Measuring conformational dynamics: a new FCS-FRET approach. J. Phys. Chem. B. 2007; 111: 7392-7400 • Hohlbein J. • Steinhart M. • Hübner C.G. • et al. Confined diffusion in ordered nanoporous alumina membranes. Small. 2007; 3: 380-385 • Eggeling C. • Fries J.R. • Seidel C.A. • et al. Monitoring conformational dynamics of a single molecule by selective fluorescence spectroscopy. Proc. Natl. Acad. Sci. USA. 1998; 95: 1556-1561 • Laurence T.A. • Kwon Y. • Barsky D. • et al. Correlation spectroscopy of minor fluorescent species: signal purification and distribution analysis. Biophys. J. 2007; 92: 2184-2198 • Laurence T.A. • Kwon Y. • Barsky D. • et al. Motion of a DNA sliding clamp observed by single molecule fluorescence spectroscopy. J. Biol. Chem. 2008; 283: 22895-22906 • Kapanidis A.N. • Lee N.K. • Weiss S. • et al. Fluorescence-aided molecule sorting: analysis of structure and interactions by alternating-laser excitation of single molecules. Proc. Natl. Acad. Sci. USA. 2004; 101: 8936-8941 • Santoso Y. • Kapanidis A.N. Probing biomolecular structures and dynamics of single molecules using in-gel alternating-laser excitation. Anal. Chem. 2009; 81: 9561-9570 • Lee N.K. • Kapanidis A.N. • Weiss S. • et al. 
Accurate FRET measurements within single diffusing biomolecules using alternating-laser excitation. Biophys. J. 2005; 88: 2939-2953 • Dahan M. • Deniz A.A. • Weiss S. • et al. Ratiometric measurement and identification of single diffusing molecules. Chem. Phys. 1999; 247: 85-106 • Kalinin S. • Sisamakis E. • Seidel C.A. • et al. On the origin of broadening of single-molecule FRET efficiency distributions beyond shot noise limits. J. Phys. Chem. B. 2010; 114: 6197-6206 • Chung H.S. • Louis J.M. • Eaton W.A. Distinguishing between protein dynamics and dye photophysics in single-molecule FRET experiments. Biophys. J. 2010; 98: 696-706 • Abidi H. Bonferroni and Sidak corrections for multiple comparisons. in: Encyclopedia of Measurement and Statistics. Sage, Thousand Oaks, CA2007: 103-107 • Wallace M.I. • Ying L.M. • Klenerman D. • et al. FRET fluctuation spectroscopy: exploring the conformational dynamics of a DNA hairpin loop. J. Phys. Chem. B. 2000; 104: 11551-11555 • Ying L.M. • Wallace M.I. • Klenerman D. Two-state model of conformational fluctuation in a DNA hairpin-loop. Chem. Phys. Lett. 2001; 334: 145-150 • Bonnet G. • Krichevsky O. • Libchaber A. Kinetics of conformational fluctuations in DNA hairpin-loops. Proc. Natl. Acad. Sci. USA. 1998; 95: 8602-8606 • Jung J.Y. • Van Orden A. A three-state mechanism for DNA hairpin folding characterized by multiparameter fluorescence fluctuation spectroscopy. J. Am. Chem. Soc. 2006; 128: 1240-1249 • Li Y. • Korolev S. • Waksman G. Crystal structures of open and closed forms of binary and ternary complexes of the large fragment of Thermus aquaticus DNA polymerase I: structural basis for nucleotide incorporation. EMBO J. 1998; 17: 7514-7525 • Joyce C.M. • Benkovic S.J. DNA polymerase fidelity: kinetics, structure, and checkpoints. Biochemistry. 2004; 43: 14317-14324 • Joyce C.M. • Potapova O. • Grindley N.D. • et al. Fingers-closing and other rapid conformational changes in DNA polymerase I (Klenow fragment) and their role in nucleotide selectivity. Biochemistry. 2008; 47: 6103-6116 • Rothwell P.J. • Mitaksov V. • Waksman G. Motions of the fingers subdomain of klentaq1 are fast and not rate limiting: implications for the molecular basis of fidelity in DNA polymerases. Mol. Cell. 2005; 19: 345-355 • Johnson S.J. • Taylor J.S. • Beese L.S. Processive DNA synthesis observed in a polymerase crystal suggests a mechanism for the prevention of frameshift mutations. Proc. Natl. Acad. Sci. USA. 2003; 100: 3895-3900
# Stimuli provider (Environment)

## Introduction

The database section must feed a stimuli provider (or environment), which is instantiated with a section named sp (or env) in the INI file. When the two sections are present in the INI file, they are implicitly connected: the StimuliProvider is automatically aware of the Database driver that is present. The StimuliProvider section specifies the input dimensions of the network (width, height), as well as the batch size. Example:

```ini
[sp]
SizeX=24
SizeY=24
BatchSize=12 ; [default: 1]
```

Data augmentation and conditioning Transformation blocks and data analysis StimuliData blocks can be associated with a stimuli provider.

The table below summarizes the parameters available for the sp section:

| Option [default value] | Description |
| --- | --- |
| SizeX | Environment width |
| SizeY | Environment height |
| NbChannels [1] | Number of channels (applicable only if there is no env.ChannelTransformation[...]) |
| BatchSize [1] | Batch size |
| CompositeStimuli [0] | If true, use pixel-wise stimuli labels |
| CachePath [] | Stimuli cache path (no cache if left empty) |

The env section accepts more parameters dedicated to event-based (spiking) simulation:

| Option (env only) [default] | Description |
| --- | --- |
| StimulusType [SingleBurst] | Method for converting stimuli into spike trains. Can be any of SingleBurst, Periodic, JitteredPeriodic or Poissonian |
| DiscardedLateStimuli [1.0] | The pixels in the pre-processed stimuli with a value above this limit never generate spiking events |
| PeriodMeanMin [50 TimeMs] | Mean minimum period $\overline{T_{min}}$, used for periodic temporal codings, corresponding to pixels in the pre-processed stimuli with a value of 0 (which are supposed to be the most significant pixels) |
| PeriodMeanMax [12 TimeS] | Mean maximum period $\overline{T_{max}}$, used for periodic temporal codings, corresponding to pixels in the pre-processed stimuli with a value of 1 (which are supposed to be the least significant pixels). This maximum period may never be reached if DiscardedLateStimuli is lower than 1.0 |
| PeriodRelStdDev [0.1] | Relative standard deviation, used for periodic temporal codings, applied to the spiking period of a pixel |
| PeriodMin [11 TimeMs] | Absolute minimum period, or spiking interval, used for periodic temporal codings, for any pixel |

For image segmentation, the parameter CompositeStimuli=1 must always be present, meaning that the labels of the image must have the same dimensions as the image (and cannot be a single class value as in a classification problem).

## Data range and conversion

A configuration section can be associated with a StimuliProvider, as shown below. The DataSignedMapping=1 parameter specifies that the input value range must be interpreted as signed, even if the values are unsigned, which is usually the case for standard image formats (BMP, JPEG, PNG…). In the case of 8-bit images, values from 0 to 255 are therefore mapped to the range -128 to 127 when this parameter is enabled.

```ini
[sp]
SizeX=[database.slicing]Width
SizeY=[database.slicing]Height
BatchSize=${BATCH_SIZE}
CompositeStimuli=1
ConfigSection=sp.config

[sp.config]
DataSignedMapping=1
```

Note: In N2D2, the integer value input range [0, 255] (or [-128, 127] with the DataSignedMapping=1 parameter) for 8-bit images is implicitly converted to the floating point value range [0.0, 1.0] or [-1.0, 1.0] in the StimuliProvider, after the transformations, unless one of the transformations changes the representation and/or the range of the data.

Note: The DataSignedMapping parameter only has an effect when implicit conversion is performed.
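To illustrate the implicit conversion described in the notes above, here is a small Python sketch (not N2D2 source code); the exact scaling constants are assumptions of ours, chosen to be consistent with the documented [0, 255] → [0.0, 1.0] and [-128, 127] → [-1.0, 1.0] mappings.

```python
import numpy as np

def implicit_conversion(img_u8, data_signed_mapping=False):
    """Illustration (not N2D2 code) of the implicit 8-bit range conversion.

    Without DataSignedMapping: [0, 255] -> [0.0, 1.0]
    With    DataSignedMapping: [0, 255] interpreted as signed -> approximately [-1.0, 1.0]
    """
    if data_signed_mapping:
        # 0..255 is first mapped to -128..127, then scaled (assumed divisor: 128)
        return (img_u8.astype(np.float32) - 128.0) / 128.0
    return img_u8.astype(np.float32) / 255.0

pixels = np.array([0, 128, 255], dtype=np.uint8)
print(implicit_conversion(pixels))                             # approx. [0.0, 0.502, 1.0]
print(implicit_conversion(pixels, data_signed_mapping=True))   # approx. [-1.0, 0.0, 0.992]
```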
The input value range can also be changed explicitly using for example a RangeAffineTransformation, like below, in which case no implicit conversion is performed afterwards (and the DataSignedMapping parameter has no effect):

[sp.Transformation-rangeAffine]
Type=RangeAffineTransformation
FirstOperator=Minus
FirstValue=128.0
SecondOperator=Divides
SecondValue=128.0

When running a simulation in N2D2, the graph of the transformations, with all their parameters as well as the expected output dimensions after each transformation, is automatically generated (in the file transformations.png). As transformations can be applied to only one of the learn, validation or test datasets, three graphs are generated, as shown in the following figure.

## Images slicing during training and inference¶

In N2D2, the input dimensions of a neural network are fixed and cannot be changed dynamically during training and inference, as images are processed in batches, like in any other deep learning framework. Therefore, in order to deal with datasets containing images of variable dimensions, patches or slices of fixed dimensions must be extracted. In N2D2, two mechanisms are provided to extract slices:

• For training, random slices can be extracted from bigger images for each batch, thus allowing the full images to be covered over the training time with maximum variability. This also acts as basic data augmentation. Random slice extraction is achieved using a SliceExtractionTransformation, applied only to the training set with the parameter ApplyTo=LearnOnly.

[sp.OnTheFlyTransformation-1]
Type=SliceExtractionTransformation
Width=${WIDTH}
Height=${HEIGHT}
RandomOffsetX=1
RandomOffsetY=1
AllowPadding=1
ApplyTo=LearnOnly

• For inference, one wants to cover the full images once and only once. This cannot be achieved with an N2D2 Transformation, but has to be handled by the Database driver. In order to do so, any Database driver can have an additional “slicing” section in the N2D2 INI file, which will automatically extract regularly strided, fixed-size slices from the dataset. The following example can be used to extract slices for the validation and testing datasets, with the parameter ApplyTo=NoLearn.

[database.slicing]
Width=${WIDTH}
Height=${HEIGHT}
StrideX=[database.slicing]Width
StrideY=[database.slicing]Height
Overlapping=1
ApplyTo=NoLearn

When an image size is not a multiple of the slice size, the right-most and bottom-most slices may be smaller than the intended fixed slice size specified with Width and Height. There are two ways to deal with these slices:

1. Add the Overlapping=1 parameter, which allows an overlap between the right/bottom-most slice and the preceding one. The overlapping area in the right/bottom-most slice is then marked as “ignore” for the labeling, to avoid counting the classification result twice on these pixels.

2. Add a PadCropTransformation to pad to the slice target size for NoLearn data. In this case the padded area can either be ignored or mirror padding can be used.

## Blending for data augmentation¶

Complex data augmentation / pre-processing pipelines can be created by combining the different available transformations. It is even possible to use multiple Database and StimuliProvider sections, to create for example a “blending” pipeline, which is introduced here and illustrated in the figure below. An example of a blending pipeline in the INI file is given here. The first part is the BlendingTransformation, which is inserted in the main image processing pipeline.

...
; Here we add a blending transformation, which will perform objects blending ; to images with the specified labels in the dataset, selected by the ; ApplyToLabels parameter. [sp.OnTheFlyTransformation-blend] Type=BlendingTransformation ApplyTo=LearnOnly Database=database_objects ; database driver to use for the objects to blend StimuliProvider=sp_objects ; stimuli provider specifying the transformations ; to apply on the object data before blending ; Specifies the name of the image label(s) on which a blending can be performed. ; Here, any image in a "backgrounds" sub-directory in the dataset will be used ; for the blending ; POSSIBLE FUTURE EXTENSION: possibility to associate some backgrounds to some ; object types only. Adding a background in a "backgrounds" sub-directory in the ; object directory may allow this. ; POSSIBLE FUTURE EXTENSION: specify ROIs for blending some object types. ApplyToLabels=*backgrounds* ; Indicate whether multiple object types can be mixed on the same background TypeMixing=0 ; Density of the object in the background, from 0.0 to 1.0 DensityRange=0.0 0.2 ; Horizontal margin between objects (in pixels) MarginH=0 ; Vertical margin between objects (in pixels) MarginV=0 ; Blending method ; POSSIBLE FUTURE EXTENSION: add other blending methods... BlendingMethod=SmoothEdge BlendingSmoothSize=5 ; For DEBUG purpose, specifying a non-empty SavePath will save all the generated ; blending with their associated JSON annotation in the SavePath directory. SavePath=blending ... The second part is the object pre-processing and extraction pipeline, that is attached to the BlendingTransformation. ; --- BEGIN --- DATA TO BLEND PRE-PROCESSING --- ; Database driver for the objects. Can be a sub-set of the main pipe image ; dataset, or even the full main dataset itself [database_objects] Type=DIR_Database DataPath=${DATA_PATH} Depth=-1 LabelDepth=1 Learn=1.0 EquivLabelPartitioning=0 ; Since we use the same dataset, ignore the background images that contain ; no object to blend. DefaultLabel=background ; Label for pixels outside any ROI (default is no label, pixels are ignored) ; Simuli provider for objects => no need to change this part. [sp_objects] ; Sizes to 0 means any size, require that BatchSize=0 SizeX=0 SizeY=0 BatchSize=0 ; Apply random rotation & scaling to objects ; POSSIBLE FUTURE EXTENSION: apply different transformations depending on the ; type of object [sp_objects.OnTheFlyTransformation-0] Type=SliceExtractionTransformation ; Sizes to 0 means any size, size will not be changed by the transformation Width=0 Height=0 RandomRotation=1 RandomScaling=1 RandomScalingRange=0.5 2.0 ; ... add here other transformations to apply to objects before extraction and ; blending ; Extend the object labels to have a smooth transition with background [sp_objects.OnTheFlyTransformation-1] Type=MorphologyTransformation Operation=Dilate Size=3 ApplyToLabels=1 NbIterations=2 ; This has to be the last transformation in the pre-processing of the images ; that will be blended. ; After data augmentation, a random object is extracted from the image, ; using ROIs or connected-component labeling. [sp_objects.OnTheFlyTransformation-2] Type=ROIExtractionTransformation ; Extract any label ID Label=-1 ; Perform connected-component labeling to the label to obtain objects ROIs. 
LabelSegmentation=1 Margin=0 KeepComposite=1 ; Possibility to filter the ROIs to keep before random selection of a single ; one: MinSize=0 FilterMinHeight=0 FilterMinWidth=0 FilterMinAspectRatio=0.0 FilterMaxAspectRatio=0.0 MergeMaxHDist=10 MergeMaxVDist=10 ; --- END --- DATA TO BLEND PRE-PROCESSING --- ## Built-in transformations¶ There are 6 possible categories of transformations: • env.Transformation[...] Transformations applied to the input images before channels creation; • env.OnTheFlyTransformation[...] On-the-fly transformations applied to the input images before channels creation; • env.ChannelTransformation[...] Create or add transformation for a specific channel; • env.ChannelOnTheFlyTransformation[...] Create or add on-the-fly transformation for a specific channel; • env.ChannelsTransformation[...] Transformations applied to all the channels of the input images; • env.ChannelsOnTheFlyTransformation[...] On-the-fly transformations applied to all the channels of the input images. Example: [env.Transformation] Width=24 Height=24 Several transformations can applied successively. In this case, to be able to apply multiple transformations of the same category, a different suffix ([...]) must be added to each transformation. The transformations will be processed in the order of appearance in the INI file regardless of their suffix. Common set of parameters for any kind of transformation: Option [default value] Description ApplyTo [All] Apply the transformation only to the specified stimuli sets. Can be: LearnOnly: learning set only ValidationOnly: validation set only TestOnly: testing set only NoLearn: validation and testing sets only NoValidation: learning and testing sets only NoTest: learning and validation sets only All: all sets (default) Example: [env.Transformation-1] Type=ChannelExtractionTransformation CSChannel=Gray [env.Transformation-2] Type=RescaleTransformation Width=29 Height=29 [env.Transformation-3] Type=EqualizeTransformation [env.OnTheFlyTransformation] Type=DistortionTransformation ApplyTo=LearnOnly ; Apply this transformation for the Learning set only ElasticGaussianSize=21 ElasticSigma=6.0 ElasticScaling=20.0 Scaling=15.0 Rotation=15.0 List of available transformations: ### AffineTransformation¶ Apply an element-wise affine transformation to the image with matrixes of the same size. Option [default value] Description FirstOperator First element-wise operator, can be Plus, Minus, Multiplies, Divides FirstValue First matrix file name SecondOperator [Plus] Second element-wise operator, can be Plus, Minus, Multiplies, Divides SecondValue [] Second matrix file name The final operation is the following, with $$A$$ the image matrix, $$B_{1st}$$, $$B_{2nd}$$ the matrixes to add/substract/multiply/divide and $$\odot$$ the element-wise operator : $\begin{split}f(A) = \left(A\;\substack{\odot\\op_{1st}}\;B_{1st}\right)\; \substack{\odot\\op_{2nd}}\;B_{2nd}\end{split}$ ### ApodizationTransformation¶ Apply an apodization window to each data row. Option [default value] Description Size Window total size (must match the number of data columns) WindowName [Rectangular] Window name. Possible values are: Rectangular: Rectangular Hann: Hann Hamming: Hamming Cosine: Cosine Gaussian: Gaussian Blackman: Blackman Kaiser: Kaiser #### Gaussian window¶ Gaussian window. Option [default value] Description WindowName.Sigma [0.4] Sigma #### Blackman window¶ Blackman window. Option [default value] Description WindowName.Alpha [0.16] Alpha #### Kaiser window¶ Kaiser window. 
Option [default value] Description WindowName.Beta [5.0] Beta ### CentroidCropTransformation¶ Find the centroid of the image and crop the image so that the center of the image matches the centroid location. The cropping can be done on both axis, or just one axis with the Axis parameter. If Axis is 1, only the horizontal axis will be cropped so that the centroid x-location is at half the image width. Option [default value] Description Axis [-1] Axis to consider for the centroid (-1 = both, 0 = cols, 1 = rows) In practice, this transformation can be used in conjunction with the PadCropTransformation, in order to obtain cropped images of always of the same dimension (by cropping for example to the smallest image obtained after CentroidCropTransformation), all centered on their respective centroid. ### BlendingTransformation¶ N2D2-IP only: available upon request. This transformation can be used to blend image objects, provided by another Database and associated StimuliProvider, to the images of the current StimuliProvider. Option [default value] Description Database Name of the Database section to use for the objects to blend StimuliProvider Name of the StimuliProvider section specifying the transformations to apply on the objects data before blending ApplyToLabels [] Space-separated list that specifies the name of the image label(s) on which a blending can be performed (in the current data pipe). The usual * and + wildcards are allowed. TypeMixing [0] If true (1), multiple object types can be mixed on the same image DensityRange [0.0 0.0] Range of density of the objects to blend in the image (values are from 0.0 to 1.0). A different random density in this range is used for each image. If the two values are equal, the density is constant. A constant density of 0 (corresponding the default range [0.0 0.0]) means that only a single object is blended in the image in all cases, regardless of the object size. Indeed, the density parameter is checked only after the first object was inserted. MarginH [0] Minimum horizontal margin between inserted objects (in pixels) MarginV [0] Minimum vertical margin between inserted objects (in pixels) BlendingMethod [Linear] Blending method to use (see the BlendingMethod section) BlendingAlpha [0.2] $$\alpha$$ factor for the blending. Depends on the blending method (see the BlendingMethod section) BlendingBeta [0.8] $$\beta$$ factor for the blending. Depends on the blending method (see the BlendingMethod section) BlendingSmoothSize [5] Blurring kernel size, used in some blending methods (see the BlendingMethod section) SavePath [] If not empty, all the blended images are stored in SavePath during the simulation #### BlendingMethod¶ In the following equations, $$O$$ is the object image, $$I$$ is the image of the current pipe on which objects must be inserted. And $$R$$ is the resulting image. Linear: no smoothing. $$R=\alpha.O + \beta.I$$ LinearByDistance: limit the blur in the blended object background. $$\Delta = \frac{\|O-I\|-min(\|O-I\|)}{max(\|O-I\|)-min(\|O-I\|)}$$ $$R=\alpha.O.(1-\Delta) + \beta.I.\Delta$$ SmoothEdge: smoothing at the borders of the objects. $$\alpha = \begin{cases} 1 & \text{when } LABEL \neq 0\\ 0 & \text{otherwise} \end{cases}$$ $$\alpha' = gaussian\_blur(\alpha)$$ $$R=\alpha'.O + (1-\alpha').I$$ SmoothEdgeLinearByDistance: combines SmoothEdge and LinearByDistance. 
$$\alpha = \begin{cases} \Delta & \text{when } LABEL \neq 0\\ 0 & \text{otherwise} \end{cases}$$ $$\alpha' = gaussian\_blur(\alpha)$$ $$R=\alpha'.O + (1-\alpha').I$$ #### Labels mapping¶ When processing the first batch of data, you might get a message like the following in the console: BlendingTransformation: labels mapping is required with the following mapping: 1 -> 9 (cat) 2 -> 12 (dog) 3 -> 66 (bird) What happens here is that the labels ID from the database containing the objects to blend (specified by the Database parameter) must match the correct labels ID from the current database (specified by the [database] section). In the log above, the labels ID on the left are the ones from the objects database and the labels ID on the right are the ones from the current database. In N2D2, upon loading a database, a new label ID is created for each new unique label name encoutered, in the loading order (alphabetical for DIR_Database, but may be arbitrary for other database drivers). The objects database may contain only a subset of the labels present in the current database, and/or the labels may be loaded in a different order. In both cases, the ID affected to a label name will be different between the two databases. During blending however, one wants that the blended object labels correspond to the labels of the current database. To solve this, labels mapping is automatically performed in N2D2 so that for corresponding label names, the label ID in the objects database is translated to the label ID of current database. In the log above for example, the objects database contains only 3 labels: “cat”, “dog” and “bird”, with ID 1, 2 and 3 respectively. These labels ID are automatically replaced by the corresponding ID (for identical label name) in the current database, for the blended objects, which are here 9, 12 and 66 respectively. Note Each label from the objects database (objects to blend) must match an existing label in the current database. There is a match if: • There is an identical label name in the current database; • There is a single label name in the current database that ends with the objects database label name. For example, the label “/dog” in the objects database will match with the “dog” label in the current database. If the objects database contains a label name that does not exist/match in the current database, an error is emitted: BlendingTransformation: label "xxx" in blending database not present in current database! ### ChannelDropTransformation¶ N2D2-IP only: available upon request. Randomly drop some channels of the image and replace them with a constant value. This can be useful to simulate missing channel data in multi-channel data. Option [default value] Description DropProb Channel’s drop probabilities (space-separated list of probabilities, in the order of the image channels) DropValue [0.0] Value to use for dropped channels pixels ### ChannelExtractionTransformation¶ Extract an image channel. 
Option Description CSChannel Blue: blue channel in the BGR colorspace, or first channel of any colorspace Green: green channel in the BGR colorspace, or second channel of any colorspace Red: red channel in the BGR colorspace, or third channel of any colorspace Hue: hue channel in the HSV colorspace Saturation: saturation channel in the HSV colorspace Value: value channel in the HSV colorspace Gray: gray conversion Y: Y channel in the YCbCr colorspace Cb: Cb channel in the YCbCr colorspace Cr: Cr channel in the YCbCr colorspace ### ChannelShakeTransformation¶ N2D2-IP only: available upon request. Randomly shift some channels of the image. This can be useful to simulate misalignment between multiple channel data. Option [default value] Description VerticalRange[*] Vertical shift range (in pixels) for each channel. For example, to randomly shift the second channel by +/- 5 pixels in the vertical direction, use: VerticalRange[1]=-5.0 5.0 HorizontalRange[*] Horizontal shift range (in pixels) for each channel Distribution [Uniform] Random distribution to use for the shift Rounded [1] If true (1), use integer value for the shifts (no pixel interpolation needed) BorderType Border type used when padding. Possible values: [MinusOneReflectBorder] ConstantBorder: pad with BorderValue ReplicateBorder: last element is replicated throughout, like aaaaaa|abcdefgh|hhhhhhh ReflectBorder: border will be mirror reflection of the border elements, like fedcba|abcdefgh|hgfedcb WrapBorder: it will look like cdefgh|abcdefgh|abcdefg MinusOneReflectBorder: same as ReflectBorder but with a slight change, like gfedcb|abcdefgh|gfedcba MeanBorder: pad with the mean color of the image BorderValue [0.0 0.0 0.0] Background color used when padding with BorderType is ConstantBorder #### Distribution¶ Possible distribution and meaning of the range. For example with VerticalRange[1]=-5.0 5.0. Uniform Uniform between -5 and 5. Normal Normal with mean (-5+5)/2=0 and std. dev. = (5-(-5))/6 = 1.67. The range defines the std. dev. such that range = 6 sigma. TruncatedNormal Same as Normal, but truncated between -5 and 5. ### ColorSpaceTransformation¶ Change the current image colorspace. Option Description ColorSpace BGR: convert any gray, BGR or BGRA image to BGR RGB: convert any gray, BGR or BGRA image to RGB HSV: convert BGR image to HSV HLS: convert BGR image to HLS YCrCb: convert BGR image to YCrCb CIELab: convert BGR image to CIELab CIELuv: convert BGR image to CIELuv RGB_to_BGR: convert RGB image to BGR RGB_to_HSV: convert RGB image to HSV RGB_to_HLS: convert RGB image to HLS RGB_to_YCrCb: convert RGB image to YCrCb RGB_to_CIELab: convert RGB image to CIELab RGB_to_CIELuv: convert RGB image to CIELuv HSV_to_BGR: convert HSV image to BGR HSV_to_RGB: convert HSV image to RGB HLS_to_BGR: convert HLS image to BGR HLS_to_RGB: convert HLS image to RGB YCrCb_to_BGR: convert YCrCb image to BGR YCrCb_to_RGB: convert YCrCb image to RGB CIELab_to_BGR: convert CIELab image to BGR CIELab_to_RGB: convert CIELab image to RGB CIELuv_to_BGR: convert CIELuv image to BGR CIELuv_to_RGB: convert CIELuv image to RGB Note that the default colorspace in N2D2 is BGR, the same as in OpenCV. ### DFTTransformation¶ Apply a DFT to the data. The input data must be single channel, the resulting data is two channels, the first for the real part and the second for the imaginary part. Option [default value] Description TwoDimensional [1] If true, compute a 2D image DFT. 
Otherwise, compute the 1D DFT of each data row Note that this transformation can add zero-padding if required by the underlying FFT implementation. ### DistortionTransformation¶ Apply elastic distortion to the image. This transformation is generally used on-the-fly (so that a different distortion is performed for each image), and for the learning only. Option [default value] Description ElasticGaussianSize [15] Size of the gaussian for elastic distortion (in pixels) ElasticSigma [6.0] Sigma of the gaussian for elastic distortion ElasticScaling [0.0] Scaling of the gaussian for elastic distortion Scaling [0.0] Maximum random scaling amplitude (+/-, in percentage) Rotation [0.0] Maximum random rotation amplitude (+/-, in °) ### EqualizeTransformation¶ Image histogram equalization. Option [default value] Description Method [Standard] Standard: standard histogram equalization CLAHE: contrast limited adaptive histogram equalization CLAHE_ClipLimit [40.0] Threshold for contrast limiting (for CLAHE only) CLAHE_GridSize [8] Size of grid for histogram equalization (for CLAHE only). Input image will be divided into equally sized rectangular tiles. This parameter defines the number of tiles in row and column. ### ExpandLabelTransformation¶ Expand single image label (1x1 pixel) to full frame label. ### FilterTransformation¶ Apply a convolution filter to the image. Option [default value] Description Kernel Convolution kernel. Possible values are: *: custom kernel Gaussian: Gaussian kernel LoG: Laplacian Of Gaussian kernel DoG: Difference Of Gaussian kernel Gabor: Gabor kernel #### * kernel¶ Custom kernel. Option Description Kernel.SizeX [0] Width of the kernel (numer of columns) Kernel.SizeY [0] Height of the kernel (number of rows) Kernel.Mat List of row-major ordered coefficients of the kernel If both Kernel.SizeX and Kernel.SizeY are 0, the kernel is assumed to be square. Note When providing a custom kernel, no normalization is applied on its coefficients. #### Gaussian kernel¶ Gaussian kernel. Option [default value] Description Kernel.SizeX Width of the kernel (numer of columns) Kernel.SizeY Height of the kernel (number of rows) Kernel.Positive [1] If true, the center of the kernel is positive Kernel.Sigma [$$\sqrt{2.0}$$] Sigma of the kernel #### LoG kernel¶ Laplacian Of Gaussian kernel. Option [default value] Description Kernel.SizeX Width of the kernel (numer of columns) Kernel.SizeY Height of the kernel (number of rows) Kernel.Positive [1] If true, the center of the kernel is positive Kernel.Sigma [$$\sqrt{2.0}$$] Sigma of the kernel #### DoG kernel¶ Difference Of Gaussian kernel kernel. Option [default value] Description Kernel.SizeX Width of the kernel (numer of columns) Kernel.SizeY Height of the kernel (number of rows) Kernel.Positive [1] If true, the center of the kernel is positive Kernel.Sigma1 [2.0] Sigma1 of the kernel Kernel.Sigma2 [1.0] Sigma2 of the kernel #### Gabor kernel¶ Gabor kernel. Option [default value] Description Kernel.SizeX Width of the kernel (numer of columns) Kernel.SizeY Height of the kernel (number of rows) Kernel.Theta Theta of the kernel Kernel.Sigma [$$\sqrt{2.0}$$] Sigma of the kernel Kernel.Lambda [10.0] Lambda of the kernel Kernel.Psi [$$\pi/2.0$$] Psi of the kernel Kernel.Gamma [0.5] Gamma of the kernel ### FlipTransformation¶ Image flip transformation. 
Option [default value] Description HorizontalFlip [0] If true, flip the image horizontally VerticalFlip [0] If true, flip the image vertically RandomHorizontalFlip [0] If true, randomly flip the image horizontally RandomVerticalFlip [0] If true, randomly flip the image vertically
### GradientFilterTransformation¶
Apply a gradient filter (Sobel, Scharr or Laplacian) to the image; the computed gradient can also be used to filter the labels.
Option [default value] Description Scale [1.0] Scale to apply to the computed gradient Delta [0.0] GradientFilter [Sobel] Filter type to use for computing the gradient. Possible options are: Sobel, Scharr and Laplacian KernelSize [3] Size of the filter kernel (has no effect when using the Scharr filter, whose kernel size is always 3x3) ApplyToLabels [0] If true, use the computed gradient to filter the image label and ignore pixel areas where the gradient is below the Threshold. In this case, only the labels are modified, not the image InvThreshold [0] If true, ignored label pixels will be the ones with a low gradient (low-contrast areas) Threshold [0.5] Threshold applied on the image gradient Label [] List of labels to filter (space-separated) GradientScale [1.0] Rescale the image by this factor before applying the gradient and the threshold, then scale it back to filter the labels
### LabelFilterTransformation¶
Filter labels in the image. The specified labels can be removed, kept (meaning all the other labels are removed), or merged (the specified labels are replaced by the first one).
Option [default value] Description Labels Space-separated list of label names to be filtered Filter [Remove] Type of filter to apply: Remove, Keep (labels not in the list are removed) or Merge (labels in the list are all replaced by the first one) DefaultLabel [-2] Default label, to be used where labels are removed. With the default value (-2), the default label of the associated database is used. If there is no default label, -1 (ignore) is used
This transformation filters both pixel-wise labels and ROIs.
### LabelSliceExtractionTransformation¶
Extract a slice from an image belonging to a given label.
Option [default value] Description Width Width of the slice to extract Height Height of the slice to extract Label [-1] Slice should belong to this label ID. If -1, the label ID is random RandomRotation [0] If true, extract randomly rotated slices RandomRotationRange [0.0 360.0] Range of the random rotations, in degrees, counterclockwise (if RandomRotation is enabled) SlicesMargin [0] Positive or negative, indicates the margin around objects that can be extracted in the slice KeepComposite [0] If false, the 2D label image is reduced to a single value corresponding to the extracted object label (useful for patch classification tasks). Note that if SlicesMargin is > 0, the 2D label image may contain other labels before reduction. For pixel-wise segmentation tasks, set KeepComposite to true. AllowPadding [0] If true, zero-padding is allowed if the image is smaller than the slice to extract BorderType [MinusOneReflectBorder] Border type used when padding.
Possible values: ConstantBorder: pad with BorderValue ReplicateBorder: last element is replicated throughout, like aaaaaa|abcdefgh|hhhhhhh ReflectBorder: border will be mirror reflection of the border elements, like fedcba|abcdefgh|hgfedcb WrapBorder: it will look like cdefgh|abcdefgh|abcdefg MinusOneReflectBorder: same as ReflectBorder but with a slight change, like gfedcb|abcdefgh|gfedcba MeanBorder: pad with the mean color of the image BorderValue [0.0 0.0 0.0] Background color used when padding with BorderType is ConstantBorder IgnoreNoValid [1] If true (1), if no valid slice is found, a random slice is extracted and marked as ignored (-1) ExcludeLabels [] Space-separated list of label ID to exclude from the random extraction (when Label is -1) This transformation is useful to learn sparse object occurrences in a lot of background. If the dataset is very unbalanced towards background, this transformation will ensure that the learning is done on a more balanced set of every labels, regardless of their actual pixel-wise ratio. Illustration of the working behavior of LabelSliceExtractionTransformation with SlicesMargin = 0: When SlicesMargin is 0, only slices that fully include a given label are extracted, as shown in figures above. The behavior with SlicesMargin < 0 is illustrated in figures below. Note that setting a negative SlicesMargin larger in absolute value than Width/2 or Height/2 will lead in some (random) cases in incorrect slice labels in respect to the majority pixel label in the slice. Illustration of the working behavior of LabelSliceExtractionTransformation with SlicesMargin = -32: ### MagnitudePhaseTransformation¶ Compute the magnitude and phase of a complex two channels input data, with the first channel $$x$$ being the real part and the second channel $$y$$ the imaginary part. The resulting data is two channels, the first one with the magnitude and the second one with the phase. Option [default value] Description LogScale [0] If true, compute the magnitude in log scale The magnitude is: $M_{i,j} = \sqrt{x_{i,j}^2 + x_{i,j}^2}$ If LogScale = 1, compute $$M'_{i,j} = log(1 + M_{i,j})$$. The phase is: $\theta_{i,j} = atan2(y_{i,j}, x_{i,j})$ ### MorphologicalReconstructionTransformation¶ Apply a morphological reconstruction transformation to the image. This transformation is also useful for post-processing. Option [default value] Description Operation Morphological operation to apply. Can be: ReconstructionByErosion: reconstruction by erosion operation ReconstructionByDilation: reconstruction by dilation operation OpeningByReconstruction: opening by reconstruction operation ClosingByReconstruction: closing by reconstruction operation Size Size of the structuring element ApplyToLabels [0] If true, apply the transformation to the labels instead of the image Shape [Rectangular] Shape of the structuring element used for morphology operations. Can be Rectangular, Elliptic or Cross. NbIterations [1] Number of times erosion and dilation are applied for opening and closing reconstructions ### MorphologyTransformation¶ Apply a morphology transformation to the image. This transformation is also useful for post-processing. Option [default value] Description Operation Morphological operation to apply. 
Can be: Erode: erode operation ($$=erode(src)$$) Dilate: dilate operation ($$=dilate(src)$$) Opening: opening operation ($$open(src)=dilate(erode(src))$$) Closing: closing operation ($$close(src)=erode(dilate(src))$$) Gradient: morphological gradient ($$=dilate(src)-erode(src)$$) TopHat: top hat ($$=src-open(src)$$) BlackHat: black hat ($$=close(src)-src$$) Size Size of the structuring element ApplyToLabels [0] If true, apply the transformation to the labels instead of the image Shape [Rectangular] Shape of the structuring element used for morphology operations. Can be Rectangular, Elliptic or Cross. NbIterations [1] Number of times erosion and dilation are applied
### NormalizeTransformation¶
Normalize the image.
Option [default value] Description Norm [MinMax] Norm type, can be: L1: L1 normalization L2: L2 normalization Linf: Linf normalization MinMax: min-max normalization NormValue [1.0] Norm value (for L1, L2 and Linf) Such that $$||data||_{L_{p}} = NormValue$$ NormMin [0.0] Min value (for MinMax only) Such that $$min(data) = NormMin$$ NormMax [1.0] Max value (for MinMax only) Such that $$max(data) = NormMax$$ PerChannel [0] If true, normalize each channel individually
### PadCropTransformation¶
Pad/crop the image to a specified size.
Option [default value] Description Width Width of the padded/cropped image Height Height of the padded/cropped image BorderType [MinusOneReflectBorder] Border type used when padding. Possible values: ConstantBorder: pad with BorderValue ReplicateBorder: last element is replicated throughout, like aaaaaa|abcdefgh|hhhhhhh ReflectBorder: border will be mirror reflection of the border elements, like fedcba|abcdefgh|hgfedcb WrapBorder: it will look like cdefgh|abcdefgh|abcdefg MinusOneReflectBorder: same as ReflectBorder but with a slight change, like gfedcb|abcdefgh|gfedcba MeanBorder: pad with the mean color of the image BorderValue [0.0 0.0 0.0] Background color used when padding with BorderType is ConstantBorder
### ROIExtractionTransformation¶
The transformation is typically used as the last transformation of the object extraction pipeline to be used for blending in a BlendingTransformation. A random object with the label Label is extracted from the image.
Option [default value] Description Label [-1] Label ID to extract (-1 means any label ID) LabelSegmentation [0] If true (1), perform connected-component labeling to the label to obtain object ROIs Margin [0] Margin to keep around the object (in pixels) KeepComposite [1] If true (1), the extracted object label remains composite. Otherwise, the label is reduced to a single value
When LabelSegmentation is 0, this transformation directly extracts one of the annotation ROIs whose label matches Label. When LabelSegmentation is true (1), the annotation ROIs are not used directly. Rather, the flattened pixel-wise annotation is (re-)labeled using connected-component labeling to obtain ROIs to extract. Note that the annotation ROIs are part of the flattened pixel-wise annotation (see also the Database CompositeLabel parameter). Additional parameters for ROI filtering, before random selection of a single one:
Parameter Default value Description MinSize 0 Minimum number of pixels that can constitute a bounding box.
Bounding boxes with fewer than MinSize pixels are discarded FilterMinHeight 0 Minimum height of the ROI to keep it FilterMinWidth 0 Minimum width of the ROI to keep it FilterMinAspectRatio 0.0 Minimum aspect ratio (width/height) of the ROI to keep it (default is 0.0 = no minimum) FilterMaxAspectRatio 0.0 Maximum aspect ratio (width/height) of the ROI to keep it (default is 0.0 = no minimum) MergeMaxHDist 1 Maximum horizontal distance for merging (in pixels) MergeMaxVDist 1 Maximum vertical distance for merging (in pixels) Note that these parameters applies only when LabelSegmentation is true (1). ### RandomAffineTransformation¶ Apply a global random affine transformation to the values of the image. Option [default value] Description GainRange [1.0 1.0] Random gain ($$\alpha$$) range (identical for all channels) GainRange[*] [1.0 1.0] Random gain ($$\alpha$$) range for channel *. Mutually exclusive with GainRange. If any specified, a different random gain will always be sampled for each channel. Default gain is 1.0 (no gain) for missing channels The gain control the contrast of the image BiasRange [0.0 0.0] Random bias ($$\beta$$) range (identical for all channels) BiasRange[*] [0.0 0.0] Random bias ($$\beta$$) range for channel *. Mutually exclusive with BiasRange. If any specified, a different random bias will always be sampled for each channel. Default bias is 0.0 (no bias) for missing channels The bias control the brightness of the image GammaRange [1.0 1.0] Random gamma ($$\gamma$$) range (identical for all channels) GammaRange[*] [1.0 1.0] Random gamma ($$\gamma$$) range for channel *. Mutually exclusive with GammaRange. If any specified, a different random gamma will always be sampled for each channel. Default gamma is 1.0 (no change) for missing channels The gamma control more or less the exposure of the image GainVarProb [1.0] Probability to have a gain variation for each channel. If only one value is specified, the same probability applies to all the channels. In this case, the same gain variation will be sampled for all the channels only if a single range if specified for all the channels using GainRange. If more than one value is specified, a different random gain will always be sampled for each channel, even if the probabilities and ranges are identical BiasVarProb [1.0] Probability to have a bias variation for each channel. If only one value is specified, the same probability applies to all the channels. In this case, the same bias variation will be sampled for all the channels only if a single range if specified for all the channels using BiasRange. If more than one value is specified, a different random bias will always be sampled for each channel, even if the probabilities and ranges are identical GammaVarProb [1.0] Probability to have a gamma variation for each channel. If only one value is specified, the same probability applies to all the channels. In this case, the same gamma variation will be sampled for all the channels only if a single range if specified for all the channels using GammaRange. If more than one value is specified, a different random gamma will always be sampled for each channel, even if the probabilities and ranges are identical DisjointGamma [0] If true, gamma variation and gain/bias variation are mutually exclusive. The probability to have a random gamma variation is therefore GammaVarProb and the probability to have a gain/bias variation is 1-GammaVarProb. ChannelsMask [] If not empty, specifies on which channels the transformation is applied. 
For example, to apply the transformation only to the first and third channel, set ChannelsMask to 1 0 1 The equation of the transformation is: $\begin{split}S = \begin{cases} \text{numeric\_limits<T>::max()} & \text{if } \text{is\_integer<T>} \\ 1.0 & \text{otherwise} \end{cases}\end{split}$ $v(i,j) = \text{cv::saturate\_cast<T>}\left(\alpha \left(\frac{v(i,j)}{S}\right)^{\gamma} S + \beta.S\right)$ ### RangeAffineTransformation¶ Apply an affine transformation to the values of the image. Option [default value] Description FirstOperator First operator, can be Plus, Minus, Multiplies, Divides FirstValue First value SecondOperator [Plus] Second operator, can be Plus, Minus, Multiplies, Divides SecondValue [0.0] Second value The final operation is the following: $\begin{split}f(x) = \left(x\;\substack{o\\op_{1st}}\;val_{1st}\right)\; \substack{o\\op_{2nd}}\;val_{2nd}\end{split}$ ### RangeClippingTransformation¶ Clip the value range of the image. Option [default value] Description RangeMin [$$min(data)$$] Image values below RangeMin are clipped to 0 RangeMax [$$max(data)$$] Image values above RangeMax are clipped to 1 (or the maximum integer value of the data type) ### RescaleTransformation¶ Rescale the image to a specified size. Option [default value] Description Width Width of the rescaled image Height Height of the rescaled image KeepAspectRatio [0] If true, keeps the aspect ratio of the image ResizeToFit [1] If true, resize along the longest dimension when KeepAspectRatio is true ### ReshapeTransformation¶ Reshape the data to a specified size. Option [default value] Description NbRows New number of rows NbCols [0] New number of cols (0 = no check) NbChannels [0] New number of channels (0 = no change) ### SliceExtractionTransformation¶ Extract a slice from an image. Option [default value] Description Width Width of the slice to extract Height Height of the slice to extract OffsetX [0] X offset of the slice to extract OffsetY [0] Y offset of the slice to extract RandomOffsetX [0] If true, the X offset is chosen randomly RandomOffsetY [0] If true, the Y offset is chosen randomly RandomRotation [0] If true, extract randomly rotated slices RandomRotationRange [0.0 360.0] Range of the random rotations, in degrees, counterclockwise (if RandomRotation is enabled) RandomScaling [0] If true, extract randomly scaled slices RandomScalingRange [0.8 1.2] Range of the random scaling (if RandomRotation is enabled) AllowPadding [0] If true, zero-padding is allowed if the image is smaller than the slice to extract BorderType [MinusOneReflectBorder] Border type used when padding. Possible values: ConstantBorder: pad with BorderValue ReplicateBorder: last element is replicated throughout, like aaaaaa|abcdefgh|hhhhhhh ReflectBorder: border will be mirror reflection of the border elements, like fedcba|abcdefgh|hgfedcb WrapBorder: it will look like cdefgh|abcdefgh|abcdefg MinusOneReflectBorder: same as ReflectBorder but with a slight change, like gfedcb|abcdefgh|gfedcba MeanBorder: pad with the mean color of the image BorderValue [0.0 0.0 0.0] Background color used when padding with BorderType is ConstantBorder ### StripeRemoveTransformation¶ Remove one or several stripe(s) (a group of rows or columns) from 2D data. 
Option [default value] Description Axis Axis of the stripe (0 = columns, 1 = rows) Offset Offset of the beginning of the stripe, in number of rows or columns Length Length of the stripe, in number of rows or columns (a length of 1 means a single row or column will be removed) RandomOffset [0] If true (1), the stripe offset will be random along the chosen axis NbIterations [1] Number of stripes to remove StepOffset [Offset] Offset between successive stripes, when NbIterations > 1, not taking into account the length of the stripes ### ThresholdTransformation¶ Apply a thresholding transformation to the image. This transformation is also useful for post-processing. Option [default value] Description Threshold Threshold value OtsuMethod [0] Use Otsu’s method to determine the optimal threshold (if true, the Threshold value is ignored) Operation [Binary] Thresholding operation to apply. Can be: Binary BinaryInverted Truncate ToZero ToZeroInverted MaxValue [1.0] Max. value to use with Binary and BinaryInverted operations ### TrimTransformation¶ Trim the image. Option [default value] Description NbLevels Number of levels for the color discretization of the image Method [Discretize] Possible values are: Reduce: discretization using K-means Discretize: simple discretization ### WallisFilterTransformation¶ Apply Wallis filter to the image. Option [default value] Description Size Size of the filter Mean [0.0] Target mean value StdDev [1.0] Target standard deviation PerChannel [0] If true, apply Wallis filter to each channel individually (this parameter is meaningful only if Size is 0)
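As an illustration of how these built-in transformations combine, the minimal sketch below assembles a few of them into a simple pre-processing pipeline for an sp section. The target size (256x256) and the augmentation settings are illustrative placeholders, not values taken from a real application; only the parameter names documented above are used.

[sp.Transformation-0]
; Convert any gray / BGRA input to 3-channel BGR (the default N2D2 colorspace)
Type=ColorSpaceTransformation
ColorSpace=BGR

[sp.Transformation-1]
; Rescale while keeping the aspect ratio...
Type=RescaleTransformation
Width=256
Height=256
KeepAspectRatio=1
ResizeToFit=1

[sp.Transformation-2]
; ...then pad to the exact target size, using the mean color of the image
Type=PadCropTransformation
Width=256
Height=256
BorderType=MeanBorder

[sp.OnTheFlyTransformation-0]
; Random horizontal flips, applied to the learning set only
Type=FlipTransformation
ApplyTo=LearnOnly
RandomHorizontalFlip=1

[sp.OnTheFlyTransformation-1]
; Small random elastic distortions, also for the learning set only
Type=DistortionTransformation
ApplyTo=LearnOnly
ElasticGaussianSize=21
ElasticSigma=6.0
ElasticScaling=20.0
Scaling=10.0
Rotation=10.0

Putting the random operations in OnTheFlyTransformation blocks ensures that a different flip and distortion is sampled each time an image is presented, as recommended in the DistortionTransformation description above.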
## Number of Components in a Second Order System

Any second order system can be written in component form as $a_{ij}$, or as a matrix:

$\left( \begin{array}{ccc} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array} \right)$

In general the $a_{ij}$ are not related, so there are $3^2 = 9$ components. If the indices $i,j$ run from 1 to $n$ there are $n^2$ components.

For a symmetric system, $a_{ij} = a_{ji}$. The matrix above becomes

$\left( \begin{array}{ccc} a_{11} & a_{12} & a_{13} \\ a_{12} & a_{22} & a_{23} \\ a_{13} & a_{23} & a_{33} \end{array} \right)$

There are $\frac{3(3-1)}{2} + 3 = 6$ components. If the indices $i,j$ run from 1 to $n$ there are $\frac{n(n-1)}{2} + n = \frac{n(n+1)}{2}$ components.

For a skew symmetric system, $a_{ij} = -a_{ji}$, so all the diagonal elements must be zero. The matrix above becomes

$\left( \begin{array}{ccc} 0 & a_{12} & a_{13} \\ -a_{12} & 0 & a_{23} \\ -a_{13} & -a_{23} & 0 \end{array} \right)$

There are $\frac{3(3-1)}{2} = 3$ components. If the indices $i,j$ run from 1 to $n$ there are $\frac{n(n-1)}{2}$ components.
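As a quick worked example of these counting formulas, take indices running from 1 to $n = 4$: a general second order system has $4^2 = 16$ independent components, a symmetric one has $\frac{4(4+1)}{2} = 10$, and a skew symmetric one has $\frac{4(4-1)}{2} = 6$.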