| column | type | lengths |
| --- | --- | --- |
| url | string | 14 – 1.76k |
| text | string | 100 – 1.02M |
| metadata | string | 1.06k – 1.1k |
https://www.physicsforums.com/threads/avoiding-vehicular-rear-end-collision.635564/
# Avoiding vehicular rear-end collision

1. Sep 13, 2012 ### PhizKid

1. The problem statement, all variables and given/known data

Car A, travelling at 161 km/hr, is 676 m directly behind car B, which is travelling at 29.0 km/hr. What must the deceleration be in order for car A to avoid a collision with car B?

2. Relevant equations

Not sure.

3. The attempt at a solution

The initial velocity of car A is 161 km/hr, and we're trying to find its deceleration (or negative acceleration, as I'll treat it here). Assuming car B has no acceleration, the distance gap between them will be: 676 m + 29.0 km/hr - (161 km/hr + negative acceleration). I tried to calculate it manually by finding the distance every second, but that takes too long on an exam and I can't find any formula that fits this information. How do I find a formula for the exact distance at a given point in time?

2. Sep 13, 2012 ### cmmcnamara

Can you set up the equations of motion for each car? This is a constant-acceleration problem, so the basic constant-acceleration kinematic equations will apply.

3. Sep 13, 2012 ### CAF123

Set up a coordinate system with car A at the origin and car B 676 units ahead of it on the x axis. This will help you write the equations for their positions as functions of time. Can you do the rest?

4. Sep 13, 2012 ### azizlwl

I think the question should ask for the minimum deceleration, since the harder the brakes are applied, the sooner the car's speed drops below that of the front car.

5. Sep 13, 2012 ### PhizKid

Car A is going at 161 km/hr - a*t and car B is just going at 29.0 km/hr, so the position of car A is just 161 km/hr - a*t + 0 m, and car B's position is 29.0 km/hr + 676 m. Right?

6. Sep 13, 2012 ### CAF123

I think you are along the right lines. The position of car A should be $s_{xA}= 161t + \frac{1}{2}at^2$ and the position of car B should be $s_{xB} = 29t + 676$.

7. Sep 13, 2012 ### PhizKid

I'm trying to stay away from the formulas because they mean nothing to me, and their derivation by integration also doesn't make sense to me (why it happens or what the integration process means). I can take the equations at face value, but I have the memory of a fish and can barely remember d = r*t unless I actually think about the values.

8. Sep 13, 2012 ### azizlwl

For minimum deceleration, a = (v^2 - u^2)/2s, so a = (29^2 - 161^2)/(2 × 676) = -18.55 m/s^2.

Last edited: Sep 13, 2012

9. Sep 13, 2012 ### CAF123

Are you sure this is correct? What I thought of doing was to set $s_{xB}=s_{xA}$, but this equation contains two unknowns, a and t. I need another equation, which at the moment I am still trying to find.

10. Sep 13, 2012 ### azizlwl

You can still do it that way too. The third equation is that the final speed of the car behind should equal that of the front car to avoid the collision.

11. Sep 14, 2012 ### CAF123

So $v_{fA} = 161 + at$ and $v_{fB} = 29$. Setting these equal, rearranging for t, and substituting into $s_{xA}=s_{xB}$ yields a negative acceleration of -12.9 m/s^2?

Last edited: Sep 14, 2012

12. Sep 14, 2012 ### CAF123

13. Sep 14, 2012 ### azizlwl

I made a mistake in the post above: 676 = 1/2(132)t, so t = 10.24 s and a = -132/10.24 = -12.9 m/s^2.

14. Sep 14, 2012 ### CAF123

I think you made the same mistake as I did the first time: the 676 is in metres, while the 132 is in km/hr.

15. Sep 14, 2012 ### azizlwl

Thanks for pointing that out.
16. Sep 14, 2012 ### PhizKid

I don't understand anything you guys did, but using the given 1D kinematic equations I got, for car B:

x - x_0 = 1/2(v_0 + v)t
.676 - 0 = 1/2(29 + v)t
v = (1.352 - 29t) / t

and for car A:

v^2 = (v_0)^2 + 2a(x - x_0)
v^2 = (161)^2 + 2a(0 - .676)
v = sqrt(25921 - 1.352a)

So I set those equal to each other, but I still have two unknown variables (a and t), so how do I get those values?

17. Sep 14, 2012 ### Staff: Mentor

There seems to be a certain amount of unnecessary agony taking place over this question. Quite often, when there are two or more moving objects to keep track of in a problem, it can be handy to perform what is known as a change of reference frame. So far you've been looking at the problem from the point of view of a theoretical observer standing still on the road; that is to say, the coordinate system being used is anchored to the road. But what would happen if, instead, you were to look at the problem from the point of view of an observer traveling along with car B? Suppose he's looking out the rear window of car B and sees car A approaching. What initial distance and speed of approach (also called "closing speed") will he observe?

18. Sep 14, 2012 ### PhizKid

I'm not sure what you mean, gneill. I tried a different approach by solving for t first. Car B's position will be 26t + .676, and the final position of car A will be at car B's position, therefore 26t + .676 = 1/2(161 + 26)t, since the initial velocity of car A is 161 and the final velocity should be the same as car B's, which is 26. Solving for t gives t = .676 / 67.5.

That's the first part; now to find the acceleration. The positions of car A and car B must be the same: 26t + .676 = 161t - 1/2at^2. Plugging in t gives a = 13,480.0295858. Obviously this is wrong, as the number seems way too large, and it is a positive acceleration, which would mean the car is speeding up. Why does the solution come out to something that doesn't make any sense?

19. Sep 14, 2012 ### Staff: Mentor

Your solution is fine... but consider what units would be associated with it. You did your calculations assuming units of km and hours, so what units will this acceleration have? It might be convenient to convert the starting values to metres and seconds at the outset, so these sorts of "surprises" don't take you unawares.

Regarding my previous suggestion: an observer looking out the back of car B will see car A initially at a distance of 676 metres and approaching at a closing speed of 161 - 26 = 135 km/hr. As far as he's concerned, from his point of view car A decelerates from that speed to zero in that distance. One simple, well-known equation of motion can be applied to find that acceleration.

20. Sep 14, 2012 ### PhizKid

13,480.0295858 km/hr^2 = 1.04012574 m/s^2 :(
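A quick numeric check of the relative-frame method suggested above, in SI units. This is a minimal sketch, not part of the thread; it uses the stated 29.0 km/hr for car B (some late posts in the thread slipped to 26), so the result differs slightly from the figures quoted there.

```python
# Minimum deceleration for car A to avoid rear-ending car B, seen from
# car B's frame: the closing speed must reach zero within the 676 m gap.
# Uses the standard kinematic relation v^2 = v0^2 + 2*a*d with v = 0.

KMH_TO_MS = 1000.0 / 3600.0

v_a = 161.0 * KMH_TO_MS   # car A speed, m/s
v_b = 29.0 * KMH_TO_MS    # car B speed, m/s
gap = 676.0               # initial separation, m

closing = v_a - v_b                # closing speed, m/s
a_min = -closing**2 / (2 * gap)    # minimum (gentlest) deceleration, m/s^2

print(f"closing speed: {closing:.2f} m/s")        # 36.67 m/s
print(f"minimum deceleration: {a_min:.3f} m/s^2") # -0.994 m/s^2
```

Working in metres and seconds from the start avoids exactly the km/hr^2 "surprise" discussed in posts 18-20.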
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6929486393928528, "perplexity": 1134.3805860756981}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815034.13/warc/CC-MAIN-20180224013638-20180224033638-00447.warc.gz"}
http://mathhelpforum.com/geometry/13986-vectors-help.html
1. ## vectors help

Hi, is this categorized as number theory? The only textbook my college library has for this kind of stuff is about 30 years old and not the easiest read! This is an example in my workbook; if someone could spare the time to go through it, that would be awesome. Any help much appreciated!

Given that a = 2i + 3j - 4k, b = i - 2j and c = 5i + j + k, find:
(a) the vector a + b - c
(b) the magnitude of a + b - c
(c) a unit vector parallel to a + b - c

2. Originally Posted by bobchiba
Given that a = 2i + 3j - 4k, b = i - 2j and c = 5i + j + k. Find: (a) the vector a + b - c (b) the magnitude of a + b - c (c) a unit vector parallel to a + b - c

This is vector geometry.

a) a + b - c = 2i + 3j - 4k + i - 2j - (5i + j + k) = 2i + 3j - 4k + i - 2j - 5i - j - k = -2i - 5k

b) The magnitude is the square root of (-2)^2 + (-5)^2, which is the square root of 29.

c) (-2i - 5k)/(square root of 29)

3. Originally Posted by bobchiba
Given that a = 2i + 3j - 4k, b = i - 2j and c = 5i + j + k. Find: (a) the vector a + b - c (b) the magnitude of a + b - c (c) a unit vector parallel to a + b - c

The terms i, j, and k are unit vectors (i is a vector in the x-direction, j is a vector in the y-direction, and k is a vector in the z-direction).

(a) When you add vectors (a, b, and c are vectors), you add i's with i's, j's with j's, and k's with k's:

a + b - c = (2i + 3j - 4k) + (i - 2j) - (5i + j + k)
= (2i + i - 5i) + (3j - 2j - j) + (-4k - k)
= -2i + 0j - 5k

4. Hi again. OK, I understand parts a and b now, that's great... but could someone shed a little more light on part c? Thanks.

5. Originally Posted by Glaysher
c) (-2i - 5k)/(square root of 29)

As Glaysher demonstrated, the unit vector parallel to -2i - 5k is the vector -2i - 5k divided by its magnitude, sqrt(29).

Semi-quick explanation: all vectors have a direction and a magnitude. When a vector is written in the form ai + bj + ck, its magnitude can be found as sqrt(a^2 + b^2 + c^2), and its direction is determined by the magnitudes of a, b, and c relative to each other (if a is very big while b and c are small, the vector will look almost horizontal along the x-axis; if a, b, and c are about equal in value, we get a vector that shoots off from the origin at almost 45° from each axis). Any vector with the same direction as our initial vector -2i - 5k but a different magnitude is just that vector multiplied by some scalar. For example, -10i - 25k is parallel to -2i - 5k, but its magnitude is 5 times as great. Since a "unit vector" has a magnitude of 1, dividing -2i - 5k by its magnitude, sqrt(29), gives a unit vector in the direction of -2i - 5k that is 1 unit in length.
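The arithmetic above is easy to verify numerically. A minimal sketch with NumPy, where the array components mirror the i, j, k coefficients in the problem:

```python
import numpy as np

a = np.array([2.0, 3.0, -4.0])   # a = 2i + 3j - 4k
b = np.array([1.0, -2.0, 0.0])   # b = i - 2j
c = np.array([5.0, 1.0, 1.0])    # c = 5i + j + k

v = a + b - c                    # (a) -> [-2.  0. -5.]
mag = np.linalg.norm(v)          # (b) -> sqrt(29) ≈ 5.385
unit = v / mag                   # (c) unit vector parallel to a + b - c

print(v, mag, unit)
print(np.linalg.norm(unit))      # 1.0, confirming it is a unit vector
```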
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8905521631240845, "perplexity": 953.0760724963924}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424876.76/warc/CC-MAIN-20170724122255-20170724142255-00155.warc.gz"}
https://stats.stackexchange.com/questions/3526/has-anyone-used-the-marascuilo-procedure-for-comparing-multiple-proportions
# Has anyone used the Marascuilo procedure for comparing multiple proportions?

The Marascuilo procedure as described here seems to be a test that addresses the issue of multiple comparisons for proportions when you want to test which specific proportions differ from each other after rejecting the null in an overall chi-square test. However, I am not very familiar with this test. So, my questions:

1. What nuances (if any) should I worry about when using this test?
2. I know of at least two other approaches (see below) to address the same issue. Which test is the 'better' approach?

• Perhaps this discussion is relevant, as it isn't often used because it is very conservative (much like Scheffé's method)? – M. Tibbits Oct 12 '10 at 19:06

• Surely you mean "after rejecting the null", not "after failing to reject the null"? And it seems there's only one L in 'Marascuilo' (NIST's error, not yours): Leonard A. Marascuilo. Large-sample multiple comparisons. Psychological Bulletin, 1966; 65(5): 280-290. dx.doi.org/10.1037/h0023189. – onestop Oct 12 '10 at 21:58

In R, there is a function pairwise.prop.test() which allows any correction for multiple comparisons (single-step or step-down FWER methods, or FDR-based), but it is essentially what you already suggested (and although Bonferroni is by far too conservative, it is still very much used in practice). A resampling approach, using permutation, might be interesting too. The coin R package provides a well-established testing framework in this respect; see §5 of Implementing a Class of Permutation Tests: The coin Package. However, I have never had to deal with permutation tests on categorical data in a post-hoc way.
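For concreteness, here is a sketch of the Marascuilo procedure as described at the NIST page the question refers to: after an overall chi-square test on k proportions, each pair (i, j) is declared significantly different if |p_i - p_j| exceeds the critical range r_ij = sqrt(χ²(α, k-1)) · sqrt(p_i(1-p_i)/n_i + p_j(1-p_j)/n_j). The counts below are made-up illustration data, not from the question.

```python
import numpy as np
from scipy.stats import chi2

successes = np.array([82, 70, 62])   # hypothetical counts per group
n = np.array([100, 100, 100])        # hypothetical sample sizes
alpha = 0.05

p = successes / n
k = len(p)
crit = np.sqrt(chi2.ppf(1 - alpha, df=k - 1))  # sqrt of chi-square critical value

for i in range(k):
    for j in range(i + 1, k):
        diff = abs(p[i] - p[j])
        r = crit * np.sqrt(p[i] * (1 - p[i]) / n[i] + p[j] * (1 - p[j]) / n[j])
        print(f"p{i} vs p{j}: |diff| = {diff:.3f}, critical range = {r:.3f}, "
              f"significant = {diff > r}")
```

Because every pairwise comparison uses the same chi-square critical value with k-1 degrees of freedom, the procedure controls the family-wise error rate simultaneously, which is also why it is conservative, as the first comment notes.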
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6630561351776123, "perplexity": 1495.1555359754043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250614086.44/warc/CC-MAIN-20200123221108-20200124010108-00389.warc.gz"}
http://www.vallis.org/blogspace/preprints/1112.2782.html
[1112.2782] The Stochastic Gravitational Wave Background from the Single-Degenerate Channel of Type Ia Supernovae

Authors: David Falta, Robert Fisher

Date: 13 Dec 2011

Abstract: We demonstrate that the integrated gravitational wave signal of Type Ia supernovae (SNe Ia) in the single-degenerate channel out to cosmological distances gives rise to a continuous background to spaceborne gravitational wave detectors, including the Big Bang Observer (BBO) and Deci-Hertz Interferometer Gravitational wave Observatory (DECIGO) planned missions. This gravitational wave background from SNe Ia acts as a noise background in the frequency range 0.1 - 10 Hz, which heretofore was thought to be relatively free from astrophysical sources apart from neutron star binaries, and therefore a key window in which to study primordial gravitational waves generated by inflation. While inflationary energy scales of $\gtrsim 10^{16}$ GeV yield inflationary gravitational wave backgrounds in excess of our range of predicted backgrounds, for lower energy scales of $\sim 10^{15}$ GeV, the inflationary gravitational wave background becomes comparable to the noise background from SNe Ia.

Dec 20, 2011 1112.2782 (/preprints) 2011-12-20, 09:08
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9251011610031128, "perplexity": 3949.5740753810514}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947803.66/warc/CC-MAIN-20180425115743-20180425135743-00442.warc.gz"}
https://zbmath.org/authors/?q=J%2A+Globevnik
× # zbMATH — the first resource for mathematics ## Globevnik, Josip Compute Distance To: Author ID: globevnik.josip Published as: Globevnik, J.; Globevnik, Josip External Links: MGP · Wikidata Documents Indexed: 120 Publications since 1971, including 3 Books all top 5 #### Co-Authors 93 single-authored 8 Stout, Edgar Lee 4 Forstnerič, Franc 3 Aron, Richard M. 3 Vidav, Ivan 2 Alarcón, Antonio 2 Stensønes, Berit 1 Agranovsky, Mark L. 1 Černe, Miran 1 Cima, Joseph A. 1 Kalaj, David 1 Lopez Fernandez, Francisco Jose 1 Quinto, Eric Todd 1 Rosay, Jean-Pierre 1 Rudin, Walter 1 Schottenloher, Martin all top 5 #### Serials 10 Proceedings of the American Mathematical Society 8 Journal of Mathematical Analysis and Applications 8 Transactions of the American Mathematical Society 7 Mathematische Zeitschrift 6 Journal d’Analyse Mathématique 5 Mathematical Proceedings of the Cambridge Philosophical Society 5 Arkiv för Matematik 5 Mathematische Annalen 5 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics 4 Pacific Journal of Mathematics 4 Glasnik Matematički. Serija III 3 Indiana University Mathematics Journal 3 Monatshefte für Mathematik 3 Mathematical Research Letters 2 Israel Journal of Mathematics 2 American Journal of Mathematics 2 Indagationes Mathematicae 2 Journal of Functional Analysis 2 Michigan Mathematical Journal 2 The Journal of Geometric Analysis 2 Annals of Mathematics. Second Series 1 Obzornik za Matematiko in Fiziko 1 Revue Roumaine de Mathématiques Pures et Appliquées 1 Studia Mathematica 1 Annales de l’Institut Fourier 1 Annales Polonici Mathematici 1 Bulletin of the London Mathematical Society 1 Bulletin des Sciences Mathématiques. Deuxième Série 1 Bulletin de la Société Mathématique de France 1 Commentarii Mathematici Helvetici 1 Compositio Mathematica 1 Duke Mathematical Journal 1 Illinois Journal of Mathematics 1 Inventiones Mathematicae 1 Journal für die Reine und Angewandte Mathematik 1 Mathematica Scandinavica 1 Complex Variables. Theory and Application 1 Publicacions Matemàtiques 1 Revista Matemática de la Universidad Complutense de Madrid 1 Indagationes Mathematicae. New Series 1 Bollettino della Unione Matematica Italiana. Series V. A 1 Rendiconti di Matematica, VI. Serie 1 Analysis & PDE all top 5 #### Fields 56 Several complex variables and analytic spaces (32-XX) 50 Functions of a complex variable (30-XX) 30 Functional analysis (46-XX) 7 Operator theory (47-XX) 4 Approximations and expansions (41-XX) 4 Integral transforms, operational calculus (44-XX) 4 Differential geometry (53-XX) 3 Potential theory (31-XX) 2 Real functions (26-XX) 2 Global analysis, analysis on manifolds (58-XX) 1 History and biography (01-XX) 1 Dynamical systems and ergodic theory (37-XX) 1 Harmonic analysis on Euclidean spaces (42-XX) 1 Convex and discrete geometry (52-XX) 1 Manifolds and cell complexes (57-XX) #### Citations contained in zbMATH 87 Publications have been cited 555 times in 302 Documents Cited by Year On complex strict and uniform convexity. Zbl 0307.46015 Globevnik, Josip 1975 Perturbation by analytic discs along maximal real submanifolds of $$C^ N$$. Zbl 0806.58044 Globevnik, Josip 1994 Boundary Morera theorems for holomorphic functions of several complex variables. Zbl 0760.32002 Globevnik, Josip; Stout, Edgar Lee 1991 Holomorphic embeddings of planar domains in $$\mathbb{C}^ 2$$. Zbl 0847.32030 Globevnik, Josip; Stensønes, Berit 1995 Non straightenable complex lines in $$\mathbb C^2$$. 
Zbl 0853.58015 Forstnerič, Franc; Globevnik, Josip; Rosay, Jean-Pierre 1996 Perturbing analytic discs attached to maximal real submanifolds of $$\mathbb{C}^ N$$. Zbl 0861.32013 Globevnik, Josip 1996 Analytic functions on $$c_ 0$$. Zbl 0748.46021 Aron, Richard M.; Globevnik, Josip 1989 Analyticity on rotation invariant families of curves. Zbl 0575.30033 Globevnik, Josip 1983 Discs in pseudoconvex domains. Zbl 0779.32016 Forstnerič, Franc; Globevnik, Josip 1992 A complete complex hypersurface in the ball of $$\mathbb{C}^N$$. Zbl 1333.32018 Globevnik, Josip 2015 Analyticity on circles for rational and real-analytic functions of two real variables. Zbl 1048.30001 Agranovsky, Mark L.; Globevnik, Josip 2003 Boundaries for polydisc algebras in infinite dimensions. Zbl 0395.46040 Globevnik, Josip 1979 Small families of complex lines for testing holomorphic extendibility. Zbl 1280.32004 Globevnik, Josip 2012 Testing analyticity on rotation invariant families of curves. Zbl 0639.30031 Globevnik, Josip 1988 Proper holomorphic discs in $$\mathbb{C}^2$$. Zbl 1027.32018 Forstnerič, Franc; Globevnik, Josip 2001 On Fatou-Bieberbach domains. Zbl 0919.32001 Globevnik, Josip 1998 Embedding holomorphic discs through discrete sets. Zbl 0859.32010 Forstneric, Franc; Globevnik, Josip; Stensønes, Berit 1996 The ends of varieties. Zbl 0678.32005 Globevnik, Josip; Stout, E. L. 1986 Holomorphic extensions from open families of circles. Zbl 1021.30037 Globevnik, Josip 2003 Relative embeddings of discs into convex domains. Zbl 0681.32019 Globevnik, Josip 1989 Boundary interpolation by proper holomorphic maps. Zbl 0611.32021 Globevnik, Josip 1987 On interpolation by analytic maps in infinite dimensions. Zbl 0369.46051 Globevnik, Josip 1978 Norm-constant analytic functions and equivalent norms. Zbl 0322.30040 Globevnik, Josip 1976 Meromorphic extensions from small families of circles and holomorphic extensions from spheres. Zbl 1275.32029 Globevnik, Josip 2012 Analyticity on translates of a Jordan curve. Zbl 1121.30018 Globevnik, Josip 2007 A boundary Morera theorem. Zbl 0785.32008 Globevnik, Josip 1993 Zero integrals on circles and characterizations of harmonic and analytic functions. Zbl 0701.30041 Globevnik, Josip 1990 Holomorphic functions with highly noncontinuable boundary behavior. Zbl 0564.32009 Globevnik, Josip; Stout, Edgar Lee 1982 Interpolation by proper holomorphic embeddings of the disc into $$\mathbb{C}^2$$. Zbl 1026.32030 Globevnik, Josip 2002 Highly noncontinuable functions on convex domains. Zbl 0482.32002 Globevnik, Josip; Stout, Edgar Lee 1980 Holomorphic functions unbounded on curves of finite length. Zbl 1343.32009 Globevnik, Josip 2016 Holomorphic extensions and rotation invariance. Zbl 0799.30003 Globevnik, Josip 1994 A family of lines for testing holomorphy in the ball of $${\mathbb{C}}^ 2$$. Zbl 0631.32010 Globevnik, Josip 1987 A note on normal-operator-valued analytic functions. Zbl 0266.47017 Globevnik, Josip; Vidav, I. 1973 Analyticity of functions analytic on circles. Zbl 1185.30002 Globevnik, Josip 2009 A bounded domain in $$\mathbb{C}^N$$ which embeds holomorphically into $$\mathbb{C}^{N+1}$$. Zbl 0949.32010 Globevnik, Josip 1997 A support theorem for the $$X$$-ray transform. Zbl 0755.44003 Globevnik, Josip 1992 Integrals over circles passing through the origin and a characterization of analytic functions. Zbl 0676.30003 Globevnik, Josip 1989 A characterization of harmonic functions. 
Zbl 0705.31001 Globevnik, Josip; Rudin, Walter 1988 Boundary interpolation and proper holomorphic maps from the disc to the ball. Zbl 0652.32015 Globevnik, Josip 1988 Analytic discs with rectifiable simple closed curves as ends. Zbl 0644.32003 Globevnik, Josip; Stout, Edgar Lee 1988 On holomorphic extensions from spheres in $$C^ 2$$. Zbl 0512.32012 Globevnik, Josip 1983 Complete embedded complex curves in the ball of $$\mathbb{C}^2$$ can have any topology. Zbl 1384.32014 Alarcón, Antonio; Globevnik, Josip 2017 Holomorphic extendibility and the argument principle. Zbl 1083.30036 Globevnik, Josip 2005 The argument principle and holomorphic extendibility. Zbl 1077.30019 Globevnik, Josip 2004 On growth of holomorphic embeddings into $$\mathbb{C}^2$$. Zbl 1041.32010 Globevnik, Josip 2002 Discs in Stein manifolds. Zbl 0974.32017 Globevnik, Josip 2000 Extensions and selections in subspaces of C(K). Zbl 0434.46031 Globevnik, Josip 1980 On operator-valued analytic functions with constant norm. Zbl 0291.47003 Globevnik, Josip; Vidav, I. 1974 A construction of complete complex hypersurfaces in the ball with control on the topology. Zbl 1423.32018 Alarcón, Antonio; Globevnik, Josip; López, Francisco J. 2019 Embedding complete holomorphic discs through discrete sets. Zbl 1351.32036 Globevnik, Josip 2016 Meromorphic extendibility and the argument principle. Zbl 1149.30031 Globevnik, Josip 2008 Analyticity on families of circles. Zbl 1057.30005 Globevnik, Josip 2004 On holomorphic embedding of planar domains into $$\mathbb{C}^2$$. Zbl 0998.32009 Černe, Miran; Globevnik, Josip 2000 Holomorphic functions which are highly nonintegrable at the boundary. Zbl 0948.32015 Globevnik, Josip 2000 Holomorphic functions on rotation invariant families of curves passing through the origin. Zbl 0815.30004 Globevnik, Josip 1994 Discs in the ball containing given discrete sets. Zbl 0668.32026 Globevnik, Josip 1988 The ends of discs. Zbl 0605.32006 Globevnik, Josip; Stout, Edgar Lee 1986 Analytic extensions and selections. Zbl 0415.30040 Globevnik, Josip 1979 Analytic functions whose range is dense in a ball. Zbl 0331.46025 Globevnik, Josip 1976 On vector-valued analytic functions with constant norm. Zbl 0267.30031 Globevnik, J. 1975 Boundary continuity of complete proper holomorphic maps. Zbl 1310.32009 Globevnik, Josip 2015 On meromorphic extendibility. Zbl 1157.30002 Globevnik, Josip 2009 Discs and the Morera property. Zbl 1016.32007 Globevnik, Josip; Stout, Edgar Lee 2000 Zero integrals on circles and characterizations of harmonic and analytic functions. Zbl 0696.31001 Globevnik, Josip 1990 Boundary regularity for holomorphic maps from the disc to the ball. Zbl 0611.30026 Globevnik, Josip; Stout, Edgar Lee 1987 The range of analytic extensions. Zbl 0326.30038 Globevnik, Josip 1977 Interpolation by vector-valued analytic functions. Zbl 0343.30034 Aron, Richard M.; Globevnik, Josip; Schottenloher, Martin 1976 The range of vector-valued analytic functions. Zbl 0331.46026 Globevnik, Josip 1976 The Rudin-Carleson theorem for vector-valued functions. Zbl 0318.46038 Globevnik, Josip 1975 The winding number of $$Pf + 1$$ for polynomials $$P$$ and meromorphic extendibility of $$f$$. Zbl 1258.30002 Globevnik, Josip 2012 The argument principle and holomorphic extendibility to finite Riemann surfaces. Zbl 1094.30028 Globevnik, Josip 2006 Morera theorems via microlocal analysis. Zbl 0866.30031 Globevnik, Josip; Quinto, Eric Todd 1996 Interpolation by analytic functions on $$c_ 0$$. 
Zbl 0677.46037 Aron, Richard M.; Globevnik, Josip 1988 The modulus of Rudin-Carleson extensions. Zbl 0638.30043 Globevnik, Josip 1988 Analytic continuation on complex lines. Zbl 0497.32009 Cima, Joseph A.; Globevnik, Josip 1982 Norm preserving interpolation sets for polydisc algebras. Zbl 0484.32006 Globevnik, Josip 1982 On boundary values of holomorphic functions on balls. Zbl 0484.32003 Globevnik, Josip 1982 Peak sets for polydisc algebras. Zbl 0477.32015 Globevnik, Josip 1982 On dominated extensions in function algebras. Zbl 0434.46032 Globevnik, Josip 1980 Fourier coefficients of the Rudin-Carleson extensions. Zbl 0398.30029 Globevnik, Josip 1980 On the ranges of analytic maps in infinite dimensions. Zbl 0407.46041 Globevnik, Josip 1979 Separability of analytic images of some Banach spaces. Zbl 0406.46039 Globevnik, Josip 1979 On the range of analytic functions into a Banach space. Zbl 0369.46027 Globevnik, Josip 1977 On analytic functions into $$\ell^p$$-spaces. Zbl 0336.30019 Globevnik, Josip 1977 Analytic extensions of vector-valued functions. Zbl 0309.30039 Globevnik, Josip 1976 Schwarz’s lemma for the spectral radius. Zbl 0294.46037 Globevnik, Josip 1974 A construction of complete complex hypersurfaces in the ball with control on the topology. Zbl 1423.32018 Alarcón, Antonio; Globevnik, Josip; López, Francisco J. 2019 Complete embedded complex curves in the ball of $$\mathbb{C}^2$$ can have any topology. Zbl 1384.32014 Alarcón, Antonio; Globevnik, Josip 2017 Holomorphic functions unbounded on curves of finite length. Zbl 1343.32009 Globevnik, Josip 2016 Embedding complete holomorphic discs through discrete sets. Zbl 1351.32036 Globevnik, Josip 2016 A complete complex hypersurface in the ball of $$\mathbb{C}^N$$. Zbl 1333.32018 Globevnik, Josip 2015 Boundary continuity of complete proper holomorphic maps. Zbl 1310.32009 Globevnik, Josip 2015 Small families of complex lines for testing holomorphic extendibility. Zbl 1280.32004 Globevnik, Josip 2012 Meromorphic extensions from small families of circles and holomorphic extensions from spheres. Zbl 1275.32029 Globevnik, Josip 2012 The winding number of $$Pf + 1$$ for polynomials $$P$$ and meromorphic extendibility of $$f$$. Zbl 1258.30002 Globevnik, Josip 2012 Analyticity of functions analytic on circles. Zbl 1185.30002 Globevnik, Josip 2009 On meromorphic extendibility. Zbl 1157.30002 Globevnik, Josip 2009 Meromorphic extendibility and the argument principle. Zbl 1149.30031 Globevnik, Josip 2008 Analyticity on translates of a Jordan curve. Zbl 1121.30018 Globevnik, Josip 2007 The argument principle and holomorphic extendibility to finite Riemann surfaces. Zbl 1094.30028 Globevnik, Josip 2006 Holomorphic extendibility and the argument principle. Zbl 1083.30036 Globevnik, Josip 2005 The argument principle and holomorphic extendibility. Zbl 1077.30019 Globevnik, Josip 2004 Analyticity on families of circles. Zbl 1057.30005 Globevnik, Josip 2004 Analyticity on circles for rational and real-analytic functions of two real variables. Zbl 1048.30001 Agranovsky, Mark L.; Globevnik, Josip 2003 Holomorphic extensions from open families of circles. Zbl 1021.30037 Globevnik, Josip 2003 Interpolation by proper holomorphic embeddings of the disc into $$\mathbb{C}^2$$. Zbl 1026.32030 Globevnik, Josip 2002 On growth of holomorphic embeddings into $$\mathbb{C}^2$$. Zbl 1041.32010 Globevnik, Josip 2002 Proper holomorphic discs in $$\mathbb{C}^2$$. Zbl 1027.32018 Forstnerič, Franc; Globevnik, Josip 2001 Discs in Stein manifolds. 
Zbl 0974.32017 Globevnik, Josip 2000 On holomorphic embedding of planar domains into $$\mathbb{C}^2$$. Zbl 0998.32009 Černe, Miran; Globevnik, Josip 2000 Holomorphic functions which are highly nonintegrable at the boundary. Zbl 0948.32015 Globevnik, Josip 2000 Discs and the Morera property. Zbl 1016.32007 Globevnik, Josip; Stout, Edgar Lee 2000 On Fatou-Bieberbach domains. Zbl 0919.32001 Globevnik, Josip 1998 A bounded domain in $$\mathbb{C}^N$$ which embeds holomorphically into $$\mathbb{C}^{N+1}$$. Zbl 0949.32010 Globevnik, Josip 1997 Non straightenable complex lines in $$\mathbb C^2$$. Zbl 0853.58015 Forstnerič, Franc; Globevnik, Josip; Rosay, Jean-Pierre 1996 Perturbing analytic discs attached to maximal real submanifolds of $$\mathbb{C}^ N$$. Zbl 0861.32013 Globevnik, Josip 1996 Embedding holomorphic discs through discrete sets. Zbl 0859.32010 Forstneric, Franc; Globevnik, Josip; Stensønes, Berit 1996 Morera theorems via microlocal analysis. Zbl 0866.30031 Globevnik, Josip; Quinto, Eric Todd 1996 Holomorphic embeddings of planar domains in $$\mathbb{C}^ 2$$. Zbl 0847.32030 Globevnik, Josip; Stensønes, Berit 1995 Perturbation by analytic discs along maximal real submanifolds of $$C^ N$$. Zbl 0806.58044 Globevnik, Josip 1994 Holomorphic extensions and rotation invariance. Zbl 0799.30003 Globevnik, Josip 1994 Holomorphic functions on rotation invariant families of curves passing through the origin. Zbl 0815.30004 Globevnik, Josip 1994 A boundary Morera theorem. Zbl 0785.32008 Globevnik, Josip 1993 Discs in pseudoconvex domains. Zbl 0779.32016 Forstnerič, Franc; Globevnik, Josip 1992 A support theorem for the $$X$$-ray transform. Zbl 0755.44003 Globevnik, Josip 1992 Boundary Morera theorems for holomorphic functions of several complex variables. Zbl 0760.32002 Globevnik, Josip; Stout, Edgar Lee 1991 Zero integrals on circles and characterizations of harmonic and analytic functions. Zbl 0701.30041 Globevnik, Josip 1990 Zero integrals on circles and characterizations of harmonic and analytic functions. Zbl 0696.31001 Globevnik, Josip 1990 Analytic functions on $$c_ 0$$. Zbl 0748.46021 Aron, Richard M.; Globevnik, Josip 1989 Relative embeddings of discs into convex domains. Zbl 0681.32019 Globevnik, Josip 1989 Integrals over circles passing through the origin and a characterization of analytic functions. Zbl 0676.30003 Globevnik, Josip 1989 Testing analyticity on rotation invariant families of curves. Zbl 0639.30031 Globevnik, Josip 1988 A characterization of harmonic functions. Zbl 0705.31001 Globevnik, Josip; Rudin, Walter 1988 Boundary interpolation and proper holomorphic maps from the disc to the ball. Zbl 0652.32015 Globevnik, Josip 1988 Analytic discs with rectifiable simple closed curves as ends. Zbl 0644.32003 Globevnik, Josip; Stout, Edgar Lee 1988 Discs in the ball containing given discrete sets. Zbl 0668.32026 Globevnik, Josip 1988 Interpolation by analytic functions on $$c_ 0$$. Zbl 0677.46037 Aron, Richard M.; Globevnik, Josip 1988 The modulus of Rudin-Carleson extensions. Zbl 0638.30043 Globevnik, Josip 1988 Boundary interpolation by proper holomorphic maps. Zbl 0611.32021 Globevnik, Josip 1987 A family of lines for testing holomorphy in the ball of $${\mathbb{C}}^ 2$$. Zbl 0631.32010 Globevnik, Josip 1987 Boundary regularity for holomorphic maps from the disc to the ball. Zbl 0611.30026 Globevnik, Josip; Stout, Edgar Lee 1987 The ends of varieties. Zbl 0678.32005 Globevnik, Josip; Stout, E. L. 1986 The ends of discs. 
Zbl 0605.32006 Globevnik, Josip; Stout, Edgar Lee 1986 Analyticity on rotation invariant families of curves. Zbl 0575.30033 Globevnik, Josip 1983 On holomorphic extensions from spheres in $$C^ 2$$. Zbl 0512.32012 Globevnik, Josip 1983 Holomorphic functions with highly noncontinuable boundary behavior. Zbl 0564.32009 Globevnik, Josip; Stout, Edgar Lee 1982 Analytic continuation on complex lines. Zbl 0497.32009 Cima, Joseph A.; Globevnik, Josip 1982 Norm preserving interpolation sets for polydisc algebras. Zbl 0484.32006 Globevnik, Josip 1982 On boundary values of holomorphic functions on balls. Zbl 0484.32003 Globevnik, Josip 1982 Peak sets for polydisc algebras. Zbl 0477.32015 Globevnik, Josip 1982 Highly noncontinuable functions on convex domains. Zbl 0482.32002 Globevnik, Josip; Stout, Edgar Lee 1980 Extensions and selections in subspaces of C(K). Zbl 0434.46031 Globevnik, Josip 1980 On dominated extensions in function algebras. Zbl 0434.46032 Globevnik, Josip 1980 Fourier coefficients of the Rudin-Carleson extensions. Zbl 0398.30029 Globevnik, Josip 1980 Boundaries for polydisc algebras in infinite dimensions. Zbl 0395.46040 Globevnik, Josip 1979 Analytic extensions and selections. Zbl 0415.30040 Globevnik, Josip 1979 On the ranges of analytic maps in infinite dimensions. Zbl 0407.46041 Globevnik, Josip 1979 Separability of analytic images of some Banach spaces. Zbl 0406.46039 Globevnik, Josip 1979 On interpolation by analytic maps in infinite dimensions. Zbl 0369.46051 Globevnik, Josip 1978 The range of analytic extensions. Zbl 0326.30038 Globevnik, Josip 1977 On the range of analytic functions into a Banach space. Zbl 0369.46027 Globevnik, Josip 1977 On analytic functions into $$\ell^p$$-spaces. Zbl 0336.30019 Globevnik, Josip 1977 Norm-constant analytic functions and equivalent norms. Zbl 0322.30040 Globevnik, Josip 1976 Analytic functions whose range is dense in a ball. Zbl 0331.46025 Globevnik, Josip 1976 Interpolation by vector-valued analytic functions. Zbl 0343.30034 Aron, Richard M.; Globevnik, Josip; Schottenloher, Martin 1976 The range of vector-valued analytic functions. Zbl 0331.46026 Globevnik, Josip 1976 Analytic extensions of vector-valued functions. Zbl 0309.30039 Globevnik, Josip 1976 On complex strict and uniform convexity. Zbl 0307.46015 Globevnik, Josip 1975 On vector-valued analytic functions with constant norm. Zbl 0267.30031 Globevnik, J. 1975 The Rudin-Carleson theorem for vector-valued functions. Zbl 0318.46038 Globevnik, Josip 1975 On operator-valued analytic functions with constant norm. Zbl 0291.47003 Globevnik, Josip; Vidav, I. 1974 Schwarz’s lemma for the spectral radius. Zbl 0294.46037 Globevnik, Josip 1974 A note on normal-operator-valued analytic functions. Zbl 0266.47017 Globevnik, Josip; Vidav, I. 1973 all top 5 #### Cited by 235 Authors 53 Globevnik, Josip 23 Forstnerič, Franc 15 Alarcón, Antonio 12 Kytmanov, Aleksandr Mechislavovich 12 Myslivets, Simona Glebovna 9 Kutzschebauch, Frank 8 Agranovsky, Mark L. 8 Lee, Han Ju 8 Wold, Erlend Fornæss 7 Lopez Fernandez, Francisco Jose 6 Acosta Vigil, María Dolores 6 Bertrand, Florian 6 Černe, Miran 6 Drnovšek, Barbara Drinovec 5 Chen, Lili 5 Dor, Avner 5 Gaussier, Hervé 5 Quinto, Eric Todd 5 Sukhov, Alexandre B. 4 Baracco, Luca 4 Hudzik, Henryk 4 Kaliman, Shulim I. 
4 Kim, Sun Kwang 4 Kim, Sung Guen 4 Kot, Piotr 3 Bera, Sayani 3 Blanc-Centi, Léa 3 Borell, Stefan 3 Botelho, Geraldo 3 Choi, Yun Sung 3 Coupet, Bernard 3 Cui, Yunan 3 Czerwińska, Małgorzata Marta 3 Della Sala, Giuseppe 3 Dimant, Verónica 3 Lawrence, Mark G. 3 Pellegrino, Daniel Marinho 3 Ritter, Tyson 3 Semenov, Alexander M. 3 Sevilla-Peris, Pablo 3 Stout, Edgar Lee 3 Trapani, Stefano 3 Verma, Kaushal 3 Volchkov, Valeriĭ Vladimirovich 3 Volchkov, Vitaliĭ Vladimirovich 2 Abakumov, Evgeny V. 2 Aron, Richard M. 2 Carando, Daniel G. 2 Charpentier, Stéphane 2 Chen, Deyun 2 Defant, Andreas 2 Doubtsov, Evgueni Sergeevich 2 Dwilewicz, Roman J. 2 Fleming, Richard J. 2 Fridman, Buma L. 2 Gonzalo, Raquel 2 Hájek, Petr 2 Harz, Tobias 2 Jaramillo, Jesús Angel 2 Jiang, Yang 2 Kaczmarek, Radosław 2 Kulkarni, S. H. 2 Kuzovatov, Vyacheslav Igor’evich 2 Ma, Daowei 2 Maestre, Manuel 2 Majcen, Irena 2 Moraes, Luiza Amalia 2 Patrizio, Giorgio 2 Romero Grados, Luis 2 Rosay, Jean-Pierre 2 Rueda, Pilar 2 Seidel, Markus 2 Shafikov, Rasul Gazimovich 2 Shargorodsky, Eugene 2 Spiro, Andrea F. 2 Sukumar, Daniel 2 Tien Cuong Dinh 2 Tumanov, Alexander 2 Varolin, Dror 2 Veeramani, S. 2 Xiao, Ming 2 Zajec, Matej 1 Albuquerque, Nacib Gurgel 1 Alexander, Herbert J. 1 Alves, Thiago R. 1 Amar, Eric 1 Andrist, Rafael B. 1 Araújo, Gustavo da Silva 1 Bates, Larry M. 1 Bayart, Frédéric 1 Beauzamy, Bernard 1 Blasco, Oscar 1 Boggess, Albert 1 Bögli, Sabine 1 Boman, Jan 1 Bourgain, Jean 1 Bracci, Filippo 1 Buzzard, Gregery T. 1 Cavalcante, Wasthenny Vasconcelos 1 Chang, Chin-Huei ...and 135 more Authors all top 5 #### Cited in 82 Serials 27 Journal of Mathematical Analysis and Applications 20 Mathematische Zeitschrift 20 Proceedings of the American Mathematical Society 20 Transactions of the American Mathematical Society 15 Arkiv för Matematik 14 Mathematische Annalen 14 The Journal of Geometric Analysis 10 Israel Journal of Mathematics 10 Journal d’Analyse Mathématique 10 Siberian Mathematical Journal 8 International Journal of Mathematics 7 Journal of Functional Analysis 5 Advances in Mathematics 5 Annales de l’Institut Fourier 5 Duke Mathematical Journal 4 Mathematical Proceedings of the Cambridge Philosophical Society 4 Rocky Mountain Journal of Mathematics 4 Czechoslovak Mathematical Journal 4 Inventiones Mathematicae 4 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 4 Journal of Siberian Federal University. Mathematics & Physics 3 Mathematical Notes 3 Bulletin de la Société Mathématique de France 3 Integral Equations and Operator Theory 3 Indagationes Mathematicae. New Series 2 Annali della Scuola Normale Superiore di Pisa. Classe di Scienze. Serie IV 2 Compositio Mathematica 2 Journal für die Reine und Angewandte Mathematik 2 Mathematische Nachrichten 2 Monatshefte für Mathematik 2 Publications of the Research Institute for Mathematical Sciences, Kyoto University 2 Journal de Mathématiques Pures et Appliquées. Neuvième Série 2 Proceedings of the Indian Academy of Sciences. Mathematical Sciences 2 Positivity 2 Journal of the Australian Mathematical Society 2 Comptes Rendus. Mathématique. Académie des Sciences, Paris 2 Journal of Algebra and its Applications 2 Journal of Function Spaces 1 Reports on Mathematical Physics 1 Ukrainian Mathematical Journal 1 Mathematics of Computation 1 Annales Polonici Mathematici 1 Archiv der Mathematik 1 Collectanea Mathematica 1 Functiones et Approximatio. 
Commentarii Mathematici 1 Illinois Journal of Mathematics 1 Publications Mathématiques 1 Journal of the Mathematical Society of Japan 1 Journal of Pure and Applied Algebra 1 Mathematika 1 Memoirs of the American Mathematical Society 1 Michigan Mathematical Journal 1 Proceedings of the London Mathematical Society. Third Series 1 Rendiconti del Seminario Matematico della Università di Padova 1 Constructive Approximation 1 Journal of the Ramanujan Mathematical Society 1 Journal of Contemporary Mathematical Analysis. Armenian Academy of Sciences 1 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics 1 Annales de la Faculté des Sciences de Toulouse. Mathématiques. Série VI 1 Russian Mathematics 1 The Journal of Analysis 1 Calculus of Variations and Partial Differential Equations 1 Journal of Mathematical Sciences (New York) 1 Selecta Mathematica. New Series 1 Bulletin des Sciences Mathématiques 1 Izvestiya: Mathematics 1 Doklady Mathematics 1 Abstract and Applied Analysis 1 Annals of Mathematics. Second Series 1 Journal of the European Mathematical Society (JEMS) 1 Acta Mathematica Sinica. English Series 1 The Quarterly Journal of Mathematics 1 Annali della Scuola Normale Superiore di Pisa. Classe di Scienze. Serie V 1 Complex Variables and Elliptic Equations 1 Proceedings of the Steklov Institute of Mathematics 1 Journal of Fixed Point Theory and Applications 1 Operators and Matrices 1 Banach Journal of Mathematical Analysis 1 Analysis & PDE 1 Commentationes Mathematicae 1 Ufimskiĭ Matematicheskiĭ Zhurnal 1 Annals of Functional Analysis all top 5 #### Cited in 31 Fields 174 Several complex variables and analytic spaces (32-XX) 79 Functional analysis (46-XX) 50 Functions of a complex variable (30-XX) 33 Operator theory (47-XX) 25 Differential geometry (53-XX) 11 Algebraic geometry (14-XX) 7 Partial differential equations (35-XX) 7 Integral transforms, operational calculus (44-XX) 7 Global analysis, analysis on manifolds (58-XX) 6 Potential theory (31-XX) 6 Dynamical systems and ergodic theory (37-XX) 6 Abstract harmonic analysis (43-XX) 3 Real functions (26-XX) 2 History and biography (01-XX) 2 Linear and multilinear algebra; matrix theory (15-XX) 2 Group theory and generalizations (20-XX) 2 Approximations and expansions (41-XX) 2 Manifolds and cell complexes (57-XX) 2 Probability theory and stochastic processes (60-XX) 2 Numerical analysis (65-XX) 1 General and overarching topics; collections (00-XX) 1 Combinatorics (05-XX) 1 Number theory (11-XX) 1 Commutative algebra (13-XX) 1 Special functions (33-XX) 1 Integral equations (45-XX) 1 Calculus of variations and optimal control; optimization (49-XX) 1 Convex and discrete geometry (52-XX) 1 General topology (54-XX) 1 Mechanics of particles and systems (70-XX) 1 Relativity and gravitational theory (83-XX) #### Wikidata Timeline The data are displayed as stored in Wikidata under a Creative Commons CC0 License. Updates and corrections should be made in Wikidata.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5943223237991333, "perplexity": 5063.60436704637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038098638.52/warc/CC-MAIN-20210417011815-20210417041815-00081.warc.gz"}
https://drexel28.wordpress.com/2011/01/05/the-center-of-a-group/
Abstract Nonsense

Review of Group Theory: The Center of a Group

Point of post: In this post we take a brief break from our discussion of group actions to address the notion of the center of a group.

Motivation

Just as was the case for the center of an algebra, it is often fruitful to consider the center of a group. In essence, the center of a group is merely the set of all elements of the group which commute with every other element of the group. We shall use the concepts in this post and the next to conclude that any group of order $p^2$ with $p$ prime must be abelian.

The Center of a Group

Let $G$ be a group. We define the center of $G$ to be

$\mathcal{Z}\left(G\right)=\left\{g\in G:gg'=g'g\;\;\text{for every }g'\in G\right\}$

Our first theorem says that the center of a group is a normal subgroup:

Theorem: Let $G$ be a group. Then $\mathcal{Z}\left(G\right)\trianglelefteq G$. Moreover, $G/\mathcal{Z}(G)\cong\text{Inn}(G)\leqslant \text{Aut}(G)$ (where $\text{Inn}(G)$ is the group of inner automorphisms).

Proof: Recall that $\Phi:G\to \text{Aut}(G):g\mapsto i_g$ is a homomorphism. Note then that

$\begin{aligned}\ker\Phi &= \left\{g\in G:i_g=\text{id}_G\right\}\\ &= \left\{g\in G:g^{-1}hg=h\;\;\text{for all }h\in G\right\}\\ &=\left\{g\in G:hg=gh\;\;\text{for all }h\in G\right\}\\ &=\mathcal{Z}\left(G\right)\end{aligned}$

from where the conclusion follows from our earlier characterization of normality. The fact that $G/\mathcal{Z}(G)\cong\text{Inn}(G)$ now follows immediately from the First Isomorphism Theorem. $\blacksquare$

An important theorem involving the center of a group is the following:

Theorem: Let $G$ be a group. If $G/\mathcal{Z}(G)$ is cyclic, then $G$ is abelian.

Proof: Let $G/\mathcal{Z}(G)=\left\langle g\mathcal{Z}\left(G\right)\right\rangle$. Then, for any $a,b\in G$ there exist $n,m\in\mathbb{Z}$ such that $a\mathcal{Z}(G)=g^n\mathcal{Z}(G)$ and $b\mathcal{Z}(G)=g^m\mathcal{Z}(G)$. Consequently there exist $z_1,z_2\in\mathcal{Z}(G)$ such that $a=g^nz_1$ and $b=g^mz_2$. Thus,

$\begin{aligned}ab &=\left(g^nz_1\right)\left(g^mz_2\right)\\ &=g^nz_1z_2g^m\\ &=(z_1z_2)g^ng^m\\ &=(z_1z_2)g^mg^n\\ &=g^m(z_1z_2)g^n\\ &=\left(g^mz_2\right)\left(g^nz_1\right)\\ &=ba\end{aligned}$

and the conclusion follows. $\blacksquare$

The reason this is slightly important is that it will let us conclude (later) that the only groups of order $p^2$, with $p$ prime, up to isomorphism are $C_{p^2}$ and $C_p\oplus C_p$, where $C_k$ is a symbol for the generic cyclic group of order $k$ and $\oplus$ is the direct product of groups, which we have yet to discuss.

References:

1. Lang, Serge. Undergraduate Algebra. 3rd ed. Springer, 2010. Print.
2. Dummit, David Steven, and Richard M. Foote. Abstract Algebra. Hoboken, NJ: Wiley, 2004. Print.

January 5, 2011
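The definition of the center is concrete enough to check by brute force on a small group. A minimal sketch (not from the post) that computes $\mathcal{Z}(S_3)$ in Python, representing a permutation as a tuple p where p[i] is the image of i:

```python
from itertools import permutations

def compose(p, q):
    # (p ∘ q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))  # all 6 elements of S_3

center = [g for g in G if all(compose(g, h) == compose(h, g) for h in G)]
print(center)  # [(0, 1, 2)] -- only the identity, so Z(S_3) is trivial
```

Consistently with the second theorem above, $S_3/\mathcal{Z}(S_3)\cong S_3$ is not cyclic, and indeed $S_3$ is not abelian.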
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 33, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9903392791748047, "perplexity": 522.7894859813131}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590362.13/warc/CC-MAIN-20180718232717-20180719012717-00240.warc.gz"}
http://itools.subhashbose.com/index.php?mact=News,m4,default,1&m4number=3&m4detailpage=news&m4summarytemplate=default_noauthor&m4pagenumber=2&m4returnid=90&page=90
## Online LaTeX / TeX Expression Renderer & Editor

This is an online LaTeX expression rendering and editing tool. Just type in the LaTeX code or compose the LaTeX equation using the graphical buttons and menus, and the rendered image will appear below. The tool can be useful for an expert or for a newbie with no LaTeX experience. The generated LaTeX expressions / rendered images can easily be used with your favorite programs or word processors.

Instructions for use

1. Type or paste any LaTeX equation code into the text box provided; the rendered image will appear below the box.
2. The LaTeX equation can also be composed using the graphical buttons and menus above.
3. You may enable or disable the Real-time Rendering checkbox. When enabled, the rendered image keeps updating in real time as you type in the text box, so you do not need to click the 'Render' button after each change.
4. Clicking the 'Render' button updates the rendered image with the entered LaTeX code.
5. If you want to save the rendered image or embed it in other documents, right-click on the rendered image and save it.

Examples: (Click on the expressions, and the LaTeX code will be created above.)
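The same idea, rendering a LaTeX-style expression to an image file, can be done offline. A minimal sketch using matplotlib's built-in mathtext (no TeX installation required); this is unrelated to the site's own renderer, and the expression and filename are just examples:

```python
import matplotlib.pyplot as plt

expr = r"$e^{i\pi} + 1 = 0$"   # example expression in mathtext syntax

# Draw the expression centered on a small figure and save it as a PNG.
fig = plt.figure(figsize=(3, 1))
fig.text(0.5, 0.5, expr, ha="center", va="center", fontsize=24)
fig.savefig("expression.png", dpi=200, bbox_inches="tight")
```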
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8133246302604675, "perplexity": 4174.311334826103}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607120.76/warc/CC-MAIN-20170522211031-20170522231031-00411.warc.gz"}
https://slideplayer.com/slide/2562645/
# Ch. 18.1 Renewable Energy Today

A. Renewable Energy

Energy from sources that are constantly being formed. Life on Earth has always been powered by renewable energy: the sun! Other forms of renewable energy: wind, biomass, moving water, Earth's heat.

B. Solar Energy

Passive solar heating uses the sun's energy to heat something directly (i.e. sunlight coming through windows). How? Homes positioned according to the yearly movement of the sun benefit most from passive solar energy. Active solar energy: the sun's energy is gathered by collectors and used to heat water or to heat a building. Photovoltaic cells: solar cells, often placed on roofs, that convert the sun's energy into electricity; also used in calculators and to power the space station.

C. Wind Energy

Wind farms: large arrays of wind turbines, such as in California. The turbines spin, and this mechanical energy is converted into electrical energy. The windiest spots on Earth could generate more than 10x the energy used worldwide. Some farmers place 1 or 2 windmills on their land and then sell the electricity produced to the power company!

D. Biomass

Power from organic (living) things, such as plant material and manure. Methane: gas produced from decomposing organic wastes; it can be burned to generate heat. Alcohol: liquid fuels derived from biomass, e.g. ethanol, produced from corn and currently used as an alternative to gasoline in the Midwest.

E. Hydroelectricity

Hydroelectric dams account for 20% of the world's energy, making power affordable and renewable. Benefits: inexpensive to operate; no air pollution. Drawbacks: creates a reservoir, flooding the land and possibly displacing people from their homes; can disturb ecosystems downstream. Micro-hydropower: used in developing countries; floating turbines placed in small streams.

F. Geothermal Energy

Power from the Earth's heat, which is used to heat water and form steam; the steam turns a turbine, thereby producing electricity.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8688449859619141, "perplexity": 5591.327673745174}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583087.95/warc/CC-MAIN-20211015222918-20211016012918-00575.warc.gz"}
https://voer.edu.vn/c/the-ray-aspect-of-light/0e60bfc6/b451e180
Textbook

# College Physics: Science and Technology

## The Ray Aspect of Light

Author: OpenStaxCollege

There are three ways in which light can travel from a source to another location. It can come directly from the source through empty space, such as from the Sun to Earth. Or light can travel through various media, such as air and glass, to a person. Light can also arrive after being reflected, such as by a mirror. In all of these cases, light is modeled as traveling in straight lines called rays. Light may change direction when it encounters objects (such as a mirror) or in passing from one material to another (such as in passing from air to glass), but it then continues in a straight line or as a ray. The word ray comes from mathematics and here means a straight line that originates at some point. It is acceptable to visualize light rays as laser rays (or even as science-fiction depictions of ray guns).

Experiments, as well as our own experiences, show that when light interacts with objects several times as large as its wavelength, it travels in straight lines and acts like a ray. Its wave characteristics are not pronounced in such situations. Since the wavelength of light is less than a micron (a thousandth of a millimeter), it acts like a ray in the many common situations in which it encounters objects larger than a micron. For example, when light encounters anything we can observe with unaided eyes, such as a mirror, it acts like a ray, with only subtle wave characteristics. We will concentrate on the ray characteristics in this chapter.

Since light moves in straight lines, changing direction when it interacts with materials, it is described by geometry and simple trigonometry. This part of optics, where the ray aspect of light dominates, is therefore called geometric optics. There are two laws that govern how light changes direction when it interacts with matter: the law of reflection, for situations in which light bounces off matter, and the law of refraction, for situations in which light passes through matter.

# Section Summary

• A straight line that originates at some point is called a ray.
• The part of optics dealing with the ray aspect of light is called geometric optics.
• Light can travel in three ways from a source to another location: (1) directly from the source through empty space; (2) through various media; (3) after being reflected from a mirror.

# Problems & Exercises

Suppose a man stands in front of a mirror. His eyes are 1.65 m above the floor, and the top of his head is 0.13 m higher. Find the height above the floor of the top and bottom of the smallest mirror in which he can see both the top of his head and his feet. How is this distance related to the man's height?

Top $1.715\ \text{m}$ from the floor, bottom $0.825\ \text{m}$ from the floor. The height of the mirror is $0.890\ \text{m}$, or precisely one-half the height of the person.
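The exercise's numbers can be checked with a short calculation. The sketch below assumes only the law of reflection, which implies that the mirror point for any feature lies halfway in height between the eyes and that feature; the variable names are ours, not from the text.

```python
# Mirror exercise: eyes 1.65 m above the floor, top of head 0.13 m higher.
eye = 1.65
head = eye + 0.13  # top of head: 1.78 m

# The mirror point for a feature sits halfway (in height) between the eyes
# and that feature, by the law of reflection.
bottom = (eye + 0.0) / 2   # to see the feet (feature at height 0)
top = (eye + head) / 2     # to see the top of the head

print(f"bottom = {bottom:.3f} m, top = {top:.3f} m")        # 0.825 m, 1.715 m
print(f"mirror height = {top - bottom:.3f} m")              # 0.890 m
print(f"half the person's height = {head / 2:.3f} m")       # 0.890 m
```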
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5367974638938904, "perplexity": 513.9281588224786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526818.17/warc/CC-MAIN-20190721020230-20190721042230-00447.warc.gz"}
http://weidenpavillon.ch/index.php?id=41&action=viewImage&gallery=0&resultPage=3&idx=52
D:\www\www711\typo3\typo3\sysext\cms\tslib\class.tslib_content.php on line 5049 Deprecated: Function ereg() is deprecated in D:\www\www711\typo3\t3lib\class.t3lib_div.php on line 4601 Deprecated: Non-static method t3lib_div::inList() should not be called statically, assuming $this from incompatible context in D:\www\www711\typo3\typo3\sysext\cms\tslib\class.tslib_content.php on line 5110 Deprecated: Non-static method t3lib_div::trimExplode() should not be called statically, assuming$this from incompatible context in D:\www\www711\typo3\typo3\sysext\cms\tslib\class.tslib_content.php on line 5161 Deprecated: Non-static method t3lib_div::testInt() should not be called statically, assuming $this from incompatible context in D:\www\www711\typo3\typo3\sysext\cms\tslib\class.tslib_content.php on line 5167 Deprecated: Non-static method t3lib_div::testInt() should not be called statically, assuming$this from incompatible context in D:\www\www711\typo3\t3lib\class.t3lib_tstemplate.php on line 1362 Warning: Cannot modify header information - headers already sent by (output started at D:\www\www711\typo3\typo3conf\temp_CACHED_psa531_ext_localconf.php:429) in D:\www\www711\typo3\typo3\sysext\cms\tslib\class.tslib_fe.php on line 2898 Bauphasen Werden Sie Mitglied im Verein Weidenpavillon Huttwil # Bauphasen Previous Back to Gallery Next Begegnungsstätte Raum für Kunst & Kultur Ort derEntspannung Deprecated: Non-static method t3lib_div::inList() should not be called statically, assuming $this from incompatible context in D:\www\www711\typo3\typo3\sysext\cms\tslib\class.tslib_fe.php on line 3106 Deprecated: Non-static method t3lib_extMgm::isLoaded() should not be called statically, assuming$this from incompatible context in D:\www\www711\typo3\typo3\sysext\cms\tslib\class.tslib_fe.php on line 3111
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.643081545829773, "perplexity": 23384.805064917993}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155322.12/warc/CC-MAIN-20210805032134-20210805062134-00213.warc.gz"}
http://mathhelpforum.com/calculus/120492-find-dy-dx.html
1. Find dy/dx

Question: Find $\displaystyle \frac{dy}{dx}$ for

$\displaystyle y = \frac{(x+2)^3 (3x+5)^{-4} \sin x}{(2x+2)^2}$

2. Differentiate:

$\displaystyle \ln y = 3 \ln (x+2) - 4\ln(3x+5) + \ln(\sin x) - 2\ln (2x+2)$

3. Hmm, that's a neat way of interpreting a derivative. Thanks for that; it gives me a new perspective on computing really long derivatives.

4. I am stuck here

Taking the log of both sides gives $\displaystyle \ln y = 3 \ln (x+2) - 4\ln(3x+5) + \ln(\sin x) - 2\ln (2x+2)$, and differentiating with respect to $x$:

$\displaystyle \frac{1}{y}\,\frac{dy}{dx} = \frac{9}{x+2} - \frac{32}{3x+5} + \cot x - \frac{8}{2x+2}$

$\displaystyle \frac{dy}{dx} = y\,\frac{9}{x+2} - \frac{32}{3x+5} + \cot x - \frac{8}{2x+2}$ .................. I am stuck here?

5. Remember that $\displaystyle \frac{d}{dx} \ln f(x) = \frac{f'(x)}{f(x)}$, hence

$\displaystyle \frac{d}{dx} \{4\ln (3x + 5)\} = \frac{4 \times 3}{3x + 5} = \frac{12}{3x + 5}$

6. I am getting the following answer:

$\displaystyle \frac{dy}{dx} = y\,\frac{13}{x-2} - \frac{12}{3x+5} + \cot x - \frac{4}{2x+2}$ ..................... Is this correct?

7. I haven't checked all of your work, but if $\displaystyle \frac{1}{y}\frac{dy}{dx} = \frac{13}{x-2} - \frac{12}{3x+5} + \cot x - \frac{4}{2x+2}$ then

$\displaystyle \frac{dy}{dx} = y \left( \frac{13}{x-2} - \frac{12}{3x+5} + \cot x - \frac{4}{2x+2} \right)$

(Note the parentheses.)

8. Thanks, mate.

$\displaystyle \frac{dy}{dx} = y \left( \frac{3}{x+2} - \frac{12}{3x+5} + \cot x - \frac{4}{2x+2} \right)$

9. Don't forget to substitute back $\displaystyle y$:

$\displaystyle \frac{dy}{dx} = \frac{(x+2)^3 (3x+5)^{-4} \sin x}{(2x+2)^2} \left( \frac{3}{x+2} - \frac{12}{3x+5} + \cot x - \frac{4}{2x+2} \right)$

10. Thanks, mate.
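The thread's final answer can be verified symbolically. The following sketch is not part of the original thread; it uses SymPy to compare a directly computed derivative against the logarithmic-differentiation result from post 9.

```python
import sympy as sp

x = sp.symbols('x')
y = (x + 2)**3 * (3*x + 5)**(-4) * sp.sin(x) / (2*x + 2)**2

# The closed form from post 9: y times the derivative of ln(y)
log_diff = y * (3/(x + 2) - 12/(3*x + 5) + sp.cot(x) - 4/(2*x + 2))

# Difference between SymPy's direct derivative and the thread's answer;
# it should simplify to 0 if the thread's result is correct.
print(sp.simplify(sp.diff(y, x) - log_diff))  # expected output: 0
```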
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.971077561378479, "perplexity": 7027.04803562481}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864110.40/warc/CC-MAIN-20180621075105-20180621095105-00546.warc.gz"}
http://mathoverflow.net/questions/592/logarithmic-structures-on-moduli-of-elliptic-curves-over-z?sort=oldest
# Logarithmic structures on moduli of elliptic curves over Z

I've heard it stated that if you take the moduli of elliptic curves with some level structure imposed (as a moduli scheme over Spec(Z)), there is a logarithmic structure that you can impose at the cusps so that the natural projection maps obtained by forgetting the level structure are log-etale (at least away from primes dividing the order of your level structure). I can have some rough intuition about how this happens over a field of characteristic zero, but not integrally. Can anybody explain this or give me a reference for this structure?

Additionally, has anybody worked out the appropriate integral ring of modular forms with logarithmic structure in some cases, similar to the Deligne-Tate calculation of modular forms over Z?

-

I think Kato's log purity theorem gives you this. See, for instance, Theorem B in Mochizuki's "Extending Families of Curves over Log Regular Schemes." I think all you need is that the cusps form a normal crossings divisor on X(1) [if you're worried about X(1) being a stack rather than a scheme, you can start with a bit of extra level structure coprime to the primes you're interested in] and then your map Y(N) -> Y(1) is tamely ramified, which tells you that the normalization X(N) of X(1) in Y(N) carries a canonical log-structure in which the map X(N) -> X(1) is log-etale.

-

If you're working away from the primes dividing the level, your curves have semi-stable reduction, and have canonical log-smooth log structures. For any pair (X,D), where X is smooth and D is a divisor with normal crossings, there is a log structure given by the set of functions in $\mathcal{O}_X$ that are invertible away from D. In your case, I think you take X to be the universal curve, and D to be the divisor at infinity. Forgetting a coprime level structure yields a map with vanishing log-cotangent complex. References (may not have your precise statement):

• F. Kato, "Log smooth deformation theory"
• M. Olsson, "Universal log structures on semistable varieties"

Olsson has some other papers that might be useful. He takes them off his web page when they get published, but sometimes you can find them with Google Scholar.

Edit: I haven't seen any work on the log-canonical rings of modular curves, but I don't really work in that area. You should allow poles of order n/2 for weight n forms, so for level 1, you get extra stuff like $E_{14}/\Delta$.

-

Thanks. I'll look at the references, but there are a couple of things that immediately come to mind. I think of modular forms of even weight as sections of a power of the cotangent bundle, which has a square root w which is the line bundle of invariant 1-forms. How do you interpret modular forms of odd weight in the logarithmic sense? It seems odd to try to write w(D/2). – Tyler Lawson Oct 15 '09 at 16:00

Google: no hits for "log spin structure" or "logarithmic spin structure". For notation, maybe $\pi_*(\omega^1_{E/X})$? The thing inside the parentheses implicitly involves the log structures. – S. Carnahan Oct 15 '09 at 16:37

Actually, I was hoping that it was something more like the fact that every point on the divisor has an automorphism group of size at least 2 - but I'm unsure in general about trying to talk about divisors on stacks. – Tyler Lawson Oct 15 '09 at 18:09

This isn't really my milieu, but I don't think you get odd weight forms if your level structure admits the -1 automorphism. If you have odd weight forms, I guess you could say that geometric fibers of the divisor at infinity are representable, but it doesn't seem to be a log statement. – S. Carnahan Oct 15 '09 at 20:37

Upon reviewing this, I should mention that I think that the standard rings of modular forms already incorporate the log structure. If you express complex-analytically the condition that the modular form $f(\tau)$ of weight $2k$ is holomorphic at $\infty$ in terms of $q = e^{2\pi i \tau}$, you find that it is equivalent to being of the form $g(q)\, d\log(q)^k$ for $g(q)$ holomorphic at the origin. – Tyler Lawson Sep 23 '10 at 1:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9186010360717773, "perplexity": 400.60831547171085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049273667.68/warc/CC-MAIN-20160524002113-00137-ip-10-185-217-139.ec2.internal.warc.gz"}
https://scioly.org/wiki/index.php?title=Anatomy_and_Physiology&direction=prev&oldid=33035
# Anatomy and Physiology

Anatomy is an event which tests students' knowledge about the anatomy of the human body. Division B will typically concentrate on two predetermined systems, whereas Division C will typically concentrate on three systems. Topics may include diseases in those systems as well as the general anatomy and function of each system from the cellular to the holistic scale. Check the General Anatomy page for information concerning basic topics of anatomy. The event can be run in stations or be administered as one test packet.

## 2015 Body Systems

Note: The Division B version of the event does not include the Immune System.

## The Stations

If there are stations, there may be 10-20 of them. There will be sections in your test corresponding to each of the stations with questions (the format of which is decided by the tester, and can vary widely from tester to tester). Students typically have a time limit at stations (i.e. 5 minutes per station, then rotate). There may also be a different type of testing, where students are given a time limit to look at a PowerPoint slide and answer the question/questions on that slide. With this format, the whole group will be tested at once.

Students should note that a model may well be used on tests. For example, the event writer could use a model of the entire body or a specific organ to base questions on. To do well on an identification station like this, make sure you know your labeling, and be prepared to find numbers on the model quickly. Sometimes it's hard to find certain numbers, so just look very hard, and eventually you will find it. If you really can't find one of the numbers, just move on.

## The Test

The test will have pages/sections corresponding to the individual stations (if there aren't stations then it will be a normal test). It will have blank lines for you to record your answer. If there are stations, there may be no questions/diagrams in the packet, so all work must be done at the corresponding station. All answers must be recorded in the packet. Spelling usually will count unless you have a very lax judge, so be absolutely certain everything is spelled to perfection. Points may also be taken away if the packet is not neat or legible. As you record your answers, make sure that you are recording on the right page/section/question. This may save you time and effort.

Please note that there may be lines for your team name, team number, or the participants' names on each page. No matter what, ALWAYS make sure you fill out that information on each page, for if you don't, they can take off points. In addition, if you don't identify yourself on your test, they will have a hard time finding you and letting you know about your results. Even if you got every question right, some judges will disqualify you for not filling out every field on your test on competition day.

There may be as many as 60 questions on the test. The test may include diagrams to label, math problems, or general knowledge questions.

## Materials

The only materials to bring are writing utensils along with a good eraser, two non-programmable calculators, and one double-sided page of notes containing information in any form from any source (i.e. pictures, diagrams, handwritten notes, typed notes...). No other resources are allowed. Make sure you print the guide to this event from the event info on soinc.org.

## Preparing for This Event

Make a binder! This will help you tremendously in preparing for this event.
Even though you can't bring it in, it's a great way to keep all your information in the same place and to remember it. The binder should include material about anything that the Anatomy rules say might be on the test. Review your notes when you wake up and right before you go to sleep every day. The small minutes of studying really add up.

Remember your charts and diagrams. They are very important in this event and will account for a majority of the questions on the test. They can be used in the testing room. Simple diagrams often help with studying more than the complicated ones do.

Flash cards can be a useful resource for studying the skeletal and muscular system, whether you create them yourself or buy them. A good study technique is to print out pictures of the muscles to study and put them on index cards. Also, you can make online flashcards on quizlet.com. It is also very helpful to type up a table or list of information about the diseases, so you have a quick reference sheet to study off of (whether weeks before competition, or right before it).

A useful studying book is the Complete Gray's Anatomy. However, it can get complicated, so using a high school, college, or high-level middle school textbook will greatly assist you in preparing for this event. It is also very helpful to practice, because the type of questions can vary widely from test to test. Study as much as you can and cover a wide range of material. Even if the rules don't specifically mention an area of a system to study, a good rule to keep in mind is better safe than sorry! The level of complexity of the tests will vary at each level, state, and from year to year. Better to study that one area in more detail than be unprepared for the test!

### Making the Note Sheet

What to include on your note sheet: Use diagrams often to maximize your note sheet. Try to find ones with a big font, so you can shrink them using image processing programs such as Paint while keeping them readable. Also, colored diagrams are easier to use, making it faster to find the information you want. Overall diagrams are very useful, as are ones that specialize in a particular function/part. Here is a good example: the diagram is colored, the font is big and it has information on most parts of the digestive system.

[Image: A diagram of the digestive system's organs.]

Listing the steps of gas exchange would be a life saver if you add it to your note sheet. Gas exchange questions are very common, so be prepared. The same goes for the digestive system. Understand the route food goes through, from your mouth to your large intestine.

Tips:

• Use as small of a font as you can. Go as small as you can, but make sure to keep it readable. There's no point in having volumes of information if you can't even interpret it.
• Make your own diagrams, either by hand or with an image manipulation program. The example below was made by aubrey048. Examples of image manipulation programs are GIMP and MS Paint.

[Image: An example of some muscle diagrams to use for your note sheet.]

• Color code. Use a different (readable) color for notes on each system. This will make things easy to find on competition day. Also color-code your diagrams if you can for maximum efficiency (as seen in the picture above). It's much easier to find a bright orange muscle than one outlined lightly in black. Keep the coding consistent so that by the end of the season you automatically associate a color with a type of information (ex: pink = muscles; blue = respiratory; green = endocrine, etc.). Highlighting will save you a LOT of time at competition. Each system can have color-coded subdivisions (diseases, functions of parts, etc.)
• Type your sheet up, then hand-write extra notes in the margins. You can write in places where the printer might not be able to print. This is time-consuming but well worth the effort.
• Source-check before doing anything. The last thing you need is to realize you put incorrect info on your note sheet, then have to do it all over again.
• Use space efficiently by prioritizing. Include the things you have the most trouble remembering first. Extra information can be added later if you have room.
• Use charts, like the hormones and Muscle Lists. Both (if minimized to fit your paper) are life-savers. Or make your own chart with specific information you need - the simple act of making a chart can help tremendously.
• Laser printers are recommended if your font is that small. Font sizes can be reduced manually if you treat text like a picture (by typing it onto an image manipulation program and then shrinking the image), though this may reduce the readability of your notes.
• After you print your note sheet, use a pen(cil) to write along the margins. This is a great way to fill up your note sheet, as the printer cannot print on the border of the paper. Remember not to write so small that you cannot see it.
• Communicate with your partner (if you have one). This is vital in EVERY event. You do not want to be the only person on your team who knows how the sheet is laid out - if this happens, during the test your partner will be asking you continuously where things are, which can be distracting. If you don't trust your partner enough to make the resource sheet, at least show it to them/take a practice test with it so they can familiarize themselves with it.
• Include formulas! Some tests will have you calculate the dead space in lungs, lung volume, blood pressure, and other anatomical quantities. Make sure you have the appropriate formulas for each system (a few standard examples are collected at the end of this page).

### Sample Exercises

Check the Test Exchange for Anatomy tests!

Endocrine
1. If people were injected with ghrelin, we would expect that they would ______.
A) feel sleepy B) eat more C) lose weight D) stop growing E) sweat more
2. Describe the three types of hormones and provide examples of each.
3. What is the location of the receptor for water-soluble hormones? What is the location of the receptor for fat-soluble hormones? Why is there a difference in the location of the two receptors?
4. What is a goiter? How can it be prevented?
5. What is the difference between an endocrine gland and an exocrine gland?
6. What is the effect of hyposecretion of estradiol?

Muscular
1. List the location, origin and insertion of the latissimus dorsi, rectus abdominis, and gastrocnemius.
2. How does exercise affect the muscular system?
3. List the steps of muscle contraction in order.

Respiratory
1. Describe the function of the respiratory system.
2. What is a potential cause of emphysema?
3. List the steps of gas exchange in order.

Nervous (2013-2014)
1. Describe poliomyelitis and list the different types and respective treatments.

Digestive
1. Which of these is not a part of the small intestine?
A) Ileum B) Proneum C) Jejunum D) Duodenum
2. Which of these is not a salivary gland?
A) Subpharyngeal Gland B) Parotid Gland C) Submandibular Gland D) Sublingual Gland
3. What does gastric juice do?
4. What is the difference between mechanical digestion and chemical digestion? Give an example of each.
5. What is the function of the liver in the digestive system?
6. Name the parts of the large intestine.
7. What is the appendix? What is its role?

Excretory
1. What are the functions of the excretory system as a whole?
2. What is urea?

Integumentary (2013-2014)
1. What are the five layers of the epidermis?
2. Name the four types of mechanoreceptors.
3. How might one treat athlete's foot?

## Practice Tests

See the Test Exchange for Anatomy (& Physiology) tests.
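Following up on the "Include formulas!" tip above, here are a few standard respiratory formulas of the kind such questions draw on. These are standard physiology, offered as illustrative examples rather than content from the original page ($V_T$ = tidal volume, $V_D$ = dead space, $f$ = breathing rate):

```latex
% Standard respiratory formulas (illustrative examples, not from the page)
\begin{align*}
\dot{V}_E &= V_T \times f           && \text{(minute ventilation)} \\
\dot{V}_A &= (V_T - V_D) \times f   && \text{(alveolar ventilation)} \\
VC  &= IRV + V_T + ERV              && \text{(vital capacity)} \\
TLC &= VC + RV                      && \text{(total lung capacity)}
\end{align*}
```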
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4434301555156708, "perplexity": 1249.13584448236}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361723.15/warc/CC-MAIN-20210228175250-20210228205250-00078.warc.gz"}
https://www.bionicturtle.com/forum/threads/t5-residential-mortgage-backed-securities.5835/
# T5. Residential Mortgage-Backed Securities

Discussion in 'P2.T5. Market Risk (25%)' started by EIA, May 9, 2012.

1. ### EIA (Member)

Hi David, I am going through the BT Veronesi questions. I came across this question on pg 10:

105.3. Barry the analyst calculated the effective duration of a pass-through MBS as 5.3 years. His effective duration is based on re-pricing the MBS with a yield shock of 50 basis points; i.e., current yield plus and minus 50 bps. However, Barry's manager observes that Barry did not vary the prepayment (PSA) assumption when re-pricing under either the higher/lower yield scenarios. His manager argues that Barry should vary the PSA assumption as he varies the interest rate input. If Barry varies the PSA assumption as instructed by his manager, which of the following is true?

a) The accurate duration will be lower than 5.3 years
b) The accurate duration will be higher than 5.3 years
c) It does not matter, neither duration nor convexity will be impacted
d) Duration is approximately unchanged at 5.3 years but convexity will increase

In your solution, you selected A but I don't think this answer is correct. I would have selected C, because Barry has calculated effective duration, and an increase or decrease in PSA does not affect the effective duration calculation except insofar as it affects the interest rate movement. Please check Veronesi pg 300, section 8.3.3.

BR, EIA

2. ### David Harper CFA FRM CIPM

Hi BR,

This question is a deliberate attempt to query an understanding of Veronesi 8.3.3. The directional impact is maybe harder to follow (see http://www.bionicturtle.com/forum/t...onvexity-of-pass-through-mbs.5256/#post-16876 ) ... however, it's important to understand that PSA impacts effective duration. In fact, the reason to use effective duration (which re-prices with rate shocks, as opposed to an analytical duration) is that it lets us employ a varying PSA assumption in order to capture the negative convexity. Put another way, Veronesi uses effective duration because it is the only way to get a duration which incorporates PSA.

Without the PSA assumption change, like any bond, a lower rate of (R - y) will give a higher MBS bond price of (P + x). The point of 8.3.3 is: if the rate goes lower, prepayments will increase, so we use a higher PSA assumption, which lowers the price of the MBS, which alters the duration and the convexity (i.e., it is the change in PSA assumption that allows us to simulate the negative convexity of the MBS). I hope that helps to explain, thanks,
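To make the mechanics of the thread concrete, here is a small hedged sketch of the calculation being discussed. The helper price_mbs() is a stand-in for whatever cash-flow model re-prices the pass-through at a given yield and PSA speed; it, and the function below, are illustrative assumptions rather than code from the thread or from Veronesi.

```python
# Hedged sketch of effective duration with and without a varying PSA
# assumption. price_mbs(y, psa) is an assumed helper that re-prices
# the pass-through at yield y and prepayment speed psa.

def effective_duration(price_mbs, y0, psa0, dy=0.005,
                       psa_if_rates_fall=None, psa_if_rates_rise=None):
    """Effective duration = (P_down - P_up) / (2 * P0 * dy)."""
    p0 = price_mbs(y0, psa0)
    # Barry's version: hold PSA fixed under both shocks.
    psa_down = psa_if_rates_fall if psa_if_rates_fall is not None else psa0
    psa_up = psa_if_rates_rise if psa_if_rates_rise is not None else psa0
    # The manager's version: lower rates -> faster prepayments (higher
    # PSA), which caps the price rise -- the source of the MBS's
    # negative convexity.
    p_down = price_mbs(y0 - dy, psa_down)  # yield shocked down 50 bps
    p_up = price_mbs(y0 + dy, psa_up)      # yield shocked up 50 bps
    return (p_down - p_up) / (2.0 * p0 * dy)
```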
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8809655904769897, "perplexity": 2938.952006675705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246640124.22/warc/CC-MAIN-20150417045720-00237-ip-10-235-10-82.ec2.internal.warc.gz"}
https://gianlubaio.blogspot.com/2012/07/vorsprung-durch-technik-bias.html
## Tuesday, 10 July 2012

### Vorsprung durch technik bias

Unlike the ground-breaking height experiment, this one is complete speculation (and also: no disrespect to anybody, irrespective of the car you drive). But: while riding my Vespa around London, I've been noticing that, in general (and I'll put this as mildly and unassumingly as I possibly can), Audi drivers are terrible.

What I mean is that if I think of all the drivers I encounter in my daily one-hour ride to and from work, the ones that I remember speeding, cutting everybody else to get in front at a traffic light and unnecessarily moving to the right while I'm filtering to make it more difficult for me seem to be driving an Audi $-$ I've not got enough data to work at the actual model level, so for the moment I'll concentrate on makes.

I have to admit that I thought of this on my way back earlier today, after seeing one of them nearly causing a multiple crash and then speeding through an amber-turning-red traffic light. Thus, of course I know that every possible known bias effect is present here. As soon as I realised that (which thankfully was just $\varepsilon \rightarrow 0$ seconds later), I tried to think carefully to see if I could remember any other instance of repeated bad driving by a particular brand of cars and, for the life of me, I could not think of any! The power of recall bias...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6488153338432312, "perplexity": 1136.0356724706835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583518753.75/warc/CC-MAIN-20181024021948-20181024043448-00316.warc.gz"}
http://mathhelpforum.com/advanced-algebra/10414-how-prove-value.html
# Math Help - How to prove this value

1. ## How to prove this value

Max -logdet(S)

s.t.

(1) LMI >= 0, where

LMI = [S,         Phi*S,   Phi1*S, Phi2*S;
       (Phi*S)',  S - r*S, 0,      0;
       (Phi1*S)', 0,       r*S,    0;
       (Phi2*S)', 0,       0,      r*S]

(2) S > 0
(3) 0 < r < 1
(4) K*S*K' <= um

S is a positive definite matrix variable; Phi, Phi1, Phi2, K and um are fixed; r is a scalar variable.

Because of the product of S and r, I can only maximize the objective by fixing r (e.g. 20 values between 0 and 1), running the program in MATLAB 20 times, and taking the largest objective over the 20 runs. I find there is always a single r such that the objective is the largest. But I want to prove this mathematically, and I cannot. I tried to use the S-procedure but I could not make it work. I also tried to prove its convexity, but it seems useless. Do you have any better idea? Thanks very much.

2. Sorry, the objective should be Max logdet(S).
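For what it's worth, the grid search described in the post can be written compactly in Python with CVXPY, since for each fixed r the problem is a convex determinant-maximization (maxdet) problem in S. This is only a sketch under assumptions (Phi, Phi1, Phi2 are square n-by-n arrays, K is a row vector, um is a scalar); none of it comes from the original MATLAB code.

```python
# Hedged sketch (not the poster's code): for fixed r the problem is a
# convex maxdet problem, so sweep r over a grid and keep the best value.
import numpy as np
import cvxpy as cp

def solve_fixed_r(r, Phi, Phi1, Phi2, K, um):
    n = Phi.shape[0]
    S = cp.Variable((n, n), symmetric=True)
    Z = np.zeros((n, n))
    LMI = cp.bmat([
        [S,            Phi @ S,   Phi1 @ S, Phi2 @ S],
        [(Phi @ S).T,  S - r * S, Z,        Z],
        [(Phi1 @ S).T, Z,         r * S,    Z],
        [(Phi2 @ S).T, Z,         Z,        r * S],
    ])
    LMI = (LMI + LMI.T) / 2           # symmetrize so CVXPY accepts >> 0
    constraints = [
        LMI >> 0,
        S >> 1e-8 * np.eye(n),        # S > 0, approximated numerically
        K @ S @ K.T <= um,            # assumes K is a row vector, um scalar
    ]
    prob = cp.Problem(cp.Maximize(cp.log_det(S)), constraints)
    prob.solve()
    return prob.value

# Mirror of the 20-run MATLAB loop over r in (0, 1):
# best = max(solve_fixed_r(r, Phi, Phi1, Phi2, K, um)
#            for r in np.linspace(0.05, 0.95, 19))
```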
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8218260407447815, "perplexity": 1126.89873040651}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657135777.13/warc/CC-MAIN-20140914011215-00162-ip-10-234-18-248.ec2.internal.warc.gz"}
https://forums.tomsguide.com/threads/question-about-connectors-in-visio.407684/
# Question about Connectors in Visio

#### Anna Marchetti (Estimable, Aug 12, 2015)

I'm creating some network maps in Visio, and with the connectors I was wondering if there's any way to do a many-to-one? I have some computers going to one printer and it looks a mess, so I was hoping to be able to merge all the multiple connectors from the computers into one when they meet up at the printer. I'm working in Visio 2013. Anyone know? Thanks.

#### Ralston18 (Splendid, Moderator)

I think there are a couple of solutions, for example: http://visguy.com/vgforum/index.php?topic=6953.0

If you google "Visio many to one 2013" or some similar words and order you will find quite a number of links. Hopefully one of them will be applicable to the overall requirements of your network map.

Is it a network printer? If so, you really do not need to have a connection from every computer to the printer per se. Just place the printer on the network and identify it as a network printer. Then it can be understood that all computers on the network have or could have access to that printer. Simplifies the drawing....

You can likewise google "network diagrams" and see other examples that you may be able to emulate in some manner.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9108054637908936, "perplexity": 1962.8285959878788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506673.7/warc/CC-MAIN-20200402045741-20200402075741-00539.warc.gz"}
https://projecteuclid.org/euclid.twjm/1500404804
## Taiwanese Journal of Mathematics

### ON NONLOCAL BOUNDARY VALUE PROBLEMS FOR HYPERBOLIC-PARABOLIC EQUATIONS

#### Abstract

The nonlocal boundary value problem for hyperbolic-parabolic equations

$$\left\{ \begin{array}{l} \dfrac{d^{2}u(t)}{dt^{2}}+Au(t)=f(t) \quad (0\leq t\leq 1), \qquad \dfrac{du(t)}{dt}+Au(t)=g(t) \quad (-1\leq t\leq 0),\\[6pt] u(-1)=\sum\limits_{i=1}^{N}\alpha _{i}u\left( \mu _{i}\right) +\sum\limits_{i=1}^{L}\beta _{i}u^{\prime }\left( \lambda _{i}\right) +\varphi, \qquad \sum\limits_{i=1}^{N}|\alpha _{i}|,\ \sum\limits_{i=1}^{L}\left\vert \beta _{i}\right\vert \leq 1, \quad 0<\mu _{i},\lambda _{i}\leq 1 \end{array} \right.$$

for a differential equation in a Hilbert space $H$, with the self-adjoint positive definite operator $A$, is considered. The stability estimates for the solution of this problem are established. In applications, the stability estimates for the solutions of the mixed type boundary value problems for hyperbolic-parabolic equations are obtained.

#### Article information

Source: Taiwanese J. Math., Volume 11, Number 4 (2007), 1075-1089.

Dates: First available in Project Euclid: 18 July 2017

Permanent link to this document: https://projecteuclid.org/euclid.twjm/1500404804

Digital Object Identifier: doi:10.11650/twjm/1500404804

Mathematical Reviews number (MathSciNet): MR2348553

#### Citation

Ashralyev, Allaberen; Ozdemir, Yildirim. ON NONLOCAL BOUNDARY VALUE PROBLEMS FOR HYPERBOLIC-PARABOLIC EQUATIONS. Taiwanese J. Math. 11 (2007), no. 4, 1075--1089. doi:10.11650/twjm/1500404804. https://projecteuclid.org/euclid.twjm/1500404804
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4550987184047699, "perplexity": 1406.4567311367196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027331228.13/warc/CC-MAIN-20190826064622-20190826090622-00515.warc.gz"}
http://gushieblog.blogspot.com/2006/
## Saturday, December 30, 2006

### Handling Method Arguments in Jython

I have been away from Jython for a while over the Christmas period but now I am back and eager to make some progress porting the csv module. So where was I? Oh yes, last time I managed to get into a position where test_csv.py ran successfully (success meaning that the test script executed - not that the tests passed!), so now I can start trying to get a test to pass. I have decided to tackle the test_reader_arg_valid1() test first simply because it's the first test in test_csv.py:

```python
def test_reader_arg_valid1(self):
    self.assertRaises(TypeError, csv.reader)
    self.assertRaises(TypeError, csv.reader, None)
    self.assertRaises(AttributeError, csv.reader, [], bad_attr = 0)
    self.assertRaises(csv.Error, csv.reader, [], 'foo')
    class BadClass:
        def __init__(self):
            raise IOError
    self.assertRaises(IOError, csv.reader, [], BadClass)
    self.assertRaises(TypeError, csv.reader, [], None)
    class BadDialect:
        bad_attr = 0
    self.assertRaises(AttributeError, csv.reader, [], BadDialect)
```

As you can see, a series of tests are being performed on the csv.reader() method so I need to concentrate on implementing just enough of it to get the test to pass. From the Python documentation, csv.reader() is defined as follows:

```
reader(csvfile[, dialect='excel'[, fmtparam]])
```

csvfile is the only required argument and it can be any object that supports the iterator protocol. Next, the dialect name can be specified as an optional parameter or omitted (in which case, the dialect will default to excel). The other optional fmtparam keyword arguments can be given to override individual formatting parameters in the current dialect. So csv.reader() has it all - mandatory, optional and keyword arguments. I am going to need to figure out how this works in Jython to pass the test_reader_arg_valid1() test.

In Jython three types of method are supported:

• StandardCall: Mandatory, positional arguments (i.e. void method(PyObject arg1, PyObject arg2) {})
• PyArgsCall: List of optional, positional arguments (i.e. void method(PyObject[] args))
• PyArgsKeywordsCall: List of optional, positional or keyword arguments (i.e. void method(PyObject[] args, String[] keywords))

As csv.reader() must support keyword arguments, it must be defined as follows:

```java
public static void reader(PyObject[] args, String[] keywords) {}
```

The parameter list must be specified exactly like this (except for the identifier names, which can differ) because Jython uses reflection to make a method of type PyArgsKeywordsCall only if it has exactly one PyObject array as the first argument and one String array as the second argument. If you add another argument to the beginning of the parameter list, then the method will automatically be of StandardCall type and won't support keyword arguments.

I can use the handy helper class ArgParser to parse the arguments and extract the relevant values. First, I need to create an instance of ArgParser as follows:

```java
public static PyObject reader(PyObject[] args, String[] keywords) {
    ArgParser ap = new ArgParser(
        "reader", args, keywords,
        new String[] { "csvfile", "dialect", "delimiter",
                       "doublequote", "escapechar", "lineterminator",
                       "quotechar", "quoting", "skipinitialspace" });
    //..
}
```

args and keywords are passed into ArgParser along with a list of the names of each argument that the method supports. Then it is possible to simply pick out the value of a parameter by invoking a getXXX() method specifying the position of the argument.
So to get the value of "quotechar", you'd ask for the value at position 6 as follows:

```java
String quotechar = ap.getString(6, "'");
```

Simple, eh? It is equally as easy to support optional arguments. For example, the dialect argument would be extracted as follows:

```java
String dialect = ap.getString(1, "excel");
```

Here, if dialect is not specified then it will default to "excel". Now that I have learned how to support positional, optional and keyword arguments in Jython I can focus on type checking the arguments and throwing the appropriate exceptions in order to pass test_reader_arg_valid1().

## Wednesday, December 13, 2006

### Module Methods and Failing Tests

I have been looking at how CPython handles keyword arguments in methods today. I've had my fair share of experience with Python over the years (though, not so much in the last few months) but I was totally unaware that methods may or may not support keyword arguments! Maybe that's because I often used the PyQt GUI toolkit bindings which didn't support keyword arguments anyway, I'm not sure.

At the end of my last entry my _csv module was in a position where I was ready to implement the register_dialect() method. To do this I needed to figure out how Jython handles arguments as I thought I would need to support keyword arguments for register_dialect() - it's a Python method after all and all Python methods support keyword arguments, don't they? In fact, as it turns out this isn't always the case! Although not explicitly mentioned in the documentation, some CPython methods don't support keyword arguments and if you try to use them you will get a TypeError. Indeed, csv.register_dialect() is one such method:

```
Python 2.3.6 (#1, Nov 17 2006, 22:32:43)
[GCC 4.1.2 20060928 (prerelease) (Ubuntu 4.1.1-13ubuntu5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import csv
>>> csv.register_dialect(dialect=None, name="excel")
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: register_dialect() takes no keyword arguments
```

Presumably register_dialect() behaves like this because it is not really a Python method. The csv.py module just exposes register_dialect() from the C module, but a normal Python developer would not know this and would quite rightly expect the method to support keyword arguments. This inconsistency is less than ideal and it's tempting to fix it for Jython but I think that would be a mistake. Jython is supposed to mimic CPython's behaviour whether rightly or wrongly. From a Jython perspective it's right if it's the way CPython behaves. So, in the case of register_dialect() I can explicitly specify the arguments as follows:

```java
public static void register_dialect(PyObject name, PyObject dialect) {}
```

If I try to run test_csv.py now I get the following error:

```
Traceback (innermost last):
  File "dist/Lib/test/test_csv.py", line 9, in ?
ImportError: no module named gc
```

The test_csv.py module uses the gc module which isn't supported by Jython yet. For now, I have just completely side-tracked this problem by making a copy of test_csv.py and removing all the tests that involve gc! Problem solved (temporarily at least)! Now, when I run my own copy of test_csv.py without the gc calls I get yet another annoying error:

```
Traceback (innermost last):
  File "test_csv.py", line 363, in ?
  File "test_csv.py", line 364, in TestEscapedExcel
  File "/jy/dist/Lib/csv.py", line 39, in __init__
None: Dialect did not validate: quoting parameter not set
```

This, and no doubt many other future cryptic errors, are due to the fact that all the identifiers are the wrong type - they are all PyObjects, which confuses Jython a great deal. Now is the right time to revisit each identifier and change it to the correct type.

It's worth noting at this stage, my goal for today is to get _csv into a state where it is good enough to fail all tests. Wow, what a statement - let's say that again: I want _csv to be good enough to fail all tests! What a strange goal to aim for. Well, actually once I have _csv in a state where test_csv.py can properly execute I am in a far better position than I was before. I can analyse the output of test_csv and tackle one test at a time, gaining satisfaction and confidence as I go. This is one of the primary advantages of Test Driven Development and it's surprising how effective it is.

First, I will tackle the methods. Rather than figure out the parameters for each method I have simply specified "PyObject[] args" as the parameter list, which just means the method supports 0 or more arguments. For example, I have implemented unregister_dialect() as follows:

```java
static public PyObject unregister_dialect(PyObject[] args) {
    return null;
}
```

For all the QUOTE_xxx identifiers I looked in the _csv.c module and saw they were enums. In Java I just make these separate integers to get them to work initially. I left Error as a PyObject as I will need to spend some time looking at exceptions at a later date. Similarly, I have left Dialect well alone and will look into it when the time is right. Finally, I changed __doc__ and __version__ to empty Strings to complete the process.

With all the identifiers now the correct type, Jython is happy to run test_csv.py. Of course, not many tests pass and there is a lot of output - here's a sample of it:

```
======================================================================
FAIL: test_reader_arg_valid1 (__main__.Test_Csv)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/jy/dist/Lib/unittest.py", line 229, in __call__
  File "test_csv.py", line 19, in test_reader_arg_valid1
  File "/jy/dist/Lib/unittest.py", line 295, in failUnlessRaises
AssertionError: TypeError
----------------------------------------------------------------------
Ran 65 tests in 0.666s

FAILED (failures=4, errors=55)
Traceback (innermost last):
  File "test_csv.py", line 716, in ?
  File "test_csv.py", line 0, in test_main
  File "/jy/dist/Lib/test/test_support.py", line 262, in run_unittest
  File "/jy/dist/Lib/test/test_support.py", line 246, in run_suite
TestFailed: errors occurred; run in verbose mode for details
```

I may have 59 failures but this is a much better position than before. I now have something to focus on - I can tackle each test as it comes and gain confidence as the number of failures decreases and the number of passes increases until the porting process is complete. Yippee! Now I am ready to implement the module proper, the first task is to find out what "c.s.v" stands for! ;) :)
### Porting C Modules to Jython

CPython includes many library modules, some of which are written in pure Python (which is great because these will work in Jython (hopefully) without modification), but others are written in C, which means they must be rewritten in Java in order to work with Jython. Take the csv module for example. It is a Python library module so it should be possible to use it in Jython as-is without any extra work. If only it were that simple! You see, if you look at the code for csv.py you'll notice it uses another module called _csv for most of its behaviour and it just so happens that _csv is written in C. Therefore, it is necessary to port this module to Jython. It's worth noting that there are many cases where a Python library module is just a wrapper for an underlying C module, but not always - for example, cStringIO and cPickle are first-class library modules implemented in C.

Before actually creating the _csv module, it's always helpful to see things fail first; then you get a nice feeling of satisfaction when you get the test to pass (or fail less!). So to prove that csv doesn't work in current Jython builds I ran the following test:

```
bash# jython dist/Lib/test/test_csv.py
```

which resulted in the following predictable error:

```
Traceback (innermost last):
  File "dist/Lib/test/test_csv.py", line 8, in ?
  File "/work/jython/dist/Lib/csv.py", line 7, in ?
ImportError: no module named _csv
```

As expected csv.py is unable to import the _csv module because it doesn't exist. To create it I followed the guidelines by Charlie Groves in the wiki (which - funnily enough - uses the csv module as an example - what a coincidence!). I created a _csv.java file in $JYTHON_HOME/src/org/python/modules then added "_csv" to the list of modules in Setup.java. After building Jython and running test_csv.py again I saw the following error:

```
Traceback (innermost last):
  File "dist/Lib/test/test_csv.py", line 8, in ?
  File "/work/jython/dist/Lib/csv.py", line 7, in ?
ImportError: cannot import names Error, __version__, writer, reader,
register_dialect, unregister_dialect, get_dialect, list_dialects,
QUOTE_MINIMAL, QUOTE_ALL, QUOTE_NONNUMERIC, QUOTE_NONE, __doc__
```

Now, I have a different error, but the fact that the first error has disappeared means that Jython has recognised my new _csv module! The new error is just Jython complaining because _csv doesn't define any of the identifiers that it is expecting. Some of the missing identifiers are simple to resolve, like __doc__ which is just a string. Others are more difficult and will require further investigation, like Error which is an exception and I don't know how to do exceptions in Jython yet. For now, I will just add everything as a PyObject to get past the error, then I will revisit each in turn. Here's _csv.java as it looks after adding all the missing identifiers:

```java
public class _csv {
    public static PyObject Error;
    public static String __version__ = "1.0";
    public static PyObject Dialect;
    public static PyObject writer;
    public static PyObject reader;
    public static PyObject register_dialect;
    public static PyObject unregister_dialect;
    public static PyObject get_dialect;
    public static PyObject list_dialects;
    public static PyObject QUOTE_MINIMAL;
    public static PyObject QUOTE_ALL;
    public static PyObject QUOTE_NONNUMERIC;
    public static PyObject QUOTE_NONE;
    public static String __doc__;
}
```

Now, when I run test_csv.py I get the following error:

```
Traceback (innermost last):
  File "dist/Lib/test/test_csv.py", line 8, in ?
  File "/work/jython/dist/Lib/csv.py", line 87, in ?
TypeError: call of non-function ('NoneType' object)
```

Although the error isn't very helpful, I can go to line 87 in csv.py (just by clicking on the error in Eclipse) and see that Jython is unhappy because register_dialect is supposed to be a method yet I have defined it as a PyObject, so now I can forget about the other identifiers and focus on getting this method to work. This is where this entry ends while I go and figure out how methods and dynamic arguments work in Jython!

## Monday, November 20, 2006

### Trac: More Than Just a Bug Tracker

Introducing Trac

Trac is an enhanced wiki and issue tracking system for software development projects.

Note: Click each image to see full size.

Trac uses a minimalistic approach to web-based software project management. As you can see from the navigation bar above, Trac includes a wiki and a bug tracker (where bugs and tasks are referred to as tickets), as well as other less obvious features:

• Timeline - lists all Trac events that have occurred in chronological order, a brief description of each event and, if applicable, the person responsible for the change.
• Roadmap - provides a view on the ticket system that helps planning and managing the future development of a project.
• Browse Source - Trac is fully integrated with Subversion - more on this later!
• Lots More!

Creating a New Ticket

Entering a new ticket is simple. Just select the type, enter a description, select the relevant properties then hit "Submit Ticket". One of the major advantages of Trac is that it's extremely easy to add new fields to the ticket system.

Integrated Wiki

A fully featured wiki is integrated into the Trac system with full history and diff support.

Queries

Searches and queries can be done through the SQL-style reports or the more user-friendly "Custom Query" screen shown here. The query interface supports custom fields and the results can be sorted by column.

Project Management

The roadmap section provides an interactive graphical overview of progress for each milestone. Clicking on the filled part of the bar takes you to a query showing all completed tickets and clicking on the empty part shows all active tickets.

Milestones

Clicking on a particular milestone from the Roadmap will take you to a detailed view showing more statistics, this time for various different properties. You can view tickets by owner, severity, etc.

Timeline

The timeline page shows a chronological list of all events and is a good way to see what's changed since your last visit. It includes all sorts of interesting events from wiki changes to subversion commits to milestone completions. Each event provides a link to more detailed information. For example, an svn commit links directly to a visual diff of the changeset, which neatly brings me onto the next feature...

Changesets

Trac is extremely well integrated with Subversion and provides a nifty diff viewer. Shown here is the in-line viewer but you can alter it to show changes side-by-side. Diffs aren't limited to the previous change - you can do a diff on any revision in the repository as illustrated in the next section.

Source Browser

The source browser lists all the changes in the repository and allows you to compare any two revisions - very powerful!

Trac provides extensive support for linking to various events and items within the system for both wiki pages and ticket comments. Linking to source code changes is particularly powerful.
When a fix is detected for a particular bug, the developer can easily link to the changeset from the ticket, allowing readers to jump to a diff showing exactly what changes were required to fix the bug.

Inline Diffs

If linking to a particular changeset isn't immediate enough for you, then why not display the diff directly in the wiki page or ticket comment?

Summary

For more information on Trac refer to the website. There is also a demo project that you can check out to evaluate and play around with Trac before downloading.

## Tuesday, June 13, 2006

As a C++ programmer accustomed to Doxygen I was always curious to learn a language where automatic code generation was taken seriously and supported as standard. When I finally moved over to a Java project I was shocked to discover how obtrusive and "in your face" Javadoc is. It seems to go out of its way to get in my way! By comparison Doxygen is about as good as it gets. It is designed to produce great looking documentation with the least amount of developer effort. Javadoc, on the other hand, expects developer contribution in areas that I feel are perfect candidates for automation.

Take paragraphs for example; Javadoc expects the developer to use the standard HTML paragraph tag <p> in the comments. Why? Why? Why? Surely it would be quite simple to automatically detect an empty line as the start of a new paragraph? Many Java developers - including the Javadoc development team I'm sure - would take the view that HTML is the obvious choice for Javadoc text formatting, and I agree that, at least theoretically, it seems an obvious choice. In practice however, there is simply no need for HTML for simple text formatting such as marking text as bold, italic, etc. Using HTML for anything else is overkill in a source comment and only serves to make the comment unreadable in source form.

```java
/** This is <i>the</i> Rectangle class. <p>

    Refer to <a href="./doc-files/shapes-overview.html">
    shape-overview</a> for more details. <p>

    There are four types of supported {@link Shape}:
    <ul>
      <li>{@link Rectangle} (this class)</li>
      <li>{@link Circle}</li>
      <li>{@link Square}</li>
      <li>{@link Triangle}</li>
    </ul>
*/
```

Here is the equivalent comment using Doxygen:

```cpp
/** This is <i>the</i> Rectangle class.

    Refer to \ref shape-overview for more details.

    There are four types of supported Shape:
    - Rectangle (this class)
    - Circle
    - Square
    - Triangle
*/
```

Points to note in this comparison are:

• Doxygen will automatically recognise all code objects and insert a hyperlink, hence there is no need for a @link tag.
• Doxygen provides a very convenient shorthand notation for lists.
• Notice how easy it is to reference another page in the documentation compared to Javadoc's use of the HTML HREF tag.

The most important problem with the Javadoc comment in the comparison is how much I need to concentrate on formatting issues while writing it. When writing Javadoc I am constantly thinking about what should and shouldn't be linked, whether the list will look right, etc. This is frustrating because, while I do want to document my code well I also want to focus on coding. Therefore, due to the effort involved in commenting Javadoc-style, I usually focus on the code while in a heavy development session then I go through and document everything afterwards. I'd much rather document my code incrementally during development, but Javadoc, it seems, almost strives to make this as difficult as possible!

Doxygen allows me to use HTML where it works well (marking text as bold, etc.) but also supports convenient shorthand for lists and is intelligent enough to realise that an empty line should be converted into the start of a new paragraph in the generated documentation. I have provided the generated documentation of the Shape example for both Doxygen and Javadoc so you can decide for yourself which approach you prefer. Note the following Doxygen features:

• Doxygen provides a hyperlinked graphical class hierarchy although it is initially well hidden! From the main page, select the "Classes" tab, then the "Class Hierarchy" sub-tab, then click "Goto the graphical hierarchy".
• Doxygen will produce a hyperlinked graphical class hierarchy for every class at the top of the page.
• In the Doxygen page for Rectangle, notice that there is a link to the source code. Doxygen generates a hyperlinked HTML source browser for all source code.
• Doxygen has a "\todo" command and will automatically generate a hyperlinked todo page. Handy! It also supports a bug list and test list.
• Doxygen provides many more features including full support for graphical class charts, grouping of classes, mathematical formulas, multiple output formats (HTML, LaTeX, PDF, man pages, HTMLHelp, etc.).

So what do you think? If you are a Java developer, are you surprised how powerful Doxygen is, or do you feel that the Javadoc approach is better? I'd be interested to hear from developers who really prefer the Javadoc approach as it baffles me, that's for sure!

## Tuesday, May 16, 2006

### Vim 7.0 Released

A new and much improved version of my favourite editor was released last week (I would've posted earlier but my blog didn't exist then!). It includes handy features such as tabbed windows, visual on-the-fly spell checking and an intellisense/auto-complete feature called "omni-completion". I have played with it a little but haven't been able to get the omni-completion working for Java yet. I will figure it out when I have more time, maybe later in the week.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23354151844978333, "perplexity": 2920.6747509128923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645513.14/warc/CC-MAIN-20180318032649-20180318052649-00495.warc.gz"}
http://math.stackexchange.com/questions/141008/coset-multiplication-when-h-1-1234
# coset multiplication when $H = \{(1),(12)(34)\}$

Let $H = \{(1),(12)(34)\}$ and $\alpha_1 = (243), \alpha_2 = (142), \alpha_3 = (132),$ and $\alpha_4 = (234)$. Coset multiplication is a bit confusing to me. The book states in an example that $\alpha_1H = \alpha_2H$ and I'd like to see it for myself; however, not a single paragraph in this book actually explains the operation of $\alpha_1H$. As to why I wrote the other $\alpha_i$'s, the book claims that: $\alpha_1 \alpha_3H \neq \alpha_2 \alpha_4H$. I just would like to see it for myself before I press forward. Thanks!

-

$\alpha_1 H$ just means you take every element in $H$ and multiply on the left by $\alpha_1$. So $$\alpha_1 H = \{\alpha_1 (1), \alpha_1 (12)(34)\} = \{(243)(1), (243)(12)(34)\} = \{(243), (142)\}$$ Can you compute $\alpha_2 H$ yourself? What about $\alpha_1\alpha_3 H$ and $\alpha_2 \alpha_4 H$?
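For anyone who wants to see the book's claim checked mechanically, here is a short SymPy sketch. It is not part of the original thread, and note that SymPy composes permutations left-to-right, the opposite of the textbook's right-to-left convention, which is why the factors below appear in reversed order.

```python
# Hedged check that a1*H == a2*H using SymPy (not from the thread).
from sympy.combinatorics import Permutation

e  = Permutation([0, 1, 2, 3])   # (1), the identity, 0-indexed
t  = Permutation([1, 0, 3, 2])   # (12)(34), 0-indexed
H  = [e, t]
a1 = Permutation([0, 3, 1, 2])   # (243): 2->4, 4->3, 3->2, 0-indexed
a2 = Permutation([3, 0, 2, 1])   # (142): 1->4, 4->2, 2->1, 0-indexed

# SymPy's p*q applies p first, then q; the textbook's a*h applies h
# first, so the textbook product a*h is written h*a here.
coset = lambda a: {tuple((h * a).array_form) for h in H}
print(coset(a1) == coset(a2))    # expected: True
```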
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9133792519569397, "perplexity": 142.7975812717877}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802769550.123/warc/CC-MAIN-20141217075249-00115-ip-10-231-17-201.ec2.internal.warc.gz"}
https://www.eurotrib.com/story/2018/8/7/185625/2834
## Notes on The Language of the Third Reich

by gmoke Tue Aug 7th, 2018 at 06:56:25 PM EST

I first learned of Victor Klemperer's The Language of the Third Reich in a column Mike Godwin of Godwin's Law ("As an online discussion continues, the probability of a comparison to Hitler or to Nazis approaches one") wrote in June 2018 in the LA Times (http://www.latimes.com/opinion/op-ed/la-oe-godwin-godwins-law-20180624-story.html). Godwin quoted Klemperer on how, at the beginning of the Nazi regime, he "was still so used to living in a state governed by the rule of law" that he couldn't imagine the horrors yet to come. "Regardless of how much worse it was going to get," he added, "everything which was later to emerge in terms of National Socialist attitudes, actions and language was already apparent in embryonic form in these first months."

Klemperer was, by training, a philologist (philology, the study of language in oral and written historical sources, combines literary criticism, history, and linguistics), and he kept a diary throughout the Third Reich which I've always meant to read. This is a good introduction to Klemperer and seems very apt in these days when the public discourse is full of misinformation, propaganda, and outright lies. Klemperer classified the language of the Third Reich as LTI [Lingua Tertii Imperii]. A very interesting book which I'm still digesting. Thanks, Mike Godwin.

The Language of the Third Reich by Victor Klemperer
NY: Continuum Books, 2000
ISBN 0-8264-9130-8

(page 14) Words can be like tiny doses of arsenic: they are swallowed unnoticed, appear to have no effect, and then after a little time the toxic reaction sets in after all. If someone replaces the words 'heroic' and 'virtuous' with 'fanatical' for long enough, he will come to believe that a fanatic really is a virtuous hero, and that no one can be a hero without fanaticism.

(21) One of their banners contends that 'You are nothing, your people is everything.' Which means that you are never alone with yourself, never alone with your nearest and dearest, you are always being watched by your own people.

The sole purpose of the LTI [Lingua Tertii Imperii] is to strip everyone of their individuality, to paralyse them as personalities, to make them into unthinking and docile cattle in a herd driven and hounded in a particular direction, to turn them into atoms in a huge rolling block of stone. The LTI is the language of mass fanaticism. When it addresses the individual - and not just his will but also his intellect - where it educates, it teaches means of breeding fanaticism and techniques of mass suggestion.

(24) But clichés do indeed soon take hold of us. 'Language which writes and thinks for you...'

NB: Listen for how often clichés and buzzwords become common among politicians and pundits and the public.

(201) But did the Americans and the Nazis really go in for the same kind of intemperance when it came to numbers and figures? I already had my doubts at the time. Wasn't there a bit of humour in the thirty feet of intestines, couldn't one always sense a certain straightforward naivety in the exaggerated figures of American adverts?
Wasn't it as if the advertiser was saying to himself each time: you and I, dear reader, derive the same pleasure from exaggeration, we both know how it's meant - so I'm not really lying at all, you subtract what matters and my eulogy isn't deceitful, it simply makes a greater impression and is more fun if it's expressed as a superlative? ... It may well be that the LTI learned from American customs when it came to the use of figures, but it differs from them hugely and twice over: not only through exorbitant use of the superlative, but also through its deliberate maliciousness, because it is invariably and unscrupulously intent on deception and benumbing.

NB: Barnum from The Humbugs of the World: An Account of Humbugs, Delusions, Impositions, Quackeries, Deceits and Deceivers Generally, in All Ages: "But need I explain to my own beloved countrymen that there is humbug in politics? Does anybody go into a political campaign without it? are no exaggerations of our candidate's merits to be allowed? no depreciations of the other candidate? Shall we no longer prove that the success of the party opposed to us will overwhelm the land in ruin?"

Trmp (or Tony Schwartz) from The Art of the Deal: "The final key to the way I promote is bravado. I play to people's fantasies. People may not always think big themselves, but they can still get very excited by those who do. That's why a little hyperbole never hurts. People want to believe that something is the biggest and the greatest and the most spectacular. I call it truthful hyperbole. It's an innocent form of exaggeration--and a very effective form of promotion."

Full notes at http://hubeventsnotes.blogspot.com/2018/08/notes-on-language-of-third-reich.html

Is it complete, or did they remove the chapter comparing the Nazis and the Zionists? If I remember correctly, this was even a problem for the DDR government. (His diaries are mostly unknown in Israel.)

Counterpunch yesterday published an essay by Ken Surin on the instrumentality of, and connotations assigned to, rhetorical "anti-semitism" and "NAZI" similes. It is titled The UK's Labour Party and Its "Anti-Semitism" Crisis. The prompt is not nearly as interesting as the variety of "demons" and "evil" that humans have conjured with mere speech these past millennia to rationalize, ironically, a desire to murder or eat ... their own and others. Why? That is the biggest question, the answers to which are not satisfying either.

Diversity is the key to economic and political evolution.

First of all, Klemperer writing about this under the Nazis is a bit different from your comparisons. As I recall it (I can't look it up right now), he bases his comparisons on a careful study of writings by Herzl and Buber on the one hand, and Rosenberg on the other, and regards them both as consequences of German Romanticism. But you should really base your comments on what he writes, not what is happening in the Labour party (not in the Conservative party? They joined with the far-right antisemites in the European Parliament, not Labour), who have probably not even heard of some of the writers that I just mentioned.
ahh, yes, "German Romanticism" in particular, periodicity in general as if to differentiate the form of a motif: I have alluded here to this 19th century era in the context of reactionary European "nationalism," epitomized by  The French Revolution, as well as the "völkisch movement" that quickly followed Bonaparte's imperial project which began with pacifying continent. No one took up the topic. I have also more recently excerpted purported testimony from History of the Peloponnesian War to emphasize the continuous and ubiquitous employments of rhetorical symbols among belligerents to rationalize war (genocide). To your point, I don't disagree. I also wrote: "The prompt [CORBYN] is not nearly as interesting as the variety of 'demons' and 'evil' that humans have conjured with mere speech these past millennia to rationalize, ironically, a desire to murder or eat ... their own and others." I will note here, from your appeal for precision in correspondence ("truth" of Proposition A), the urge to resist the implications of studied, successful methods of political indoctrination within the society you inhabit. "Demonizing the Other": Some call it "meme." Diversity is the key to economic and political evolution. ahh, yes, "German Romanticism" in particular, As I recall (I read it 20 years ago) his analysis was much more specific than that. But my main question (which you haven't answered) is whether this chapter is present in the English translation or not (as I recall, the DDR removed it, at least in the first edition). But in any case, there's no point in discussing my (possibly inaccurate) recollections of what he said. How about somebody with access to the book (which I don't have while travelling) actually reading what he wrote? It's only one chapter, probably not very long. whether this chapter is present in the English translation or not My answer is, it doesn't matter whether or not is was. The point has been made a thousand times (figuratively, speaking) before. Diversity is the key to economic and political evolution. Really? I have never seen it made in the form of a detailed analysis of the sources by an expert on literature, without reference to what Israel has actually done (the state didn't exists when he wrote). How about some examples? And may I point out that without reading LTI or his diaries (a good thing to do in any case) you can't really know whether whether the point has been made or not (once again, my summary based on memories from 20 years ago should not be the basis for discussion) (189)  He definitely got the idea from Herzl of seeing the Jews as a people, as a political entity, and of categorizing them as "global Jewry [Weltjudentum]"'. NB:  Hitler and Zionism, beware of who you take as an enemy for you become like them (193)  Later, using a number of key words and quotations, I set down clearly the similarities and dissimilarities between Herzl and Hitler.  There were, thank God, also dissimilarities between them. (196)  Of all the things on which Herzl bases his idea of a unified people, there is only one which truly fits the Jews:  their common opponent and persecutor;  seen from this point of view the Jews of all nations certainly unite into `global Jewry' in their opposition to Hitler - the man himself, his persecution complex and the precipitous cunning of his mania gave a concrete form to that which previously had only existed as an idea, and he converted more supporters to Zionism and the Jewish state than Herzl himself.  
And Herzl once again - from whom could Hitler have gleaned more crucial and practical ideas for his own purposes? ... The problem is that Hitler and Herzl feed to a very large extent on the same heritage.

Copied from my complete notes at http://hubeventsnotes.blogspot.com/2018/08/notes-on-language-of-third-reich.html

Solar IS Civil Defense

From your blog: "and kept a diary throughout the Third Reich which I've always meant to read." The diaries before and after the Third Reich are also worth reading, but may not have been translated.

Is there any new information or knowledge to be found in these notes?

Diversity is the key to economic and political evolution.

It was new when he wrote it.....

I believe you misspelled "new"; to wit: Arendt's Eichmann in Jerusalem, anything Frankfurt School, and The Pitfalls of National Consciousness, for example. What I'm saying is, the motives and subjects of war (genocide) are well known, not unique to Jews, and, sadly, socially acceptable. Remediation, OTOH, is not well understood.

Diversity is the key to economic and political evolution.

Arendt was writing well after 1943 or 1944, when Klemperer developed these ideas in his diaries (and may have written some of LTI as well).

ahh, yes, the appeal to primacy to which I've not already alluded in English translations of "moral imperative" across millennia: What message are you attempting to communicate?

Diversity is the key to economic and political evolution.

It was new to me before I read it. If you haven't read it, it will be new to you too. There is some news that stays news.

Now, I'd be interested in finding an English translation of Hermann Broch's book on Massenwahnprojekt, a project I understand has yet to be undertaken.

Solar IS Civil Defense
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2966431975364685, "perplexity": 3920.948836212435}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221214538.44/warc/CC-MAIN-20180819012213-20180819032213-00381.warc.gz"}
https://math.stackexchange.com/questions/989640/bernoulli-trials-case-in-probability
# Bernoulli trials case in probability

A fair die is tossed twice. About how many times would you expect to roll 3 or greater?

Based on a sequence of Bernoulli trials:

P(exactly k successes in n trials) = C(n,k) p^k q^(n-k), where p = probability of success and q = probability of failure.

So in this case, p = 2/3 and q = 1/3, and

P = C(2,0) p^0 q^2 + C(2,1) p^1 q^1 + C(2,2) p^2 q^0 = 1,

while on the other hand, by common sense, the answer should definitely be greater than 1. So, any ideas? Maybe I messed up some definitions.

---

Since the successes $X$ are binomially distributed with parameters $p=2/3$ and $n=2$, as you correctly have, the expected number of successes $E[X]$ is given by $$E[X]=np=2\cdot\dfrac{2}{3}=\dfrac{4}{3}=1.333>1$$ You have calculated probabilities and forgotten to multiply by $k$ in order to find the expected value. Since you have all three probabilities, they just add up to 1. Your formula would be correct as follows $$E[X]=0\cdot P(X=0)+1\cdot P(X=1)+2\cdot P(X=2)$$ (you missed the $0,1$ and $2$ in front of the probabilities).
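Numerically, using the three probabilities that already appear as terms in the question's sum ($P(X=0)=\frac{1}{9}$, $P(X=1)=\frac{4}{9}$, $P(X=2)=\frac{4}{9}$), this gives

$$E[X]=0\cdot\frac{1}{9}+1\cdot\frac{4}{9}+2\cdot\frac{4}{9}=\frac{12}{9}=\frac{4}{3},$$

matching $np$ as expected.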
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.912322461605072, "perplexity": 329.6471271390892}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038061562.11/warc/CC-MAIN-20210411055903-20210411085903-00113.warc.gz"}
https://www.yaclass.in/p/science-state-board/class-7/measurement-14102/fundamental-and-derived-quantities-9348/re-e53a83f2-d144-4e73-934d-ba22262cd3f7
Length is one of the fundamental quantities: it cannot be expressed in terms of any other quantity. Other measurements, in derived quantities such as area and volume, can be calculated using length.

Area is the amount of space on a flat (plane) surface or a two-dimensional object. Length is a fundamental quantity; here, two lengths are multiplied to calculate the area, which is a derived quantity. The formula is given as

$$\mathit{Area} = \mathit{Length} \times \mathit{Breadth} = l \times b$$

Using the above formula, one can find the area of a book, a house, or even a plot of land.

Unit of area $=$ $metre \times metre$ $=$ $metre^2$ (or) $m^2$

The SI unit of the area of a surface is the square metre ($m^2$), since it is a product of two lengths. One square metre denotes the area enclosed inside a square of side $1\ metre$. Even though area is measured in square metres, the surface does not have to be square.

Area of regular figures

Some of the formulae used to find the area of regularly shaped figures are shown below.

| Shape | Area |
| --- | --- |
| Square | $side \times side = a \times a = a^2$ |
| Rectangle | $length \times breadth = l \times b$ |
| Circle | $\pi \times radius^2 = \pi r^2$ |
| Triangle | $\frac{1}{2} \times base \times height = \frac{1}{2} \times b \times h$ |
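For example, the area of a rectangular plot of land that is $20\ m$ long and $15\ m$ wide is

$$Area = l \times b = 20\ m \times 15\ m = 300\ m^2$$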
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8986708521842957, "perplexity": 564.844028001846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573540.20/warc/CC-MAIN-20220819005802-20220819035802-00127.warc.gz"}
http://economiclogic.blogspot.com/2013/06/raising-bar-of-falsifiability-in.html
## Friday, June 28, 2013

### Raising the bar of falsifiability in Economics

Economists do not dismiss theories easily. Although Popper taught us that once a falsifiable theory is rejected by the data we should move on to better theories, it takes a lot of rejections for economists to move on. There may be two reasons for this: first, we all know that there can be serious issues with the data, as we almost never have clean experiments to draw from. We are thus more tolerant of theories. Second, we tend to think that if a theory is rejected, we need to also propose a new one that is consistent with the data. That is quite a challenge.

Ronen Gradwohl and Eran Shmaya build on this second argument to amend the falsifiability criterion of Popper by adding a new one: that each rejection by the data be accompanied by a short proof of the inconsistency. If I understand this right, it would not be sufficient to show that the theory predicts, say, a positive relation between two variables while the data finds a negative one; one also needs a convincing sketch of a proof that would convince a court that the data is indeed identifying the right relation and that it is relevant for the theory. And this needs to be short, because courts (or the scientific community) are busy.

It seems we are doing it all wrong in Economics, as our arguments are excessively long, and getting longer. This is at least in part due to the fact that we require not just a short proof but an extensive, complete one, and then we are still not convinced. Are we overdoing it? Possibly; at least the length and complexity of papers in Economics are becoming too much.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8791537284851074, "perplexity": 492.1453822435303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510280868.21/warc/CC-MAIN-20140728011800-00163-ip-10-146-231-18.ec2.internal.warc.gz"}
http://en.wikipedia.org/wiki/Scalar_(mathematics)
# Scalar (mathematics)

For other uses, see Scalar (disambiguation).

[Figure: a Euclidean vector. Its coordinates x and y are scalars, as is its length, but v itself is not a scalar.]

In linear algebra, real numbers are called scalars and relate to vectors in a vector space through the operation of scalar multiplication, in which a vector can be multiplied by a number to produce another vector.[1][2][3] More generally, a vector space may be defined by using any field instead of the real numbers, such as the complex numbers. Then the scalars of that vector space will be the elements of the associated field.

A scalar product operation - not to be confused with scalar multiplication - may be defined on a vector space, allowing two vectors to be multiplied to produce a scalar. A vector space equipped with a scalar product is called an inner product space.

The real component of a quaternion is also called its scalar part.

The term is also sometimes used informally to mean a vector, matrix, tensor, or other usually "compound" value that is actually reduced to a single component. Thus, for example, the product of a 1×n matrix and an n×1 matrix, which is formally a 1×1 matrix, is often said to be a scalar.

The term scalar matrix is used to denote a matrix of the form kI, where k is a scalar and I is the identity matrix.

## Etymology

The word scalar derives from the Latin word scalaris, the adjectival form of scala (Latin for "ladder"). The English word "scale" is also derived from scala. The first recorded usage of the word "scalar" in mathematics was by François Viète in Analytic Art (In artem analyticen isagoge) (1591):[4]

Magnitudes that ascend or descend proportionally in keeping with their nature from one kind to another are called scalar terms. (Latin: Magnitudines quae ex genere ad genus sua vi proportionaliter adscendunt vel descendunt, vocentur Scalares.)

According to a citation in the Oxford English Dictionary, the first recorded usage of the term in English was by W. R. Hamilton in 1846, to refer to the real part of a quaternion:

The algebraically real part may receive, according to the question in which it occurs, all values contained on the one scale of progression of numbers from negative to positive infinity; we shall call it therefore the scalar part.

## Definitions and properties

### Scalars of vector spaces

A vector space is defined as a set of vectors, a set of scalars, and a scalar multiplication operation that takes a scalar k and a vector v to another vector kv. For example, in a coordinate space, the scalar multiplication $k(v_1, v_2, \dots, v_n)$ yields $(kv_1, kv_2, \dots, k v_n)$; scalar multiplication can thus be thought of as multiplying the number by each of the elements inside the brackets. In a (linear) function space, kƒ is the function x ↦ k(ƒ(x)). The scalars can be taken from any field, including the rational, algebraic, real, and complex numbers, as well as finite fields.

### Scalars as vector components

According to a fundamental theorem of linear algebra, every vector space has a basis. It follows that every vector space over a scalar field K is isomorphic to a coordinate vector space where the coordinates are elements of K. For example, every real vector space of dimension n is isomorphic to the n-dimensional real space Rn.

### Scalars in normed vector spaces

Alternatively, a vector space V can be equipped with a norm function that assigns to every vector v in V a scalar ||v||. By definition, multiplying v by a scalar k also multiplies its norm by |k|.
If ||v|| is interpreted as the length of v, this operation can be described as scaling the length of v by k. A vector space equipped with a norm is called a normed vector space (or normed linear space).

The norm is usually defined to be an element of V's scalar field K, which restricts the latter to fields that support the notion of sign. Moreover, if V has dimension 2 or more, K must be closed under square root, as well as the four arithmetic operations; thus the rational numbers Q are excluded, but the surd field is acceptable. For this reason, not every scalar product space is a normed vector space.

### Scalars in modules

When the requirement that the set of scalars form a field is relaxed so that it need only form a ring (so that, for example, the division of scalars need not be defined, or the scalars need not be commutative), the resulting more general algebraic structure is called a module. In this case the "scalars" may be complicated objects. For instance, if R is a ring, the vectors of the product space Rn can be made into a module with the n×n matrices with entries from R as the scalars. Another example comes from manifold theory, where the space of sections of the tangent bundle forms a module over the algebra of real functions on the manifold.

### Scaling transformation

The scalar multiplication of vector spaces and modules is a special case of scaling, a kind of linear transformation.

### Scalar operations (computer science)

Operations that apply to a single value at a time.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9113433957099915, "perplexity": 371.3858755207663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246641393.36/warc/CC-MAIN-20150417045721-00130-ip-10-235-10-82.ec2.internal.warc.gz"}
https://gateoverflow.in/1230/gate2007-32
Let $f(w, x, y, z) = \sum {\left(0,4,5,7,8,9,13,15\right)}$. Which of the following expressions are NOT equivalent to $f$?

P: $x'y'z' + w'xy' + wy'z + xz$
Q: $w'y'z' + wx'y' + xz$
R: $w'y'z' + wx'y' + xyz+xy'z$
S: $x'y'z' + wx'y'+ w'y$

1. P only
2. Q and S
3. R and S
4. S only

---

K-map:

|      | w'x' | w'x | wx | wx' |
|------|------|-----|----|-----|
| y'z' | 1    | 1   |    | 1   |
| y'z  |      | 1   | 1  | 1   |
| yz   |      | 1   | 1  |     |
| yz'  |      |     |    |     |

So, the minimized expression will be $xz + w'y'z' + wx'y'$, which is Q. From the K-map, we can also get P and R. So, only S is NOT equivalent to $f$.

http://www.eecs.berkeley.edu/~newton/Classes/CS150sp98/lectures/week4_2/sld011.htm

Comment: I am not understanding how to check P and R. One way is to draw the 4-bit truth table, but that is time-consuming. How do I derive P and R using the K-map?

Another answer: Let me show you a very simple method. Let w=1, x=1, y=1, z=1; then the value of f is 1 (minterm 15 is in the sum). Consider each statement:

P: x'y'z' + w'xy' + wy'z + xz = 0.0.0 + 0.1.0 + 1.0.1 + 1.1 = 1
Q: w'y'z' + wx'y' + xz = 0.0.0 + 1.0.0 + 1.1 = 1
R: w'y'z' + wx'y' + xyz + xy'z = 0.0.0 + 1.0.0 + 1.1.1 + 1.0.1 = 1
S: x'y'z' + wx'y' + w'y = 0.0.0 + 1.0.0 + 0.1 = 0

So S is not equivalent, because for w=1, x=1, y=1, z=1 the value of f is 1 while S evaluates to 0; S does not contain all the essential minterms. Hence the answer is (D).

Comment: Is this method correct in general? If yes, why?

Comment: Is this a perfect method to solve such questions? Sometimes I get an error with this method. Need help @shekhar_chauhan.

Comment: Here it works by only putting in 0 and 1 values, but this is not a general way to solve this problem. For example, for $x.z$ we get $1.1=1$, though $x.z$ alone is not a correct representation of the function.

---

Minterms of P, Q and R are the same as those of f, so S is not equivalent to f. Ans: (D)
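Since, as the comments note, checking a single input assignment can only disprove equivalence, not prove it, a brute-force check over all 16 input combinations settles the question definitively. A minimal Python sketch (assuming, as in the minterm list, that $w$ is the most significant bit):

```python
from itertools import product

# Minterms of f(w, x, y, z); w is taken as the most significant bit.
F_MINTERMS = {0, 4, 5, 7, 8, 9, 13, 15}

def f(w, x, y, z):
    # Reference function defined directly by its minterm list.
    return (8 * w + 4 * x + 2 * y + z) in F_MINTERMS

# The four candidate expressions (x' written as `not x`).
P = lambda w, x, y, z: (not x and not y and not z) or (not w and x and not y) \
    or (w and not y and z) or (x and z)
Q = lambda w, x, y, z: (not w and not y and not z) or (w and not x and not y) \
    or (x and z)
R = lambda w, x, y, z: (not w and not y and not z) or (w and not x and not y) \
    or (x and y and z) or (x and not y and z)
S = lambda w, x, y, z: (not x and not y and not z) or (w and not x and not y) \
    or (not w and y)

for name, g in [("P", P), ("Q", Q), ("R", R), ("S", S)]:
    same = all(bool(g(*v)) == f(*v) for v in product((0, 1), repeat=4))
    print(name, "equivalent to f:", same)
# Prints True for P, Q, R and False for S, confirming answer (D).
```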
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7177621722221375, "perplexity": 3402.7869841881275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210413.14/warc/CC-MAIN-20180816034902-20180816054902-00612.warc.gz"}
https://www.physicsforums.com/threads/hibbeler-12-231.739869/
# Hibbeler 12-231

1. Feb 23, 2014

### c0der

1. The problem statement, all variables and given/known data
Attached

2. Relevant equations
Attached

3. The attempt at a solution
Attached. A couple of questions/confirmations:

It should be vb·cos(45)i - vb·sin(45)j, but this is the same thing since cos(45) = sin(45).
vb should be directed at B.
How do you solve equations 1 and 2? Trial and error?
vb/r will always be 5 m/s in this case?

[Attached file: Hibbeler 12-231.png]

2. Feb 23, 2014

### lewando

Try squaring both sides of 1 and 2, then add the two equations together.

3. Feb 23, 2014

### c0der

Thanks! Then I get vb^2 = 29 + 20sin(theta). Plugging sin(theta) = [vb^2 - 29] / 20 into (2), I get two solutions for vb, one being negative. I hope I am correct in saying that this is the one to discard, as the magnitude of a vector cannot be negative. However, in a previous problem, the relative velocity magnitude comes out negative in the solution, which doesn't make sense.

4. Feb 23, 2014

### lewando

Not following how you arrived at that. Squaring eq. 1 should give you: (0.707vb)^2 = 25cos^2(theta). No substitution into (eq. 2)^2 needed. Just add the 2 equations. However you did it, did you at least get vb = 6.21 m/s?

5. Feb 23, 2014

### c0der

Yes, I got vb = 6.21 or vb = -3.382 from the quadratic. I substituted because there are 2 unknowns there. When I add the squares of the equations:

vb^2 (cos^2(45) + sin^2(45)) = 25 (cos^2(theta) + sin^2(theta)) + 4 + 20sin(theta)
vb^2 = 29 + 20sin(theta), still 2 unknowns

6. Feb 23, 2014

### lewando

I guess I should have been clearer. Sorry, big EDIT: Before squaring eq. 1, isolate the sin(theta) on the right-hand side. Before squaring eq. 2, isolate the cos(theta) on the right-hand side. Now you can square them. Adding the 2 equations gives you sin^2(theta) + cos^2(theta) on the right-hand side. And everyone knows what that is equal to. Now you have a quadratic in vb only.

7. Feb 23, 2014

### c0der

Excellent, that still works out the same, so I take the positive magnitude. In other problems, such as 12-227, the magnitude of the relative velocity comes out negative. If this is not wrong, this means that the overall sense of direction of the vector needs to be reversed. In this problem here, we get a quadratic with a negative velocity magnitude. I assume we discard this not because it's negative, but because it's smaller than vb/r, as vb = vr + vb/r. Hope this is correct.
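For the record, here is lewando's suggestion carried out explicitly. The two component equations appear to be $0.707\,v_b = 5\cos\theta$ and $0.707\,v_b = 2 + 5\sin\theta$ (these forms are inferred from the numbers quoted in the thread, since the original attachment is not shown). Isolating $\cos\theta$ and $\sin\theta$, squaring, and using $\sin^2\theta + \cos^2\theta = 1$ gives

$$(0.707\,v_b)^2 + (0.707\,v_b - 2)^2 = 25 \;\Rightarrow\; v_b^2 - 2.83\,v_b - 21 = 0 \;\Rightarrow\; v_b \approx 6.21\ \text{m/s} \ \text{or}\ v_b \approx -3.38\ \text{m/s},$$

which matches the two roots discussed above.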
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9072514176368713, "perplexity": 2374.6643565711774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824775.99/warc/CC-MAIN-20171021114851-20171021134851-00191.warc.gz"}
http://www.maa.org/publications/periodicals/convergence/integral-pounds?device=mobile
# Integral Pounds

A certain man had in his trade four weights with which he could weigh integral pounds from one up to 40. How many pounds was each weight? (Fibonacci, Liber Abaci, 1202)
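For readers who want to check their answer: the classical solution (assuming, as in Fibonacci's setting, that weights may be placed in either pan of the balance) is weights of 1, 3, 9, and 27 pounds. Every integer from 1 up to 40 = 1 + 3 + 9 + 27 has a balanced-ternary representation, for example 5 = 9 - 3 - 1, 20 = 27 - 9 + 3 - 1, and 40 = 27 + 9 + 3 + 1.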
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9539706707000732, "perplexity": 17515.388023542895}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246640550.27/warc/CC-MAIN-20150417045720-00031-ip-10-235-10-82.ec2.internal.warc.gz"}
https://diabetesjournals.org/care/article/27/3/856/23010/Erratum
In the above-listed article, the final sentence of the conclusions section in the abstract should read: "Difference in diabetes incidence was detectable only in the IGT subgroup; weight loss was similar in subjects with IGT or NGT" (i.e., "IGT or NGT" replaces "IGT and NGT").
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9158570170402527, "perplexity": 10843.401480290882}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103271864.14/warc/CC-MAIN-20220626192142-20220626222142-00288.warc.gz"}
http://www.numdam.org/item/M2AN_2007__41_6_1021_0/
Finite-difference preconditioners for superconsistent pseudospectral approximations

ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique, Volume 41 (2007) no. 6, pp. 1021-1039

The superconsistent collocation method, which is based on a collocation grid different from the one used to represent the solution, has proven to be very accurate in the resolution of various functional equations. Excellent results can also be obtained for preconditioning. Some analysis and numerous experiments regarding the use of finite-difference preconditioners, for matrices arising from pseudospectral approximations of advection-diffusion boundary value problems, are presented and discussed, both in the case of Legendre and Chebyshev representation nodes.

DOI: https://doi.org/10.1051/m2an:2007052

Classification: 65N35, 65F15, 41A10

Keywords: spectral collocation method, preconditioning, superconsistency, Lebesgue constant

@article{M2AN_2007__41_6_1021_0,
  author = {Fatone, Lorella and Funaro, Daniele and Scannavini, Valentina},
  title = {Finite-difference preconditioners for superconsistent pseudospectral approximations},
  journal = {ESAIM: Mathematical Modelling and Numerical Analysis - Mod\'elisation Math\'ematique et Analyse Num\'erique},
  publisher = {EDP-Sciences},
  volume = {41},
  number = {6},
  year = {2007},
  pages = {1021-1039},
  doi = {10.1051/m2an:2007052},
  zbl = {1133.65103},
  mrnumber = {2377105},
  language = {en},
  url = {http://www.numdam.org/item/M2AN_2007__41_6_1021_0}
}

Fatone, Lorella; Funaro, Daniele; Scannavini, Valentina. Finite-difference preconditioners for superconsistent pseudospectral approximations. ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique, Volume 41 (2007) no. 6, pp. 1021-1039. doi: 10.1051/m2an:2007052. http://www.numdam.org/item/M2AN_2007__41_6_1021_0/

[1] C. Canuto and P. Pietra, Boundary interface conditions within a finite element preconditioner for spectral methods. J. Comput. Phys. 91 (1990) 310-343. | Zbl 0717.65091

[2] C. Canuto and A. Quarteroni, Spectral and pseudo-spectral methods for parabolic problems with nonperiodic boundary conditions. Calcolo 18 (1981) 197-218. | Zbl 0485.65078

[3] C. Canuto, M.Y. Hussaini, A. Quarteroni and T.A. Zang, Spectral Methods in Fluid Dynamics. Springer, New York (1988). | MR 917480 | Zbl 0658.76001

[4] L. Fatone, D. Funaro and G.J. Yoon, A convergence analysis for the superconsistent Chebyshev method. Appl. Num. Math. (2007) (to appear). | MR 2376292 | Zbl pre05219831

[5] D. Funaro, Polynomial Approximation of Differential Equations, Lecture Notes in Physics 8. Springer, Heidelberg (1992). | MR 1176949 | Zbl 0774.41010

[6] D. Funaro, Some remarks about the collocation method on a modified Legendre grid. J. Comput. Appl. Math. 33 (1997) 95-103. | Zbl 0868.65049

[7] D. Funaro, Spectral Elements for Transport-Dominated Equations, Lecture Notes in Computational Science and Engineering 1. Springer (1997). | MR 1449871 | Zbl 0891.65118

[8] D. Funaro, A superconsistent Chebyshev collocation method for second-order differential operators. Numer. Algorithms 28 (2001) 151-157. | Zbl 0991.65071

[9] D. Funaro, Superconsistent discretizations. J. Scientific Computing 17 (2002) 67-80. | Zbl 0999.65073

[10] D. Gottlieb, M.Y. Hussaini and S.A. Orszag, Theory and application of spectral methods, in Spectral Methods for Partial Differential Equations, R.G. Voigt, D. Gottlieb and M.Y. Hussaini Eds., SIAM, Philadelphia (1984). | Zbl 0599.65079
[11] P. Haldenwang, G. Labrosse, S. Abboudi and M. Deville, Chebyshev 3-D spectral and 2-D pseudospectral solvers for the Helmholtz equation. J. Comput. Phys. 55 (1981) 115-128. | Zbl 0544.65071

[12] T. Kilgore, A characterization of the Lagrange interpolation projections with minimal Tchebycheff norm. J. Approximation Theory 24 (1978) 273-288. | Zbl 0428.41023

[13] D.H. Kim, K.H. Kwon, F. Marcellán and S.B. Park, On Fourier series of a discrete Jacobi-Sobolev inner product. J. Approximation Theory 117 (2002) 1-22. | Zbl 1019.42014

[14] S.D. Kim and S.V. Parter, Preconditioning Chebyshev spectral collocation method for elliptic partial differential equations. SIAM J. Numer. Anal. 33 (1996) 2375-2400. | Zbl 0861.65095

[15] S.D. Kim and S.V. Parter, Preconditioning Chebyshev spectral collocation by finite-difference operators. SIAM J. Numer. Anal. 34 (1997) 939-958. | Zbl 0874.65088

[16] F. Marcellán, B.P. Osilenker and I.A. Rocha, Sobolev-type orthogonal polynomials and their zeros. Rendiconti di Matematica 17 (1997) 423-444. | Zbl 0891.33005

[17] E.H. Mund, A short survey on preconditioning techniques in spectral calculations. Appl. Num. Math. 33 (2000) 61-70. | Zbl 0964.65132

[18] S.A. Orszag, Spectral methods for problems in complex geometries. J. Comput. Phys. 37 (1980) 70-92. | Zbl 0476.65078

[19] G. Szegö, Orthogonal Polynomials. American Mathematical Society, New York (1939). | JFM 65.0278.03 | Zbl 0023.21505

[20] L.N. Trefethen and M. Embree, Spectra and Pseudospectra: the behavior of nonnormal matrices and operators. Princeton University Press (2005). | MR 2155029 | Zbl 1085.15009
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16710741817951202, "perplexity": 7074.556247015791}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347441088.63/warc/CC-MAIN-20200604125947-20200604155947-00530.warc.gz"}
http://math.stackexchange.com/users/11406/obounaim
# obounaim

Ubuntu enthusiast from Algeria. Location: Algiers, Algeria. Member for 3 years, 4 months.

# 1 Question

Is that true that all the prime numbers are of the form $6m \pm 1$?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2148333489894867, "perplexity": 6544.697912382923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657137906.42/warc/CC-MAIN-20140914011217-00260-ip-10-234-18-248.ec2.internal.warc.gz"}
https://gideonwolfe.com/posts/bio/bioinfoproj/project/
# Intro

This past quarter (Spring 2020) has been an interesting one, for sure. With the constant news of COVID and worldwide protests coming through every media outlet, focusing on schoolwork seemed like a futile pursuit at times. Thankfully, I was fortunate enough to have (mostly) fantastic professors who managed to provide informative and engaging lectures and assignments for the entirety of this challenging quarter. A part of me is really enjoying working from home this quarter, and that part is mostly my GPA, which ended up being over an entire point higher than my unimpressive cumulative GPA. Not quite sure why or how that happened, but I guess I'll just continue as I am…

# The Class

One of the things that made this quarter special is the fact that I was able to take my school's Bioinformatics class, which I have been trying to get into for a couple of years. It was taught by one of my favorite professors (and now my advisor), is only taught once a year, and requires a manual override to register for. I was thrilled when I found out I scraped one of the last slots! This class was a unique opportunity to collaborate with other STEM students (biology/chemistry) as well as grad students and tackle a real-world research problem with techniques and tools we learned in the first part of the quarter. The first 4 weeks brought all the CS people up to speed on the basics of DNA transcription/translation, protein structure, and alignment concepts.

# The Project

The latter half of the quarter was devoted to a collaborative research project, the groups for which we were allowed to self-select out of a pool of pre-approved project ideas. The project I chose was this: target various amino acids on the spike protein of SARS-CoV-2, mutate them, and analyze the effectiveness of said mutations. I chose this project because I like the idea of pinpointing vulnerable areas on the protein and mutating them to cause instability in the protein, possibly breaking functionality in the process. My role in the project mainly consisted of managing communications between us and our professor, as well as creating the pipeline for our data. In short, the pipeline worked like this:

• Use proMuteBatch to generate batch files for proMute, in order to exhaustively mutate multiple locations on the protein
• Extract the output of proMute (modified PDB files) as well as energy minimization data for each individual mutation (19 per target site)
• Generate a script for the SDM tool, which allows us to predict $$\Delta\Delta G$$ values for each mutation

I created a short bash script to automate this task. Although it could certainly be better optimized, I sacrificed some efficiency for readability by non-Linux nerds.
```bash
#!/bin/bash
# After making promute, copy this script into the build directory.
# Change the "FILES" variable to the script (or scripts) you would like to execute.
# To profile different mutation scripts, you can run this script with the "time" command.

# Mutations on second PDB 6Y2E
./proMuteBatch 6Y2E A:A 25:25 X T25 # exhaustively mutate T at residue 25, chain A
./proMuteBatch 6Y2E A:A 26:26 X T26 # exhaustively mutate T at residue 26, chain A
./proMuteBatch 6Y2E A:A 27:27 X L27 # exhaustively mutate L at residue 27, chain A
./proMuteBatch 6Y2E A:A 28:28 X N28 # exhaustively mutate N at residue 28, chain A
./proMuteBatch 6Y2E A:A 29:29 X G29 # exhaustively mutate G at residue 29, chain A
./proMuteBatch 6Y2E A:A 30:30 X L30 # exhaustively mutate L at residue 30, chain A
./proMuteBatch 6Y2E A:A 31:31 X W31 # exhaustively mutate W at residue 31, chain A
./proMuteBatch 6Y2E A:A 32:32 X L32 # exhaustively mutate L at residue 32, chain A

# Map three-letter residue codes (as found in PDB ATOM records) to one-letter codes
declare -A residues
residues+=( ["PHE"]="F" ["LEU"]="L" ["ILE"]="I" ["MET"]="M" ["VAL"]="V"
            ["SER"]="S" ["PRO"]="P" ["THR"]="T" ["ALA"]="A" ["TYR"]="Y"
            ["HIS"]="H" ["GLN"]="Q" ["ASN"]="N" ["LYS"]="K" ["ASP"]="D"
            ["GLU"]="E" ["CYS"]="C" ["TRP"]="W" ["ARG"]="R" ["GLY"]="G" )

# Append the "em" (energy minimization) flag to every generated proMute command
sed -i 's/$/ em/' T25
sed -i 's/$/ em/' T26
sed -i 's/$/ em/' L27
sed -i 's/$/ em/' N28
sed -i 's/$/ em/' G29
sed -i 's/$/ em/' L30
sed -i 's/$/ em/' W31
sed -i 's/$/ em/' L32

# Customize with your scripts
FILES=("T25" "T26" "L27" "N28" "G29" "L30" "W31" "L32")

# Execute the lines in each batch script
for FILE in "${FILES[@]}"; do
    # Create a file to store the SDM script
    SDMFILE="${FILE}_sdm"
    touch "$SDMFILE"

    # Read each mutation command into an array
    readarray -t LINES < "$FILE"

    # Routine to execute for every mutation
    for LINE in "${LINES[@]}"; do
        # Make a directory to store outputs.
        # Structure is PDBID.Chain.Location.Res2.output
        OUTPUTDIRNAME=$(echo "$LINE" | awk '{ print $2"."$3"."$4"."$5".output" }')
        echo "[INFO] Creating directory $OUTPUTDIRNAME"
        mkdir -p "$OUTPUTDIRNAME"

        # Directory to look for EM files that were generated.
        # Structure is PDBID.ChainLocationRes2
        EMDIRNAME=$(echo "$LINE" | awk '{ print $2"."$3$4$5 }')

        # Actually call proMute
        $LINE

        # Translate the proMute command into an SDM script line
        PDBID=$(echo "$LINE" | awk '{print $2}')
        RESNUM=$(echo "$LINE" | awk '{print $4}')
        CHAIN=$(echo "$LINE" | awk '{print $3}')
        FINALRES=$(echo "$LINE" | awk '{print $5}')
        # Grab the original residue's three-letter code from the PDB file,
        # keeping only the last three characters of the residue-name field
        ORIGRESCODE=$(cat "${PDBID}.pdb" | grep ATOM \
            | awk -v rn="$RESNUM" -v chn="$CHAIN" '$6 == rn && $5 == chn {print $4}' \
            | sed 's/.*\(...\)/\1/' | head -n 1)
        ORIGRES=${residues[$ORIGRESCODE]}

        # Only mutations that actually change the residue go into the SDM script
        if [ "$ORIGRES" != "$FINALRES" ]; then
            # Put this line in the SDM script
            echo "$CHAIN $ORIGRES$RESNUM$FINALRES" >> "$SDMFILE"
        fi

        # Move all output files: PDB, Fasta, and EM data
        mv *.txt "$OUTPUTDIRNAME"
        mv *.pdb "$OUTPUTDIRNAME"
        mv "external/em/$EMDIRNAME" "$OUTPUTDIRNAME"
        mv "$FILE" "$OUTPUTDIRNAME"
        echo "[INFO] Moving files to $OUTPUTDIRNAME"
    done

    # Cleanup output
    rm -rf "${FILE}_output"
    mkdir -p "${FILE}_output"
    echo "[INFO] Cleaning outputs from last run"
    echo "[INFO] Moving new files to ${FILE}_output"
    mv "$SDMFILE" "${FILE}_output/"
    mv *.output "${FILE}_output/"
done
```
In short, the script format for SDM requires the original residue type for the location that is being mutated. proMute does not care about or output this information, so I had to do some fancy pipelining to get it to work. I look into the PDB file for the target protein and isolate all the ATOM lines, which correspond to individual atoms in the protein. I then isolate the residue number of interest, making sure we are looking at the correct chain in the PDB file. The first atom of the residue is kept, and then we simply look at the residue it belongs to. I wasn't sure about the PDB notation, because sometimes one amino acid would be represented by multiple similar 3-4 letter codes. So I just chopped the code down to three characters and then ran the three-letter name through my dictionary, which spits out the single-letter amino acid code required by SDM. That's a lot of work to go through for one letter…

The final step in the pipeline was a script created by an adjacent group studying double mutations in the same protein. This script analyzed the energy minimization data and used it to predict an effect on protein function.

# Results

Here are my favorite figures from our final paper. We used PyMOL to color and label the locations of interest, as well as illustrate their bonds and connections. The colored bar charts on the right represent one mutation for each of the 19 bars. The data plotted is the output $$\Delta\Delta G$$ produced by SDM, where a negative value indicates reduced stability in the protein.

A heatmap visualization of the same data:

As you can see from the heatmaps, we were able to generate a significant number of destabilizing mutations, especially on the 6Y2E structure.

# Conclusion

If anyone wants to read our final project, I'll leave it here to download. At the last minute, we decided to combine the reports of groups 4 and 5, which investigated single and double mutations respectively. This provides a good way to compare a variety of possible mutations of the same structure. I had a really great time in this class, and I could not have dreamed of creating a project of this caliber without my awesome group members and my professor, who had the perfect "hands off" approach: he would happily provide guidance but left large decisions up to the groups themselves.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37840238213539124, "perplexity": 4390.220531953827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571950.76/warc/CC-MAIN-20220813111851-20220813141851-00272.warc.gz"}
https://cs.stackexchange.com/questions/105439/hoare-logic-proving-conjunction-rule-from-basic-rules-possible-or-not
Hoare logic, proving conjunction rule from basic rules, possible or not?

(This is HW.) Suppose I have the following proof rules given.

I am currently considering whether I can prove the conjunction rule (given below) from the Hoare logic rules given above. My answer would be "no", because from the proof rules of Hoare logic, I can neither use the Implied rule to prove it (because from $$A$$ I cannot infer $$A \wedge B$$) nor introduce it anywhere from the first four proof rules (the only one which introduces conjunction in the postcondition requires a while loop).

I am fairly confident that this line of reasoning is correct, but am I missing anything? For example, can I argue on the propositional logic level that, since I have $$P$$ as a precondition and I have $$Q_1$$ and $$Q_2$$ as postconditions, I can simply introduce conjunction on the propositional logic level (instead of inferring purely in Hoare logic)?
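For reference, the conjunction rule in question is presumably the standard one (reconstructed here, since the original image is not shown, and matching the $$Q_1$$, $$Q_2$$ used in the question):

$$\frac{\{P\}\;S\;\{Q_1\}\qquad \{P\}\;S\;\{Q_2\}}{\{P\}\;S\;\{Q_1 \wedge Q_2\}}$$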
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9840739965438843, "perplexity": 650.2740247782185}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739347.81/warc/CC-MAIN-20200814160701-20200814190701-00402.warc.gz"}
http://mfat.imath.kiev.ua/authors/name/?author_id=87
# D. P. Proskurin

Articles: 9

### On well-behaved representations of $\lambda$-deformed CCR

Methods Funct. Anal. Topology 23 (2017), no. 2, 192-205

We study well-behaved ∗-representations of a λ-deformation of the Wick analog of the CCR algebra. Homogeneous Wick ideals of degrees two and three are described. Well-behaved irreducible ∗-representations of quotients by these ideals are classified up to unitary equivalence.

### Representations of relations with orthogonality condition and their deformations

Methods Funct. Anal. Topology 18 (2012), no. 4, 373-386

Irreducible representations of $*$-algebras $A_q$ generated by relations of the form $a_i^*a_i+a_ia_i^*=1$, $i=1,2$, $a_1^*a_2=qa_2a_1^*$, where $q\in (0,1)$ is fixed, are classified up to unitary equivalence. The case $q=0$ is considered separately. It is shown that the $C^*$-algebras $\mathcal{A}_q^F$ and $\mathcal{A}_0^F$ generated by operators of Fock representations of $A_q$ and $A_0$ are isomorphic for any $q\in (0,1)$. A realisation of the universal $C^*$-algebra $\mathcal{A}_0$ generated by $A_0$ as an algebra of continuous operator-valued functions is given.

### On $C^*$-algebra generated by Fock representation of Wick algebra with braided coefficients

D. Proskurin

Methods Funct. Anal. Topology 17 (2011), no. 2, 168-173

We consider $C^*$-algebras $\mathcal{W}(T)$ generated by operators of Fock representations of Wick $*$-algebras with a braided coefficient operator $T$. It is shown that for any braided $T$ with $||T||<1$ one has the inclusion $\mathcal{W}(0)\subset\mathcal{W}(T)$. Conditions for existence of an isomorphism $\mathcal{W}(T)\simeq\mathcal{W}(0)$ are discussed.

### $*$-wildness of some classes of $C^*$-algebras

Methods Funct. Anal. Topology 12 (2006), no. 4, 315-325

We consider the complexity of the representation theory of free products of $C^*$-algebras. Necessary and sufficient conditions for the free product of finite-dimensional $C^*$-algebras to be $*$-wild are presented. As a corollary we get criteria for $*$-wildness of free products of finite groups. It is proved that the free product of a non-commutative nuclear $C^*$-algebra and the algebra of continuous functions on the one-dimensional sphere is $*$-wild. This result is applied to estimate the complexity of the representation theory of certain $C^*$-algebras generated by isometries and partial isometries.

### On the monotone independent families of operators

Daniil P. Proskurin

Methods Funct. Anal. Topology 10 (2004), no. 2, 64-68

### On C*-algebra associated with ${\rm Pol}({\rm Mat}_{2,2})_q$

Methods Funct. Anal. Topology 7 (2001), no. 1, 88-92

### Stability of special class of $q_{ij}$-CCR and extensions of irrational rotation algebras

Daniil P. Proskurin

Methods Funct. Anal. Topology 6 (2000), no. 3, 97-104

### Representations of Wick CCR algebra

D. P. Proskurin

Methods Funct. Anal. Topology 5 (1999), no. 2, 83-85

### About positivity of Fock inner product of a certain Wick algebras

D. P. Proskurin

Methods Funct. Anal. Topology 5 (1999), no. 1, 88-94
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8642538189888, "perplexity": 580.3789522496515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323864.76/warc/CC-MAIN-20170629033356-20170629053356-00033.warc.gz"}
https://yo-dave.com/2018/08/12/de-tabifying/
# De-Tabifying

There is probably no more boring task in programming than processing a file to replace tabs with spaces or doing the reverse and replacing spaces with tabs. And yet it has started religious wars that have raged for years in various programming communities. This "tabifying" or "de-tabifying" is not something I have to do frequently, but every once in a while, I need to convert tabs to spaces in some text. Sometimes I need to do it repeatedly and quickly. For example, in a Markdown editor, I basically need to do a de-tabify on every keystroke. It can't take long regardless of how long the input text is. Here's what I came up with.

(require '[clojure.string :as str])

(defn- blanks
  "Return a string consisting of the number of spaces requested."
  [how-many]
  (str/join (repeat how-many \space)))

(defn- make-blank-array
  "Return a vector of strings containing only blank characters. The vector
  consists of strings of blanks of size 'most' down to one."
  [most]
  (loop [cnt most
         out []]
    (if (zero? cnt)
      out
      (recur (dec cnt) (conj out (blanks cnt))))))

; Memoize so that we don't regenerate the table on every keystroke.
(def ^:private memoized-blank-array (memoize make-blank-array))

(defn detab
  "Replace any tabs in the input text with spaces. If no tab spacing is
  set in the options, a default value of 4 is used."
  ([text] (detab text {:tab-spacing 4}))
  ([text options]
   (let [tab-spacing (or (:tab-spacing options) 4)]
     (if-not (and (seq text) (str/includes? text "\t"))
       text
       ; otherwise
       (let [spcs (memoized-blank-array tab-spacing)]
         (loop [in text
                out ""
                offset 0]
           (if (empty? in)
             out
             (let [nc  (first in)
                   ; The replacement is always a string: spaces for a tab,
                   ; otherwise the character itself.
                   rnc (if (= nc \tab) (get spcs offset) (str nc))
                   ofs (if (= nc \newline)
                         0
                         (mod (+ offset (count rnc)) tab-spacing))]
               (recur (rest in) (str out rnc) ofs)))))))))

It's pretty straightforward. Assuming the text needs to be de-tabified at all, just run through the text character by character, keeping track of where the current character is relative to the beginning of the line. If a tab occurs, determine the number of spaces to insert based on the offset from the beginning of the line, i.e., where the next tab stop occurs. Get a pre-computed string of spaces by doing a little table lookup and replace the tab. Then just keep going. The only tricky thing in the above implementation is "memoizing" the table of pre-computed replacement strings.
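A quick REPL check of the behavior (the exact spacing shown assumes the default tab stop of 4 in the first call; the second call passes the `:tab-spacing` option from the code above):

```clojure
(detab "a\tbc\td")
;; => "a   bc  d"

(detab "col1\tcol2" {:tab-spacing 8})
;; => "col1    col2"
```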
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42989328503608704, "perplexity": 6670.885967745821}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039745522.86/warc/CC-MAIN-20181119084944-20181119110340-00040.warc.gz"}
https://emacs.stackexchange.com/questions/5356/recompile-tex-file-without-modification
# Recompile tex file without modification I `\input{b.tex}` in `a.tex`. I made changes to `b.tex`, and tried to recompile `a.tex` by `C-c C-c` in AUCTeX, but it didn't work since `a.tex` hadn't been changed. How can I recompile then? • Does the manual link to the tex-compile command in the menu-bar work when `a.tex` has not been modified? – lawlist Dec 14 '14 at 0:39 • can i select menu in CLI? – Tim Dec 14 '14 at 0:57 • @Tim: do you mean can you select the menu when using Emacs through a terminal emulator? You should be able to -- alternately, you can hit `f10` (or `M-x menu-bar-open`) to open up the menu bar. – Dan Dec 15 '14 at 12:04 ## 1 Answer `C-c C-c` calls `TeX-command-master`, which prompts you for a command (with what it expects you to choose as the default option). Even if it does not detect that the document has changed, you can always force it to compile by entering `LaTeX` after you `C-c C-c` and it prompts you.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9049822688102722, "perplexity": 2703.627886277366}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999200.89/warc/CC-MAIN-20190620085246-20190620111246-00263.warc.gz"}
https://www.physicsforums.com/threads/index-of-subgroups.260722/
# Index of subgroups

1. Sep 30, 2008

### fk378

1. The problem statement, all variables and given/known data
Let G be a group and let H,K be subgroups of G. Assume that G is finite and that the indices |G:H| and |G:K| are relatively prime. Show that G=HK.

Hint: Show that |G:H$\cap$K| is divisible by both |G:H| and |G:K| and then use the counting principle for |HK|.

3. The attempt at a solution
First off, why do the indices have to be relatively prime?

I don't know how to show that |G:H$\cap$K| is divisible by both |G:H| and |G:K|, but I do know that if I assume those, I know how to use the counting principle, because ultimately it will come down to saying that |HK|=c|G| for some multiple c, and c must = 1, since otherwise for c>1 it says that |HK|>|G| and that is not possible.

EDIT: Are the intersections of a left coset of H and a left coset of K disjoint? Since they are both equivalence classes they would have to either be disjoint or equal, no? So then the intersection xH $\cap$ xK would consist of both xH and xK for some x in G.....?

Last edited: Oct 1, 2008

2. Oct 1, 2008

### morphism

If the indices were not relatively prime then it's easy to come up with examples where G does not equal HK.

As to showing that |G:H$\cap$K| is divisible by both |G:H| and |G:K|, here's a hint: H$\cap$K is a subgroup of H, K and G.

3. Oct 1, 2008

### fk378

I understand that H$\cap$K is a subgroup of H, K and G. But I don't understand how the numbers would work. How do we know that |G:H$\cap$K| is definitely a multiple of both |G:H| and |G:K|?

4. Oct 1, 2008

### fk378

I just added this "edit" into my original question:

Are the intersections of a left coset of H and a left coset of K disjoint? Since they are both equivalence classes they would have to either be disjoint or equal, no? So then the intersection xH $\cap$ xK would consist of both xH and xK for some x in G.....?

5. Oct 1, 2008

### morphism

Just write down what |G:H|, |G:K| and |G:H$\cap$K| are. It will also help to think about what |H:H$\cap$K| and |K:H$\cap$K| are.

"The" left coset of H? I think you need to review your definitions. A coset is not an equivalence class; it's a set.

Last edited: Oct 1, 2008
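A note for readers following along – this is the standard completion of the hint, written out here for reference, not a quote from any poster. The counting principle and the index tower give

$$|HK| = \frac{|H|\,|K|}{|H \cap K|}, \qquad [G:H\cap K] = [G:H]\,[H:H\cap K] = [G:K]\,[K:H\cap K],$$

so both $[G:H]$ and $[G:K]$ divide $[G:H\cap K]$, and since they are relatively prime, so does their product. Then

$$|HK| = \frac{|H|\,|K|}{|H \cap K|} = \frac{|G|^2}{[G:H]\,[G:K]\,|H\cap K|} \geq \frac{|G|^2}{[G:H\cap K]\,|H\cap K|} = |G|,$$

which forces HK = G.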
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8866655230522156, "perplexity": 1132.7272963210894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607325.40/warc/CC-MAIN-20170523025728-20170523045728-00524.warc.gz"}
https://www.mbatious.com/topic/52/demystifying-permutation-combination-rajesh-balasubramanian-cat-100th-percentile-cat-2011-2012-and-2014-part-2-7
# Demystifying Permutation & Combination - Rajesh Balasubramanian - CAT 100th Percentile - CAT 2011, 2012 and 2014 (Part 2/7)

PROBLEMS BASED ON DIGITS OF A NUMBER

How many three digit numbers exist? How many of these three-digit numbers comprise only even digits? In how many 3-digit numbers is the hundreds digit greater than the ten's place digit, which is greater than the units' place digit?

Three digit numbers range from 100 to 999. There are 900 such numbers in total. There is a simple framework for handling digits questions. Let the three digit number be 'abc'. 'a' can take values 1 to 9 {as the leading digit cannot be zero}. 'b' can take values 0 to 9. 'c' can take values 0 to 9. Totally, there are 9 × 10 × 10 = 900 possibilities.

Now, three digit numbers with even digits: Let the three-digit number be 'abc'. 'a' can take values 2, 4, 6 or 8 {as the leading digit cannot be zero}. 'b' can take values 0, 2, 4, 6 or 8. 'c' can take values 0, 2, 4, 6 or 8. 4 × 5 × 5 = 100 numbers.

In how many 3-digit numbers is the hundreds digit greater than the ten's place digit, which is greater than the units' place digit? The digits have to be from 0 to 9. Of these, some three distinct digits can be selected in 10C3 ways. For each such selection, exactly one order of the digits will have the digits arranged in descending order. So, number of possibilities = 10C3 = 120. We do not have to worry about the leading digit not being zero, as that possibility is anyway ruled out because a > b > c.

How many 4-digit numbers exist where all the digits are distinct? Let the 4-digit number be 'abcd'. 'a' can take values from 1 to 9: 9 possibilities. 'b' can take values from 0 to 9 except 'a': 9 possibilities. 'c' can take values from 0 to 9 except 'a' and 'b': 8 possibilities. 'd' can take values from 0 to 9 except 'a', 'b' and 'c': 7 possibilities. Total number of outcomes = 9 × 9 × 8 × 7.

PROBLEMS BASED ON REARRANGEMENT OF LETTERS OF A WORD

In how many ways can we rearrange the letters of the word 'MALE'? In how many ways can we rearrange the letters of the word 'ALPHA'? In how many ways can we rearrange the letters of the word 'LETTERS'?

Number of ways of arranging letters of the word MALE = 4! = 24. (Think about the number of ways of arranging r distinct things.) Now, 'ALPHA' is tricky. If we had 5 distinct letters, the number of rearrangements would be 5!, but here we have two 'A's. For a second, let us create a new English alphabet with A1 and A2. Now the word A1LPHA2 can be rearranged in 5! ways. But in these 5! listings we would count A1LPHA2 and A2LPHA1, both of which are just ALPHA in regular English. Or, we are effectively double-counting when we count 5!. So, the total number of possibilities = 5!/2. The formula is actually 5!/2!. Whenever we have letters repeating we need to make this adjustment.

In how many ways can we rearrange the letters of the word 'LETTERS'? – 7!/2!2!

In how many ways can we rearrange letters of the word 'POTATO' such that the two O's appear together? In how many ways can we rearrange letters of the word 'POTATO' such that the vowels appear together?

The two O's appear together: Let us put these two O's in a box and call it X. Now, we are effectively rearranging the letters of the word PTATX. This can be done in 5!/2! ways. Now, we need to count the ways where the vowels appear together. Let us put the three vowels together into a box and call it Y. We are effectively rearranging PTTY. This can be done in 4!/2! ways. However, in these 4!/2! ways, Y itself can take many forms. For instance, a word PTTY can be PTTAOO or PTTOAO or PTTOOA. How many forms can Y take? Y can take 3!/2! = 3 forms. So, total number of ways = 4!/2! × 3!/2! = 12 × 3 = 36 ways.

PROBLEMS BASED ON DICE

Dice questions have a similar framework to digits questions. When a die is thrown thrice, we can take the outcomes to be 'a', 'b', 'c'. There are two simple differences vis-a-vis digits questions. 1. a, b, c can take only values from 1 to 6. In digits we have to worry about 0, 7, 8 and 9 as well. 2. There are no constraints regarding the leading die. All throws have the same number of options. So, in many ways, dice questions are simpler versions of digits questions.

In how many ways can we roll a die thrice such that all three throws show different numbers? Let the throws be 'a', 'b', 'c'. 'a' can take 6 options – 1 to 6. 'b' can take 5 options – 1 to 6 except 'a'. 'c' can take 4 options – 1 to 6 except 'a' and 'b'. Total number of outcomes = 6 × 5 × 4 = 120.

In how many ways can we roll a die thrice such that at least two throws show the same number? We can have either two throws the same or all three the same. There are 6 ways in which all three can be the same — (1, 1, 1), (2, 2, 2), (3, 3, 3), (4, 4, 4), (5, 5, 5), (6, 6, 6). Now, two throws can be the same in three different forms: a = b, b = c or c = a. For a = b, the number of outcomes = 6 × 1 × 5: 'a' can take values from 1 to 6, b should be equal to 'a', and c can take 5 values – 1 to 6 except 'a'. For b = c there are 30 possibilities, and for c = a there are 30 possibilities. Total number of options = 6 + 30 + 30 + 30 = 96.

In how many ways can we roll a die twice such that the sum of the numbers on the two throws is an even number less than 8? The sum can be 2, 4 or 6. 2 can happen in one way: 1 + 1. 4 can happen in 3 ways: 1 + 3, 2 + 2, 3 + 1. 6 can happen in 5 ways: 1 + 5, 2 + 4, 3 + 3, 4 + 2, 5 + 1. Totally, there are 1 + 3 + 5 = 9 ways.

PROBLEMS BASED ON COIN TOSSES

When a coin is tossed three times, in how many ways can exactly one head show up? When a coin is tossed 5 times, in how many ways can exactly 3 heads show up? When a coin is tossed 5 times, in how many ways can at most 3 heads show up?

When three coins are tossed, the options with one head are HTT, THT and TTH: 3 ways. When 5 coins are tossed, three heads can be obtained as HHHTT, HTHTH, TTHHH, etc. Obviously this is far tougher to enumerate. We can think of this differently. All the versions are nothing but rearrangements of HHHTT. This can be done in 5!/3!2! ways.

Coins questions are common, so it helps to look at them from another framework also. Let us assume the outcomes of the 5 coin tosses are written down in 5 slots. Now, suppose we select the slots that are heads and list them down. So, a HHHTT would correspond to 123. HTHTH would be 135. TTHHH would be 345. The list of all possible selections is nothing but the number of ways of selecting 3 slots out of 5. This can be done in 5C3 ways, or 10 ways. Number of ways of getting exactly r heads when n coins are tossed = nCr.

In how many ways can we toss a coin 5 times such that there are at most 3 heads? Number of ways of having 3 heads = 5C3 = 10. Number of ways of having 2 heads = 5C2 = 10. Number of ways of having 1 head = 5C1 = 5. Number of ways of having 0 heads = 5C0 = 1. At most 3 heads = 10 + 10 + 5 + 1 = 26 ways.

When a coin is tossed 6 times, in how many ways do exactly 4 heads show up? Exactly 2 tails? When a coin is tossed 6 times, what is the total number of outcomes possible?

When a coin is tossed 6 times, the number of ways of getting 4 heads = 6C4 = 15. Exactly two tails = 6C2 = 15. Every outcome where there are 4 heads corresponds to an outcome where there are 2 tails. So, we are effectively counting the same set of outcomes in both scenarios. In other words, nCr = nC(n–r).

Total number of outcomes = 6C0 + 6C1 + 6C2 + 6C3 + 6C4 + 6C5 + 6C6 = 1 + 6 + 15 + 20 + 15 + 6 + 1 = 64. This 64 is also $2^6$. When a coin is tossed once, there are 2 possible outcomes. When it is tossed 6 times there will be 2 × 2 × 2 × 2 × 2 × 2 = $2^6$ = 64 outcomes. Therefore, nC0 + nC1 + nC2 + ……….. + nCn = $2^n$ {This is also seen in the topic 'Binomial Theorem'. But it never hurts to reiterate!}

PROBLEMS BASED ON CARD PACKS

Questions based on a card pack are very straightforward. We need to know exactly what lies inside a card pack. Once we know this, everything else falls into place. A card pack has 52 cards - 26 in red and 26 in black. There are 4 suits in total, 2 of each colour. Each suit comprises 13 cards - an Ace, numbers 2 to 10, and Jack, Queen and King. The cards with numbers 2 to 10 are called numbered cards. Cards with J, Q, K are called Face cards as there is a face printed on them.

In how many ways can we select 4 cards from a card pack such that all are face cards? There are 12 face cards in a pack. Number of ways of selecting 4 out of these = 12C4.

In how many ways can we select 3 cards from a card pack such that none are black numbered cards? There are 18 black numbered cards. If we select 3 cards and none of these are black numbered cards, then all of these must be from the remaining 34. Number of ways of selecting 3 cards from 34 is 34C3.

In how many ways can we select 5 cards from a card pack such that we select at least 1 card from each suit? We should select 1 card each from 3 of the suits and 2 from the fourth. The suit that contributes the additional card can be selected in 4C1 ways. So, total number of outcomes = 4C1 × 13C1 × 13C1 × 13C1 × 13C2.

CIRCULAR ARRANGEMENT

Let us say there are n people to be seated around a circular table. In how many ways can this happen? If we think about this as n slots where n things are to fit in, the answer would be n!. But what we miss out here is the fact that if every object moves one step to the right/left then this would not be a different arrangement. In fact, no rotation of one arrangement gives us another arrangement. So, how do we think about this? Let us fix one person's position. Then, we have the remaining (n–1) persons who can be arranged in (n–1)! ways.

In how many ways can 6 people be arranged around a circle? In how many ways can 6 people be arranged around a circle if A and B should never sit together?

Number of ways of arranging n people around a circle = (n–1)!. So, the number of ways of arranging 6 people around a table = (6–1)! = 5! = 120. Now, A and B should never sit together, so let us calculate all the possibilities where A and B do sit together. A and B together can be called X. Number of ways of arranging the resulting 5 around a circle = 4!. Now, AB can be sitting such that A is to the left of B or B is to the left of A. So, total number of options = 4! × 2. Number of options where A and B do not sit adjacent to each other = 5! – 2 × 4! = 120 – 48 = 72.

SELECTING ONE OR MORE FROM A SET

If there are 4 books and 3 CDs on a table, in how many ways can we select at least one item from the table? Each article has 2 options: either you can select it or you can skip it. Total number of options = $2^7$. Within these $2^7$ possibilities, there is one possibility that we skip ALL the items. Since we need to select at least one item, we need to subtract this possibility. Total number of ways = $2^7$ – 1 = 127.

If there are 4 identical copies of a book and 3 identical copies of a CD on a table, in how many ways can we select at least one item from the table? We can select either 0, 1, 2, 3 or 4 books. Similarly, we can select 0, 1, 2 or 3 CDs. So, total number of options = 5 × 4 = 20 ways. One of these is the option of not selecting any of the things. So, total number of possible outcomes = 4 × 5 – 1 = 19. Note that here we do not worry about WHICH CD or book we are selecting. Since the CDs and books are identical, only the number of CDs/books matters.
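A quick way to sanity-check several of the counts above is brute-force enumeration. The sketch below is an addition, not part of the original article; it verifies the three-digit counts, the ALPHA and LETTERS rearrangements, and the coin-toss totals:

```python
from itertools import permutations, product
from math import comb

# Three-digit numbers, and those with hundreds > tens > units.
nums = range(100, 1000)
print(len(nums))                                    # 900
print(sum(1 for n in nums
          if n // 100 > (n // 10) % 10 > n % 10))   # 120 = C(10,3)

# Distinct rearrangements of ALPHA and LETTERS.
print(len(set(permutations("ALPHA"))))              # 60 = 5!/2!
print(len(set(permutations("LETTERS"))))            # 1260 = 7!/(2!2!)

# Coin tossed 5 times: at most 3 heads, two ways.
tosses = list(product("HT", repeat=5))
print(sum(1 for t in tosses if t.count("H") <= 3))  # 26 by enumeration
print(sum(comb(5, k) for k in range(4)))            # 26 = 5C0+5C1+5C2+5C3
```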
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.638920783996582, "perplexity": 370.2019869488265}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424559.25/warc/CC-MAIN-20170723122722-20170723142722-00256.warc.gz"}
https://qanda.ai/en/search/4%20%5E%7B%202%20%20%7D%20%20%20%5Ctimes%20%204%20%5E%7B%202%20%20%7D?search_mode=expression
# Calculator search results

Formula: $4^{2} \times 4^{2}$

Calculate the value: simplify the expression to get $4^{2} \times 4^{2} = 256$.

Do prime factorization: factor until you can no longer factorize, $4^{2} \times 4^{2} = (2^{2})^{2} \times (2^{2})^{2}$; calculate the power of the power, $(2^{2})^{2} = 2^{4}$; then add the exponents as the base is the same, $2^{4} \times 2^{4} = 2^{4+4} = 2^{8}$.

Find the number of divisors: from the prime factorization $2^{8}$, the number of divisors is $8 + 1 = 9$.

List all divisors: the divisors of $2^{8}$ are $2^{0}, 2^{1}, 2^{2}, 2^{3}, 2^{4}, 2^{5}, 2^{6}, 2^{7}, 2^{8}$, i.e. $1, 2, 4, 8, 16, 32, 64, 128, 256$.

Organize using the law of exponents: add the exponents as the base is the same, $4^{2} \times 4^{2} = 4^{2+2} = 4^{4}$.
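As a cross-check, a few lines of plain Python (an addition, not part of the original page) reproduce the value, the divisor count, and the divisor list:

```python
n = 4**2 * 4**2
divisors = [d for d in range(1, n + 1) if n % d == 0]
print(n, len(divisors), divisors)
# 256 9 [1, 2, 4, 8, 16, 32, 64, 128, 256]
```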
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.81890469789505, "perplexity": 1496.9629838779945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662558015.52/warc/CC-MAIN-20220523101705-20220523131705-00547.warc.gz"}
http://mathhelpforum.com/algebra/80857-question-invertible-matrices.html
# Thread: a question on invertible matrices

1. ## a question on invertible matrices

Suppose A and B are invertible matrices. Show that BtA is also invertible by producing a matrix C such that (BtA)C=I and C(BtA)=I

thanks

Mr F edit: The OP adds that t is the transpose. So the question is probably the following: Suppose A and B are invertible matrices. Show that $B^T A$ is also invertible by producing a matrix C such that $(B^T A) C = I$ and $C (B^T A) = I$.

2. If B has an inverse, so does $B^T$. What is $(B^TA)(A^{-1}(B^T)^{-1})$ and $(A^{-1}(B^T)^{-1})(B^TA)$?
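Spelling the hint out – a standard verification, not part of the original thread – take $C = A^{-1}(B^T)^{-1}$, which exists because $(B^T)^{-1} = (B^{-1})^T$ whenever $B$ is invertible. Then

$$(B^TA)\,C = B^TA\,A^{-1}(B^T)^{-1} = B^T(B^T)^{-1} = I, \qquad C\,(B^TA) = A^{-1}(B^T)^{-1}B^TA = A^{-1}A = I.$$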
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7225186824798584, "perplexity": 432.1472818342549}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948567785.59/warc/CC-MAIN-20171215075536-20171215095536-00666.warc.gz"}
https://www.groundai.com/project/domains-and-random-variables/
# Domain Theory and Random Variables

Michael Mislove

###### Abstract

The aim of this paper is to establish a theory of random variables on domains. Domain theory is a fundamental component of theoretical computer science, providing mathematical models of computational processes. Random variables are the mainstay of probability theory. Since computational models increasingly involve probabilistic aspects, it's only natural to explore the relationship between these two areas. Our main results show how to cast results about random variables using a domain-theoretic approach. The pay-off is an extension of the results from probability measures to sub-probability measures. We also use our approach to extend the class of domains for which we can classify the domain structure of the space of sub-probability measures.

Department of Computer Science, Tulane University, New Orleans, LA 70118

Keywords: Domain theory, random variables, Skorohod Representation Theorem

## 1 Introduction

This paper draws its impetus from a line of work whose goal is to develop a domain-theoretic approach to random variables. (In the probability theory literature, random variables are measurable maps from a probability space that take values in the reals, while random elements are measurable mappings from a probability space to an arbitrary measure space. Here we use "random variables" to denote either.) The original motivation was to use random variables to devise models for probabilistic computation that don't suffer from the well-known problems of the probabilistic power domain [19], a program that began about 10 years ago and recently has seen some notable successes – more on that below. In this paper, we shift the focus from constructing monads for probabilistic choice to laying a foundation for a theory of random variables using domains. We show that an important result from the theory of random variables can be recast in the setting of domain theory, where measurable maps can then be approximated by Scott-continuous maps.

The result in question is Skorohod's Theorem [30], one of the basic results in stochastic process theory. In its simplest form, this theorem states that any Borel probability measure $\mu$ on a Polish space $S$ can be realized as the law of a random variable $X \colon [0,1] \to S$. That is, if $\mu$ is a Borel measure on a Polish space $S$ and if $\lambda$ denotes Lebesgue measure on the unit interval, then there is a measurable map $X \colon [0,1] \to S$ satisfying $\mu = \lambda \circ X^{-1}$, the push forward of $\lambda$ under $X$. Furthermore, if $\mu_n \to \mu$ in the weak topology, then the random variables $X_n$ and $X$ with laws $\mu_n$ and $\mu$, respectively, can be chosen to satisfy $X_n \to X$ almost surely. This result allows one to replace arguments about the convergence of measures in the weak topology with arguments about almost sure convergence of measurable maps from the unit interval to Polish spaces. It led Skorohod to develop the theory of càdlàg functions that play a prominent role in the analysis of stochastic processes.

Our main results are inspired by Skorohod's Theorem. Each of our results generalizes from probability measures to sub-probability measures, which are more commonly used in domain theory. Our first result extending Skorohod's Theorem involves two new ingredients. First, in moving to the domain setting, we develop an approach to proving Skorohod's theorem in which the Cantor set replaces the unit interval, and a domain is the target space. The role of Lebesgue measure is played by Haar measure on the Cantor set, regarded as a countable product of two-point groups.
We also show that Skorohod’s Theorem with the unit interval and Lebesgue measure follows as a corollary of our approach. We introduce the Cantor set into the discussion because it offers a ready-made computational model in the form of the Cantor tree, – the full rooted binary tree whose set of maximal elements is isomorphic to the Cantor set. Achieving our results requires the Cantor tree: if we tried our approach using just the Cantor set, which is a chain in the natural order, then all monotone images also would be chains, which would limit the result. But the Cantor tree proves to be just the right structure to generalize to arbitrary countably-based domains as images. This brings up the second new component of our approach: the use of the transport numbers between simple measures that are fundamental to the Splitting Lemma for simple measures on a domain. They allow us to define a sequence of Scott-continuous maps from the Cantor tree to a target domain that approximate a given measure on the domain. To continue the discussion, we need some notation: we realize the Cantor tree, as the set of finite and infinite words over . If we endow with the prefix order, then , the convex power domain of is a coherent domain. If we denote by the set of words of length , then we let denote normalized counting measure on , which is Haar measure when is regarded as a finite group. Moreover, we have , where is the canonical projection. In fact, in SProb, the family of sub-probability measures on , regarded as Scott-continuous valuations over and ordered pointwise. If is a domain, then we let denote the family of Scott-continuous maps , where is a Lawson-closed antichain in , with the order iff and where all components are defined. Our first main result is the following: Theorem 1. (Skorohod’s Theorem for Domains) Let be a countably-based coherent domain, and let be sequence of Borel sub-probability measures satisfying , where the limit is taken in the Lawson topology. Then there are Scott-continuous maps satisfying for each , and in the Lawson topology on . Skorohod’s Theorem is a corollary of Theorem 1 as follows. Any Polish space has a computational model, a countably-based bounded complete domain for which is homeomorphic to the set Max of maximal elements endowed with the relative Scott topology. In fact, Max is a , hence a Borel subset of . The last piece is provided by the fact that the canonical surjection of the Cantor set onto the unit interval preserves all sups and infs, and so it has a lower adjoint preserving all suprema. Following by the maps provided in Theorem 1 then yields Skorohod’s original result. Our second theorem is a special case of the discussion above, when the Polish space is actually totally ordered. In this case, we abandon our indirect approach using the Cantor tree, and instead take a direct approach to considering mappings from the unit interval, but also restricting to the case that is a complete chain. This allows us to prove the following result using direct, domain-theoretic arguments: Theorem 3. If is a complete chain with as the only compact element, then , the family of sub-probability measures on , and , the family of probability measures on , are continuous lattices. This result significantly expands our knowledge of the domain structure of the family of sub-probability measures on a domain . 
Indeed, up to this point, the only domains for which the domain structure of is known are a tree, , for which BCD, the category of bounded complete domains, or a finite reverse tree , in which case is in RB [19]. ### 1.1 Related Work Previous work that is related to our results include Edalat’s extensive history of results devising domain-theoretic approaches to topics such as integration theory [11], stochastic processes [12], dynamical systems and fractals [13], and Brownian motion [4]. His development with Heckmann of the formal ball model [15] provided an approach tailored to modeling metric spaces and Lipschitz maps using domain theory. The concept of a computational model emerged in Edalat’s work on domain models of spaces arising in real analysis using the domain of compact subsets under reverse inclusion, where the target space arises as the set of maximal elements. The first paper formally presenting such a model was [13], where a domain model for locally compact second countable spaces was given. That paper presents a range of applications of the approach, including dynamical systems, iterated function systems and fractal, a computational model for classical measure theory on locally compact spaces, and a computational generalization of Riemann integration. Related work led to the formal ball model [15] which was tailor-made for modeling metric spaces and Lipschitz functions. Further discussion of these developments occurs in our discussion of Polish spaces in Section 4 below. Other related work concerns the program to develop random variable models of probabilistic computational processes. This began with [23], a paper that provided a domain model for finite random variables. Further efforts had limited success until recently. The model proposed in [17] turned out to be flawed, as was initially shown in [24, 25]. But inspired by ideas from [17], Barker [3] devised a monad of random variables that gives an abstract model for randomized algorithms. This line of research was initiated by Scott [29], who showed how the model of the lambda calculus could be extended naturally to support probabilistic choice with the aid of a random variable . Barker’s results abstract Scott’s approach by providing a model of randomized PCF that adds a version of probabilistic choice based on random variables. Finally, the author has devised another monad based on random variables [26] that supports settings in which processes, such as those representing honest participants in a crypto-protocol, for instance, have access to distinct sources of randomness, something that Barker’s monad does not support. It is notable that both of these monads leave important Cartesian closed categories of domains invariant – in particular, the category BCD of bounded complete domains, as well as the CCC RB of retracts of bifinite domains invariant, and each enjoys a distributive law with respect to at least one of nondeterminism monads. The rest of the paper is as follows. In the next section, we review the material we need from a number of areas, domain theory, topology, and probability theory. Section 3 develops results about mappings from the Cantor tree to the space of sub-probability measures on a countably-based coherent domain . Section 4 contains the main results of the paper, by first recalling the development of Polish spaces as computational models, and then presenting the main theorems. Section 5 summarizes what’s been proved, and discusses future work. 
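As an aside for readers meeting Skorohod representation for the first time: over the real line the representing map can be taken to be the quantile function, the generalized inverse of the cumulative distribution function (compare the adjunctions recalled in Section 2 below). The following self-contained numerical sketch, with an arbitrarily chosen target distribution, illustrates the classical statement only – not the domain-theoretic construction developed in this paper:

```python
import numpy as np

# Target law: exponential with rate 1 (an arbitrary choice). Its CDF is
# F(t) = 1 - exp(-t); the quantile function is F^{-1}(u) = -log(1 - u).
rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)   # samples from Lebesgue measure on [0,1]
x = -np.log(1.0 - u)            # push forward through the quantile map

# The pushforward should carry the exponential law:
print(x.mean())                 # ~ 1.0, the mean of the target law
print((x <= 1.0).mean())        # ~ 0.632 = 1 - 1/e = P(X <= 1)
```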
## 2 Background

In this section we present the background material we need for our main results.

### 2.1 Domains

Our results rely fundamentally on domain theory. Most of the results that we quote below can be found in [1] or [16]; we give specific references for those that are not. To start, a poset is a partially ordered set. A poset is directed complete if each of its directed subsets has a least upper bound, where a subset $D$ is directed if each finite subset of $D$ has an upper bound in $D$. A directed complete partial order is called a dcpo. The relevant maps between dcpos are the monotone maps that also preserve suprema of directed sets; these maps are usually called Scott continuous. From a purely topological perspective, a subset $U$ of a poset $P$ is Scott open if (i) $U$ is an upper set, and (ii) $\sup D \in U$ implies $D \cap U \neq \emptyset$ for each directed subset $D \subseteq P$. It is routine to show that the family of Scott-open sets forms a topology on any poset; in this topology the closure of a point $x$ is ${\downarrow}x$, so the Scott topology is always $T_0$, but it is $T_1$ iff $P$ is a flat poset. In any case, a mapping between dcpos is Scott continuous in the order-theoretic sense iff it is a monotone map that is continuous with respect to the Scott topologies on its domain and range. We let DCPO denote the category of dcpos and Scott-continuous maps; DCPO is a Cartesian closed category.

If $P$ is a dcpo and $x, y \in P$, then $x$ approximates $y$ iff for every directed set $D$, if $y \sqsubseteq \sup D$, then there is some $d \in D$ with $x \sqsubseteq d$. In this case, we write $x \ll y$ and we let ${\downdownarrows}y = \{x \in P \mid x \ll y\}$. A basis for a poset $P$ is a family $B \subseteq P$ satisfying: $B \cap {\downdownarrows}y$ is directed and $y = \sup (B \cap {\downdownarrows}y)$ for each $y \in P$. A continuous poset is one that has a basis, and a dcpo is a domain if it is a continuous dcpo. An element $k$ is compact if $k \ll k$, and $P$ is algebraic if the compact elements form a basis. Domains are sober spaces in the Scott topology. We let DOM denote the category of domains and Scott continuous maps; this is a full subcategory of DCPO, but it is not Cartesian closed. Nevertheless, DOM has several Cartesian closed full subcategories. Two of particular interest to us are the full subcategory SDOM of Scott domains, and BCD its continuous analog. Precisely, a Scott domain is an algebraic domain for which the set of compact elements is countable and that also satisfies the property that every non-empty subset has a greatest lower bound. An equivalent statement to the last condition is that every subset with an upper bound has a least upper bound. A domain is bounded complete if it also satisfies this last property that every non-empty subset has a greatest lower bound; BCD denotes the category of bounded complete domains and Scott-continuous maps.

Domains admit a Hausdorff refinement of the Scott topology which will play a role in our work. The weak lower topology on $P$ has the sets of the form $P \setminus {\uparrow}F$ as a basis, where $F \subseteq P$ is a finite subset. The Lawson topology on a domain is the common refinement of the Scott and weak lower topologies. This topology has the family

$$\{U \setminus {\uparrow}F \mid U \text{ Scott open } \& \ F \subseteq P \text{ finite}\}$$

as a basis. The Lawson topology on a domain is always Hausdorff. A domain is coherent if its Lawson topology is compact. We denote the closure of a subset $A$ of a coherent domain in the Lawson topology by $\overline{A}$.

Two examples of coherent domains that we need are the Cantor tree and the unit interval. If $\Sigma$ is a finite set, then $\Sigma^\infty$ denotes the set of finite and infinite words over $\Sigma$. In this case, the family of sets ${\uparrow}w \setminus {\uparrow}F$, for $w$ a finite word and $F$ a finite set of finite words, is a base for the Lawson topology. The fact that ${\uparrow}w$ is clopen in the Lawson topology for each compact element $w$ implies that the Lawson topology on a coherent algebraic domain is totally disconnected.
Of particular interest is the case $\Sigma = \{0,1\}$, in which case $\Sigma^\infty$ is the binary Cantor tree whose set of maximal elements is the Cantor set in the relative Scott (= relative Lawson) topology. The other example is the unit interval $[0,1]$, where $x \ll y$ iff $x = 0$ or $x < y$. The Scott topology on $[0,1]$ has as basic open sets the intervals $(x,1]$ for $0 \leq x < 1$, together with $[0,1]$ itself. Since DOM has finite products, each finite power of $[0,1]$ is a domain in the product order, defined coordinatewise; a basis of Scott-open sets is formed by the sets ${\twoheaduparrow}x = \{y \mid x \ll y\}$ (this last is true in any domain). The Lawson topology on [0,1] has as basic open sets the intervals $(x,y)$, $[0,y)$ and $(x,1]$ – i.e., the usual topology. Then, the Lawson topology on a finite power of $[0,1]$ is the product topology from the usual topology on $[0,1]$. Since $[0,1]$ has a least element, the same results apply for any power $[0,1]^J$ of $[0,1]$, where $x \ll y$ in $[0,1]^J$ iff $x_j = 0$ for almost all $j \in J$, and $x_j \ll y_j$ for all $j$. Thus, every power of $[0,1]$ is a coherent domain. We note that all of these examples – including the last one if $J$ is countable – are countably based domains. That is, each has a countable basis. A result that plays an important role for us is the following:

###### Lemma 2.1

If $P$ is a countably based domain, then every $x \in P$ is the supremum of a countable chain $x_1 \sqsubseteq x_2 \sqsubseteq \cdots$ with $x_n \ll x$ for each $n$.

Proof. Since $P$ has a countable base, there is a countable directed set $D \subseteq {\downdownarrows}x$ with $\sup D = x$. If we enumerate $D = \{d_n \mid n \in \mathbb{N}\}$, then we define the desired sequence as follows: $x_1 = d_1$, and if $x_1 \sqsubseteq \cdots \sqsubseteq x_n$ have been chosen from $D$, then we choose $x_{n+1} \in D$ with $x_n \sqsubseteq x_{n+1}$ and $d_i \sqsubseteq x_{n+1}$ for each $i \leq n+1$. This extends the sequence, and a standard maximality argument shows we can choose a countable sequence with these properties for all $n$. Finally, $x = \sup_n x_n$, since $x_n \ll x$ for each $n$, but $d_n \sqsubseteq x_{n+1}$ for each $n$ implies $\sup_n x_n \sqsupseteq \sup D = x$.

We also need some basic results about Galois adjunctions (cf. Section 0-3 of [16]) in the context of complete lattices. If $L$ and $M$ are complete lattices, a Galois adjunction is a pair of mappings $d \colon M \to L$ and $g \colon L \to M$ satisfying $d(m) \sqsubseteq l$ iff $m \sqsubseteq g(l)$ for all $l \in L$ and $m \in M$. In this case, $d$ is the lower adjoint, and $g$ is the upper adjoint. Lower adjoints preserve all suprema, and upper adjoints preserve all infima. In fact, each mapping between complete lattices that preserves all suprema is a lower adjoint; its upper adjoint is defined by $g(l) = \sup\{m \mid d(m) \sqsubseteq l\}$. Dually, each mapping preserving all infima is an upper adjoint; its lower adjoint is defined by $d(m) = \inf\{l \mid m \sqsubseteq g(l)\}$. The cumulative distribution function of a probability measure and its adjoint, as in the introduction, are examples we'll find relevant.

### 2.2 Subprobability measures on domains and the probabilistic power domain

The probability-theoretic approach to measures on a complete metric space $S$ starts by considering the Borel $\sigma$-field generated by the open sets, and then defines a sub-probability measure as a non-negative, countably additive set function $\mu$ satisfying $\mu(S) \leq 1$. (Most texts confine the discussion to probability measures, but the results we need are valid for sub-probability measures. We provide proofs for the results we need below.) Each such measure then defines an integral $\int f\,d\mu$ for $f$ in the Banach space of continuous functions of compact support, for example by approximating $f$ using simple functions. The sub-probability measures SProb then can be endowed with the weak topology, in which $\mu_n \to \mu$ iff $\int f\,d\mu_n \to \int f\,d\mu$ for each such $f$.

There also is a functional-analytic approach, which starts with the continuous bounded functions on a locally compact Hausdorff space $X$, and then considers the dual space of continuous linear functionals. The Riesz Representation Theorem shows there is an isomorphism between the space of measures on $X$ and this dual space. We then can endow the dual space with the weak*-topology: $\phi_n \to \phi$ iff $\phi_n(f) \to \phi(f)$ for all $f$.
A functional $\phi$ is positive if $\phi(f) \geq 0$ whenever $f \geq 0$, and then the isomorphism above restricts to one between the sub-probability measures SProb and the positive linear functionals with norm at most $1$. These two approaches coincide – and the weak and weak* topologies agree – when the underlying space is a compact metric space. In particular, they agree for a countably-based coherent domain endowed with the Lawson topology.

Domain theory traditionally takes yet a third approach to sub-probability measures, one that emphasizes the order structure. In this approach, the sub-probability measures over a domain $P$ are viewed as continuous valuations: mappings $\nu$ from the family of Scott-open sets to the interval $[0,1]$ satisfying:

• (Strictness) $\nu(\emptyset) = 0$,
• (Modularity) $\nu(U) + \nu(V) = \nu(U \cup V) + \nu(U \cap V)$, for $U, V$ Scott open,
• (Scott continuity) If $\{U_i\}_{i \in I}$ is a directed family of Scott-open sets, then $\nu(\bigcup_i U_i) = \sup_i \nu(U_i)$.

Valuations are ordered pointwise: $\mu \leq \nu$ iff $\mu(U) \leq \nu(U)$ for all Scott-open $U$. We denote the set of valuations over a domain $P$ with this order by $\mathcal{V}(P)$. It is straightforward to show that each Borel sub-probability measure restricts to a unique Scott-continuous valuation on the Scott-open sets. The converse, that each Scott-continuous valuation on a dcpo extends to a unique Borel sub-probability measure, was shown by Alvarez-Manilla, Edalat and Saheb-Djorhomi [2].

Linking the order-theoretic approach to $\mathcal{V}(P)$ and the approaches to SProb outlined above relies on the next result. We recall that a simple sub-probability measure on a space is a finite convex sum $\mu = \sum_{x \in F} r_x \delta_x$, where $F$ is finite, $r_x > 0$ for each $x \in F$, and $\sum_{x \in F} r_x \leq 1$. The following is called the Splitting Lemma:

###### Theorem 2.2 (Splitting Lemma [18])

Let $P$ be a domain and let $\mu = \sum_{x \in F} r_x \delta_x$ and $\nu = \sum_{y \in G} s_y \delta_y$ be simple sub-probability measures on $P$. Then the following are equivalent:

1. $\mu \leq \nu$,
2. There is a family of non-negative transport numbers $\{t_{x,y} \mid x \in F, y \in G\}$ satisfying:
• $\sum_{y \in G} t_{x,y} = r_x$ for each $x \in F$,
• $\sum_{x \in F} t_{x,y} \leq s_y$ for each $y \in G$,
• $t_{x,y} > 0$ implies $x \sqsubseteq y$.

Moreover, $\mu \ll \nu$ iff (i) $\sum_{x \in F} t_{x,y} < s_y$ for each $y \in G$ and (ii) the family satisfies: $t_{x,y} > 0$ implies $x \ll y$ for each $x \in F$ and $y \in G$.

This result can be used to show that, given a basis $B$ for $P$, the family of simple measures supported in $B$ forms a basis for $\mathcal{V}(P)$; in particular, each sub-probability measure is the directed supremum of simple measures way-below it, so $\mathcal{V}(P)$ is a domain if $P$ is one. Moreover, Jung and Tix [19] showed that $\mathcal{V}(P)$ is a coherent domain if $P$ is. Our interest is in countably-based coherent domains, in which case we can refine the Splitting Lemma 2.2 and Lemma 2.1; here $\mathbb{D}$ denotes the family of dyadic rationals in the unit interval:

###### Corollary 2.3

If $P$ is a coherent domain with countable basis $B$, then $\mathcal{V}(P)$ is a countably-based coherent domain with basis the simple measures supported in $B$ with coefficients in $\mathbb{D}$. Moreover, if $\mu \leq \nu$ are such simple measures, then the family of transport numbers from the Splitting Lemma 2.2 can be chosen to satisfy $t_{x,y} \in \mathbb{D}$ for all $x, y$. Finally, each $\mu \in \mathcal{V}(P)$ is the supremum of a countable chain of simple measures way-below it.

Proof. It is shown in [19] that $\mathcal{V}(P)$ is coherent if $P$ is, and the Splitting Lemma 2.2 implies the stated family is a basis for $\mathcal{V}(P)$. We next outline the proof of the second point – that the transport numbers between comparable simple measures all belong to $\mathbb{D}$ if the coefficients of the measures do. This follows from the proof of the Splitting Lemma 2.2 as presented in [18]: That proof is an application of the Max Flow – Min Cut Theorem to the directed graph which has a "source node," $\bot$, connected by an outgoing edge of weight $r_x$ to each "node" $x \in F$, a "sink node," $\top$, with an incoming edge of weight $s_y$ from each element $y \in G$, and edges from $x$ to $y$ of large weight (say, $2$) if $x \sqsubseteq y$. A flow is an assignment of non-negative numbers to the edges, with the number assigned to each edge no larger than that edge's weight as defined above, and satisfying conservation – flow in equals flow out – at each node other than $\bot$ and $\top$. The value of a flow is the total amount of flow out of $\bot$. A cut is a partition of the nodes into two sets $S$ and $T$ with $\bot \in S$ and $\top \in T$; the value of a cut is the total weight of the edges leading from $S$ to $T$. The Max Flow – Min Cut Theorem asserts that the maximum flow on a directed graph is equal to the minimum cut.
It is proved by applying the Ford–Fulkerson Algorithm [6]. The algorithm starts by assigning the minimum flow $0$ to all edges, and then iterates a process of selecting a path from the source to the sink, calculating the residual capacity of each edge in the path, defining a residual graph, and augmenting the paths to include additional flow. The result of the algorithm is the set of flows along the edges across the cut, which are the transport numbers in our case. Since the calculations of new edge weights involve only arithmetic operations, and since the dyadic rationals form a subsemigroup of $([0,\infty), +)$, the resulting transport numbers are dyadic rationals if the coefficients of the input distributions are dyadic.

The final assertion follows from the fact that the simple measures with dyadic coefficients form a basis, by an application of Lemma 2.1.

There remains a question of the order structure that the valuation order induces on SProb. To clarify this point, we first recall that the real numbers, $\mathbb{R}$, are a continuous poset whose Scott topology has the intervals $(x, \infty)$ as a basis, and whose Lawson topology is the usual topology.

###### Proposition 2.4

Let $P$ be a coherent domain, and let $\mu, \nu$ be sub-probability measures on $P$. Then the following conditions are equivalent:

1. $\mu \leq \nu$.
2. For each Scott-continuous map $f \colon P \to [0,1]$, $\int f\,d\mu \leq \int f\,d\nu$.
3. For each monotone Lawson-continuous $f \colon P \to [0,1]$, $\int f\,d\mu \leq \int f\,d\nu$.

Proof. We show the result for simple measures, which then implies it holds for all measures since $\mathcal{V}(P)$ is a domain – so its partial order is (topologically) closed – in which the simple measures are dense. So, suppose $\mu = \sum_{x \in F} r_x \delta_x$ and $\nu = \sum_{y \in G} s_y \delta_y$ are simple measures on $P$.

(i) implies (ii): Suppose that $\mu \leq \nu$. If $f$ is Scott continuous, then $\int f\,d\mu = \sum_{x \in F} r_x f(x)$ and $\int f\,d\nu = \sum_{y \in G} s_y f(y)$. Since $\mu \leq \nu$, there are transport numbers $t_{x,y}$ guaranteed by the Splitting Lemma 2.2, and so

$$\int f\,d\mu = \sum_{x \in F} r_x f(x) = \sum_{x \in F}\sum_{y \in G} t_{x,y} f(x) \leq \sum_{x \in F}\sum_{y \in G} t_{x,y} f(y) \leq \sum_{y \in G} s_y f(y) = \int f\,d\nu,$$

where the first inequality follows from the facts that $t_{x,y} > 0$ implies $x \sqsubseteq y$ and $f$ is monotone. This shows (i) implies (ii).

(ii) implies (iii): Since monotone Lawson continuous maps are Scott continuous, this is obvious.

(iii) implies (i): Let $U$ be a Scott-open subset of $P$. Using the facts that $P$ is coherent, so its Lawson topology is compact Hausdorff, and that $F \cup G$ is finite, we define a family of Scott-open upper sets $U_d$ indexed by $d \in \mathbb{D}$, the dyadic numbers in $[0,1]$, as follows: We let $U_1$ be a Scott-open set containing $F \cap U$ whose Lawson closure lies in $U$, and for $d < d'$ in $\mathbb{D}$, we recursively choose $U_d \subseteq U$ to contain the Lawson-closure of $U_{d'}$. Then define a mapping $f$ by $f(z) = \sup\{d \mid z \in U_d\}$ if $z \in U$, and otherwise $f(z) = 0$. Since the family consists of Scott-open upper sets whose Lawson closures are nested as described, this mapping is monotone, and the standard Urysohn Lemma argument (cf. Theorem 33.1 [27]) shows it is Lawson continuous. So $\int f\,d\mu \leq \int f\,d\nu$ by assumption. Since $\mu$ and $\nu$ are simple, $\int f\,d\mu = \sum_{x \in F} r_x f(x)$ and $\int f\,d\nu = \sum_{y \in G} s_y f(y)$. By construction, $f(x) = 1$ for each $x \in F \cap U$, so $\mu(U) \leq \int f\,d\mu$, and $f$ vanishes off $U$, so $\int f\,d\nu \leq \nu(U)$, and so $\mu(U) \leq \nu(U)$, as required.

###### Remark 2.5

Proposition 2.4 offers added insight into the relationship between SProb and $\mathcal{V}(P)$ for a countably-based coherent domain $P$, by showing that the domain order from $\mathcal{V}(P)$ can be defined directly on SProb using the maps from $P$ to $[0,1]$.

## 3 Domain Mappings from the Cantor Tree

The Cantor tree is the family of finite and infinite words over $\{0,1\}$ in the prefix order. Equivalently, it is the full rooted binary tree, which is directed complete; since it is countably based, this means it is closed under suprema of increasing chains. The Cantor tree will play the role of the unit interval in our approach to generalizing Skorohod's Theorem to the domain setting. For that, we need some preliminary definitions. An antichain is a non-empty subset whose distinct elements do not compare in the prefix order. An extensive study of Lawson-closed antichains in the Cantor tree and of measures whose (Lawson) support is such an antichain is given in [24].
The key idea in that work was to associate to a Lawson-closed antichain its Scott closure, which is simple to describe because $C$ is a tree. Our interest here is somewhat different. The results obtained in [24] were in the context of probability measures, and we want to extend the treatment in [24] to include sub-probability measures, since they arise naturally in the current context. Our aim also is to define mappings from all of $C$ to the target domain $D$, rather than simply defining them from the Lawson-support of a particular measure. The mappings we seek can then ultimately be realized as restrictions to the Cantor set of maximal elements of $C$ of mappings defined on all of $C$. We accomplish these goals by first defining mappings on finite antichains of compact elements in $C$, and then extending them canonically to all of $C$.

###### Notation 3.1

For the next result, we need to establish some notation.

1. We let $C_n$ be the set of $n$-bit words in $C$, which forms an antichain. Recall that there is a well-defined retraction mapping $\pi_n$ from the Cantor set onto $C_n$; this mapping sends each element of the Cantor set to its largest prefix in $C_n$. In addition, if $m \leq n$, then there is a map $\pi_{n,m}$ that sends each $n$-bit word to its $m$-bit prefix.

2. We can restrict the projection $\pi_n$ to $\mathcal{C}$, the Cantor set of maximal elements in $C$, and its image is then $C_n$. This projection has a corresponding embedding $e_n$ sending an $n$-bit word to the infinite word all of whose remaining coordinates are $0$. Then $\pi_n \circ e_n = 1_{C_n}$ and $e_n \circ \pi_n \leq 1_{\mathcal{C}}$. If $A, B$ are Lawson-closed antichains with $A$ below $B$, then there is a canonical partial mapping $\pi_{B,A} \colon B \rightharpoonup A$ sending each element of $B$ to the unique element of $A$ below it, defined iff there is some element of $A$ below it. This mapping is continuous in the relative Scott topologies on its domain and $A$.

3. If $D$ is a domain, then we let $[C \rightharpoonup D]$ denote the family of functions $f \colon A \to D$ where $A$ is a Lawson-closed antichain and $f$ is continuous in the relative Scott topology inherited from $C$. If $f \colon A \to D$ and $g \colon B \to D$ belong to this family, then we define $f \sqsubseteq g$ iff $A$ is below $B$ and $f \circ \pi_{B,A} \leq g$ where $\pi_{B,A}$ is defined.

4. If $C_m$ is the set of $m$-bit words, we order $C_m$ with the lexicographic order. Then each dyadic rational that can be expressed with denominator $2^m$ can also be expressed as an interval in $C_m$, namely, as a block of consecutive words in this order. Moreover, each sequence of such dyadics whose sum is at most $1$ can be expressed as successive intervals: the first block, then the next, etc. We denote each of these subintervals by $[\,\cdot\,]$, indexed by the corresponding dyadic.

###### Proposition 3.2

Let $D$ be a bounded complete domain with countable basis $B$, and for each $n$, let $C_n$ denote the antichain in $C$ of $n$-bit words and $\mu_n$ the normalized counting measure on $C_n$. If $(\nu_n)_n$ is an increasing sequence of simple sub-probability measures on $D$ whose coefficients are dyadic rationals, then there is a corresponding sequence $(m_n)_n$ and mappings $f_n \colon C_{m_n} \rightharpoonup D$ satisfying: $n \leq n'$ implies $f_n \sqsubseteq f_{n'}$, and $(f_n)_*(\mu_{m_n}) = \nu_n$.

Proof. Let $(\nu_n)_n$ be a chain of simple sub-probability measures on $D$, and assume the coefficients of each $\nu_n$ lie in $\mathbb{D}$. We show by induction that every finite initial chain $\nu_1 \leq \cdots \leq \nu_k$ has a corresponding family $f_1 \sqsubseteq \cdots \sqsubseteq f_k$ satisfying $f_n \sqsubseteq f_{n+1}$ and $(f_n)_*(\mu_{m_n}) = \nu_n$ for each $n \leq k$. An appeal to maximality then proves the result.

Basis Step: To begin, write $\nu_1 = \sum_{x \in F_1} r_x \delta_x$ and $\nu_2 = \sum_{y \in F_2} s_y \delta_y$, and recall that $\nu_1 \leq \nu_2$ means there are transport numbers $t_{x,y}$ satisfying the conditions that $\sum_y t_{x,y} = r_x$ and $\sum_x t_{x,y} \leq s_y$ for each $x$ and $y$. Since $r_x, s_y \in \mathbb{D}$ for all $x, y$, it follows from Corollary 2.3 that the transport numbers also are dyadic rationals. This implies we can choose $m_1 \leq m_2$ so that:

• Every $r_x$ can be expressed as a dyadic with denominator $2^{m_1}$, and
• Every $t_{x,y}$ and every $s_y$ can be expressed as a dyadic with denominator $2^{m_2}$.

We then define $f_1 \colon C_{m_1} \rightharpoonup D$ by $f_1([r_x]) = x$, for $x \in F_1$ (to avoid notational clutter, we assume the elements of $F_1$ are given in some total order, and that order is used to enumerate the intervals in $C_{m_1}$), using the notation from Notation 3.1. That $(f_1)_*(\mu_{m_1}) = \nu_1$ follows from the fact that $\mu_{m_1}([r_x]) = r_x$.
Defining $f_2$ takes a bit more work, because the tree structure on $C$ complicates the allocation of all the transport numbers $t_{x,y}$ for $y$ fixed, as $x$ varies over $F_1$. To simplify things, for a fixed $y \in F_2$, we let $t_{(1,y)}, \dots, t_{(r,y)}$ denote the sequence of transport numbers for $y$. We also let $u_y = s_y - \sum_x t_{x,y}$, for each $y \in F_2$; this represents the portion of $s_y$ not needed to accommodate any of the mass from $\nu_1$. Next, we endow $F_1 \times F_2$ with the lexicographic order, based on the implicit order we assumed on each component. Then we enumerate $C_{m_2}$ as the sequence of intervals $[t_{(e,y)}]$ in lexicographic order, followed by the intervals $[u_y]$ in the order on $F_2$, and then the final interval $[w_2]$, the final subinterval of $C_{m_2}$ not needed for any mass in $\nu_2$. We then define $f_2$ by $f_2([t_{(e,y)}]) = f_2([u_y]) = y$ for each $e$ and each $y \in F_2$, and we leave $f_2$ undefined on $[w_2]$. A simple calculation shows that $(f_2)_*(\mu_{m_2}) = \nu_2$: the mass sent to $y$ consists of the mass that is transported from the $x$s, and the remaining mass $u_y$. To show that $f_1 \sqsubseteq f_2$, note that our use of the lexicographic order implies that, for each $x \in F_1$, the sequence of intervals $[t_{(e,y)}]$ with $t_{x,y} > 0$ is in the upper set in $C$ of the interval $[r_x]$, and since $\sum_y t_{x,y} = r_x$, this sequence of intervals exhausts the intersection of the upper set of $[r_x]$ with $C_{m_2}$. This means the sequence represents exactly the mass transported from $x$ up to $\nu_2$. From this it follows that, if $\pi$ is the natural projection in $C$, then $z \in C_{m_2}$ satisfies $f_2(z) = y$ for some $y$ implies $f_1(\pi(z)) \leq y$. This implies $f_1 \circ \pi \leq f_2$ where both are defined, which is the definition of $f_1 \sqsubseteq f_2$.

Inductive Step: For the inductive step, we assume we are given $\nu_1 \leq \cdots \leq \nu_{n+1}$, and that we have defined indices $m_1 \leq \cdots \leq m_n$, antichains and functions $f_k \colon C_{m_k} \rightharpoonup D$ with $f_1 \sqsubseteq \cdots \sqsubseteq f_n$ and $(f_k)_*(\mu_{m_k}) = \nu_k$ for $k \leq n$. We also assume the analogous conditions from the basis step hold at stage $n$. We also assume we are given a partition of $C_{m_n}$ into subintervals, with each interval partitioned into subintervals satisfying the conditions above for each $y \in F_n$, and where $[w_n]$ represents the final subinterval of $C_{m_n}$ not needed for the total mass in $\nu_n$. We can also decompose each interval into subintervals, where $u_y$ is the amount of mass at $y$ not needed to accommodate incoming mass from the previous stage, and $[w_n]$ is the final interval representing the remaining "mass" after the mass in $\nu_n$ is accounted for. Since $\nu_n \leq \nu_{n+1}$, Corollary 2.3 asserts there are dyadic transport numbers $t_{y,z}$ satisfying $\sum_z t_{y,z} = s_y$ for each $y \in F_n$, $\sum_y t_{y,z} \leq s_z$ for each $z \in F_{n+1}$, and $t_{y,z} > 0$ implies $y \leq z$. Using the larger denominator $2^{m_{n+1}}$, we decompose each summand of $\nu_{n+1}$ accordingly. This gives us a partition

$$C_{m_{n+1}} = \{[b_{(e,y),z}] \mid 1 \leq e \leq r,\ y \in F_n,\ z \in F_{n+1}\} \cup \{[b_{y,z}] \mid y \in F_n,\ z \in F_{n+1}\} \cup \{[u_z] \mid z \in F_{n+1}\} \cup \{[w_{n+1}]\},$$

where the first two families carry the mass transported into each $z$, and $[u_z]$ carries the mass at $z$ not needed for incoming mass. We define $f_{n+1}$ by $f_{n+1}([b_{(e,y),z}]) = f_{n+1}([b_{y,z}]) = f_{n+1}([u_z]) = z$. By construction, $(f_{n+1})_*(\mu_{m_{n+1}}) = \nu_{n+1}$. It is also clear from the construction that $f_{n+1}$ is defined at $z'$ iff $f_{n+1}(z') = z$ for some $z \in F_{n+1}$, and then $f_n(\pi(z')) = y$ for some $y \in F_n$ with $t_{y,z} > 0$, where defined. This implies that $f_n \sqsubseteq f_{n+1}$, as required. This shows that we can construct the required sequence of antichains and partial maps for each $n$, and then the standard maximality argument shows that there is such a sequence that works simultaneously for all $n$.

For our next result, we need some information about the weak topology on SProb$(D)$. The result we need follows from the Portmanteau Theorem (cf., e.g., [5]), and a proof can be found as Corollaries 15 and 16 in [7]:

###### Theorem 3.3

Let $D$ be a countably based coherent domain endowed with the Borel $\sigma$-algebra. Then the weak topology on SProb$(D)$ is the same as the Lawson topology on SProb$(D)$ when viewed as a family of valuations. Moreover, for a family $(\mu_n)_n$ and $\mu$ in SProb$(D)$, the following are equivalent:

1. Both of the following hold:
• $\limsup_n \mu_n(K) \leq \mu(K)$ for all finitely generated upper sets $K$.
• $\liminf_n \mu_n(U) \geq \mu(U)$ for all Scott-open sets $U$.
2. $\liminf_n \mu_n(O) \geq \mu(O)$ for all Lawson-open sets $O$.

###### Theorem 3.4

Let $D$ be a countably-based coherent domain, let $\mathcal{C}$ denote the Cantor set of maximal elements in $C$, the Cantor tree, and let $\mu$ denote Haar measure on $\mathcal{C}$, viewed as a countable product of two-point groups. If $\nu \in$ SProb$(D)$, then there is a measurable partial map $f \colon \mathcal{C} \rightharpoonup D$ satisfying $f_*(\mu) = \nu$.

Proof. Using Corollary 2.3, there is an increasing sequence of simple measures $\nu_n$ in SProb$(D)$ with dyadic rational coefficients satisfying $\nu = \sup_n \nu_n$.
Then Proposition 3.2 implies there is a sequence of antichains $C_{m_n}$ in $C$ and a sequence $f_n \colon C_{m_n} \rightharpoonup D$ satisfying $(f_n)_*(\mu_{m_n}) = \nu_n$ and $n \leq n'$ implies $f_n \sqsubseteq f_{n'}$ where defined. If $\pi_n \colon \mathcal{C} \to C_{m_n}$ is the natural projection, then $\mu_{m_n} = (\pi_n)_*(\mu)$, and so $(f_n \circ \pi_n)_*(\mu) = \nu_n$, again where $f_n$ and $\pi_n$ are defined. In particular, if we let $g_n = f_n \circ \pi_n$, then the restriction satisfies $(g_n)_*(\mu) = \nu_n$. Moreover, if $n \leq n'$, then the domain of $g_n$ is contained in that of $g_{n'}$, so the family of domains is an increasing sequence of intervals in $\mathcal{C}$, each of which is clopen (being the embedded image of a subset of an antichain of compact elements in $C$). If we let $E = \bigcup_n \mathrm{dom}(g_n)$, then $E$ is an open, hence Borel, subset of $\mathcal{C}$. We define $f \colon E \to D$ by $f(z) = \sup_n g_n(z)$.

Claim: $f$ is well-defined and measurable.

Proof: If $n \leq n'$ then $g_n \sqsubseteq g_{n'}$, so $g_n(z) \leq g_{n'}(z)$ where both are defined. So we conclude that $(g_n(z))_n$ is an increasing sequence in $D$. Since $D$ is a domain, this sequence has a well-defined supremum, so $f(z)$ is well-defined for all $z \in E$. To show measurability, it is enough to show $f^{-1}(X)$ is a Borel subset of $\mathcal{C}$ for all Scott-closed subsets $X$. If $X$ is such a set, then $f(z) \in X$ iff $\sup_n g_n(z) \in X$, and since $X$ is Scott-closed, this holds iff
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9830868244171143, "perplexity": 446.4359761884077}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370496669.0/warc/CC-MAIN-20200330054217-20200330084217-00530.warc.gz"}
http://aimsciences.org/article/doi/10.3934/naco.2011.1.71
# American Institute of Mathematical Sciences

2011, 1(1): 71-82. doi: 10.3934/naco.2011.1.71

## A modified Fletcher-Reeves-Type derivative-free method for symmetric nonlinear equations

1 School of Mathematical Sciences, South China Normal University, Guangzhou, 510631, China
2 College of Mathematics and Econometrics, Hunan University, Changsha, 410082, China

Received October 2010 Revised October 2010 Published February 2011

In this paper, we propose a descent derivative-free method for solving symmetric nonlinear equations. The method is an extension of the modified Fletcher-Reeves (MFR) method proposed by Zhang, Zhou and Li [25] to symmetric nonlinear equations. It can be applied to solve large-scale symmetric nonlinear equations due to its lower storage requirement. An attractive property of the method is that the directions it generates are descent directions for the residual function. By the use of a backtracking line search technique, the generated sequence of function values is decreasing. Under appropriate conditions, we show that the proposed method is globally convergent. Preliminary numerical results show that the method is practically effective.

Citation: Dong-Hui Li, Xiao-Lin Wang. A modified Fletcher-Reeves-Type derivative-free method for symmetric nonlinear equations. Numerical Algebra, Control & Optimization, 2011, 1 (1) : 71-82. doi: 10.3934/naco.2011.1.71

##### References:

[1] M. Al-Baali, Descent property and global convergence of the Fletcher-Reeves method with inexact line search, IMA Journal of Numerical Analysis, 5 (1985), 121. doi: 10.1093/imanum/5.1.121.
[2] S. Bellavia and B. Morini, A globally convergent Newton-GMRES subspace method for systems of nonlinear equations, SIAM Journal on Scientific Computing, 23 (2001), 940. doi: 10.1137/S1064827599363976.
[3] A. Griewank, The "global" convergence of Broyden-like methods with suitable line search, Journal of the Australian Mathematical Society, 28 (1986), 75.
[4] W. Cheng and D. H. Li, A derivative-free nonmonotone line search and its application to the spectral residual method, IMA Journal of Numerical Analysis, 29 (2009), 814. doi: 10.1093/imanum/drn019.
[5] Y. H. Dai and Y. Yuan, Convergence of the Fletcher-Reeves method under a generalized Wolfe search, Journal of Computational Mathematics, 2 (1996), 142.
[6] Y. H. Dai and Y. Yuan, Convergence properties of the Fletcher-Reeves method, IMA Journal of Numerical Analysis, 16 (1996), 155. doi: 10.1093/imanum/16.2.155.
[7] Y. H. Dai and Y. Yuan, "Nonlinear Conjugate Gradient Methods," Shanghai Science and Technology Publisher, (2000).
[8] R. Fletcher and C. Reeves, Function minimization by conjugate gradients, Computer Journal, 7 (1964), 149. doi: 10.1093/comjnl/7.2.149.
[9] J. C. Gilbert and J. Nocedal, Global convergence properties of conjugate gradient methods for optimization, SIAM Journal on Optimization, 2 (1992), 21. doi: 10.1137/0802003.
[10] G. Z. Gu, D. H. Li, L. Qi and S. Z. Zhou, Descent directions of quasi-Newton methods for symmetric nonlinear equations, SIAM Journal on Numerical Analysis, 40 (2003), 1763. doi: 10.1137/S0036142901397423.
[11] J. Y. Han, G. H. Liu and H. X. Yin, Convergence properties of conjugate gradient methods with strong Wolfe line search, Systems Science and Mathematical Science, 11 (1998), 112.
[12] W. W. Hager and H. Zhang, A survey of nonlinear conjugate gradient methods, Pacific Journal of Optimization, 2 (2006), 35.
[13] Y. F. Hu and C. Storey, Global convergence result for conjugate gradient methods, Journal of Optimization Theory and Applications, 71 (1991), 399. doi: 10.1007/BF00939927.
[14] W. La Cruz and M. Raydan, Nonmonotone spectral methods for large-scale nonlinear systems, Optimization Methods and Software, 18 (2003), 583. doi: 10.1080/10556780310001610493.
[15] W. La Cruz, J. M. Martínez and M. Raydan, Spectral residual method without gradient information for solving large-scale nonlinear systems of equations, Mathematics of Computation, 75 (2006), 1429. doi: 10.1090/S0025-5718-06-01840-0.
[16] G. H. Liu, J. Y. Han and H. X. Yin, Global convergence of the Fletcher-Reeves algorithm with an inexact line search, Applied Mathematics, 10 (1995), 75.
[17] D. H. Li and W. Cheng, Recent progress in the global convergence of quasi-Newton methods for nonlinear equations, Hokkaido Journal of Mathematics, 36 (2007), 729.
[18] D. H. Li and M. Fukushima, A globally and superlinearly convergent Gauss-Newton based BFGS method for symmetric nonlinear equations, SIAM Journal on Numerical Analysis, 37 (1999), 152. doi: 10.1137/S0036142998335704.
[19] D. H. Li and M. Fukushima, A derivative-free line search and global convergence of Broyden-like methods for nonlinear equations, Optimization Methods and Software, 13 (2000), 181. doi: 10.1080/10556780008805782.
[20] Q. Li and D. H. Li, A class of derivative-free methods for large-scale nonlinear monotone equations, IMA Journal of Numerical Analysis, ().
[21] M. J. D. Powell, Some convergence properties of the conjugate gradient method, Mathematical Programming, 11 (1976), 42. doi: 10.1007/BF01580369.
[22] M. J. D. Powell, Restart procedures of the conjugate gradient method, Mathematical Programming, 2 (1977), 241. doi: 10.1007/BF01593790.
[23] Q. Yan, X. Z. Peng and D. H. Li, A globally convergent derivative-free method for solving large-scale nonlinear monotone equations, Journal of Computational and Applied Mathematics, 234 (2010), 649. doi: 10.1016/j.cam.2010.01.001.
[24] J. Zhang and D. H. Li, A norm descent BFGS method for solving KKT systems of symmetric variational inequality problems, Optimization Methods and Software, 22 (2007), 237. doi: 10.1080/10556780500397074.
[25] L. Zhang, W. Zhou and D. H. Li, Global convergence of a modified Fletcher-Reeves conjugate gradient method with Armijo-type line search, Numerische Mathematik, 104 (2006), 561. doi: 10.1007/s00211-006-0028-z.
[26] W. Zhou and D. H. Li, Limited memory BFGS method for nonlinear monotone equations, Journal of Computational Mathematics, 25 (2007), 89.
[27] W. Zhou and D. H. Li, A globally convergent BFGS method for nonlinear monotone equations, Mathematics of Computation, 77 (2008), 2231. doi: 10.1090/S0025-5718-08-02121-2.
[28] G. Zoutendijk, Nonlinear programming, computational methods, in, (1970), 37.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6712124347686768, "perplexity": 3101.0416410916478}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742963.17/warc/CC-MAIN-20181115223739-20181116005739-00145.warc.gz"}
https://math.stackexchange.com/questions/3023826/uniqueness-of-faithful-tracial-states-on-separable-c-algebra
# Uniqueness of faithful (!) tracial states on separable $C^*$-algebra

Let $$A$$ be a separable $$C^*$$-algebra and $$S\subseteq A$$ a norm-dense, countable set in $$A$$. Assume that there are two faithful (meaning that the corresponding GNS-construction gives a faithful representation) tracial states $$\tau$$, $$\rho$$ on $$A$$. Does this already imply that $$\tau = \rho$$?

Background and own thoughts: The GNS-construction gives us two separable Hilbert spaces $$\cal H_1$$ and $$\cal H_2$$ which are (by the faithfulness) the closures of $$S$$ under the norms coming from $$\tau$$ and $$\rho$$. As the Hilbert spaces are separable, there exists an isometric isomorphism $$U: \cal H_1 \rightarrow \cal H_2$$, so for any $$x \in S$$ we have $$\rho(x^*x)=\left\Vert x\right\Vert _{\rho}^{2}=\left\Vert Ux\right\Vert _{\tau}^{2} = \tau\left(\left(Ux\right)^{*}\left(Ux\right)\right)$$. Now, if $$U$$ is implemented by a unitary (or isometry) in $$A$$, this implies that $$\rho$$ and $$\tau$$ coincide on positive elements of $$S$$, hence are equal. The question is: Does such a unitary/isometry exist? If not, does uniqueness hold nevertheless?

• This is false even in Abelian $C^\ast$-algebras. Just take two different probability measures with full support in $\mathbb R$. – Adrián González-Pérez Dec 3 '18 at 11:46
• There are some simple $C^\ast$-algebras that have the unique trace property, like $C^*_r ( \mathbb F_2)$, but that is far from being the norm. – Adrián González-Pérez Dec 3 '18 at 11:49

This fails almost always, unless you have unique trace. For instance let $$A=\mathbb C\oplus\mathbb C$$. Let $$\tau(x,y)=(x+y)/2$$, $$\rho(x,y)=x/3+2y/3$$.
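To spell the counterexample out (a quick check of our own, not part of the original answer): both functionals are states since $$\tau(1,1)=\rho(1,1)=1$$, and traciality is automatic because $$A$$ is abelian. For $$a=(x,y)$$,

$$\tau(a^*a)=\frac{|x|^2+|y|^2}{2}, \qquad \rho(a^*a)=\frac{|x|^2}{3}+\frac{2|y|^2}{3},$$

both of which vanish only when $$a=0$$, so both traces are faithful; yet $$\tau(1,0)=\tfrac12\neq\tfrac13=\rho(1,0)$$, so $$\tau\neq\rho$$.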
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 24, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9645160436630249, "perplexity": 176.63533185046975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574409.16/warc/CC-MAIN-20190921104758-20190921130758-00263.warc.gz"}
http://events.berkeley.edu/index.php/calendar/sn/pubaff.html?event_ID=128252&date=2019-09-12
3-Manifold Seminar: Residual Properties of Groups Seminar | September 12 | 11:10 a.m.-12:30 p.m. | 939 Evans Hall Nic Brody, UC Berkeley Department of Mathematics A group is said to be residually finite if every nontrivial element can be distinguished from the identity in a finite quotient. We introduce the notion of an (alpha, kappa)-residual group, for alpha an ordinal and kappa a cardinal. The ordinal generalization allows one, for example, to measure the degree to which a given group fails to be residually finite in a more refined manner. The cardinal generalization provides a better handle on the space of residual chains in a group. We consider several examples and propose a few questions related to these properties. events@math.berkeley.edu
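As a quick illustration of the opening definition (our example, not part of the announcement): the integers are residually finite, since for any nonzero n and any modulus m greater than |n|, the image of n in the finite quotient Z/mZ is nonzero, so that quotient distinguishes n from the identity.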
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8869556784629822, "perplexity": 874.5527199404848}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574532.44/warc/CC-MAIN-20190921145904-20190921171904-00368.warc.gz"}
http://www.dealii.org/developer/doxygen/deal.II/classPreconditionBlock.html
Reference documentation for deal.II version Git 93c0925 2015-03-27 14:23:00 +0100 PreconditionBlock< MATRIX, inverse_type > Class Template Reference #include <deal.II/lac/precondition_block.h> Inheritance diagram for PreconditionBlock< MATRIX, inverse_type >: [legend] ## Public Types typedef types::global_dof_index size_type ## Public Member Functions PreconditionBlock (bool store_diagonals=false) ~PreconditionBlock () void initialize (const MATRIX &A, const AdditionalData parameters) void clear () bool empty () const value_type el (size_type i, size_type j) const void invert_diagblocks () template<typename number2 > void forward_step (Vector< number2 > &dst, const Vector< number2 > &prev, const Vector< number2 > &src, const bool transpose_diagonal) const template<typename number2 > void backward_step (Vector< number2 > &dst, const Vector< number2 > &prev, const Vector< number2 > &src, const bool transpose_diagonal) const size_type block_size () const std::size_t memory_consumption () const DeclException2 (ExcWrongBlockSize, int, int,<< "The blocksize "<< arg1<< " and the size of the matrix "<< arg2<< " do not match.") Public Member Functions inherited from Subscriptor Subscriptor () Subscriptor (const Subscriptor &) virtual ~Subscriptor () Subscriptoroperator= (const Subscriptor &) void subscribe (const char *identifier=0) const void unsubscribe (const char *identifier=0) const unsigned int n_subscriptions () const void list_subscribers () const DeclException3 (ExcInUse, int, char *, std::string &,<< "Object of class "<< arg2<< " is still used by "<< arg1<< " other objects."<< "\n\n"<< "(Additional information: "<< arg3<< ")\n\n"<< "See the entry in the Frequently Asked Questions of "<< "deal.II (linked to from http://www.dealii.org/) for "<< "a lot more information on what this error means and "<< "how to fix programs in which it happens.") DeclException2 (ExcNoSubscriber, char *, char *,<< "No subscriber with identifier <"<< arg2<< "> subscribes to this object of class "<< arg1<< ". 
Consequently, it cannot be unsubscribed.") template<class Archive > void serialize (Archive &ar, const unsigned int version)

## Protected Member Functions

void initialize (const MATRIX &A, const std::vector< size_type > &permutation, const std::vector< size_type > &inverse_permutation, const AdditionalData parameters) void set_permutation (const std::vector< size_type > &permutation, const std::vector< size_type > &inverse_permutation) void invert_permuted_diagblocks (const std::vector< size_type > &permutation, const std::vector< size_type > &inverse_permutation) Protected Member Functions inherited from PreconditionBlockBase< inverse_type > PreconditionBlockBase (bool store_diagonals=false, Inversion method=gauss_jordan) ~PreconditionBlockBase () void clear () void reinit (unsigned int nblocks, size_type blocksize, bool compress, Inversion method=gauss_jordan) void inverses_computed (bool are_they) void set_same_diagonal () bool same_diagonal () const bool store_diagonals () const bool empty () const unsigned int size () const inverse_type el (size_type i, size_type j) const void inverse_vmult (size_type i, Vector< number2 > &dst, const Vector< number2 > &src) const void inverse_Tvmult (size_type i, Vector< number2 > &dst, const Vector< number2 > &src) const FullMatrix< inverse_type > & inverse (size_type i) const Householder< inverse_type > & inverse_householder (size_type i) const LAPACKFullMatrix< inverse_type > & inverse_svd (size_type i) const FullMatrix< inverse_type > & diagonal (size_type i) const void log_statistics () const std::size_t memory_consumption () const DeclException0 (ExcDiagonalsNotStored) DeclException0 (ExcInverseNotAvailable)

## Protected Attributes

size_type blocksize SmartPointer< const MATRIX, PreconditionBlock< MATRIX, inverse_type > > A double relaxation std::vector< size_type > permutation std::vector< size_type > inverse_permutation Protected Attributes inherited from PreconditionBlockBase< inverse_type > Inversion inversion

## Private Types

typedef MATRIX::value_type number typedef inverse_type value_type Protected Types inherited from PreconditionBlockBase< inverse_type > enum  Inversion typedef types::global_dof_index size_type

## Detailed Description

### template<class MATRIX, typename inverse_type = typename MATRIX::value_type> class PreconditionBlock< MATRIX, inverse_type >

Base class for actual block preconditioners. This class assumes that the MATRIX consists of invertible blocks of equal block size on the diagonal and provides the inversion of the diagonal blocks of the matrix. It is not necessary for this class that the matrix be block diagonal; rather, it applies to matrices of arbitrary structure with the minimal property of having invertible blocks on the diagonal. Still, the class requires access to single matrix entries. Therefore, BlockMatrixArray and similar classes are not possible matrix class template arguments.

The block matrix structure used by this class arises, e.g., from the DG method for the transport equation. For a downstream numbering the matrices even have a block lower-left structure, i.e. the matrices are empty above the diagonal blocks.
Note This class is intended to be used for matrices whose structure is given by local contributions from disjoint cells, such as for DG methods. It is not intended for problems where the block structure results from different physical variables such as in the Stokes equations considered in step-22.

For all matrices that are empty above and below the diagonal blocks (i.e. for all block diagonal matrices) the BlockJacobi preconditioner is a direct solver. For all matrices that are empty only above the diagonal blocks (e.g. the matrices one gets by the DG method with downstream numbering) BlockSOR is a direct solver.

This first implementation of PreconditionBlock assumes the matrix has blocks each of the same block size. Varying block sizes within the matrix must still be implemented if needed.

The first template parameter denotes the type of number representation in the sparse matrix, the second denotes the type of number representation in which the inverted diagonal block matrices are stored within this class by invert_diagblocks(). If you don't want to use the block inversion as an exact solver, but rather as a preconditioner, you probably want to store the inverted blocks with less accuracy than the original matrix; for example, number==double, inverse_type=float might be a viable choice.

Block (linear algebra) Date 1999, 2000, 2010 Definition at line 84 of file precondition_block.h.

## Member Typedef Documentation

template<class MATRIX, typename inverse_type = typename MATRIX::value_type> typedef MATRIX::value_type PreconditionBlock< MATRIX, inverse_type >::number private — Define the number type of the matrix. Definition at line 92 of file precondition_block.h.

template<class MATRIX, typename inverse_type = typename MATRIX::value_type> typedef inverse_type PreconditionBlock< MATRIX, inverse_type >::value_type private — Value type for inverse matrices. Definition at line 97 of file precondition_block.h.

template<class MATRIX, typename inverse_type = typename MATRIX::value_type> typedef types::global_dof_index PreconditionBlock< MATRIX, inverse_type >::size_type — Declare type for container size. Definition at line 103 of file precondition_block.h.

## Constructor & Destructor Documentation

template<class MATRIX, typename inverse_type = typename MATRIX::value_type> PreconditionBlock< MATRIX, inverse_type >::PreconditionBlock ( bool store_diagonals = false ) — Constructor.

template<class MATRIX, typename inverse_type = typename MATRIX::value_type> PreconditionBlock< MATRIX, inverse_type >::~PreconditionBlock ( ) — Destructor.

## Member Function Documentation

template<class MATRIX, typename inverse_type = typename MATRIX::value_type> void PreconditionBlock< MATRIX, inverse_type >::initialize ( const MATRIX & A, const AdditionalData parameters ) — Initialize matrix and block size. We store the matrix and the block size in the preconditioner object. In a second step, the inverses of the diagonal blocks may be computed. Additionally, a relaxation parameter for derived classes may be provided.

template<class MATRIX, typename inverse_type = typename MATRIX::value_type> void PreconditionBlock< MATRIX, inverse_type >::initialize ( const MATRIX & A, const std::vector< size_type > & permutation, const std::vector< size_type > & inverse_permutation, const AdditionalData parameters ) protected — Initialize matrix and block size for permuted preconditioning. In addition to the parameters of the other initialize() function, we hand over two index vectors with the permutation and its inverse.
For the meaning of these vectors see PreconditionBlockSOR. In a second step, the inverses of the diagonal blocks may be computed. Make sure you use invert_permuted_diagblocks() to yield consistent data. Additionally, a relaxation parameter for derived classes may be provided.

template<class MATRIX, typename inverse_type = typename MATRIX::value_type> void PreconditionBlock< MATRIX, inverse_type >::set_permutation ( const std::vector< size_type > & permutation, const std::vector< size_type > & inverse_permutation ) protected — Set either the permutation of rows or the permutation of blocks, depending on the size of the vector. If the size of the permutation vectors is equal to the dimension of the linear system, it is assumed that rows are permuted individually. In this case, set_permutation() must be called before initialize(), since the diagonal blocks are built from the permuted entries of the matrix. If the size of the permutation vector is not equal to the dimension of the system, the diagonal blocks are computed from the unpermuted entries. Instead, the relaxation methods step() and Tstep() are executed applying the blocks in the order given by the permutation vector. They will throw an exception if the length of this vector is not equal to the number of blocks.

Note Permutation of blocks can only be applied to the relaxation operators step() and Tstep(), not to the preconditioning operators vmult() and Tvmult(). It is safe to call set_permutation() before initialize(), while the other order is only admissible for block permutation.

template<class MATRIX, typename inverse_type = typename MATRIX::value_type> void PreconditionBlock< MATRIX, inverse_type >::invert_permuted_diagblocks ( const std::vector< size_type > & permutation, const std::vector< size_type > & inverse_permutation ) protected — Replacement of invert_diagblocks() for permuted preconditioning.

template<class MATRIX, typename inverse_type = typename MATRIX::value_type> void PreconditionBlock< MATRIX, inverse_type >::clear ( ) — Deletes the inverse diagonal block matrices if existent, and sets the block size to 0, hence leaving the class in the state that it had directly after calling the constructor.

template<class MATRIX, typename inverse_type = typename MATRIX::value_type> bool PreconditionBlock< MATRIX, inverse_type >::empty ( ) const — Checks whether the object is empty.

template<class MATRIX, typename inverse_type = typename MATRIX::value_type> value_type PreconditionBlock< MATRIX, inverse_type >::el ( size_type i, size_type j ) const

template<class MATRIX, typename inverse_type = typename MATRIX::value_type> void PreconditionBlock< MATRIX, inverse_type >::invert_diagblocks ( ) — Stores the inverse of the diagonal blocks in inverse. This costs some additional memory – for DG methods about 1/3 (for double inverses) or 1/6 (for float inverses) of that used for the matrix – but it makes the preconditioning much faster. It is not allowed to call this function twice (doing so will produce an error) before a call of clear(...), because the inverse matrices already exist the second time. After this function is called, the lock on the matrix given through the use_matrix function is released, i.e. you may overwrite or delete it. You may want to do this in case you use this matrix to precondition another matrix.
template<class MATRIX, typename inverse_type = typename MATRIX::value_type> template<typename number2 > void PreconditionBlock< MATRIX, inverse_type >::forward_step ( Vector< number2 > & dst, const Vector< number2 > & prev, const Vector< number2 > & src, const bool transpose_diagonal ) const — Perform one block relaxation step in forward numbering. Depending on the arguments dst and prev, this performs an SOR step (both reference the same vector) or a Jacobi step (both are different vectors). For the Jacobi step, the calling function must copy dst to prev after this.

Note If a permutation is set, it is automatically honored by this function.

template<class MATRIX, typename inverse_type = typename MATRIX::value_type> template<typename number2 > void PreconditionBlock< MATRIX, inverse_type >::backward_step ( Vector< number2 > & dst, const Vector< number2 > & prev, const Vector< number2 > & src, const bool transpose_diagonal ) const — Perform one block relaxation step in backward numbering. Depending on the arguments dst and prev, this performs an SOR step (both reference the same vector) or a Jacobi step (both are different vectors). For the Jacobi step, the calling function must copy dst to prev after this.

Note If a permutation is set, it is automatically honored by this function.

template<class MATRIX, typename inverse_type = typename MATRIX::value_type> size_type PreconditionBlock< MATRIX, inverse_type >::block_size ( ) const — Return the size of the blocks.

template<class MATRIX, typename inverse_type = typename MATRIX::value_type> std::size_t PreconditionBlock< MATRIX, inverse_type >::memory_consumption ( ) const — Determine an estimate for the memory consumption (in bytes) of this object.

## Member Data Documentation

template<class MATRIX, typename inverse_type = typename MATRIX::value_type> size_type PreconditionBlock< MATRIX, inverse_type >::blocksize protected — Size of the blocks. Each diagonal block is assumed to be of the same size. Definition at line 333 of file precondition_block.h.

template<class MATRIX, typename inverse_type = typename MATRIX::value_type> SmartPointer< const MATRIX, PreconditionBlock< MATRIX, inverse_type > > PreconditionBlock< MATRIX, inverse_type >::A protected — Pointer to the matrix. Make sure that the matrix exists as long as this class needs it, i.e. until calling invert_diagblocks, or (if the inverse matrices should not be stored) until the last call of the preconditioning vmult function of the derived classes. Definition at line 341 of file precondition_block.h.

template<class MATRIX, typename inverse_type = typename MATRIX::value_type> double PreconditionBlock< MATRIX, inverse_type >::relaxation protected — Relaxation parameter to be used by derived classes. Definition at line 345 of file precondition_block.h.

template<class MATRIX, typename inverse_type = typename MATRIX::value_type> std::vector< size_type > PreconditionBlock< MATRIX, inverse_type >::permutation protected — The permutation vector. Definition at line 350 of file precondition_block.h.

template<class MATRIX, typename inverse_type = typename MATRIX::value_type> std::vector< size_type > PreconditionBlock< MATRIX, inverse_type >::inverse_permutation protected — The inverse permutation vector. Definition at line 355 of file precondition_block.h.

The documentation for this class was generated from the following file:
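For orientation, here is a minimal usage sketch (ours, not from the documentation). It assumes the derived class PreconditionBlockJacobi, an AdditionalData constructor taking the block size, and the SolverCG constructor taking only a SolverControl; check the deal.II version at hand before relying on these signatures.

```cpp
#include <deal.II/lac/precondition_block.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/solver_control.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/vector.h>

using namespace dealii;

// Solve A x = b with CG, preconditioned by block-Jacobi with blocks of
// size block_size on the diagonal (e.g. the per-cell blocks of a DG matrix).
void solve_block_preconditioned(const SparseMatrix<double> &A,
                                Vector<double>             &x,
                                const Vector<double>       &b,
                                const unsigned int          block_size)
{
  // Store the inverted blocks in float: less memory than the double
  // matrix, usually accurate enough for preconditioning (see above).
  typedef PreconditionBlockJacobi<SparseMatrix<double>, float> Prec;

  Prec preconditioner;
  preconditioner.initialize(A, Prec::AdditionalData(block_size));
  preconditioner.invert_diagblocks(); // second step: compute and store inverses

  SolverControl            control(1000, 1e-12 * b.l2_norm());
  SolverCG<Vector<double>> solver(control);
  solver.solve(A, x, b, preconditioner);
}
```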
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4225195050239563, "perplexity": 16644.445615800163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131296603.6/warc/CC-MAIN-20150323172136-00274-ip-10-168-14-71.ec2.internal.warc.gz"}
https://www.gamedev.net/forums/topic/166151-engine-trouble---collision-detection/
# Engine Trouble - Collision Detection

## Recommended Posts

I made an engine; unfortunately, after I added time to it (I had it as per frame), the collision detection went wacko. If I am travelling at speeds between 3 and 9 meters per second there is a chance that I will fall right through the floor. I can't find out why. Can someone help? Also, when I stop, I sort of vibrate. All of my functions that are called from these two work fine, but if you want to know what any function does, just ask. For example, you may not know what happens if you set a Vector3D equal to a double. So, I would reply that it sets the length of the vector to that double. Anyways, here are my two functions:

```cpp
double Character::Collisions(Triangle* TriangleList, unsigned long Time)
{
    if(Velocity == 0.0)
        return 0.0;

    Vector3D Movement(Velocity*Time);
    const double MoveDist = Movement.Length();
    Triangle* Current = TriangleList;
    double Coefficient, DistToCollision = -1.0; // DistToCollision = -1.0 only if no collision has been found
    Vector3D PointOfCollision;

    while(Current != NULL)
    {
        Vector3D planeNormal(Current->Normal());
        Vector3D RelativePos(Position - (Current->Vertex[0]));
        if(Dot(planeNormal, RelativePos) < 0)
            planeNormal *= -1;

        double PerpDist = Current->Intersect(Position, -planeNormal);
        Vector3D IntersectionPoint;
        if(PerpDist <= 1.0) // Is the plane embedded in the unit sphere?
        {
            Vector3D SphereIntersectPoint(-planeNormal*PerpDist);
            IntersectionPoint = Position + SphereIntersectPoint;
        }
        else
        {
            Vector3D SphereIntersectPoint(Position - planeNormal);
            double CollDist = Current->Intersect(SphereIntersectPoint, Movement);
            Vector3D Temp = Movement.unit()*CollDist; // Temporary vector for velocity to plane collision point
            IntersectionPoint = Temp + SphereIntersectPoint; // So we just add that to the sphere intersection point!
        }

        double Dist = MoveDist + SMALL;
        if(!(Current->Contains(IntersectionPoint))) // If IntersectionPoint is not within the triangle,
        {                                           // the intersection point must be moved
            double Temp;
            Sphere Point[3] = {
                Sphere(Current->Vertex[0]),
                Sphere(Current->Vertex[1]),
                Sphere(Current->Vertex[2])};
            for(int K = 0; K < 3; K++)
            {
                Temp = Point[K].Intersect(Position, Movement);
                if(Temp >= 0.0 && Temp < Dist)
                {
                    IntersectionPoint = Current->Vertex[K];
                    Dist = Temp;
                }
            }
            if(Dist > 0.0)
            {
                Cylinder Edge[3] = {
                    Cylinder(Current->Vertex[0], Current->Vertex[1]),
                    Cylinder(Current->Vertex[1], Current->Vertex[2]),
                    Cylinder(Current->Vertex[2], Current->Vertex[0])};
                for(int K = 0; K < 3; K++)
                {
                    Temp = Edge[K].Intersect(Position, Movement);
                    if(Temp >= 0.0 && Temp < Dist)
                    {
                        Vector3D CollPoint(Position + (Movement.unit()*Temp));
                        IntersectionPoint = Edge[K].Axis.ClosestPoint(CollPoint);
                        Dist = Temp;
                    }
                }
            }
            if(Dist > MoveDist)
                IntersectionPoint = Position - (Movement.unit()*2);
        }
        else
            Dist = Sphere(Position).Intersect(IntersectionPoint, -Movement);

        //if(Sphere(Position, 1.0 - SMALL).Contains(IntersectionPoint))
        //    MessageBox(NULL, "Embedded Sphere", "ERROR", MB_OK|MB_ICONSTOP);

        if(Dist >= 0.0 && Dist <= MoveDist && (DistToCollision == -1.0 || Dist < DistToCollision))
        {
            DistToCollision = Dist;
            Coefficient = Current->Coeff;
            PointOfCollision = IntersectionPoint;
        }
        Current = Current->Next;
    }

    if(DistToCollision == -1.0) // If no collision was found
    {                           // we can just
        Position += Movement;   // move to the destination
        return MoveDist;        // and return the distance travelled
    }

    DistToCollision -= SMALL;               // If the ellipsoid slides along the wall, the distance will be zero, so the ellipsoid must hover along the wall
    Movement = DistToCollision;             // Set the length of the velocity vector to the distance that the ellipsoid must move
    Position += Movement;                   // Move the ellipsoid this distance
    Movement = MoveDist - DistToCollision;  // Change the velocity into the remaining distance that must not be moved
    Vector3D Destination(Position + Movement);                       // The destination point (if there was no collision)
    Vector3D SlidePlaneNormal((Position - PointOfCollision).unit()); // The normal vector for the slide plane
    double Length = Plane(Position, SlidePlaneNormal).Intersect(Destination, SlidePlaneNormal);
    Destination += SlidePlaneNormal*Length;                   // Move the destination onto the slide plane
    double ForceNormal = -Dot(SlidePlaneNormal, Velocity)/Time; // Acceleration caused by the slide plane
    double ForceFriction = Coefficient*ForceNormal;             // Acceleration that will be caused by friction
    Velocity = (Destination - Position)/Time;                   // The new velocity vector without friction
    if(ForceFriction > Velocity.Length())  // If the force of friction is larger than the new velocity vector,
        Velocity = Vector3D(0,0,0);        // there is no velocity
    Velocity = Velocity.Length() - (ForceFriction*Time);        // Subtract friction from velocity vector
    double DistTravelled = DistToCollision + Collisions(TriangleList, Time); // The distance travelled
    return DistTravelled;                                       // Return the distance travelled
}

void Character::Move(Triangle* TriangleList, unsigned long Time)
{
    if(keys[VK_LEFT])  Longitude -= 0.025;
    if(keys[VK_RIGHT]) Longitude += 0.025;
    Polar Look(0, Longitude);
    if(keys['W'] || keys[VK_UP])   Look.R += 0.00001;
    if(keys['S'] || keys[VK_DOWN]) Look.R -= 0.00001;
    Vector2D Movement(Look);
    Velocity += Vector3D(Movement.X, GRAVITY, Movement.Y)*Time;
    double DistTravelled = Collisions(List, Time);
    Velocity = DistTravelled;
    Velocity /= Time;
    delete List;
}
```

Even if you couldn't help, I still appreciate the effort!

--------------------------------------
I am the master of stories..... If only I could just write them down...
##### Share on other sites

I don't think you need to inverse the normal if the player is under the triangle. That would probably push the player under the terrain. Apart from that, too much code. You gotta debug it yourself. Debugging is a skill. I'd write a small app with only a couple of triangles and a sphere, and test the algo with that. If you set the triangles to known values, and so on, it will be much easier to see what's going wrong. And draw stuff like normals, points of intersections, velocities, etc... So many things can go bonkers in your algo. And put the code in classes and split it into more functions! It's well messy. It would be easier to conceptualise. And there seems to be a redundant piece of code there, which doesn't help.

##### Share on other sites

I figured it out! Or at least part of it... DistToCollision is the distance to the collision - 1e-8, so it CAN be negative. I decided to take the absolute value of DistToCollision right before I create DistTravelled, and poof, no more falling through the floor. Unfortunately, there's a sort of earthquake kind of thing going on, but maybe if I add 1e-8 to DistToCollision rather than taking the absolute value of it, the tremors will go away. I'll test now.

--------------------------------------
I am the master of stories..... If only I could just write them down...

##### Share on other sites

It didn't work... I think this little shaking sickness is due to problems with friction. Even "stopped", my velocity fluctuates between 0 and 3 meters per second. I don't know why.

--------------------------------------
I am the master of stories..... If only I could just write them down...

##### Share on other sites

Is that on the Y axis? This could be due to the gravity.

##### Share on other sites

No it wasn't, but I fixed the problem finally. I had to remove "DistToCollision-=SMALL;" and replace it with "Position+=SlidePlaneNormal*SMALL;" a couple of lines down. Now my only problem is that stairs are easier to climb at an angle than straight forward. Oh, well.

--------------------------------------
I am the master of stories..... If only I could just write them down...
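Following the advice above to test one piece in isolation, here is a minimal, self-contained sketch (hypothetical names, not the original engine's code) of the swept-unit-sphere-vs-plane step that Collisions() performs, including the "embedded sphere" case the thread wrestles with. It assumes the plane normal has been flipped to point toward the sphere, just as the engine flips planeNormal.

```cpp
#include <cstdio>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
};

double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns the fraction t in [0,1] of `move` at which a unit sphere centred at
// `c` first touches the plane dot(n, p) = d, or -1.0 if it does not touch
// during this step. Assumes n is unit length and oriented toward the sphere,
// so the signed distance of the centre is positive.
double sweepUnitSphereToPlane(const Vec3& c, const Vec3& move, const Vec3& n, double d)
{
    const double dist  = dot(n, c) - d; // signed distance of centre to plane
    const double speed = dot(n, move);  // rate of approach along the normal
    if (dist <= 1.0)  return 0.0;       // plane already embedded in the sphere
    if (speed >= 0.0) return -1.0;      // moving away from (or parallel to) the plane
    const double t = (1.0 - dist) / speed; // solve dist + t*speed = 1
    return (t >= 0.0 && t <= 1.0) ? t : -1.0;
}

int main()
{
    // Sphere 3 units above the floor plane y = 0, falling 4 units this step:
    double t = sweepUnitSphereToPlane({0, 3, 0}, {0, -4, 0} - Vec3{0, 0, 0},
                                      {0, 1, 0}, 0.0);
    std::printf("hit at t = %.3f (expect 0.500)\n", t); // 3 - 4t = 1  =>  t = 0.5
}
```

With a tiny driver like this, each known input has a hand-checkable answer, which makes it far easier to see where an epsilon such as SMALL pushes a distance negative.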
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45423027873039246, "perplexity": 3227.1990974808114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155792.23/warc/CC-MAIN-20180918225124-20180919005124-00302.warc.gz"}
http://mathhelpforum.com/calculus/6382-double-integral-help.html
# Math Help - Double Integral Help

1. ## Double Integral Help

I have a question in my Prob & Stat course that deals with double integration, and it's been two years since Calc III and I really don't remember exactly how to solve one. Here's the question: The first part isn't really a double integral, but will eventually lead to one:

Integral ( xe^(-x(1+y)) ) dy

Essentially, I am not sure how to integrate that for y. The bounds don't really matter in this case; I need to know the y-integration. To anyone who can help, it'd be much appreciated. Thanks.

2. Originally Posted by GmasterFlash13
I have a question in my Prob & Stat course that deals with double integration, and it's been two years since Calc III and I really don't remember exactly how to solve one. Here's the question: The first part isn't really a double integral, but will eventually lead to one:

Integral ( xe^(-x(1+y)) ) dy

Essentially, I am not sure how to integrate that for y. The bounds don't really matter in this case; I need to know the y-integration. To anyone who can help, it'd be much appreciated. Thanks.
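For the record, since the thread ends without a reply: treating $x$ as a constant, the substitution $u = -x(1+y)$, $du = -x\,dy$ gives

$$\int x e^{-x(1+y)}\,dy = \int x\,e^{u}\,\frac{du}{-x} = -e^{u} + C = -e^{-x(1+y)} + C,$$

which you can verify by differentiating with respect to $y$.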
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9825631380081177, "perplexity": 352.0963639126076}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049277091.36/warc/CC-MAIN-20160524002117-00008-ip-10-185-217-139.ec2.internal.warc.gz"}
https://robotwealth.com/can-you-apply-factors-to-trade-performance/
# Can you apply factors to trade performance?

Posted on Sep 10, 2019

When tinkering with trading ideas, have you ever wondered whether a certain variable might be correlated with the success of the trade? For instance, maybe you wonder if your strategy tends to do better when volatility is high? In this case, you can get very binary feedback by, say, running backtests with and without a volatility filter. But this can mask interesting insights that might surface if the relationship could be explored in more detail.

Zorro has some neat tools that allow us to associate data of interest with particular trading decisions, and then export that data for further analysis. Here's how it works:

Zorro implements a TRADE struct for holding information related to a particular position. This struct is a data container which holds information about each trade throughout the life of our simulation. We can also add our own data to this struct via the TRADEVAR array, which we can populate with values associated with a particular trade. Zorro stores this array, along with all the other information about each and every position, as members of the TRADE struct. We can access the TRADE struct members in two ways: inside a trade management function (TMF) and inside a trade enumeration loop.

Here's an example of exporting the last estimated volatility at the time a position was entered, along with the return associated with that position (this is a simple, long-only moving average crossover strategy; data is loaded from Alpha Vantage). The general pattern for accomplishing this is:

1. Define a meaningful name for the element of the TradeVar that we'll use to hold our volatility data (line 5).
2. Define a trade management function to expose the TRADE struct and use it to assign our variable to our TradeVar (lines 7-12). A return value of 16 tells Zorro to run the TMF only when the position is entered and exited.
3. Calculate the variable of interest in the Zorro script. Here we calculate the rolling 50-day standard deviation of returns (line 34).
4. Pass the TMF and the variable of interest to Zorro's enter function (line 38).
5. In the EXITRUN (the last thing Zorro does after finishing a simulation), loop through all the positions using a trade enumeration loop and write the details, along with the volatility calculated just prior to entry, to a CSV file.

Running this script results in a small CSV file being written to Zorro's Log folder, with one row per trade.

Once we've got that data, we can easily read it into our favourite data analysis tool for a closer look. Here, I'll read it into R and use the tidyverse libraries to dig deeper. (This will be very cursory. You could and should go a lot deeper if this were a serious strategy.) First, read the data in, and process it by adding a couple of columns that might be interesting.

Now we can start to answer some interesting questions. First, is volatility at the time of entry related to the magnitude of the trade return? Intuitively we'd expect this to be the case, as higher volatility implies larger price swings and therefore larger absolute trade returns. Nice! Just what we'd expect to see. Does this relationship hold for each individual asset that we traded?
Looks like the relationship generally holds at the asset level, but note that we have a small sample size, so take the results with a grain of salt.

Is volatility related to the actual trade return? Looks like it might be. But this was a long-only strategy that made money in a period where everything went up, so I wouldn't read too much into this without controlling for that effect.

Is there a significant difference in the entry volatility for winning and losing trades?

Finally, we can treat our volatility variable as a "factor" to which our trade returns are exposed. Is this factor useful in predicting trade returns? First, we'll need some functions for bucketing our trade results by factor quantile. If we bucket our results by factor quantile, do any buckets account for significantly more profit and loss? Are there any other interesting relationships? Looks like there might be something to that fifth quantile (but of course beware the small sample size). We can retrieve the cutoff value for the fifth quantile by sorting our factor and finding the value four-fifths of the way through the resulting vector.

## Conclusion

There you have it. This was a simple example of exporting potentially relevant data from a Zorro simulation and reading it into a data analysis package for further research. How might you apply this approach to more serious strategies? What data do you think is potentially relevant? Tell us your thoughts in the comments.
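The R listings referenced throughout the post did not survive extraction. As a stand-in, here is a compact Python/pandas version of the factor-quantile analysis described above; the column names (`entry_vol`, `trade_return`) and the file name are assumptions about the exported CSV, not Zorro's actual output format:

```python
import pandas as pd

trades = pd.read_csv("trades.csv")  # the file exported to Zorro's Log folder

# Bucket trades into quintiles of the factor (entry volatility).
trades["vol_bucket"] = pd.qcut(trades["entry_vol"], q=5, labels=range(1, 6))

# P&L and return statistics per factor quantile.
summary = trades.groupby("vol_bucket", observed=True)["trade_return"].agg(
    total_pnl="sum",
    mean_return="mean",
    n="size",
)
print(summary)

# Cutoff for the fifth quantile: sort the factor and take the value
# four-fifths of the way through, as described in the post.
vol = trades["entry_vol"].sort_values().reset_index(drop=True)
print("5th-quantile cutoff:", vol[int(0.8 * len(vol))])
```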
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3506203591823578, "perplexity": 1443.6630566710805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574665.79/warc/CC-MAIN-20190921211246-20190921233246-00274.warc.gz"}
http://arxiv.org/abs/1102.1144
math.CO

# Title: On sum of powers of Laplacian eigenvalues and Laplacian Estrada index of graphs

Authors: Bo Zhou

Abstract: Let $G$ be a simple graph and $\alpha$ a real number. The quantity $s_{\alpha}(G)$, defined as the sum of the $\alpha$-th powers of the non-zero Laplacian eigenvalues of $G$, generalizes several concepts in the literature. The Laplacian Estrada index is a newly introduced graph invariant based on Laplacian eigenvalues. We establish bounds for $s_{\alpha}$ and the Laplacian Estrada index related to the degree sequences.

Comments: 9 pages
Subjects: Combinatorics (math.CO)
MSC classes: 05C50
Journal reference: MATCH Commun. Math. Comput. Chem. 62 (2009) 611-619
Cite as: arXiv:1102.1144 [math.CO] (or arXiv:1102.1144v1 [math.CO] for this version)

## Submission history
From: Bo Zhou
[v1] Sun, 6 Feb 2011 12:12:42 GMT (6kb)
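As a concrete reading of the definitions, here is a small numeric check of $s_{\alpha}(G)$ and the Laplacian Estrada index, taking the usual definition $LEE(G) = \sum_i e^{\mu_i}$ over all Laplacian eigenvalues $\mu_i$ (an illustration only, not part of the paper):

```python
import networkx as nx
import numpy as np

G = nx.path_graph(5)  # any simple graph
mu = np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray().astype(float))

def s_alpha(eigs, alpha, tol=1e-9):
    """Sum of the alpha-th powers of the non-zero Laplacian eigenvalues."""
    nz = eigs[eigs > tol]
    return float(np.sum(nz ** alpha))

print(s_alpha(mu, 1.0))           # equals 2|E|, the trace of the Laplacian
print(s_alpha(mu, 0.5))           # the alpha = 1/2 case
print(float(np.sum(np.exp(mu))))  # Laplacian Estrada index
```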
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7036690711975098, "perplexity": 1780.6057628581323}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257827077.13/warc/CC-MAIN-20160723071027-00004-ip-10-185-27-174.ec2.internal.warc.gz"}
http://papers.nips.cc/paper/4055-learning-networks-of-stochastic-differential-equations
# NIPS Proceedings

## Learning Networks of Stochastic Differential Equations

### Abstract

We consider linear models for stochastic dynamics. Any such model can be associated with a network (namely a directed graph) describing which degrees of freedom interact under the dynamics. We tackle the problem of learning such a network from observation of the system trajectory over a time interval T. We analyse the l1-regularized least squares algorithm and, in the setting in which the underlying network is sparse, we prove performance guarantees that are uniform in the sampling rate as long as this is sufficiently high. This result substantiates the notion of a well-defined 'time complexity' for the network inference problem.
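The abstract does not include code, but the estimator it analyses is easy to sketch: discretize dx = Ax dt + dW, regress the increments on the state with an l1 penalty, and read the network off the support of the estimated A. Everything below (dimensions, penalty, noise scale) is an illustrative choice, not the paper's setup:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
p, n, dt = 10, 5000, 0.01

# Sparse, stable drift matrix A; its non-zeros are the network's edges.
A = np.zeros((p, p))
mask = rng.random((p, p)) < 0.1
A[mask] = 0.5 * rng.normal(size=int(mask.sum()))
np.fill_diagonal(A, -1.0)  # self-damping keeps the dynamics stable

# Euler-Maruyama sample path of dx = A x dt + dW.
x = np.zeros((n, p))
for t in range(n - 1):
    x[t + 1] = x[t] + (A @ x[t]) * dt + np.sqrt(dt) * rng.normal(size=p)

# l1-regularized least squares on the increments, one row of A at a time.
dx = (x[1:] - x[:-1]) / dt
A_hat = np.vstack([
    Lasso(alpha=0.05, fit_intercept=False).fit(x[:-1], dx[:, i]).coef_
    for i in range(p)
])
print("edges recovered:", int((np.abs(A_hat) > 0.1).sum()),
      "of", int((np.abs(A) > 0).sum()))
```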
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9754684567451477, "perplexity": 591.4891908488072}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159470.54/warc/CC-MAIN-20180923134314-20180923154714-00533.warc.gz"}
http://computer-programming-forum.com/23-functional/75317c9fe7b0fdfb.htm
Bird-Meertens Formalism
-----------------------

Considering the Bird-Meertens formalism [1] for (functional) program derivation from a given specification, the following questions were encountered:

1. What is the expressive power of this formalism?

2. Can this be easily extended to various other data structures such as trees? Any references?

3. Is this formalism consistent and complete? What I mean by this is: does the fixed set of operators such as map, reduce, directed reduce, accumulate, filter, prefix, etc. always suffice?

4. It has been shown that certain function compositions for a given problem might result in an efficient program derivation. This will not always be found by just trying arbitrary compositions. So, is there a way to find out when some new composition of functions could result in a faster implementation?

5. What are the criteria that a given formalism should possess? I mean, if I have to derive a program from a given specification with some available formalisms, what should I look at in those formalisms? For instance, Partsch [2] also speaks about transformational program development, but without a motivation from mathematical considerations; it is based on the pragmatic needs of the given specification and program. How exactly does this differ from [1], apart from the fact that [1] has some nice algebraic properties?

Any answers and guidance would be gratefully appreciated. Thanks a lot.

S. Ramesh Babu
Research Scholar, Dept. of Computer Science & Engg.

======================== References ========================
1. Bird R S: 'A Calculus of Functions for Program Derivation', in Research Topics in Functional Programming, Addison-Wesley Publishing Company, ed. D. Turner, 1990.
2. Partsch H: 'Transformational Program Development in a Particular Domain', Science of Computer Programming, Vol. 7, No. 2, 1986.
============================================================

Tue, 03 Jan 1995 13:01:48 GMT

Quote:
>          Bird-Meertens Formalism
>          -----------------------

In our Software Engineering group in Nijmegen, The Netherlands, we work on both formalisms mentioned hereunder: the Bird-Meertens Formalism (BMF) and the CIP wide-spectrum language [CIP85]. Since Helmut Partsch is our professor, and a former member of the Munich CIP project, we work mainly with this formalism and are less familiar with BMF; our colleagues will certainly be able to answer the questions we left open.

First some general remarks on BMF. Begun as a theory on lists [Bir87, Bac89c], the method has been generalized to other data structures as well; foundations for this were first laid on the basis of the Hagino types [Mal90a]. The current situation is that the proponents of BMF make heavy use of category theory; Fokkinga shows nicely in his PhD thesis [Fok92] how categorical "arrow chasing" can be translated into calculations.

Quote:
>1. What is the expressive power of this formalism?
BMF is merely interested in functions over polynomial datatypes (using cartesian product, coproduct (= disjoint sum) and primitive sets as type constructors). With regard to these datatypes, and assuming "interesting" functions over the primitive sets, all possible functions can be constructed.

Quote:
>2. Can this be easily extended to various other data structures
>   such as trees? Any references?

See above, and [Bir89b, Jeu91].

Quote:
>3. Is this formalism consistent and complete?
>   What I mean by this is: does the fixed set of operators such as
>   map, reduce, directed reduce, accumulate, filter, prefix, etc.
>   always suffice?

What you mean by consistency, we don't know; and with regard to completeness, it seems to be the same question as 1. For the theoretical foundations, BMF has drifted away from the "primitiveness" of the operators you mention. These now follow directly from the categorical construction of the datatype involved; for instance, the map, seen as a mapping from a function f: A --> B to a function f*: A* --> B*, is basically the morphism aspect of the functor which constructs from the datatype A the datatype A* of lists over A.

Quote:
>4. Is there a way to find out when some new composition of
>   functions could result in a faster implementation?

We don't know of any theory about which steps are "right" and which are not. I guess sometimes one even has to go to a less efficient expression temporarily -- "reculer pour mieux sauter" (step back in order to jump better), as the French say.

Quote:
>5. What are the criteria that a given formalism should possess?

Comparing the methods of BMF and CIP, one observes that both advocate transformational programming, with the same fundamental ideas, but that they differ greatly in the scope to which the method is applied. BMF, on the one hand, concentrates on functional programs, which initially are stated in an *executable* way (with some syntactic adjustments, you can type your BMF expression into any functional programming language!) and then are only optimized w.r.t. execution time and/or space. Sometimes Bird gives an example of how a BMF program can be translated into an imperative program (which is straightforward) but leaves it at that.

CIP, on the other hand, is meant as an overall method for program development [Boi92e]. One first specifies an algebraic data type -- without the BMF constraint of polynomial types -- and then has several layers of expression constructs for making programs over these data types. Apart from the usual functional and imperative constructs, the so-called descriptive constructs need to be mentioned: one can specify functions by means of predicates, using quantifiers, and one can "describe" the result of a function by means of a predicate which it has to fulfill.
Thus one can start program development with a very abstract specification and step by step transform this into a concrete, executable program. An overview of the method is given in [Par90a]. Since all language layers (algebraic, descriptive, functional, imperative) are combined in one formally defined language [CIP85], the transformations from one layer to another are correct.

So, both methods are mathematically sound and formally defined, but differ in their scope and objective. Your question w.r.t. the criteria solely depends on what you have in mind: do you want to improve the efficiency of an existing (functional) program, or do you want to develop a program from scratch?

Daniel Tuijnman, Max Geerling
Dept. of Computing Science, Software Engineering group
University of Nijmegen, The Netherlands

References (in BibTeX format):

On the Bird-Meertens Formalism:

@incollection{Bir87,
  author    = {Bird, R.S.},
  title     = {An Introduction to the Theory of Lists},
  pages     = {5--42},
  crossref  = {NATO.ASI.F36}}

@proceedings{NATO.ASI.F36,
  title     = {Logic of Programming and Calculi of Discrete Design. {NATO ASI} Series Vol. {F}36},
  booktitle = {Logic of Programming and Calculi of Discrete Design. {NATO ASI} Series Vol. {F}36},
  editor    = {M. Broy},
  publisher = sv,
  year      = 1987}

@incollection{Bir89b,
  author    = {Bird, R.S.},
  title     = {Lectures on Constructive Functional Programming},
  pages     = {151--218},
  crossref  = {NATO.ASI.F55}}

@proceedings{NATO.ASI.F55,
  title     = {Constructive Methods in Computing Science. {NATO ASI} Series Vol. {F}55},
  booktitle = {Constructive Methods in Computing Science. {NATO ASI} Series Vol. {F}55},
  editor    = {M. Broy},
  publisher = sv,
  year      = 1989}

@inproceedings{Mei91a,
  author    = {Meijer, E. and Fokkinga, M.M. and Paterson, R.},
  title     = {Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire},
  booktitle = {Functional Programming and Computer Architecture},
  year      = 1991}

@phdthesis{Fok92,
  author    = {M.M. Fokkinga},
  title     = {Law and Order in Algorithmics},
  school    = {Universiteit Twente},
  year      = 1992}

@phdthesis{Mal90a,
  author    = {G.R. Malcolm},
  title     = {Algebraic Data Types and Program Transformation},
  school    = {Rijksuniversiteit Groningen},
  month     = sep,
  year      = 1990}

@incollection{Mee89c,
  author    = {L.G.L.T. Meertens},
  title     = {Constructing a calculus of programs},
  pages     = {66--90},
  crossref  = {lncs375}}

@article{Mee91,
  author    = {L.G.L.T. Meertens},
  title     = {Paramorphisms},
  journal   = {Formal Aspects of Computing},
  year      = 1991,
  note      = {To appear}}

@misc{Bac89c,
  author    = {Backhouse, R.C.},
  title     = {An Exploration of the {Bird-Meertens} Formalism}}

@misc{Ame89,
  organization = {STOP},
  title     = {STOP International Summer School on Constructive Algorithmics, {A}meland},
  author    = {Bird, R.S. and Backhouse, R.C. and Malcolm, G. and de Moor, O. and Jeuring, J.T. and Jones, G. and Fokkinga, M.M. and Sheeran, M. and Meertens, L.G.L.T.},
  note      = {Lecture notes},
  month     = sep,
  year      = 1989}

@incollection{Jeu91,
  author    = {J.T. Jeuring},
  title     = {The Derivation of Hierarchies of Algorithms on Matrices},
  pages     = {9--32},
  crossref  = {TC2-91}}

@proceedings{TC2-91,
  title     = {Proceedings of the IFIP TC2 Working Conference on Constructing Programs from Specifications},
  booktitle = {Proceedings of the IFIP TC2 Working Conference on Constructing Programs from Specifications},
  editor    = {B. M{\"o}ller},
  publisher = nhpc,
  year      = 1991}

On the CIP formalism: ...
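The follow-up posts below lean on the notion of a catamorphism. For readers without the category-theory background: over lists, a catamorphism is just the familiar fold, with map and reduce falling out as special cases. A toy rendering (Python used for concreteness; BMF itself is written in its own calculational notation):

```python
def cata(nil, cons, xs):
    """List catamorphism: replace [] by nil and the list constructor by cons."""
    out = nil
    for x in reversed(xs):  # a right fold
        out = cons(x, out)
    return out

# map f      =  cata [] (lambda x, ys: [f(x)] + ys)
# reduce op  =  cata unit op
def bmf_map(f, xs):
    return cata([], lambda x, ys: [f(x)] + ys, xs)

def bmf_reduce(op, unit, xs):
    return cata(unit, op, xs)

assert bmf_map(lambda x: 2 * x, [1, 2, 3]) == [2, 4, 6]
assert bmf_reduce(lambda x, y: x + y, 0, [1, 2, 3]) == 6
```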
Tue, 03 Jan 1995 22:22:18 GMT

A nice survey of the literature on BMF, as well as answers to the questions, was given in Daniel's posting. I would like to add the following:

Quote:
>The current situation is that the proponents of BMF make heavy use of
>category theory; Fokkinga shows nicely in his PhD thesis [Fok92] how
>categorical "arrow chasing" can be translated into calculations.

Don't say that! Maarten Fokkinga is absolutely against diagram chasing (which is what you mean by 'arrow chasing'). Instead, all his proofs are equational, which is much more concise. It takes some time to get used to, however.

Quote:
>>1. What is the expressive power of this formalism?
>With regard to these datatypes, and assuming
>"interesting" functions over the primitive sets, all possible functions
>can be constructed.

Of course it is as powerful as the lambda calculus, or a Turing machine. But since its aim is to specify programs, the real question is: does it look nice? Since BMF makes much use of initial algebras, initial homomorphisms are one of the basic constructs. These are called catamorphisms. There are many other interesting patterns of recursion, and these have been added. They have fancy names like anamorphism, hylomorphism, paramorphism, zygomorphism, etc. Why bother with encapsulating all these different recursion patterns? Because they have nice characteristic properties, which enable the programmer to _calculate_ programs. And calculating programs with these 'laws' is what BMF was invented for.

Quote:
>>3. Is this formalism consistent and complete?

It is consistent, in the sense that you will never derive an erroneous program from a correct one (as long as you stick to the rules). As Daniel already mentioned, the operators you mention are special cases of morphisms. It is not the case that everything that is true can be derived in BMF. That would amount to solving the halting problem, since it requires a complete decision procedure for equality of functions.

Quote:
>>4. Is there a way to find out when some new composition of
>>   functions could result in a faster implementation?

There are some laws which always improve on space/time. Sometimes it just depends...

Quote:
>>5. What are the criteria that a given formalism should possess?

It should be correct, and it should be usable. If this is important for you, use BMF. If, on the contrary, your criterion is that there is an implementation on your favorite machine, use CIP.

Quote:
>CIP, on the other hand, is meant as an overall method for program
>development [Boi92e]. One first specifies an algebraic data type --
>without the BMF constraint of polynomial types -- and then has several
>layers of expression constructs for making programs over these data types.

If you can give an example of a datatype which uses non-polynomial constructors, I would be very interested! All datatypes occurring in current programming languages, however, are polynomial.

Quote:
>Thus one can start program development with a very abstract
>specification and step by step transform this into a concrete,
>executable program. An overview of the method is given in [Par90a].
>Since all language layers (algebraic, descriptive, functional,
>imperative) are combined in one formally defined language [CIP85],
>the transformations from one layer to another are correct.

I don't see how this can guarantee correctness. I think that you always have to prove the correctness of the transformation steps yourself. It doesn't really matter if you use one language or more, in this respect. It helps if your language is formally defined (it helps even more if it is simple, like BMF), but that doesn't take away the proof obligations.

Sat, 07 Jan 1995 00:07:06 GMT

Quote:
>>The current situation is that the proponents of BMF make heavy use of
>>category theory; Fokkinga shows nicely in his PhD thesis [Fok92] how
>>categorical "arrow chasing" can be translated into calculations.
>Don't say that! Maarten Fokkinga is absolutely against diagram chasing
>(which is what you mean by 'arrow chasing'). Instead, all his proofs are
>equational, which is much more concise. It takes some time to get used
>to, however.

Actually, that is what I meant to say. And I agree with you wholeheartedly that Maarten's proofs are much more elegant.

Quote:
>>>5. What are the criteria that a given formalism should possess?
>It should be correct, and it should be usable. If this is important for
>you, use BMF. If, on the contrary, your criterion is that there is an
>implementation on your favorite machine, use CIP.

We don't think you're being serious here. Are you referring to the CIP-S system? That does indeed give machine support for carrying out the transformations, but certainly doesn't produce executable code. The distinction you draw here between BMF and CIP sounds very strange to us. Both methods have their merits in their own right, but they are aimed quite differently, as we pointed out in our previous post. Both BMF and CIP are defined formally, and their transformation rules are correct. Usability applies to both methods, as numerous examples demonstrate. Yes, as far as manipulating functional programs is concerned, e.g. transforming one recursion pattern into another, BMF has done a great deal more than CIP, and has developed a much more concise (and elegant, and at last uniform) notation. But this is due to the deliberate restriction of BMF to functional programming -- which we do not condemn, but merely state as a fact that is relevant for the comparison above.

Within functional programming, both methods have the same power. We challenge you to present us a BMF law which cannot be expressed as a CIP transformation rule.

Quote:
>>CIP, on the other hand, is meant as an overall method for program
>>development [Boi92e]. One first specifies an algebraic data type --
>>without the BMF constraint of polynomial types -- and then has several
>>layers of expression constructs for making programs over these data types.
>If you can give an example of a datatype which uses non-polynomial
>constructors, I would be very interested! All datatypes occurring in
>current programming languages, however, are polynomial.

You're right about the latter. But, for instance, bags (multisets) or (finite) sets are not expressible as a polynomial datatype, since additional laws are needed to express commutativity and idempotency. Certainly, under some conditions, they can be represented by one (e.g., finite sets over an ordered carrier set as ordered sequences without multiple occurrences), but this has not been a topic of research within BMF.
With respect to datatypes with laws, only Chapter 5 of Maarten Fokkinga's PhD thesis comes to mind, which certainly is promising for the future. But then, to actually implement these datatypes with laws in an everyday programming language, you still need some kind of representation theorem in order to obtain an isomorphism between the datatype with laws and a (subset of a) purely polynomial datatype.

Quote:
>>Thus one can start program development with a very abstract
>>specification and step by step transform this into a concrete,
>>executable program. An overview of the method is given in [Par90a].
>>Since all language layers (algebraic, descriptive, functional,
>>imperative) are combined in one formally defined language [CIP85],
>>the transformations from one layer to another are correct.
>I don't see how this can guarantee correctness. I think that you always
>have to prove the correctness of the transformation steps yourself.

No! Just like in BMF, CIP transformation rules are of the form

    condition(s)   =>   E1 = E2

Furthermore, some rules establish a refinement between E1 and E2, which is not possible in BMF as it has no nondeterministic constructs. Here, E1 and E2 are *expression schemes* rather than expressions. After matching E1 (or E2, for that matter) with your actual expression, giving a substitution s, and checking the condition(s), the actual transformation step s(E1) --> s(E2) can be established. So, it is not the transformation *steps* that have to be proven correct, only the general transformation *rules*.

Quote:
>... It
>doesn't really matter if you use one language or more, in this respect.

In theory you're right. But it can be quite a pain in the ass to try to make an "interface" of transformation rules between two completely different languages. In view of that, a wide-spectrum language comprising various layers which have been tuned to one another, like CIP-L, gives better support to program development than using a different language for every layer. For example, in CIP-L, the transformation from (tail-)recursive functional programs to imperative, iterative programs is straightforward:

    f :: m -> r
    f x = H(x),      if B(x)
        = f(K(x)),   otherwise

            ^
            |
       -----+-----
            |
            v

    function f (x : m) r;
    begin
      var vx : m := x;
      while not(B(vx)) do
        vx := K(vx);
      f := H(vx)
    end;

where, for the sake of clarity, a Miranda-like syntax is used for the functional program and a Pascal-like syntax for the imperative part. Furthermore, B, H and K are arbitrary expressions containing only x (respectively vx) as a free variable. The double arrow means that both programs are equivalent.

Quote:
>It helps if your language is formally defined (it helps even more if it
>is simple, like BMF), but that doesn't take away the proof obligations.

It helps? I should say it is an absolutely necessary prerequisite to have it formally defined.

Greetings,
Daniel Tuijnman, Max Geerling

Sat, 07 Jan 1995 21:08:43 GMT

Daniel Tuijnman writes, as a follow-up to my posting, that CIP is a 'wide-spectrum' programming language, capable of expressing abstract, non-executable specifications, as well as concrete programs (both imperative and functional).

Quote:
>Within functional programming, both methods have the same power. We
>challenge you to present us a BMF law which cannot be expressed as a CIP
>transformation rule.

I suppose you can encode BMF in CIP, but that was not my point, and hence you thought that I was 'not serious'. I stated that, in my opinion, BMF is much more concise, and simpler to learn, than CIP.
On the other hand, there is not yet a system for automatic reasoning in BMF. There is such a system for CIP, although I recall the demonstration of it at CWI, where attempts to transform the program '1+1' into the more efficient program '2' ended in a core dump, after some 15 minutes of trying to get the thing to work.

Making things concise and simple is the _aim_ of BMF. For instance, using category theory has reduced the number of different laws dramatically. Being able to do everything is the aim of CIP. But that has led to an enormous system, which is hard to master.

Quote:
>>>[In CIP] One first specifies an algebraic data type --
>>>without the BMF constraint of polynomial types -- and then has several
>>>layers of expression constructs for making programs over these data types.
>>If you can give an example of a datatype which uses non-polynomial
>>constructors, I would be very interested! All datatypes occurring in
>>current programming languages, however, are polynomial.

Some examples were pointed out to me in a mail. For instance, the type of finitely branching trees is not polynomial. It is the least fixed point of the functor

    \lambda X . A + (X*)

where X* means lists of X. There is a list functor (*) in there, which is not polynomial. This can be circumvented, however, by viewing this as a many-sorted algebra, and defining lists and finitely branching trees together. Another example of a non-polynomial type is the type of functions from A to B, A -> B. The restriction made in BMF is that some nice properties do not hold for such types, which do hold for polynomially generated types. But you're not going to change that in CIP, I think. (That would be truly amazing!)

Quote:
>But for instance bags (multisets) or (finite)
>sets are not expressible as a polynomial datatype...
>you still need some kind of representation theorem in order to obtain an
>isomorphism between the datatype with laws and a (subset of a) purely
>polynomial datatype.

\begin{plug}
Read my latest technical report, "Congruences and Quotients in Categories of Algebras", for this. It gives a simple way to model equations in BMF, and gives the connections with "everyday programming languages".
\end{plug}

Can you point out (preferably in mail) what representation theorem you mean?

Sun, 08 Jan 1995 22:57:10 GMT

Quote:
Nico Verwer writes:
>On the other hand, there is not yet a system for automatic reasoning in BMF.

There is an automatic theorem prover based on Fokkinga's paramorphisms. This work is described in the paper: L. Fegaras, T. Sheard, and D. Stemple, "Uniform Traversal Combinators: Definition, Use and Properties", in Eleventh International Conference on Automated Deduction (CADE-11), Saratoga Springs, New York, June 1992.

Leo Fegaras

Tue, 10 Jan 1995 00:03:33 GMT

Over four years of discussion within the Dutch STOP project have led to a certain amount of consensus about the "CIP vs. BMF" issue. Now, moving on to the other important discussion, transformation systems:

Quote:
>[..] BMF is much more concise, and simpler to learn, than CIP. On the
>other hand, there is not yet a system for automatic reasoning in BMF.

Systems supporting BMF-style calculation do in fact exist, for example those by Chisholm/Backhouse, and by Lindsey.
That no system exists to do everything one could want it to, well, that's no surprise. See our "Ustopia" report for a list of interesting requirements.

Quote:
>There is such a system for CIP, although I recall the demonstration of
>it at CWI, where attempts to transform the program '1+1' into the more
>efficient program '2' ended in a core dump, after some 15 minutes of
>trying to get the thing to work.

I must concur that the CIP *prototype* system (written at TU Munich in the early 80's, Riethmayer et al.) has not done much for the popularity of program transformation systems, to say the least. We asked a student to work with it for a few months, and he wasn't much more enthusiastic than Nico is. This may be partly due to the *prototype* character of the system, and to the machines on which it was first brought up. Most of the early work on program transformation systems was aimed at showing the feasibility of the approach, and not as much at the practicability of the systems, I'm afraid.

Our group is currently experimenting with the "real" CIP-S transformation system, a modern (really) window-based system developed more recently by TU Munich and Siemens. Our first experiences are much more positive than those with the prototype system. We hope to report more extensively on this later.

--
Dept. of Informatics (STOP Project)
University of Nijmegen, NL

Mon, 23 Jan 1995 21:08:34 GMT
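As a footnote to the thread: the CIP recursion-to-iteration rule quoted a few posts back is easy to see in action. A toy instance (Python for concreteness; the shape f x = H(x) if B(x), else f(K(x)) is from the rule, the factorial-with-accumulator instantiation is ours):

```python
# Recursive shape: f x = H(x) if B(x), and f(K(x)) otherwise,
# instantiated with state x = (n, acc) for factorial with accumulator.
def f_rec(x):
    n, acc = x
    if n == 0:                        # B(x)
        return acc                    # H(x)
    return f_rec((n - 1, n * acc))    # f(K(x))

# The transformed imperative version produced by the rule.
def f_iter(x):
    vx = x                            # var vx : m := x
    while vx[0] != 0:                 # while not(B(vx))
        n, acc = vx
        vx = (n - 1, n * acc)         # vx := K(vx)
    return vx[1]                      # f := H(vx)

assert f_rec((5, 1)) == f_iter((5, 1)) == 120
```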
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6913386583328247, "perplexity": 5473.4697680714535}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039546945.85/warc/CC-MAIN-20210421161025-20210421191025-00210.warc.gz"}
http://link.springer.com/article/10.1007/BF02663442
Metallurgical Transactions A, Volume 23, Issue 12, pp 3325–3335

# Variation of the partial thermodynamic properties of oxygen with composition in YBa2Cu3O7−δ

Tom Mathews, K. T. Jacob

Physical Chemistry

Mathews, T. & Jacob, K.T., MTA (1992) 23: 3325. DOI: 10.1007/BF02663442

## Abstract

The variation of equilibrium oxygen potential with oxygen concentration in YBa2Cu3O7−δ has been measured in the temperature range of 773 to 1223 K. For temperatures up to 1073 K, the oxygen content of the YBa2Cu3O7−δ sample, held in a stabilized-zirconia crucible, was altered by coulometric titration. The compound was in contact with the electrolyte, permitting direct exchange of oxygen ions. For measurements above 1073 K, the oxide was contained in a magnesia crucible placed inside a closed silica tube. The oxygen potential in the gas phase above the 123 compound was controlled and measured by a solid-state cell based on yttria-stabilized zirconia, which served both as a pump and as a sensor. Pure oxygen at a pressure of 1.01 × 10^5 Pa was used as the reference electrode. The oxygen pressure over the sample was varied from 10^−1 to 10^5 Pa. The oxygen concentrations of the samples equilibrated with pure oxygen at 1.01 × 10^5 Pa at different temperatures were determined, after quenching in liquid nitrogen, by hydrogen reduction at 1223 K. The plot of chemical potential of oxygen as a function of oxygen non-stoichiometry shows an inflexion at δ ∼ 0.375 at 873 K. Data at 773 K indicate a tendency for phase separation at lower temperatures. The partial enthalpy and entropy of oxygen derived from the temperature dependence of the electromotive force (emf) exhibit variation with composition. The partial enthalpy for δ = 0.3, 0.4, and 0.5 also appears to be temperature dependent. The results are discussed in comparison with the data reported in the literature. An expression for the integral free energy of formation of YBa2Cu3O6.5 is evaluated based on measurements reported in the literature. By integration of the partial Gibbs' energy of oxygen obtained in this study, the variation of the integral property with oxygen concentration is obtained at 873 K.

### Nomenclature

- n_i : moles of i
- μ_i : chemical potential of i
- T : temperature, K
- P : pressure, Pa
- P° : standard pressure (1.01 × 10^5 Pa)
- X_i : mole fraction of component i
- G^M : relative integral molar Gibbs' energy of mixing
- G_i : relative partial molar Gibbs' energy of mixing of component i
- H_i : relative partial molar enthalpy of mixing of component i
- S_i : relative partial molar entropy of mixing of component i
- δ : oxygen nonstoichiometric parameter in YBa2Cu3O7−δ
- E : emf, V
- F : Faraday constant
- R : gas constant, 8.3143 J mol^−1 K^−1
- η : number of electrons
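For orientation, the standard electrochemical relations behind such emf measurements connect the nomenclature above (taking η = 4 for the usual four-electron oxygen transfer; the exact sign convention depends on the cell orientation, and the paper's own working equations are not reproduced in this abstract):

```latex
\Delta\mu_{O_2} \;=\; RT\,\ln\!\frac{P_{O_2}}{P^{\circ}} \;=\; -\,\eta F E,
\qquad
\Delta \bar{S}_{O_2} \;=\; \eta F \left(\frac{\partial E}{\partial T}\right)_{P},
\qquad
\Delta \bar{H}_{O_2} \;=\; -\,\eta F \left(E - T\,\frac{\partial E}{\partial T}\right)
```

The last two lines are why the partial enthalpy and entropy of oxygen can be read off the temperature dependence of the emf, as the abstract describes.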
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8617410063743591, "perplexity": 2522.601008559354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661768.10/warc/CC-MAIN-20160924173741-00083-ip-10-143-35-109.ec2.internal.warc.gz"}
https://link.springer.com/chapter/10.1007/978-94-007-0884-6_77
# Mass Transfer Considerations for Scale-Up and Scale-Down of Animal Cell Bioprocesses

R. Puskeiler, M. Edler, K. Didzus, R. Müller, J. Gabelsberger

Conference paper. Part of the ESACT Proceedings book series (ESACT, volume 5).

## Abstract

The development, characterization and validation of animal cell bioprocesses can greatly benefit from straightforward scale-up/scale-down procedures that generally rely on scale-down models. The match of scale-down model data to the data gathered at larger scales relies on several factors, one of which is the mass transfer coefficient. This study reports the kLa for O2 and CO2 measured in a bioreactor equipped with a ring sparger. The results indicate that higher power input leads to a decrease of CO2 elimination capacity. When high power input is attributed to a smaller bubble diameter, this finding confirms a conclusion in the literature that relates CO2 elimination to bubble size (Mostafa and Gu (2003), Biotechnol Prog 19(1):45–51; Frahm et al. (2002), J Biotechnol 99(2):133–148). Aeration rate, however, does not influence the mass transfer ratio at a given power input.

## Keywords

Mass Transfer Coefficient, Bubble Size, Aeration Rate, Small Bubble, Bubble Diameter

These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

## References

1. M. Boon, J.J. Heijnen (1998), Hydromet 48:187–204.
2. B. Frahm, H.C. Blank, P. Cornand, W. Oelssner, U. Guth, P. Lane, A. Munack, K. Johannsen, R. Pörtner (2002), J Biotechnol 99(2):133–148.
3. R. Fuchs, D.D.Y. Ryu (1971), Ind Eng Chem Process Des Dev 10(2):190–196.
4. D.R. Gray, S. Chen, W. Howarth, D. Inlow, B.L. Maiorella (1996), Cytotechnol 22(1–3):65–78.
5. V. Linek, V. Vacek, P. Benes (1987), Chem Eng J Biochem Eng 34(1):11–34.
6. S.S. Mostafa, X. Gu (2003), Biotechnol Prog 19(1):45–51.
7. R. Puskeiler, D. Weuster-Botz (2005), J Biotechnol 120(4):430–438.
8. M. Sobotka, A. Prokop, I.J. Dunn, A. Einsele (1982), Ann Rep Ferm Proc 5:127–210.

## Authors and Affiliations

1. Pharma Biotech Production and Development, Roche Diagnostics GmbH, Penzberg, Germany (R. Puskeiler, K. Didzus, R. Müller, J. Gabelsberger)
2. Institute of Chemistry, University of Loeben, Loeben, Austria (M. Edler)
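For reference, kLa is the volumetric mass transfer coefficient of the standard two-film transfer model (the general textbook definition, not anything specific to this study):

```latex
\frac{dC}{dt} \;=\; k_L a \left(C^{*} - C\right)
```

where C is the dissolved-gas concentration and C* its saturation value. Because kLa lumps the film coefficient k_L with the specific interfacial area a, smaller bubbles raise a per unit gas volume, which helps O2 transfer but, as the abstract notes, can hurt CO2 stripping.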
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8993324637413025, "perplexity": 25478.85353254813}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945493.69/warc/CC-MAIN-20180422041610-20180422061610-00407.warc.gz"}
https://docs.deistercloud.com/Axional%20development%20products.15/Axional%20Studio.4/Development%20Guide.10/Languages%20reference.16/XSQL%20Script.10/Packages.40/connection/connection.schema.copy.xml?embedded=true
Allows copying the structure and data from one table to another, even between different databases.

# 1 connection.schema.copy

This functionality is used to copy data between different databases. To copy within the same database you can use the function table.copy. If the table does not exist in the destination database, it is created from the metadata obtained through the JDBC driver. For this reason it is necessary to review the physical model whenever the table does not exist in the destination, because there can be differences in field types or default values that should be corrected beforehand.

#### Performance considerations

Use of the batch attribute increases the rate at which records are inserted into the destination table. If the destination database is in transactional mode, this will negatively affect the performance of the insert operations on the destination table.

<connection.schema.copy
   src='src'
   dst='dst'
   tables='tables'
   delete='delete'
   drop='drop'
   where='where'
   skip='skip'
   batchsize='batchsize'/>

Example

The following example copies all the tables of the 'db_ori' database into the 'db_dst' database. If the tables do not exist in the destination, they will be created. Existing records in the destination tables will not be affected.

<xsql-script>
    <body>
        <set name='time_start'><system.currentTimeMillis /></set>
        <connection.schema.copy src='db_ori' dst='db_dst' />
        <set name='time_end'><system.currentTimeMillis /></set>
        <println>The database copy process takes: <sub><time_end/><time_start/></sub> ms.</println>
    </body>
</xsql-script>

Example

Copies the records of the table 'capuntes' which satisfy the condition 'apteid > 100' from the 'db_ori' database to the 'db_dst' database. Existing records in the destination tables will not be affected.

<xsql-script>
    <body>
        <set name='time_start'><system.currentTimeMillis /></set>
        <connection.schema.copy src='db_ori' dst='db_dst' tables='capuntes' where='apteid > 100' />
        <set name='time_end'><system.currentTimeMillis /></set>
        <println>The database copy process takes: <sub><time_end/><time_start/></sub> ms.</println>
    </body>
</xsql-script>
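Example (hypothetical)

The attribute list above also includes batchsize. Given the note that batching raises insert throughput, a single-table copy using it would presumably look like the following; the value 1000 is an illustrative choice, so check the product reference for the exact accepted values and semantics.

<xsql-script>
    <body>
        <connection.schema.copy src='db_ori' dst='db_dst' tables='capuntes' batchsize='1000' />
    </body>
</xsql-script>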
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3753468096256256, "perplexity": 2964.063773141843}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541308149.76/warc/CC-MAIN-20191215122056-20191215150056-00269.warc.gz"}
https://bibbase.org/network/publication/mchich-brochier-auger-brehmer-interactionsbetweenthecrossshorestructureofsmallpelagicfishpopulationoffshoreindustrialfisheriesandnearshoreartisanalfisheriesamathematicalapproach-2016
Interactions Between the Cross-Shore Structure of Small Pelagic Fish Population, Offshore Industrial Fisheries and Near Shore Artisanal Fisheries: A Mathematical Approach. Mchich, R., Brochier, T., Auger, P., & Brehmer, P. Acta Biotheoretica, 64(4):479–493, December 2016.

This work presents a mathematical model describing the interactions between the cross-shore structure of a small pelagic fish population and its exploitation by coastal and offshore fisheries. The complete model is a system of seven ODEs governing three stocks of the small pelagic fish population moving and growing between three zones. Two types of fishing fleets interact with the fish population: industrial boats, constrained to the offshore area, and artisanal boats, operating from the shore. Two time scales were considered, and we use aggregation methods that allow us to reduce the dimension of the model and obtain an aggregated model, which is four-dimensional. The analysis of the aggregated model is performed. We discuss the possible equilibria and their meaning in terms of fishery management. An interesting equilibrium state can be obtained for which we can expect coexistence, with a stable equilibrium between fish stocks and fishing efforts. Some identification parameters are also given in the discussion part of the model.

@article{mchich_interactions_2016, title = {Interactions {Between} the {Cross}-{Shore} {Structure} of {Small} {Pelagic} {Fish} {Population}, {Offshore} {Industrial} {Fisheries} and {Near} {Shore} {Artisanal} {Fisheries}: {A} {Mathematical} {Approach}}, volume = {64}, issn = {0001-5342, 1572-8358}, shorttitle = {Interactions {Between} the {Cross}-{Shore} {Structure} of {Small} {Pelagic} {Fish} {Population}, {Offshore} {Industrial} {Fisheries} and {Near} {Shore} {Artisanal} {Fisheries}}, doi = {10.1007/s10441-016-9299-7}, abstract = {This work presents a mathematical model describing the interactions between the cross-shore structure of small pelagic fish population an their exploitation by coastal and offshore fisheries. The complete model is a system of seven ODE’s governing three stocks of small pelagic fish population moving and growing between three zones. Two types of fishing fleets are inter-acting with the fish population, industrial boats, constrained to offshore area, and artisanal boats, operating from the shore. Two time scales were considered and we use aggregation methods that allow us to reduce the dimension of the model and to obtain an aggregated model, which is a four dimension one. The analysis of the aggregated model is performed. We discuss the possible equilibriums and their meaning in terms of fishery management. An interesting equilibrium state can be obtained for which we can expect coexistence and a stable equilibrium state between fish stocks and fishing efforts. Some identification parameters are also given in the discussion part of the model.}, language = {en}, number = {4}, urldate = {2017-01-02TZ}, journal = {Acta Biotheoretica}, author = {Mchich, Rachid and Brochier, Timothée and Auger, Pierre and Brehmer, Patrice}, month = dec, year = {2016}, note = {00000}, keywords = {ACL, DISCOVERY}, pages = {479--493} }
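The paper's equations are not reproduced in this entry. For orientation only: single-zone stock–effort models of the kind that such multi-zone, two-fleet models generalize are classically written as (symbols are the textbook ones, not the paper's):

```latex
\dot{n} \;=\; r\,n\left(1 - \frac{n}{K}\right) - q\,E\,n,
\qquad
\dot{E} \;=\; \epsilon\,(p\,q\,n - c)\,E
```

with n the stock biomass, E the fishing effort, r the growth rate, K the carrying capacity, q the catchability, p the landed price, c the unit cost of effort, and ε the speed at which effort adjusts to profit.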
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8088656067848206, "perplexity": 6185.01397907523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038921860.72/warc/CC-MAIN-20210419235235-20210420025235-00331.warc.gz"}
https://www.physicsforums.com/threads/solve-this-sum-from-vectors.590059/
Solve this sum from vectors

1. Mar 25, 2012

Fazan Mushtaq

The resultant of two forces 3p and 2p is R. If the first force is doubled, then the resultant is also doubled. What is the angle between the two forces? Please explain the answer to me; I don't want just the numerical value.

2. Mar 25, 2012

emailanmol

What is the magnitude of the resultant vector in terms of |A|, |B| and theta, where |A| and |B| are the magnitudes of the vectors being added and theta is the angle between them?

3. Mar 25, 2012

azizlwl

Use the Law of Cosines.
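Carrying the two hints through, for checking:

```latex
R^2 = (3p)^2 + (2p)^2 + 2(3p)(2p)\cos\theta = 13p^2 + 12p^2\cos\theta,
\qquad
(2R)^2 = (6p)^2 + (2p)^2 + 2(6p)(2p)\cos\theta = 40p^2 + 24p^2\cos\theta .
```

Substituting the first equation into the second gives 4(13 + 12 cos θ) = 40 + 24 cos θ, so 24 cos θ = -12, cos θ = -1/2, and θ = 120°.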
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9798504710197449, "perplexity": 3513.770182250807}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160754.91/warc/CC-MAIN-20180924205029-20180924225429-00185.warc.gz"}
https://scholars.ttu.edu/en/publications/minimizing-economic-and-environmental-impacts-through-an-optimal--5
# Minimizing economic and environmental impacts through an optimal preventive replacement schedule: Model and application

Feri Afrinaldi, Taufik, Andrea Marta Tasman, Hong Chao Zhang, Alizar Hasan

Research output: Contribution to journal, Article, peer-review. 17 Scopus citations.

## Abstract

This paper presents a mathematical model to determine the optimal schedule of preventive replacement of a component such that the economic and environmental impacts of the component are minimized. For the economic dimension, the model minimizes the operation, failure and replacement costs of the component. From the environmental perspective, the model aims to minimize the environmental impact associated with the use phase and with the action taken to replace the component. The model is general and can accommodate any environmental impact category. Due to the complexity of the objective functions of the model, a genetic algorithm (GA) is proposed to find the optimal solutions. To reduce the GA search space, upper and lower bounds of the solutions are determined based on numerical analysis of the first derivatives of the objective functions of the model. To show the applicability of the model, a case study aiming to minimize the total expected cost and global warming potential (GWP) of a bus tire is presented. The results of the case study show that the optimal preventive replacement schedule minimizing total expected cost per km is when the tire reaches 17,700 km, and the schedule minimizing total expected GWP per km is when the tire reaches 19,500 km. The schedules result in US$23 and 0.2 kg CO2-eq savings in the total expected cost and GWP per tire, respectively. The solutions of the multi-objective optimization problem indicate that a 1000 km increase in the optimal schedule minimizing total cost will result in a 0.4% increase in the total expected cost and a 0.002% reduction in the total expected GWP of the tire. The sensitivity analysis shows that a 1% reduction in the operation cost and in fuel consumption contributes to a 0.91% reduction in the total expected cost and a 0.99% reduction in the total expected GWP, respectively.

Original language: English
Pages: 882-893
Number of pages: 12
Journal: Journal of Cleaner Production
Volume: 143
DOI: https://doi.org/10.1016/j.jclepro.2016.12.033
State: Published - Feb 1 2017

## Keywords

- Economic impact
- Environmental impact
- Maintenance
- Modeling
- Scheduling
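The abstract summarizes but does not state the objective functions. For orientation, preventive (age) replacement models of this type classically minimize an expected cost per unit of usage of the form below; this is the textbook formulation, with the paper's environmental objective replacing the costs by impact scores such as GWP:

```latex
C(t_p) \;=\; \frac{c_p\,R(t_p) \;+\; c_f\,\bigl[1 - R(t_p)\bigr]}
                  {\displaystyle\int_0^{t_p} R(u)\,du}
```

Here t_p is the preventive replacement age (in km for the tire case study), c_p and c_f are the costs of a preventive and a failure replacement, and R(t) is the component's reliability function.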
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8021261096000671, "perplexity": 1294.9377271328124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141176049.8/warc/CC-MAIN-20201124082900-20201124112900-00427.warc.gz"}
https://chem.libretexts.org/Bookshelves/Introductory_Chemistry/Book%3A_Beginning_Chemistry_(Ball)/12%3A_Acids_and_Bases/12.2%3A_Arrhenius_Acids_and_Bases
# 12.2: Arrhenius Acids and Bases

Learning Objectives

• Identify an Arrhenius acid and an Arrhenius base.
• Write the chemical reaction between an Arrhenius acid and an Arrhenius base.

Historically, the first chemical definition of an acid and a base was put forward by Svante Arrhenius, a Swedish chemist, in 1884. An Arrhenius acid is a compound that increases the H+ ion concentration in aqueous solution. The H+ ion is just a bare proton, and it is rather clear that bare protons are not floating around in an aqueous solution. Instead, chemistry has defined the hydronium ion (H3O+) as the actual chemical species that represents an H+ ion. H+ ions and H3O+ ions are often considered interchangeable when writing chemical equations (although a properly balanced chemical equation should also include the additional H2O). Classic Arrhenius acids can be considered ionic compounds in which H+ is the cation. Table 12.2.1 lists some Arrhenius acids and their names.

Table 12.2.1: Some Arrhenius Acids

| Formula | Name |
|---|---|
| HC2H3O2 (also written CH3COOH) | acetic acid |
| HClO3 | chloric acid |
| HCl | hydrochloric acid |
| HBr | hydrobromic acid |
| HI | hydriodic acid |
| HF | hydrofluoric acid |
| HNO3 | nitric acid |
| H2C2O4 | oxalic acid |
| HClO4 | perchloric acid |
| H3PO4 | phosphoric acid |
| H2SO4 | sulfuric acid |
| H2SO3 | sulfurous acid |

An Arrhenius base is a compound that increases the OH− ion concentration in aqueous solution. Ionic compounds of the OH− ion are classic Arrhenius bases.

Example $$\PageIndex{1}$$

Identify each compound as an Arrhenius acid, an Arrhenius base, or neither.

1. HNO3
2. CH3OH
3. Mg(OH)2

Solution

1. This compound is an ionic compound between H+ ions and NO3− ions, so it is an Arrhenius acid.
2. Although this formula has an OH in it, we do not recognize the remaining part of the molecule as a cation. It is neither an acid nor a base. (In fact, it is the formula for methanol, an organic compound.)
3. This formula also has an OH in it, but this time we recognize that the magnesium is present as Mg2+ cations. As such, this is an ionic compound of the OH− ion and is an Arrhenius base.

Exercise $$\PageIndex{1}$$

Identify each compound as an Arrhenius acid, an Arrhenius base, or neither.

1. KOH
2. H2SO4
3. C2H6

Answer

1. Arrhenius base
2. Arrhenius acid
3. neither

Acids have some properties in common. They turn litmus, a plant extract, red. They react with some metals to give off H2 gas. They react with carbonate and hydrogen carbonate salts to give off CO2 gas. Acids that are ingested typically have a sour, sharp taste. (The name acid comes from the Latin word acidus, meaning "sour.") Bases also have some properties in common. They are slippery to the touch, turn litmus blue, and have a bitter flavor if ingested.

Acids and bases have another property: they react with each other to make water and an ionic compound called a salt. A salt, in chemistry, is any ionic compound made by combining an acid with a base. A reaction between an acid and a base is called a neutralization reaction and can be represented as follows:

acid + base → H2O + salt

The stoichiometry of the balanced chemical equation depends on the number of H+ ions in the acid and the number of OH− ions in the base.

Example $$\PageIndex{2}$$

Write the balanced chemical equation for the neutralization reaction between H2SO4 and KOH. What is the name of the salt that is formed?

Solution

The general reaction is as follows:

H2SO4 + KOH → H2O + salt

Because the acid has two H+ ions in its formula, we need two OH− ions to react with it, making two H2O molecules as product.
The remaining ions, K+ and SO4^2−, make the salt potassium sulfate (K2SO4). The balanced chemical reaction is as follows:

H2SO4 + 2KOH → 2H2O + K2SO4

Exercise $$\PageIndex{2}$$: Test Yourself

Write the balanced chemical equation for the neutralization reaction between HCl and Mg(OH)2. What is the name of the salt that is formed?
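To check your answer: since Mg(OH)2 supplies two OH− ions, two formula units of HCl are needed, giving

2HCl + Mg(OH)2 → 2H2O + MgCl2

and the salt formed from the remaining Mg2+ and Cl− ions is magnesium chloride.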
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7781386971473694, "perplexity": 5822.232830496587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660877.4/warc/CC-MAIN-20190118233719-20190119015719-00298.warc.gz"}
http://www.stats.bris.ac.uk/R/web/packages/speaq/vignettes/classic_speaq_vignette.html
# User guide for speaq package version <= 1.2.3

## Preface

This introduction was written for the speaq package up until version 1.2.3. Since version 2.0 a lot of functionality has been added, but the original functionality is maintained. This vignette can therefore still be used, as it describes the part of the package dealing with spectral alignment and quantitation.

## Introduction

We introduce a novel suite of informatics tools for the quantitative analysis of NMR metabolomic profile data. The core of the processing cascade is a novel peak alignment algorithm, called hierarchical Cluster-based Peak Alignment (CluPA). The algorithm aligns a target spectrum to the reference spectrum in a top-down fashion by building a hierarchical cluster tree from peak lists of reference and target spectra and then dividing the spectra into smaller segments based on the most distant clusters of the tree. To reduce the computational time to estimate the spectral misalignment, the method makes use of Fast Fourier Transform (FFT) cross-correlation. Since the method returns a high-quality alignment, we can propose a simple methodology to study the variability of the NMR spectra. For each aligned NMR data point, the ratio of the between-group and within-group sum of squares (BW-ratio) is calculated to quantify the difference in variability between and within predefined groups of NMR spectra. This differential analysis is related to the calculation of the F-statistic or a one-way ANOVA, but without distributional assumptions. Statistical inference based on the BW-ratio is achieved by bootstrapping the null distribution from the experimental data.

We are going to introduce step by step how part of speaq works for a specific dataset; this includes

• automatic alignment
• allowing the user to intervene in the process
• computing BW ratios
• visualizing results

For any issue reports or discussions about speaq, feel free to contact us via the development website on github (https://github.com/beirnaert/speaq).

## Data input

We randomly generate an NMR spectral dataset of two different groups (15 spectra for each group). Each spectrum has two peaks slightly shifted across spectra. More details are described in the manual document of the function makeSimulatedData().

library(speaq)
# Generate a simulated NMR data set for this experiment
res = makeSimulatedData()
X = res$data
groupLabel = res$label

Now, we draw a spectral plot to observe the dataset before alignment.

drawSpec(X)

## Landmark peak detection

This section makes use of the MassSpecWavelet package to detect peak lists of the dataset.

cat("\n detect peaks....")
##
## detect peaks....

startTime <- proc.time()
peakList <- detectSpecPeaks(X, nDivRange = c(128), scales = seq(1, 16, 2),
    baselineThresh = 50000, SNR.Th = -1, verbose = FALSE)
endTime <- proc.time()
cat("Peak detection time:", (endTime[3] - startTime[3])/60, " minutes")
## Peak detection time: 0.02425 minutes

## Reference finding

Next, we find the reference spectrum for the other spectra to align to.

cat("\n Find the spectrum reference...")
##
## Find the spectrum reference...
resFindRef <- findRef(peakList)
refInd <- resFindRef$refInd

# The ranks of spectra
for (i in 1:length(resFindRef$orderSpec)) {
    cat(paste(i, ":", resFindRef$orderSpec[i], sep = ""), " ")
    if (i%%10 == 0) cat("\n")
}
## 1:24 2:25 3:27 4:9 5:21 6:16 7:7 8:23 9:19 10:18
## 11:5 12:30 13:12 14:10 15:15 16:20 17:2 18:6 19:22 20:26
## 21:14 22:11 23:29 24:28 25:3 26:13 27:17 28:1 29:4 30:8

cat("\n The reference is: ", refInd)
##
## The reference is: 24

## Spectral alignment

For spectral alignment, the function dohCluster() is used to implement the hierarchical Cluster-based Peak Alignment (CluPA) algorithm [1]. In this function maxShift is set to 100 by default, which is suitable for many NMR datasets. Experienced users can select a value more appropriate for their dataset. For example:

# Set maxShift
maxShift = 50
Y <- dohCluster(X, peakList = peakList, refInd = refInd, maxShift = maxShift,
    acceptLostPeak = TRUE, verbose = FALSE)

### Automatically detect the optimal maxShift

If users are not confident when selecting a value for maxShift, just set the value to NULL. Then the software will automatically learn the optimal value based on the median Pearson correlation coefficient between spectra. It is worth noting that this metric is significantly affected by high peaks in the spectra [2], so it might not be the best measure for evaluating alignment performance. However, it is fast for the purpose of detecting a suitable maxShift value. This mode also takes more time, since CluPA performs extra alignments for a few maxShift values. If verbose=TRUE is set, a plot of the performance of CluPA for different values of maxShift will be displayed. For example:

Y <- dohCluster(X, peakList = peakList, refInd = refInd, maxShift = NULL,
    acceptLostPeak = TRUE, verbose = TRUE)
##
## --------------------------------
## maxShift=NULL, thus CluPA will automatically detect the optimal value of maxShift.
## --------------------------------
##
## maxShift= 2
## Median Pearson correlation coefficent: -0.02429686 , the best result: -1
## maxShift= 4
## Median Pearson correlation coefficent: 0.006180729 , the best result: -0.02429686
## maxShift= 8
## Median Pearson correlation coefficent: 0.0452733 , the best result: 0.006180729
## maxShift= 16
## Median Pearson correlation coefficent: 0.7615448 , the best result: 0.0452733
## maxShift= 32
## Median Pearson correlation coefficent: 0.9317428 , the best result: 0.7615448
## maxShift= 64
## Median Pearson correlation coefficent: 0.9317428 , the best result: 0.9317428
## maxShift= 128
## Median Pearson correlation coefficent: 0.9317428 , the best result: 0.9317428
## maxShift= 256
## Median Pearson correlation coefficent: 0.9317428 , the best result: 0.9317428
## Optimal maxShift= 32 with median Pearson correlation of aligned spectra= 0.9317428
##
## Alignment time: 0.01775 minutes

In this example, the best maxShift=32, which is highlighted by a red star in the plot, achieves the highest median Pearson correlation coefficient (0.93).

### Spectral alignment with selected segments

If users want to align only specific segments, or prefer to use different parameter settings for different segments, speaq allows them to intervene in the process. To do so, users need to create a segment information matrix as in the example in Table 1.

| begin | end | forAlign | ref | maxShift |
|-------|-----|----------|-----|----------|
| 100   | 200 | 0        | 0   | 0        |
| 450   | 680 | 1        | 0   | 50       |

Each row contains the following information corresponding to the columns:

• begin: the starting point of the segment.
• end: the end point of the segment.
• forAlign: the segment is aligned (1) or not (0).
• ref: the index of the reference spectrum. If 0, the algorithm will select the reference found by the reference finding step.
• maxShift: the maximum number of points of a shift to the left/right.

It is worth noting that only segments with forAlign=1 (column 3) will be taken into account for spectral alignment. Now, simply run dohClusterCustommedSegments with the input from the information file.

segmentInfoMat = matrix(data = c(100, 200, 0, 0, 0, 450, 680, 1, 0, 50),
    nrow = 2, ncol = 5, byrow = TRUE)
colnames(segmentInfoMat) = c("begin", "end", "forAlign", "ref", "maxShift")
segmentInfoMat
##      begin end forAlign ref maxShift
## [1,]   100 200        0   0        0
## [2,]   450 680        1   0       50

Yc <- dohClusterCustommedSegments(X, peakList = peakList, refInd = refInd,
    segmentInfoMat = segmentInfoMat, minSegSize = 128, verbose = FALSE)

## Spectral plots

We can draw a segment to see the performance of the alignment.

drawSpec(Y)

We can limit the heights of the spectra to more easily check the alignment performance.

drawSpec(Y, startP = 450, endP = 680, highBound = 5e+05, lowBound = -100)

We achieve similar results with Yc, but the region of the first peak is not aligned because the segment information only allows alignment of the region 450-680.

drawSpec(Yc)

## Quantitative analysis

This section presents the quantitative analysis for the wine data that was used in our paper [1]. To save time, we only do 100 permutations to create the null distribution.

N = 100
alpha = 0.05

# find the BW-statistic
BW = BWR(Y, groupLabel)

# create sampled H0 and export to file
H0 = createNullSampling(Y, groupLabel, N = N, verbose = FALSE)

# compute percentile of alpha
perc = double(ncol(Y))
alpha_corr = alpha/sum(returnLocalMaxima(Y[2, ])$pkMax > 50000)
for (i in 1:length(perc)) {
    perc[i] = quantile(H0[, i], 1 - alpha_corr, type = 3)
}

Now some figures are plotted. Read the publication to understand more about these figures.

drawBW(BW, perc, Y, groupLabel = groupLabel)
drawBW(BW, perc, Y, startP = 450, endP = 680, groupLabel = groupLabel)

## References

[1] Vu, Trung Nghia, Dirk Valkenborg, Koen Smets, Kim A. Verwaest, Roger Dommisse, Filip Lemiere, Alain Verschoren, Bart Goethals, and Kris Laukens. "An Integrated Workflow for Robust Alignment and Simplified Quantitative Analysis of NMR Spectrometry Data." BMC Bioinformatics 12, no. 1 (October 20, 2011): 405.

[2] Vu, Trung Nghia, and Kris Laukens. "Getting Your Peaks in Line: A Review of Alignment Methods for NMR Spectral Data." Metabolites 3, no. 2 (April 15, 2013): 259-76.
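For intuition about the BW-ratio used in the quantitative analysis above, the following stand-alone function computes the between-group/within-group sum-of-squares ratio at a single data point (one column of the aligned spectra matrix). This is a minimal sketch of the idea only, not the package's BWR() implementation.

# Between/within sum-of-squares ratio at one data point.
# x: numeric vector of intensities at one chemical-shift position
# g: factor of group labels (same length as x)
bw_ratio_point <- function(x, g) {
    overall <- mean(x)
    groups <- split(x, g)
    between <- sum(sapply(groups, function(xi) length(xi) * (mean(xi) - overall)^2))
    within <- sum(sapply(groups, function(xi) sum((xi - mean(xi))^2)))
    between / within
}

# Applied to every column of the aligned matrix Y:
# BW_sketch <- apply(Y, 2, bw_ratio_point, g = factor(groupLabel))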
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.330199271440506, "perplexity": 5745.954322617582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809160.77/warc/CC-MAIN-20171124234011-20171125014011-00734.warc.gz"}
https://rbseguide.com/class-12-maths-chapter-12-ex-12-4-english-medium/
# RBSE Solutions for Class 12 Maths Chapter 12 Differential Equation Ex 12.4

## Rajasthan Board RBSE Class 12 Maths Chapter 12 Differential Equation Ex 12.4

Solve the following differential equations:

Question 1. (e^y + 1) cos x dx + e^y sin x dy = 0
Question 2. (1 + x^2) dy = (1 + y^2) dx
Question 3. (x + 1) $$\frac { dy }{ dx }$$ = 2xy
Question 4. (statement given as an image in the source)
Question 5. (sin x + cos x) dy + (cos x − sin x) dx = 0
Question 6. (statement given as an image in the source)
Question 7. sec^2 x tan y dy + sec^2 y tan x dx = 0
Question 8. (statement given as an image in the source)
Question 9. (1 + cos x) dy = (1 − cos x) dx
Question 10. (statement given as an image in the source)

The worked solutions for each question were provided as images in the source and are not reproduced here.
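As a representative reconstruction (supplied here, not taken from the textbook figures), Question 2 separates directly:

$$(1 + x^2)\,dy = (1 + y^2)\,dx \implies \frac { dy }{ 1+y^{2} } = \frac { dx }{ 1+x^{2} } \implies \tan^{-1} y = \tan^{-1} x + C$$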
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9928410053253174, "perplexity": 9976.575855536415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585353.52/warc/CC-MAIN-20211020214358-20211021004358-00557.warc.gz"}
https://mathematica.stackexchange.com/questions/30725/constraint-syntax-compaction
# Constraint syntax compaction

Is there a more compact way to represent these constraints:

NMaximize[{a + b + c, a <= 5 && b <= 5 && c <= 5}, {a, b, c}]

like "for x in {a,b,c}, x <= 5" or something?

-

I personally like to use Thread for such things (bounds are, e.g., easy to adjust), like:

NMaximize[{a + b + c, Thread[{a, b, c} <= {5, 6, 7}]}, {a, b, c}]

If it is all the same bound, we can directly write (as in Artes' comment below):

NMaximize[{a + b + c, Thread[{a, b, c} <= 5]}, {a, b, c}]

I think the syntax should be clear - see also the Documentation Center for a very similar example (on Thread). Also see Artes' comment below for further ideas based on the (exemplary) function you provide.

• NMaximize[{a + b + c, Thread[{a, b, c} <= 5]}, {a, b, c}] or simply NMaximize[{a + b + c, Thread[# <= 5]}, #] &@{a, b, c}, or NMaximize[{Plus @@ #, Thread[# <= 5]}, #] &@{a, b, c}. – Artes Aug 19 '13 at 11:29
• Thanks @Artes, I added that to the list and made clear that my previous answer was geared towards different bounds – Pinguin Dirk Aug 19 '13 at 11:31
• Thanks. Thread[] seems similar to Haskell's generalized map "fmap" (Functor). – Neal Alexander Aug 19 '13 at 12:56

-

If you do these things a lot, you may consider building your own syntax to be able to write constraints in a more concise manner, e.g.:

constrAnd[list_, func_] := And @@ (func /@ list)
lt[list_, n_] := constrAnd[list, # <= n &]

lt[{a, b, c}, 5]
a <= 5 && b <= 5 && c <= 5

so that you may now write

NMaximize[{a + b + c, lt[{a, b, c}, 5]}, {a, b, c}]

This function is logically equivalent to the example in the OP, "for x in ...".

-

What about this (because NMaximize also accepts a list of boundary conditions):

NMaximize[{a + b + c, # <= 5 & /@ {a, b, c}}, {a, b, c}]

or (if you are after the very same expression)

NMaximize[{a + b + c, And @@ (# <= 5 & /@ {a, b, c})}, {a, b, c}]

Both are obviously not more compact as such, but are very easily adapted to a larger number of parameters.

-

If all elements of your list share the same upper bound, you can write

Max[{a, b, c}] <= 5
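The Thread idiom also scales past three variables; the sketch below uses ten indexed variables (the names vars and x are purely illustrative, and the commented result is what one would expect rather than verified output):

vars = Array[x, 10];
NMaximize[{Total[vars], Thread[vars <= 5]}, vars]
(* expected: {50., {x[1] -> 5., ..., x[10] -> 5.}} *)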
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2189999520778656, "perplexity": 3030.945483897847}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347409171.27/warc/CC-MAIN-20200530102741-20200530132741-00430.warc.gz"}
https://amathew.wordpress.com/2010/12/15/vector-bundles-grassmannians-and-characteristic-classes/?replytocom=8978
I've been reading about spectra and stable homotopy theory lately, but don't feel ready to start talking about them here. Instead, I shall say a few words on characteristic classes. The present post will be quite general and preparatory — the more difficult matter is to actually construct such characteristic classes. Our goal is to see that characteristic classes essentially boil down to computing the cohomology of the infinite Grassmannian.

A lot of problems in mathematics involve the existence of sections to vector bundles. For instance, there is the old question of when the sphere is parallelizable. A quick Euler characteristic argument shows that even-dimensional spheres can't be—then there would be an everywhere nonzero vector field, whose infinitesimal flows would be homotopic to the identity (and consequently having nonzero Lefschetz number by the even-dimensionality) while having no fixed points. In fact, much more is known. Using the group or group-like structures on ${S^1, S^3, S^7}$ (coming from the complex numbers, quaternions, and octonions), it is easy to see that these manifolds are parallelizable. But in fact no other sphere is.

A characteristic class is a means of assigning some invariant to a vector bundle. Ideally, it should be trivial on trivial bundles, so the characteristic class can be thought of as an "obstruction" to finding large numbers of linearly independent sections. More formally, let ${p: E \rightarrow B}$ be a vector bundle. A characteristic class assigns to this bundle (of some fixed dimension, say ${n}$) an element of the cohomology ring ${H^*(B)}$ (with coefficients in some ring). To be interesting, the characteristic class has to be natural. That is, if ${f: B' \rightarrow B}$ is a map, then the characteristic class of the pull-back bundle ${f^*E \rightarrow B'}$ should be the pull-back of the characteristic class of ${E \rightarrow B}$. This contrafunctoriality is a good reason to prefer cohomology to homology here: one can't push vector bundles forward, but one can pull them back by a map of spaces along the base. It is with this that we can talk about naturality.

So what does a characteristic class really mean? I claim that a characteristic class is really the same thing as an element of the cohomology ring of the Grassmannian ${\mathrm{Gr}_n(\mathbb{R}^{\infty})}$ of ${n}$-planes in ${\mathbb{R}^{\infty}}$. The reason for this is fundamentally categorical, and it boils down to the fact that there is a universal ${n}$-dimensional bundle ${U \rightarrow B_U}$ for any integer ${n}$. Any vector bundle of dimension ${n}$ on a paracompact base space ${B}$ can be obtained by pulling back this bundle in some way, from some map ${B \rightarrow B_U}$. Further, this map ${B \rightarrow B_U}$ is unique up to homotopy. (It is a basic fact that a pull-back of a vector bundle ${E \rightarrow B}$ by a map ${B' \rightarrow B}$ depends only on the homotopy class of the map ${B' \rightarrow B}$.) Another way of stating this is that the contravariant functor ${F_n}$ that assigns to each ${B}$ (a reasonable space, say paracompact) the set of isomorphism classes of ${n}$-dimensional bundles is representable on the homotopy category.

Let's state this more precisely. First, what do we mean for ${F_n}$ to be a functor? Well, first it has to assign to every ${B}$ some set ${F_n(B)}$—OK, we have that. But it also has to assign to every homotopy class of maps ${B' \rightarrow B}$ a map ${F_n(B) \rightarrow F_n(B')}$. Fortunately, we have a way of doing that: the pull-back.
Given a bundle over ${B}$, we can pull it back to ${B'}$ from the map ${B' \rightarrow B}$. As I said, this depends only on the homotopy class of ${B'\rightarrow B}$, so this is not surprising.

Now in fact we have proved a pretty general result about representable functors on the pointed homotopy category of CW complexes. Indeed, Brown representability can be used to tell us that ${F_n}$ is representable on that category. But we are not working with pointed spaces, and we don't want to restrict only to CW complexes. So this will be a separate result.

What is this universal bundle going to look like? There is a very clean picture of it. Namely, it is going to be the tautological ${n}$-plane bundle over ${\mathrm{Gr}_n(\mathbb{R}^{\infty})}$. That is, the fiber over a point ${x}$ in the Grassmannian is just the collection of vectors in ${\mathbb{R}^{\infty}}$ that lie in the plane corresponding to that point. One can check that this is indeed a vector bundle, and in Milnor-Stasheff's "Characteristic Classes" it is proved that it is universal.

The key idea of the proof is that, on a compact space, any vector bundle ${p: E \rightarrow B}$ can be embedded inside a trivial bundle ${B \times \mathbb{R}^{N}}$ for some ${N}$ large. As a result, over each ${b \in B}$, the fiber ${p^{-1}(b)}$ is identified with a subspace of ${\mathbb{R}^N}$ whose dimension is ${n}$. In this way, we can get a map from ${B}$ into the Grassmannian of ${n}$-planes in ${\mathbb{R}^N}$ by sending each ${b \in B}$ to the corresponding ${n}$-plane. The reason we had to use the infinite Grassmannian in general is that ${N}$ could be very large.

Alright. So we know that there is a universal ${n}$-bundle ${U}$ over ${\mathrm{Gr}_n(\mathbb{R}^{\infty})}$ for each ${n}$. Let's say we have a characteristic class ${c}$ that sends each ${n}$-bundle ${p: E \rightarrow B}$ to an element ${c(E) \in H^*(B)}$. Then, by definition, there is ${c(U) \in H^*(\mathrm{Gr}_n(\mathbb{R}^{\infty}))}$. This is the universal characteristic class. Since any bundle is a pull-back of ${U}$, any characteristic class is a pull-back of this. Thus ${c}$ is determined by ${c(U)}$. Conversely, if we prescribe ${c(U)}$, then we can define ${c(E)}$ by writing ${E}$ as a pull-back of some ${B \rightarrow \mathrm{Gr}_n(\mathbb{R}^{\infty})}$ (unique up to homotopy) and then just setting ${c(E)}$ to be the pull-back of ${c(U)}$.

What we have really done here is, of course, the Yoneda lemma, and nothing more. Namely, we have the functor ${F_n}$ on the homotopy category which is representable, and we have the functor ${B \rightarrow H^*(B)}$ on the homotopy category. A characteristic class is just a natural transformation

$\displaystyle F_n \rightarrow H^*$

and since ${F_n}$ is representable by ${\mathrm{Gr}_n(\mathbb{R}^{\infty})}$, Yoneda's lemma states that these characteristic classes are in bijection with

$\displaystyle H^*(\mathrm{Gr}_n(\mathbb{R}^{\infty})).$
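For instance, take ${n=1}$: the infinite Grassmannian of lines is ${\mathrm{Gr}_1(\mathbb{R}^{\infty}) = \mathbb{RP}^{\infty}}$, and with ${\mathbb{Z}/2}$ coefficients its cohomology is a polynomial ring,

$\displaystyle H^*(\mathbb{RP}^{\infty}; \mathbb{Z}/2) \simeq \mathbb{Z}/2[w_1], \quad \deg w_1 = 1,$

so every mod 2 characteristic class of real line bundles is a polynomial in a single degree-one class, the first Stiefel-Whitney class ${w_1}$.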
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 77, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9565702080726624, "perplexity": 108.5429774735214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371896913.98/warc/CC-MAIN-20200410110538-20200410141038-00502.warc.gz"}
https://www.physicsforums.com/threads/conservation-of-energy-and-electric-potential.650441/
# Conservation of Energy and Electric Potential

• Thread starter fornax
• Start date

• #1 fornax

## Homework Statement

A very long, thin straight line of charge has a constant charge density of 2.0 pC/cm. An electron is initially 1.0 cm from the line and moving away with a speed of 1000 km/s. How far does the electron go before it comes back?

## Homework Equations

ΔU = ΔV*q
ΔKE + ΔU = 0
ΔV = -2Kλ ln(rb/ra)
W = 2Kλq ln(rb/ra)
W = -ΔU

## The Attempt at a Solution

First off: 2.0 pC/cm = 200 pC/m, and 1000 km/s = 1x10^6 m/s.

KEa + Ua = KEb + Ub
1/2 mv^2 + Ua = 0 + Ub
1/2 mv^2 = Ub - Ua
1/2 * (9.1x10^-31)*(1x10^6)^2 = -ΔU
4.55x10^-19 = -ΔU

W = -ΔU
4.55x10^-19 = 2Kλq ln(rb/ra)
4.55x10^-19 = 2K(200x10^-4)(1.6x10^-19) ln(rb/.01)
7.899x10^-9 = ln(rb) - ln .01
ln(rb) = 4.605
rb = 100 m

Well, that is the process that I followed, and I feel like I did ok, except that the majority of the rest of my class got different answers. They all got .66 m, which seems more realistic; 100 m just seems too far. I have another question where, again, I seem to get a different value than everyone else. I think I may be screwing up the conservation part in the beginning, but I have no idea where I'm going wrong. I asked my teacher, and he said to use ΔU = ΔV*q, but other than considering ΔV = -2Kλ ln(rb/ra) and W = 2Kλq ln(rb/ra), I don't know where I would use it. Any help would be greatly appreciated.

## Answers and Replies

• #2 ehild, Homework Helper

You might have made a mistake converting pC to coulombs. 1 pC = 10^-12 C.

ehild

• #3 Simon Bridge, Homework Helper

I find it helps to do the whole algebra before putting numbers into the equations. I get:

##\frac{1}{2}mv^2=2ke\lambda\ln(r_0/r)##

... find r. Hmm ... at first glance - I think you have rb and ra swapped over in the log.

• #4 fornax

Well, using the observation that you made, ehild, it did turn out that was a big chunk of my error. Replacing all of the 10^-4 with 10^-12 did give me a lower result, but not the one I was hoping for: I got .022 m. I'm not sure what to make of it; it seems more reasonable than 100 m, but is now relatively short. I know that when working with particles with such small mass it can be extremely hard to predict your results, and this is why I am unsure of it. I know that I shouldn't base my expectations for my answer off of my classmates' work, but when the majority get the same answer it's hard to ignore. As far as switching the ra & rb, I did give it a go, and I ended up with 220.3 m. Also, going off the integration: 2Kλq ∫ from a to b of (1/r) dr, where ∫ (1/r) dr = ln r evaluated from a to b, and ln b - ln a = ln(b/a). I do appreciate the help though, and if I goofed this somehow, let me know. The details always get me, and it's the details that count :/ I'll post my other problem and see if you guys see any mistakes there; if you do, maybe it will reveal the flaw in my thinking.

• #5 aralbrec

If you find the work = q ∫ E dr, the integration is from ro to r, which leads to ln(r/ro). The steps taken look right to me.

• #6 Simon Bridge, Homework Helper

Like I said: let's take it formally and do all the algebra first. Double-check the conversions and list all the values used. Starting with:

$$mv^2+4ke\lambda\ln\left( \frac{r_i}{r_0} \right )=4ke\lambda\ln\left( \frac{r_f}{r_0} \right )$$

...
where ##r_0## is the radius of some reference potential - I get:

$$mv^2=4ke\lambda\ln\left( \frac{r_f}{r_0} \right )-4ke\lambda\ln\left( \frac{r_i}{r_0} \right )=4ke\lambda\ln\left( \frac{r_f}{r_0}\frac{r_0}{r_i} \right ) = 4ke\lambda\ln\left ( \frac{r_f}{r_i} \right )$$

... (ahh right!) and solve for ##r_f##:

$$r_f=r_i\exp\left [ \frac{mv^2}{4ke\lambda} \right ]$$

... using the following values:

##\lambda = 2.0\text{pC/cm} = 2.0\times 10^{-12}\text{C/cm}=2.0\times 10^{-10}\text{C/m}##
##r_i=1.0\text{cm}##
##v=1000\text{km/s} = 1000000\text{m/s}##
##e=1.60\times 10^{-19}\text{C}=1.60\times 10^{-7}\text{pC}##
##m=9.11\times 10^{-31}\text{kg}##
##k=8.99\times 10^9\text{Nm$^2$/C$^2$}##

I am getting: ##r_f=2.2\text{cm}## (too!) ... If the rest of the class is correct, then both of us have missed a factor of 30 someplace. We need some other way to evaluate this to check.

• #7 fornax

I have to say I like how neat your method is; without even putting numbers in, you have a nice neat formula to follow. As far as that factor of 30, I suppose there is a possibility that they are wrong? Our professor never really confirmed it was correct, so there is potential for that situation. This is like a take-home quiz, so we like to at least compare answers, so there is also the possibility that people are just copying each other. I think I'll keep mulling it over and maybe ask my professor about it; at least I'll probably get partial credit if it is wrong. I really appreciate your help with this! It's nice to know I have somewhere to go when I really get stumped or confused.

• #8 Simon Bridge, Homework Helper

This is why I keep banging on about doing the algebra first. See how easy it is to follow what I'm doing? Of course that means it is easier for people to see I've made a mistake ... but that's why we are here, right? But nobody ever listens :(

BTW: notice how aralbrec challenged my earlier result? He used a quick reality check to show what I got was inconsistent with the physical situation.
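For what it's worth, a quick numeric check of ##r_f=r_i\exp\left [ \frac{mv^2}{4ke\lambda} \right ]## (a throwaway script using the same standard constants as above):

```python
import math

m, v = 9.11e-31, 1.0e6          # electron mass (kg), speed (m/s)
k, e = 8.99e9, 1.602e-19        # Coulomb constant, elementary charge
lam, r_i = 2.0e-10, 0.01        # line charge density (C/m), start distance (m)

r_f = r_i * math.exp(m * v**2 / (4 * k * e * lam))
print(f"r_f = {100 * r_f:.2f} cm")   # prints roughly 2.2 cm, matching the thread
```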
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7319682836532593, "perplexity": 1570.7503289758092}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046156141.29/warc/CC-MAIN-20210805161906-20210805191906-00260.warc.gz"}
http://physics.stackexchange.com/questions/53332/relativistic-equivalent-of-a-spring-force?answertab=oldest
# Relativistic equivalent of a spring-force?

Usually what helps me understand a concept better in physics is to write a simulation of it. I've got to the point where I'm competent in the basics of special relativity, but I can't figure out how to write a relativistic simulation! My normal approach in Newtonian mechanics is just to attach two objects together with a Hookean spring force, maybe add a damping effect, and then just use the calculated force to get acceleration, and numerically integrate over that.

The first issue is length contraction. The issue isn't as easy as a contraction of $1/\gamma$. If I have two point events simultaneous in some reference frame, corresponding to locations of masses connected by a spring, I do know their spacelike separation, which is an invariant. However, even though the proper length is invariant, if I change reference frames, the events are no longer simultaneous, so I can't really get a consistent definition from this...

I figured that I could apply the spring force instantaneously, "faster than light", and show how doing that leads to a violation of causality, but I can't even define force consistently!

I understand that if I use a wave equation or some sort of electromagnetic force, then I can have a force field that transforms correctly, but I really don't want to do this, because I'm not great at electromagnetism, and this is really just for me to better my understanding of special relativity. Plus it would be difficult computationally. I haven't been able to make any headway on assuming that the masses are connected by a chain of springs whose velocities relative to each other are $\ll c$, but I think the solution may lie there.

• (link) – Ben Crowell Apr 9 '13 at 16:17
• The review mentioned is a very good exploration of the limitations that a rigid body faces in STR. The OP has to set up his simulation such that at any point the tension doesn't exceed the gradient of energy density in the material. Also, the results seem to depend on the material parameters from this. Any progress till now @Neurofuzzy? – Debanjan Basu Apr 29 '13 at 12:44
• @DebanjanBasu I didn't make any progress for a while, so I put it on the back-burner. I'm studying tensors, simultaneously with other physics things, and I'm going to attempt it again once I finish the special relativity chapter in my mechanics book (Goldstein). But it's definitely not a concept I'll be able to forget about :) – NeuroFuzzy Apr 29 '13 at 20:43

-

My suggestion is that if you're trying to model Special Relativity using anything but the equations provided, you are asking for trouble. Special Relativity should be handled with equations first, so that you don't confuse yourself trying to wrap your head around the implications. Also, modeling a force transfer as faster than light breaks the laws of what you're trying to model.

• I'm doing self-study, so "what equations are provided" isn't fixed. If it's more advanced (such as, if really the only way to simulate it is to use a wave equation) then I'll work towards modeling that. – NeuroFuzzy Feb 7 '13 at 23:48
• Start with special relativity definitions for momentum, time, and length and go from there. The important part is to interpret after calculating. – Eric_ Feb 7 '13 at 23:51

-

Your question was interesting to me, but you hadn't shown any effort to solve the problem yourself. Which might be because you couldn't even begin. I don't have enough karma to comment on the question directly. Which is why I am posting this as an answer.
Please don't vote on this at all, thus!

1. For a short explanation of how the force relation works in STR, look at page 41 of this document.

2. Also, as Eric suggests, the form of the spring force does not generalize uniquely to a relativistic force field, which is perhaps why it does not have a unique satisfying answer. You could try to classify the set of potentials which reduce to the spring force potential as $c \gg 1$. This is why I was intrigued by the question. Do expect an edit from me in the future with the results of my feeble attempts at a solution to this.

3. Although the spring force might not be a good potential to look at to gain intuition in STR, it might be an interesting problem in its own right. However, if you still want, you can try to simulate two charged particles in the relativistic EM potential of each other. It would be rather fun to observe, and you might test out the Lienard-Wiechert potential. Or, sigh, you can play this free game. My suggestion - do both.

4. And do post a link to your simulation here when it is done. Would love to see your work on it!

EDIT1: This paper takes on solving a relativistic Harmonic Potential, for those who are interested.

• The OP tried to write a computer simulation, and gives a clearly worded explanation of the issues that arose in attempting to do so. How you get from there to "you hadn't shown any effort to solve the problem yourself" is an absolute mystery to me. – Nathaniel Feb 8 '13 at 15:01
• @Nathaniel: Yeah .. I notice that now! It was my mistake to gloss over the material, not the OP's. – Debanjan Basu Feb 8 '13 at 16:34
• Thank you! This was very helpful. In trying some of the practice problems on the first link, I realized I don't have as firm a grasp on force as I thought I did... And the last section of that document is telling! Maybe I won't be able to write a computationally simple sim after all! Anyways, I'll definitely post a link, as an answer maybe, once I figure out what I'm doing. It sounds like I'll have to do particles in a field; I've never numerically done both at the same time though. – NeuroFuzzy Feb 9 '13 at 10:15

-

If you know about Lagrangian and Hamiltonian formalisms, you might first try to find the equations of motion. This is done in the paper Relativistic harmonic oscillator. In a nutshell, what is done is the following: a "relativistic" Hamiltonian (for slow particles) is set up (we set c = 1):

$$H = \sqrt{p^2+m_0^2} + \frac{1}{2}k x^2$$

Then the evolution of a particle will be given by the Poisson brackets:

$$\{\cdot,H\} = \frac{p}{\sqrt{p^2+m_0^2}}\partial_x - kx\partial_p$$

then

$$\dot{x} =\{x,H\} =\frac{p}{\sqrt{p^2+m_0^2}}$$

and

$$\dot{p}= \{p,H\} = -kx$$

and then use an integration algorithm (I personally like the velocity Verlet algorithm) to get your evolution. An alternative approach can be seen in Relativistic (an)harmonic oscillator.

• This only works for a fixed base point though. Is there any spring force which works for a pair of point masses? – Mikola May 12 '14 at 19:35
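A minimal integration sketch for these equations of motion (a symplectic kick-drift-kick leapfrog rather than velocity Verlet proper, since the drift step uses $\dot{x} = p/\sqrt{p^2+m_0^2}$; all parameter values are arbitrary):

```python
import math

# Leapfrog for H = sqrt(p^2 + m0^2) + (1/2) k x^2 (with c = 1).
m0, kspring, dt = 1.0, 1.0, 1e-3
x, p = 1.0, 0.0                      # start at rest, stretched by 1

for _ in range(100_000):
    p -= 0.5 * dt * kspring * x                  # half kick: dp/dt = -k x
    x += dt * p / math.sqrt(p * p + m0 * m0)     # drift: dx/dt = dH/dp
    p -= 0.5 * dt * kspring * x                  # half kick

energy = math.sqrt(p * p + m0 * m0) + 0.5 * kspring * x * x
print(x, p, energy)   # energy should stay close to its initial value of 1.5
```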
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8361072540283203, "perplexity": 368.3172850898934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246641393.36/warc/CC-MAIN-20150417045721-00200-ip-10-235-10-82.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/3633/lining-up-edges-of-a-tree-with-tikz
Lining up edges of a tree with TikZ

I'm drawing trees with TikZ, and

\begin{tikzpicture}
\node {$C$}
child[missing]
child{node {$i$}
child[missing]
child{node {$j$}}
};
\end{tikzpicture}

produces edges that don't line up as I want them to. If I were to draw circles around the nodes, an edge goes from the bottom center of the parent's circle to the top center of the child's, making the arm appear jagged. I've tried moving the anchors around, but that wasn't it. What I want seems to occur automatically in the examples in the pgf manual (see the final tree example on page 192). What am I not doing?

-

Not quite sure what's going on here. Have you altered the default parent and child anchors? Here are 3 examples:

\begin{tikzpicture}[parent anchor=south,child anchor=north,every node/.style={circle,draw}]
\node {$C1$}
child[missing]
child{node {$i$}
child[missing]
child{node {$j$}}
};
\end{tikzpicture}

\begin{tikzpicture}[parent anchor=center,child anchor=center,every node/.style={circle,draw}]
\node {$C2$}
child[missing]
child{node {$i$}
child[missing]
child{node {$j$}}
};
\end{tikzpicture}

\begin{tikzpicture}[every node/.style={circle,draw}]
\node {$C3$}
child[missing]
child{node {$i$}
child[missing]
child{node {$j$}}
};
\end{tikzpicture}

which produce the three trees shown in the (omitted) image. Now, your example (my C3) appears to be the best; is it C1 you're looking for?

• I wanted C3, but was getting C1 with the same code. For whatever reason, deleting \usetikzlibrary{tikz-qtrees} (which I don't need -- it was left from an earlier tree-drawing experiment) solves the problem. I assume it changes the anchors. – hoyland Oct 2 '10 at 0:49
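If removing the library isn't an option, explicitly restoring the default anchors should recover the C3 behaviour (a sketch, assuming the conflict comes from a loaded package overriding parent anchor/child anchor, whose initial value is border):

\begin{tikzpicture}[parent anchor=border, child anchor=border]
\node {$C$}
child[missing]
child{node {$i$}
child[missing]
child{node {$j$}}
};
\end{tikzpicture}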
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7632741332054138, "perplexity": 3408.577857810861}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119645920.6/warc/CC-MAIN-20141024030045-00106-ip-10-16-133-185.ec2.internal.warc.gz"}
https://sccn.ucsd.edu/wiki/EEGLAB/RERP
# EEGLAB/RERP

rERP is an open source Matlab toolbox for calculating overlapping Event Related Potentials (ERP) by multiple regression (an alternative to averaging).

Type

>> eeglab rebuild

to load the toolbox. You should see the 'rERP' menu under Tools once you have loaded a dataset.

# References

[1] M. D. Burns, N. Bigdely-Shamlo, N. J. Smith, K. Kreutz-Delgado, and S. Makeig, "Comparison of Averaging and Regression Techniques for Estimating Event Related Potentials," presented at the IEEE Engineering in Medicine and Biology Conference, 2013.

[2] N. J. Smith and M. Kutas, "Regression-based estimation of ERP waveforms: II. Non-linear effects, overlap correction, and practical considerations."

[3] N. J. Smith, "Scaling up psycholinguistics," 2011.

[4] N. Bigdely-Shamlo, K. Kreutz-Delgado, M. Miyakoshi, M. Westerfield, T. Bel-Bahar, C. Kothe, et al., "Hierarchical Event Descriptor (HED) Tags for Analysis of Event-Related EEG Studies," IEEE GlobalSIP, Austin, TX, 2013.

# Quick Start

Load a continuous (un-epoched) dataset or study in EEGLAB and select Tools > rERP > Run analysis. This will bring up the rERP setup GUI. If you haven't used this toolbox before, the "Select profiles:" list should be empty. Select one of the datasets from the list and click "Make profile for dataset" at the bottom. Once that completes, click "Edit profile" at the bottom right. This will bring you to the profile GUI, where you specify the regression settings.

If you have not computed ICA on that dataset, hit the "Switch to channels" button; by default, the toolbox operates on ICs. If you have less than 8 GB of memory available, change the artifact function name from "rerp_reject_samples_robcov" to "rerp_reject_samples_probability". rerp_reject_samples_robcov is more powerful, but more expensive.

Under "Included event types" you will see a comprehensive list of all event codes extracted from the EEG struct. By default, all event types are included as variables in the regression model. Be sure to remove any redundant or superfluous event types by selecting them and pressing the "Remove >>" button.

Press the "Run" button in the bottom right corner. If you have the Parallel Computing Toolbox, be sure to start a pool: the program will parallelize over the individual time series. When it finishes, go back to the EEGLAB GUI and select Tools > rERP > Plot results. This will bring up the plotting GUI.

Under the "Results" heading, you see the result of the analysis you just ran. Under the "Plot types" heading, you see a list of possible plots to perform for that result; select one of them. A more detailed description of the different plots is available below. Under the "Components (R-Squared)" or "Channels (R-Squared)" heading, you will see a list of channels or ICs with their average R-squared value in parentheses. You can choose multiple time series at once and it will plot them all. Under the "Event types" heading, you will see a list of event types for that result. You can choose multiple event types at once and it will plot them all. Select "Exclude insignificant" to plot only the ERP estimates which resulted in a statistically significant R-squared. Hit the plot button and a new plot window will appear. You can select another result and hit plot again to plot multiple datasets on the same axes. You will have to hit "Clear figure" if you want to plot different time series or event types, as each plot is simply overlaid on the previous one.
Right click any graph to save it as an image.

# Description

Traditionally, the way of calculating an ERP estimate is to average epochs time-locked to a stimulus class of interest. This technique places severe restrictions on the experimental protocol: only a small number of stimulus categories can be used, stimulus events must be well separated in time, and all other cognitive processes must be held constant. Violating the latter conditions will cause the ERP to be estimated sub-optimally.

[1] demonstrated that a multiple regression model is capable of explaining more experimental data variance than averaging for overlapping ERPs. By adopting a simple regression model over averaging, we make an additional assumption: that the observed EEG signal is a linear combination of ERPs added to white Gaussian noise. Under this new assumption, ERPs combine additively when they overlap in time. This is also known as a Linear Time Invariant (LTI) model. We can illustrate the concept graphically as a convolution of impulsive inputs with the characteristic brain response (ERP).

We can rewrite the convolution as a sum of matrix-vector products. Here, A_i is the matrix which predicts the occurrence of event type i and beta_i is the ERP vector for event type i. We stack the ERP variables into a single vector beta and consolidate the predictors for the different event types into a single predictor matrix, A. Now, y = A*beta, and we can solve for beta using our favorite technique, such as penalized least squares.

The rERP toolbox includes:

• Routines for constructing the sparse predictor matrix A and solving for beta
• GUI interface to EEGLAB
• Time-frequency analysis (rERSP) analogous to ERSP
• HED tag (hierarchical regression) support
• Automatic artifact rejection using cleanrawdata or probability
• Convex regularization (LASSO, Ridge, ElasticNet) with grid search
• Tools for plotting and publishing results
• Statistical significance derived from R-squared

# GUI

## Setup GUI

You can start the setup GUI from EEGLAB as described in the "Quick Start" section above, or enter at the command line:

>> rerp_result_study = pop_rerp_study('eeg', EEG);

or

>> rerp_result_study = pop_rerp_study('study', STUDY);

You have datasets on the left and profiles on the right. All datasets you select will be analyzed with all profiles you select; selecting 3 datasets and 2 profiles will yield 6 results total. See the "Toolbox Structure and Scripting" section for more information on how profiles are used in the toolbox. The GUI controls are: Select datasets to analyze, Make profile for dataset, Select profiles, Edit profile, Cancel/Run, and Make profile from dataset.

## Profile GUI

From the top:

Auto-save results: the toolbox will save results to disk automatically at the path to the right. You can change the path manually or by pressing "Browse path".

rERSP: Do a log-power time-frequency decomposition on the data, then perform linear regression on each frequency-bin time-course. "Number of bins" defines the frequency resolution of the transform. Analogous to ERSP, but applies regression (instead of averaging) to time-frequency decomposed data.

Include/Exclude components/channels: Select which time series are to be analyzed. You can change the selection method by pressing the toggle buttons to the right.

Category/Continuous epoch boundaries: Choose the epoch window boundary, relative to event onset. See the Description section above for information about continuous variables.
Artifact rejection: If you have already removed artifact data samples by some other means, deselect this. Otherwise leave it checked: the toolbox will automatically ensure that artifact frames have been identified and excluded from regression. Once a profile has been updated with artifact indexes, it will display the number of frames identified.

Artifact function: the function used to identify artifact indexes. It should follow the prototype artifact_indexes = artifact_function(EEG).

Include/Exclude event types: The event types in the "Include" list will be variables in the regression. This applies even when doing hierarchical regression with HED tags: only events of the included types are considered. While selecting event types, you can hit enter, which will bring up a window for entering descriptions of the event types. This will be useful later for plotting.

Hierarchical regression: If the EEG.event(:).hedTag field has been populated with HED strings, you have the option of using HED tags as regression variables in lieu of event types. See the "About HED tags" section for more information about what HED tags are and when you should consider using them.

Enforce HED specification: If selected, the toolbox will throw an error if the HED tags do not conform to the HEDIT project specification. Useful for comparing cross-study results.

Display HED hierarchy: show the structure of the HED hierarchy for this profile.

Include HED tags: The hierarchical version of "Include event types". All tags in this list are treated as variables in the regression. Some of these tags will have markings, such as stimulus/instruction in the above screenshot: *stimulus/instruction* indicates that the tag has been excluded from regression (because it is redundant with stimulus/instruction/fixate). Similarly, tags enclosed in { } are affected by separator tags, and tags enclosed in [ ] are combined into a single continuous variable.

Exclude HED tags: tags in this list are excluded from regression.

Include/Exclude: click to add selected tags to the respective list.

We display a "parameter count" as a quick check on how many variables the model is generating. It is quite possible to generate a highly collinear model, which overtaxes the data with a relatively low parameter count [2]. Future versions may include Variance Inflation Factor analysis to give a better idea of how hard we are pushing the data.

Regularization (recommended): Enable penalized regression, which prevents overfitting of the data. The toolbox can use variants of the L1 and L2 norm penalty functions. It should be noted that these penalized estimators are biased, which could affect inferential results based on those estimates.

Grid search (recommended): Find the regularization constants $\lambda_i$ by testing a grid of values. The constant which yields the highest R-squared (i.e., best prediction of the data under cross-validation) is selected.

Lambda: If "Grid search" is disabled, we can enter a specific lambda to use. Always enter two values, even if only using one of them.

Penalty function: You probably want to use the L2 norm. This is the fastest method, and it has not been shown that the L1 norm and ElasticNet variants are any better for ERP regression purposes. L1 and ElasticNet are considered experimental.

Load profile/Save profile: This is a time saver. If you want to try many analyses on a single dataset to see what works, you can hit "Force recompute artifact frames", then save the variants of the profile that you want to run.
Set default profile: When generating new profiles, these settings will be transferred.

Cancel/OK: Cancel will discard all changes to the profile since you opened the profile GUI. OK keeps the changes.

Number of grid zoom levels: We can do multiple levels of grid search, zooming in on the local optima that we find. Two is a conservative number; one is just a normal “flat” grid search.

Number of grid points: After the first level search, how many points to generate around the local optimal value in deeper levels.

Number of cross-validation folds: Number of cross-validation folds to use when computing R-squared. Used in grid search and final results regardless of whether regularization is enabled.

First phase lambda: The exact lambda values to use for the first grid search. This determines where the grid search will look for lambda in subsequent levels as well. The default is set to a wide range of positive, exponentially increasing lambda values.

ElasticNet quick zoom (experimental): There is preliminary evidence that optimal ElasticNet lambda (combined L1 and L2 norm) R-squared values are achieved around the optimal separate L1 and L2 norm lambdas. This finds the optimal L1 and L2 lambda values first, then does the full grid search around those values ($O(n)$ vs. $O(n^2)$ grid points).

Save all grid search results: If you want to explore the predictive surface of the grid search, this will save every result that the grid search generates.

## Result GUI

Results: List of results to be plotted. Select one result at a time to be plotted. In future releases, multiselect will be enabled to combine multiple results for statistics.

Plot type: The list of available plots will change depending on which result is selected. The possible plots are

• Rerp by event type: Make a single plot for each event type/HED tag selected. If multiple time series are selected, they are plotted on the same axis.
• Rerp by channel/component: Make a single plot for each time series selected. If multiple event types/HED tags are selected, they are plotted on the same axis.
• R2 total: Plot R-squared (coefficient of determination) for each time series taking into account all event types/HED tags. This is essentially the percentage of total data variance predicted by a linear model.
• R2 by event type: Plot R-squared (coefficient of determination) for each time series taking into account each event type/HED tag separately. This is essentially the percentage of total data variance accounted for by a linear model with only that event type/HED tag.
• Rerp image: Plot original data epochs, modeled data epochs and difference (noise) epochs next to each other.
• Rersp: Analogous to ERSP, but applies linear regression (instead of averaging) to time-frequency decomposed data (dB power).
• Grid search: View the predictive surface for the grid search. Shows how performance changes by varying lambda. For a multilevel grid search, this produces as many graphs as there are levels.

Rerp image boundary: When the Rerp image plot type is selected, this determines the epoch window.

p < : For plots which identify statistical significance (i.e. Rerp by event type, Rerp by channel/component), determines the p-value threshold for significance.

Channels/Components (R2): List of time series that were analyzed. It is possible to plot multiple time series at once.

Sort by R2: Sort the Channels/Components list by total R2. Shows the components which had more variance accounted for at the top.

HED tags/Event types: List of variables that were included in the regression model.
It is possible to plot multiple tags/types at once.

Exclude insignificant: Only plot waveforms which had statistically significant R2.

Locking/Sorting variable: For Rerp image. Two variables must be selected, locking and sorting. Use this toggle button to cycle between selecting each variable.

Load results/Save result as: Load multiple results from other locations. By default, the GUI loads all results in the rERP/results folder. Resave a particular result to another location.

Display profile: Display the profile used to compute the currently selected result.

Clear figure: Clear the plotting window of all plots.

Plot: Plot results based on the currently selected options in the GUI. You can plot the results of multiple datasets on top of each other by hitting plot in between changing the results (results must be compatible for this to work properly, i.e. be from the same profile). Combined statistics from multiple results will be available in future releases by selecting multiple results; currently only one result may be plotted at a time.

# Toolbox Structure and Scripting

The best, most up to date reference for specific functions and classes can be found using the Matlab documentation system (e.g. doc RerpProfile). Simple examples of scripting the toolbox can be found in rerp_example.m.

The basic premise is to consolidate all information needed to specify a regression into one object: a profile (of the RerpProfile class). The only information not specified in the profile is the data. A profile is specific to the dataset it was formed from: it contains event timings and artifact indexes, as well as generic settings such as epoch boundaries and penalty functions. The profile and EEG struct (containing the data) are sent to the pop_rerp function, which performs checks on the system and ultimately calls the rerp function. The pop_rerp function, after analyzing the data according to the profile, returns a result object (of the RerpResult class). The result object contains the results of the regression, as well as the profile that was used to derive it. The RerpResult class contains an RerpProfile object, which in turn contains a (generic) settings structure. In general, it is easiest to use the GUI for modifying profiles and plotting results.

# About HED Tags

Hierarchical Event Description (HED) tags were proposed in [4] as a way to standardize EEG experimental event descriptions. One benefit of this approach is that it permits higher level analyses involving multiple studies and experiments. For regression purposes, we use hierarchical event descriptions to define hierarchical regression models. This is best demonstrated with a simple example.

The HED tag Stimulus/Expected/Target has three sub-tags: Stimulus, Stimulus/Expected and Stimulus/Expected/Target. Tagging an experimental event with Stimulus/Expected/Target is equivalent to tagging it with all three sub-tags. Say we have an experiment with two event types: event type 1 is the presentation of an image, event type 2 is the presentation of an image with a target that the subject is searching for. We tag event type 1 as Stimulus/Expected and event type 2 as Stimulus/Expected/Target. Both event types inherit the tag Stimulus/Expected, but only event type 2 is Stimulus/Expected/Target. Of course, we can also use tags to express non-hierarchical event descriptions. Tagging event type 1 as Stimulus/1 and event type 2 as Stimulus/2, we are back to using the standard event codes as our regression variables.
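To make the regression formulation from the Description section and this sub-tag expansion concrete, here is a minimal sketch in Python/NumPy. It is purely illustrative: the toolbox itself is written in Matlab, and none of the names below (expand_hed, build_predictor, ridge) belong to its actual API.

```python
import numpy as np
from scipy.sparse import lil_matrix

def expand_hed(tag):
    # 'Stimulus/Expected/Target' ->
    # ['Stimulus', 'Stimulus/Expected', 'Stimulus/Expected/Target']
    parts = tag.split('/')
    return ['/'.join(parts[:i + 1]) for i in range(len(parts))]

def build_predictor(onsets_by_var, n_samples, epoch_len):
    # One Toeplitz-style block per variable: A[t, v*L + k] counts events of
    # variable v that occurred at sample t - k. Overlapping events simply
    # sum, which is exactly the LTI assumption.
    A = lil_matrix((n_samples, len(onsets_by_var) * epoch_len))
    for v, onsets in enumerate(onsets_by_var):
        for t0 in onsets:
            for k in range(min(epoch_len, n_samples - t0)):
                A[t0 + k, v * epoch_len + k] += 1
    return A.tocsr()

def ridge(A, y, lam):
    # Penalized (L2) least squares: beta = (A'A + lam*I)^(-1) A'y
    AtA = (A.T @ A).toarray()
    return np.linalg.solve(AtA + lam * np.eye(AtA.shape[0]), A.T @ y)

print(expand_hed('Stimulus/Expected/Target'))
```

Regressing the recorded signal $y$ on the stacked predictor $A$ then yields one estimated waveform per sub-tag, which is exactly the additive decomposition described next.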
In the regression framework, we interpret the sub-tags as distinct variables, analogous to using event types as variables in non-hierarchical regression. In our example, we define a variable Stimulus/Expected (any time event type 1 or 2 occurs) and Stimulus/Expected/Target (any time event type 2 occurs). When we include both variables in a regression model, we are effectively modeling the ERP of event type 2 as the sum of the responses to the sub-tags. Another way to put it: event type 2 is the response to being shown any image (Stimulus/Expected) plus an additional response due to the fact that the image is a target (Stimulus/Expected/Target). We can interpret the “additional response” to the target image as a recognition event, or P300 response. We have used the overlap modeling capability of regression to decompose an ERP into different layers which combine additively to produce the overall effect.

Another benefit of hierarchical event descriptions is that each experimental event is allowed to be defined in many different ways simultaneously. Each event is tagged with a HED string (e.g. EEG.event( i ).hedTag = 'Stimulus/Expected; Stimulus/Visual; Custom/Block/1; Custom/Reaction time/2.33'), which is composed of many HED tags separated by commas or semicolons. This allows us to capture very specific information about each event, while still retaining the ability to reference variables which are common across many events. The toolbox will compile the HED strings from all experimental events into a master hierarchy. From there, the user can pick out which tags to include in the regression model.

## Special HED Tags

Separator tags: The toolbox automatically identifies tags with the | sublevel as separator tags (e.g. Custom/Block/|/1, Custom/Block/|/2). If we want to separate certain events into different categories, we can add separator tags to their HED strings. For example, say we want to divide the data into two blocks and see how the rERP estimates differ between the blocks. Perhaps the experimental conditions changed between the two blocks. We can add the tag Custom/Block/|/1 to the HED strings of all events in the first block and Custom/Block/|/2 to all events in the second block. Including Custom/Block in the regression model creates two separate variable spaces for all of the included tags, one for each block. The number of parameters to estimate effectively doubles. If there were three blocks, it would triple, and so on. Tags which have been separated in the variable space are marked with { { } } in the “Include HED tags” list, and show the separator tag in parentheses (e.g. { { Custom/Pulse (Custom/Block) } }). The separator tag itself is shown in { } (e.g. { Custom/Block }).

Continuous tags: The toolbox automatically identifies tags with the # sublevel as continuous tags (e.g. Custom/Mag/#/.22, Custom/Mag/#/.000001). We may believe that the amplitude of the evoked response is related to another continuous variable that we have access to (e.g. intensity, acceleration). If so, we can tag each event with the value of that continuous variable when the event occurs (e.g. Custom/Mag/#/2.5). This models the scaled responses within the regression framework. The relationship between the continuous variable and the amplitude of the response need not be linear. If you choose to make a tag continuous, be sure to include the #/ level in every single instance of that tag.

# Feature Backlog

- Variance Inflation Factor (VIF) analysis
- Ability to perform regression in a different basis (e.g. spline, Gaussian)
- Statistics: significance values for ERP and ERSP estimates (glm-ie integration)

# Credits

Developed and maintained by: Matthew Burns (SCCN, INC, UCSD)
Email: rerptoolbox (at) gmail (dot) com

Significant contributions from: [Nathaniel J. Smith], [Nima Bigdely-Shamlo], [Luca Pion-Tonachini], [Christian Kothe], [Scott Makeig]

Supported by a gift from The Swartz Foundation (Old Field NY).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 5, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32059425115585327, "perplexity": 2788.130508157135}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128319636.73/warc/CC-MAIN-20170622161445-20170622181445-00240.warc.gz"}
http://barnesanalytics.com/reverse-image-lookup-out-of-the-box
Reverse Image Lookup Out Of The Box

What is a reverse image search? It is where you give me an image and I find a bunch of images that look like the one that you gave me. So, I thought that I would give that a quick try. But I am out of time to train one from scratch. So I decided to cheat. I used transfer learning. Now the cool thing about transfer learning is that I am not going to train the model at all. Usually, you fine-tune the model to your domain. But I don't have time for that. So I just used an image classification algorithm out of the box. I didn't train it at all; I just used the default values for the parameters. The surprising thing is that it worked really well. However, I still sucked on Kaggle's leaderboard.

Let the cheating begin

After I got the data into Google Colab, I needed a function that would give me an embedding for the images that I would feed to it. Luckily, Keras comes with a bunch of models ready to go for transfer learning: they have VGG, ResNet, and a bunch of others. Usually the way that you deal with these models is to take the embedding and attach that to a small neural net that you train to classify the images. Today we take the embeddings as they are, and try to find the nearest neighbors in the embedding space.

```python
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input
import numpy as np
from urllib.request import urlopen
from PIL import Image

# Pretrained ResNet50 without the classification head. With the Keras version
# used here this yields a pooled 2048-dimensional feature vector; on newer
# versions you may need to pass pooling='avg' to get the same shape.
model = ResNet50(weights='imagenet', include_top=False)

def get_features(url):
    # Fall back to a black placeholder if the image can't be downloaded
    try:
        img_file = urlopen(url)
        im = Image.open(img_file)
    except:
        output = [0] * (256 * 256 * 3)
        output = np.array(output).reshape(256, 256, 3).astype('uint8')
        im = Image.fromarray(output).convert('RGB')
    im2 = im.resize((224, 224), Image.ANTIALIAS)
    x = image.img_to_array(im2)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    features = model.predict(x).reshape(1, 2048)
    return features
```

In the embedding space, which I get for free just from loading the model, if two images are close together, they should look similar and, theoretically, should have similar classes. So we're just going to grab the embeddings for a bunch of images, let's call it 10,000 to start off with. And then we'll look at the k-nearest neighbors for each image. In our case the top 10-ish. We'll actually look at 2 through 10, in case it tries to grab itself.

Okay, so let's take a look at what that actually looks like in terms of code. The first thing that we need to do is to convert a bunch of images to data using the function above, so we'll write another function that ultimately is just a wrapper around the previous function. This new function is going to look at 10,000 images at a time and convert them to a handy pandas dataframe.

```python
import pandas as pd

pd.DataFrame().to_csv('drive/resnet50_features.csv')

def convert_data(j):
    # 'urls' is the full list of image URLs, built earlier (not shown in the post)
    df = pd.read_csv('drive/resnet50_features.csv')
    X = []
    i = 0
    for url in urls[10000 * j:10000 * j + 10000]:
        if i == 1:
            df = df.drop('Unnamed: 0', axis=1)
        v = str(round(100 * i / 10000, 4))
        print('\r', 'Status: ', v, '% Complete for Group ', str(j), end='')
        X.append(get_features(url))
        i += 1
    df = pd.DataFrame(np.array(X).reshape(i, 2048))
    df.to_csv('drive/resnet50_features{0}.csv'.format(j), index=None)
    return df
```

And for speed of implementation, we'll just look at the first 10,000 images.

```python
df = convert_data(0)
```

Now, we just need a method for finding similar images.
So what we're going to do is just use plain old k-nearest neighbors to find the top 10-ish closest images to the one we're querying on. First things first, let's build the KNN index using the features that we extracted from the ResNet.

```python
from sklearn.externals import joblib  # on newer scikit-learn, `import joblib` directly
from sklearn.neighbors import KDTree

# Build a KD-tree over the 2048-dimensional feature vectors and save it
kdt = KDTree(df, leaf_size=30, metric='euclidean')
joblib.dump(kdt, 'drive/kdtree.pkl')
```

With the model trained, we are ready to try to find similar images to a query image. The following function takes one of our images in the form of a URL and displays that image. Then it finds the top 9 closest matches that it can find and displays them in a 3×3 grid of images. I then use simple voting by the most similar images to predict classes in the competition. It didn't do as well as I would hope with just 10,000 images in the dataset, so I'm increasing that number now. Anyway, here is the function.

```python
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

def display_matches(url):
    x = get_features(url)
    # Show the query image itself
    try:
        img_file = urlopen(url)
        im = Image.open(img_file)
    except:
        output = [0] * (256 * 256 * 3)
        output = np.array(output).reshape(256, 256, 3).astype('uint8')
        im = Image.fromarray(output)
    im2 = im.resize((224, 224), Image.ANTIALIAS)
    x2 = np.asarray(im2)
    plt.imshow(x2)
    plt.grid(False)
    plt.show()
    # Query the 10 nearest neighbours; skip the first in case it is the query itself
    neighbors = kdt.query(x, k=10, return_distance=False)
    print(neighbors[0][1:])
    matches = []
    for neighbor in neighbors[0][1:]:
        try:
            img_file = urlopen(urls[neighbor])
            im = Image.open(img_file)
        except:
            output = [0] * (224 * 224 * 3)
            output = np.array(output).reshape(224, 224, 3).astype('uint8')
            im = Image.fromarray(output)
        im2 = im.resize((224, 224), Image.ANTIALIAS)
        matches.append(np.asarray(im2))
    # Tile the 9 matches into a 3x3 image grid and display it
    matches = np.array(matches).reshape(3, 3, 224, 224, 3).transpose(0, 2, 1, 3, 4).reshape(3 * 224, 3 * 224, 3)
    plt.imshow(matches)
    plt.grid(False)
    plt.show()
    return None
```

All in all, even at just 10,000 training images, with default weights for the ResNet, the model seems to do a good job. For example, here is one image that I queried, and what I got back as my top 9 predicted matches. That looks awesome! My top 3 matches even have wood hangers! That isn't so bad! Okay, let's take a look at another one, and what the algorithm gave back as my top 9 matches. Not bad: I got a bunch of flannel back, but I have a mixture of men's and women's clothes, which will probably screw up my predictions. Also, looking at random query results, I found a couple of examples that were just terrible. For one query I would have expected to see some soccer jerseys, or just sports apparel, but what I got was women wearing dresses?! I really need more data to feed into this thing. Fortunately, this is the exception rather than the rule. Generally, it seems to be doing a good job, but it can obviously improve.
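By the way, the "simple voting" step mentioned above is only a couple of lines. Here is a sketch of it; `labels` is a hypothetical array of class labels aligned with `urls`, which the post never shows.

```python
from collections import Counter

def predict_class(url, labels):
    # labels[n] is the class of the n-th indexed image (aligned with urls)
    x = get_features(url)
    neighbors = kdt.query(x, k=10, return_distance=False)[0][1:]  # skip self
    votes = Counter(labels[n] for n in neighbors)
    return votes.most_common(1)[0][0]  # majority class among the 9 matches
```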
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30528268218040466, "perplexity": 1217.2228355319114}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256858.44/warc/CC-MAIN-20190522143218-20190522165218-00203.warc.gz"}
https://blog.aurorasolar.com/hidden-factors-that-affect-solar-savings-part-2/
# The Hidden Factors that Affect Solar Savings: Part 2, the End of the Billing Cycle

### Gwen Brown

Gwen Brown is the Senior Content Marketer at Aurora Solar, managing the development of educational solar resources like blog posts and webinars. Previously, she was a Senior Research Associate at the Environmental Law Institute. She graduated Phi Beta Kappa from Gettysburg College.

There are many variations in the ways that utilities bill solar customers with net metering, as we discussed in our earlier article on the differences between monthly and yearly billing. One factor that can have a big impact on solar savings is the month when your customer's billing cycle ends.

Figure 1: Aurora's financial analysis features allow users to model monthly energy bills based on billing cycles with different end months.

#### When the Billing Cycle Ends

Solar customers with net metering accumulate bill credits when their installations produce more energy than they can use. These credits can be used to reduce the customer's electricity bill in other months when they don't produce enough energy to offset their energy consumption. It's common for utilities to only allow customers to roll over their credits for one year (though there are some exceptions, which we'll discuss in a future installment of this series). The end of the billing cycle is the month that the billing year ends, when excess credits stop rolling over. At this point, customers who have excess credits are typically compensated for them at the wholesale rate of electricity, which is much lower than the retail rate at which they are compensated during the regular billing cycle. Because solar customers will have different amounts of excess credit at different times of the year, the month when the billing cycle ends can have a big impact on savings.

#### Case Study

To understand the impact of ending the billing cycle in different months, let's look at a solar customer's bills. We used Aurora's solar design software to accurately model the energy consumption, solar production, and pre- and post-solar utility bills of a house in the San Francisco Bay Area of California. Figure 2 shows a customer's pre- and post-solar bills if their billing cycle ends in December. Table 1 shows the amounts of the customer's monthly bills, energy consumption and production, and solar savings. In this example, the customer's total annual savings from solar are $2,730 (as shown in Table 1).

Figure 2: Pre- and post-solar bills if the customer's billing cycle ends in December, modeled in Aurora based on PG&E Rate: E-1, Baseline Region P.

Table 1: Pre- and post-solar energy consumption, production, and utility bills if the customer's billing cycle ends in December.

By the time the end of the billing cycle arrives in December, this customer has used up almost all of the credits they accrued in the months that their solar system produced more energy than they consumed (April through September). This is ideal, because most of the value of these excess credits is lost at the end of the billing cycle, since the customer is not paid for them at a retail rate. But what if the billing cycle ends when the customer has a lot of bill credits remaining? Figure 3 and Table 2 show what the customer's bills would look like if the billing cycle ended in September, after the customer has accrued a lot of excess production credits over the summer.
Figure 3: Customer's pre- and post-solar bills if their billing cycle ends in September.

Table 2: Pre- and post-solar energy consumption, production, and utility bills if the customer's billing cycle ends in September.

If the billing cycle ends in September, the customer's savings from solar will be significantly lower: $2,402, compared to an annual savings of $2,730 if the billing cycle ends in December. The customer saves 12% less per year, a loss of $328 annually (a simple model illustrating this rollover effect is sketched after the key takeaways below). As this example illustrates, it is advantageous if the billing cycle ends after a period when solar energy production is low (e.g., winter months) so that excess credits can be used up.

It is ideal for solar customers if the billing cycle ends after a period when solar energy production is low.

Different utilities end the billing cycle in different months. For example, for customers of Rocky Mountain Power in Utah the billing cycle ends in March, and for customers of Duke Energy in North Carolina and South Carolina it ends in June. Others, like Long Island Power Authority in New York and the major California utilities, end the billing cycle at the one-year anniversary of when a customer installed solar. Where this is the case, customers considering solar should think carefully about the timing of the installation. It is also important to note that some utilities allow customers to make a one-time change to the month when their billing cycle ends. These utilities include National Grid Generation in New York, Public Service Electric & Gas Company in New Jersey, and Long Island Power Authority. In these cases, customers should carefully assess their monthly consumption and production patterns to determine the end of billing cycle that will be most beneficial.

Some utilities end the billing cycle at the one-year anniversary of when a customer installed solar; others allow customers to make a one-time change to the month when their billing cycle ends.

The end-of-billing-cycle month that is most financially advantageous for a particular customer will depend on the monthly variations in their energy consumption and the energy production of the solar installation. (Both of these factors can be analyzed in Aurora, using our consumption profile tool and NREL-validated performance simulation engine. Aurora's financial analysis features allow you to model the impact of different scenarios on a project's finances, as we have done here.) The end of the billing cycle can have a noticeable impact on the savings a customer obtains from solar. This must be taken into account to accurately estimate the financial return a solar project will provide. The local utility's policies regarding changes to the billing cycle end date should also be carefully explored.

#### Key Takeaways

• There is a lot of variation in how net metered solar customers are billed across different utilities. It is important to understand these differences because they affect the financial return from solar.
• One key variation that can have a significant impact on savings from solar is the month in which the billing cycle ends, which determines when credits from excess energy production stop rolling over.
• Customers will see greater savings if the billing cycle ends at a time when they have less excess credit built up, because at the end of the billing cycle customers are typically compensated for excess production at a below-retail rate.
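To see how the end-of-cycle month drives this difference, here is a toy model of credit rollover in Python. The rates and monthly kWh figures are invented for illustration and are not taken from the Aurora analysis above; only the mechanism is the same: surplus credits roll over at retail value, and whatever is left at the cycle end is cashed out at wholesale.

```python
RETAIL, WHOLESALE = 0.25, 0.04  # $/kWh; assumed rates for illustration only

def annual_cost(net_kwh, end_month):
    """Annual electricity cost; net_kwh[m] is production minus consumption
    for calendar month m (0 = January). The billing year starts the month
    after end_month."""
    months = [(end_month + i) % 12 for i in range(1, 13)]
    credit, cost = 0.0, 0.0
    for m in months:
        balance = credit + net_kwh[m]
        if balance >= 0:
            credit = balance            # surplus rolls over at retail value
        else:
            cost += -balance * RETAIL   # credits used first, remainder billed
            credit = 0.0
    return cost - credit * WHOLESALE    # leftover credit cashed out at wholesale

# Hypothetical profile: summer surplus, winter deficit (kWh)
net = [-300, -250, -100, 50, 200, 300, 350, 300, 150, -50, -200, -300]
print(annual_cost(net, end_month=11))  # cycle ends in December
print(annual_cost(net, end_month=8))   # cycle ends in September
```

With these made-up numbers the December cycle end comes out cheaper, because the October through December deficits are covered by retail-valued summer credits, whereas a September cycle end cashes those credits out at the far lower wholesale rate first.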
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2864503562450409, "perplexity": 2413.836180557789}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107917390.91/warc/CC-MAIN-20201031092246-20201031122246-00144.warc.gz"}
https://a.tellusjournals.se/articles/10.1080/16000870.2016.1271561/
Downscaling an intense precipitation event in complex terrain: the importance of high grid resolution

Abstract

Floods due to intense rainfall are a major hazard to both people and infrastructure in western Norway. Here, steep orography enhances precipitation and the complex terrain channels the runoff into narrow valleys and small rivers. In this study we investigate a major rainfall and flooding event in October 2014. We compare high-resolution numerical simulations with measurements from rain gauges deployed in the impacted region. Our study has two objectives: (i) to understand the dynamical processes that drove the high rainfall and (ii) to assess the importance of high grid resolution for resolving intense rainfall in complex terrain. This is of great interest for numerical weather prediction and hydrological modelling. Our approach is to dynamically downscale the ERA-Interim reanalysis with the Weather Research and Forecasting model (WRF). We find that WRF gives a substantially better representation of precipitation, both in terms of absolute values as well as spatial and temporal distributions, than a coarse-resolution reanalysis. The largest improvement between the WRF simulations is found when we decrease the horizontal model grid spacing from 9 km to 3 km. Only minor additional improvements are obtained when downscaling further to 1 km. We believe that this is mainly related to the orography in the study area and its representation in the model. Realistic representations of gravity waves and the seeder–feeder effect seem to play crucial roles in reproducing the precipitation distribution correctly. An analysis of associated wavelengths shows the importance of the shortest resolvable length scales. On these scales our simulations also show differences in accumulated precipitation of up to 300 mm over four days, further emphasising the need for resolving short wavelengths. Therefore, our results clearly demonstrate the need for high-resolution dynamical downscaling for extreme weather impact studies in regions with complex terrain.

How to Cite: Pontoppidan, M., Reuder, J., Mayer, S. and Kolstad, E.W., 2017. Downscaling an intense precipitation event in complex terrain: the importance of high grid resolution. Tellus A: Dynamic Meteorology and Oceanography, 69(1), p.1271561. DOI: http://doi.org/10.1080/16000870.2016.1271561

Published on 01 Jan 2017. Accepted on 21 Nov 2016. Submitted on 14 Mar 2016.

1. Introduction

Orographic enhancement of precipitation is a weather feature evident to anyone who has lived in the vicinity of mountains (Roe, 2005, and references therein). It explains why the coast in southwestern Norway is the wettest part of the country (Hanssen-Bauer and Førland, 2000). The annual averages exceed 3000 mm in several places, e.g. Jonshøgdi (station number 50310) with 3151 mm, but there is also large variability, e.g. Vossevangen (station number 51530) with 1280 mm (MET Norway, 2015). Orographic effects can, in addition to increasing climatological averages, be instrumental in generating extreme precipitation and associated hazards for life and property. Such a situation occurred in September 2005, when the remains of two tropical cyclones hit the west coast of Norway and the complex terrain induced strong rainfall enhancement on local scales (Stohl et al., 2008). The large rainfall amounts caused a fatal landslide close to the city of Bergen. Flow towards a barrier leads to dynamical interactions between the air mass and the terrain.
The nature of the reaction depends on a number of fundamental factors, such as barrier dimensions, wind speed and atmospheric moisture content of the approaching air mass (Miglietta and Buzzi, 2001, 2004). In cases with sufficient wind speed and weak, but positive, moist static stability, the air mass ascends adiabatically over the barrier and sets off gravity waves. Upward motions are found immediately upstream of the barrier and as vertical gravity wave perturbations downstream of the mountain (Roe, 2005; Houze Jr., 2012). Microphysical processes, such as hydrometeor formation and fall-out time, are important delaying factors in the precipitation formation. The delay results in a belt of enhanced precipitation shifted towards the hilltop and the immediate lee side. The latter effect is often referred to as the spillover effect (e.g. Sinclair et al., 1997; Jiang and Smith, 2003). Depending on the wind speed and the mountain orography, the spillover effect can potentially influence the precipitation distribution 20 km to 30 km downstream, with realistic values of the microphysical time delay between 500 s and 2000 s (Smith, 2003). Intense precipitation on smaller hills is observed even though the microphysical time scale is insufficient to produce precipitation. An explanation is found in the seeder–feeder effect first proposed by Bergeron (1949), where an overlying seeder cloud, potentially independent of the barrier, produces ice nuclei that fall into a lower, terrain-induced feeder cloud. The result is an excess of condensation nuclei, which distinctly accelerates the coalescence processes compared to a non-seeded situation. Model results have shown a doubling in rain rates, caused by a pronounced decrease of the relevant time scales in droplet growth, when the seeder–feeder effect is implemented (Rutledge and Hobbs, 1983). The ability of a model to reproduce local extremes is important for impact assessments and forecasting of devastating events caused by heavy precipitation, e.g. flooding and landslides. It requires a sufficiently high grid resolution, partly because a model cannot faithfully represent wavelengths shorter than several (up to 10) times the grid size (Warner, 2011). However, a doubling in horizontal resolution, and an accompanying reduction of the model's time step, will lead to an increase in computational demands by a factor of $2^3 = 8$. In addition, an increase in the vertical resolution, i.e. adding more model levels, will lead to a further increase in computational costs. It is therefore of great importance to find an appropriate model grid spacing, minimising computational demands, but still ensuring a simulation that reproduces weather extremes in a satisfactory manner. Barrier width has previously been shown to have a large influence on the grid resolution requirements (Colle et al., 2005; Smith et al., 2015). Larger barriers generate gravity waves of longer wavelengths and thereby reduce the need for very high resolution in the model, whereas narrower barriers excite the atmosphere at shorter wavelengths and therefore require an increased horizontal resolution for an accurate description of the precipitation patterns.
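Two of the quantitative statements above follow from simple back-of-the-envelope relations (standard reasoning, not taken verbatim from the cited papers). The spillover distance is roughly the horizontal advection of a hydrometeor during the microphysical delay $\tau$,

\[ x \approx U\,\tau \approx 15~\mathrm{m\,s^{-1}} \times 2000~\mathrm{s} = 30~\mathrm{km}, \]

consistent with the quoted 20 km to 30 km range for typical wind speeds (the value of $U$ is an assumed example). The cost of grid refinement follows from the number of horizontal grid points and the CFL-limited time step,

\[ \mathrm{cost} \;\propto\; \Delta x^{-2}\,\Delta t^{-1} \;\propto\; \Delta x^{-3}, \]

so halving $\Delta x$ multiplies the cost by $2^{3} = 8$, before accounting for any added vertical levels.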
Many studies have demonstrated the added value that high-resolution regional models yield with respect to the coarse-resolution driving reanalysis or climate models in regions with complex terrain, including western Norway (Barstad et al., 2009; Heikkilä et al., 2011; Mayer et al., 2015), the western USA (Di Luca et al., 2012) and the Alps (Ban et al., 2014; Torma et al., 2015). Yet the lower limit below which increased resolution no longer adds value is not fully clear. One specific application which requires accurate information about the intensity and spatial distribution of precipitation is catchment hydrology under the aspect of flood risk projections (e.g. Wilson et al., 1979; Smith et al., 2014; Kay et al., 2015). Due to the lack of appropriate resolution, model simulations may describe the catchment and runoff improperly or distribute the precipitation into the wrong catchment area. As a consequence, the realism of horizontal distributions of precipitation has been shown to be a limiting factor in hydrology studies (e.g. Tramblay et al., 2013; Smith et al., 2014). Increasing the horizontal resolution generally allows for a more detailed representation of parameters relevant for runoff calculations, such as surface and soil properties and small-scale topographic features, leading to a more realistic hydrology. A number of studies have shown that a decrease of the grid spacing often improves the accuracy, as one would expect (e.g. Richard et al., 2007; Rögnvaldsson et al., 2007; Pieri et al., 2015; Smith et al., 2015), but there is also evidence that this is not always the case (e.g. Grubišić et al., 2005; Chan et al., 2013).

Here we study an episode in October 2014, when consecutive days with heavy rainfall caused widespread flooding in the mountainous areas of western Norway. Large amounts of precipitation over several days led to saturation of the top layers of the soil. At the same time, the mountains in the study area were lacking snow that could have absorbed and temporarily stored some of the water at higher altitudes. A combination of those factors resulted in unusually large runoff. Our main motivation is to investigate to what degree a numerical weather prediction (NWP) model is able to reproduce the dynamical processes of an extreme rainfall event, and how sensitive the model result is to the choice of horizontal grid spacing. For this study we analysed model simulations with respect to the structure and dynamics of the atmosphere to estimate the relevant spatial scales. We hypothesise that high horizontal resolution gives a better representation of the dynamical features that are the key drivers of the precipitation processes in the complex terrain. The paper is organised as follows. In Section 2 we describe the data set and the methods used, including the model description and setup. The results are presented in Section 3 and discussed in more detail in Section 4, with emphasis on the sensitivity to model grid spacing and its effect on atmospheric dynamics.

2. Data and methods

2.1. Observational data

The observational precipitation data set consists of measurements from 43 stations operated by the Norwegian Meteorological Institute (MET Norway) and 11 rain gauges deployed in the Voss area in western Norway as part of a master's project on fine-scale precipitation distribution in complex terrain (Pontoppidan, 2015).
The instrument used in the field campaign was the tipping-bucket rain gauge HOBO RG2-M (Onset, 2001), registering the time stamp of each tip, each corresponding to 0.2 mm of precipitation. The HOBO rain gauge is not heated and is therefore limited to liquid precipitation sampling for reliable data. However, there was no occurrence of snow at any of the stations during the event under investigation here. Rain gauge measurements are in general prone to undercatch, i.e. the imperfect collection of precipitation, due to wind-speed-dependent flow distortion around the gauge and additional losses such as wetting, evaporation and splashing (e.g. Sevruk et al., 2009; Habib et al., 2010; Mekonnen et al., 2015). Wetting and evaporation are most relevant during periods with low rain rates and were neglected in our case. The largest anticipated error, the wind-induced undercatch, was minimised during the field campaign by a similarly sheltered placement of all gauges in the terrain. No further corrections for the wind speed dependency were applied. We therefore estimate the rain gauges to show a small undercatch that should, however, be of comparable magnitude for all stations. Before and after the deployment period of the HOBO rain gauges, we performed a calibration check on each instrument, allowing for a correction of potential changes in the sensitivity of the instruments over time. A detailed description of the calibration check and correction procedure can be found in Pontoppidan (2015).

The distribution of the stations in the area is shown in Fig. 1, and the corresponding exact locations and station altitudes are given in Table 1. Hagavik (P1) and Nesttun (P2) are coastal stations at low elevation with flat terrain upstream and moderately high and steep terrain downstream. Hisdalen (P3), Dale (P4) and Kaldestad (P5) are also located at low elevations, but with steep terrain both up- and downstream. The mountainous stations, Sandfjellet (P8), Hodnaberg (P9) and Flyane (P11), are situated at higher altitudes and are also mainly surrounded by higher terrain up- and downstream. The remaining stations, Steine (P7), Dyrvedalen (P10) and Vasslii (P12), are positioned on the north side of the wider Bergen–Voss valley, in slightly upsloping terrain. They all have massive barriers upstream and high terrain immediately downstream and were categorised as Valley North. Further description of the MET Norway stations is available from their website (MET Norway).

Fig. 1. Map of the experiment area with altitude from the terrain database ASTER GDEM v1 (Tachikawa et al., 2011) contoured in colours. The deployed rain gauges are colour coded and the stations P1–P12 correspond to the line colours in Fig. 5. Black squares are the stations M1–M3 referenced in the text, triangles are the remaining precipitation stations in the area operated by the Norwegian Meteorological Institute (MET Norway). The red line shows the lower edge of the cross sections analysed in Section 3.4.

2.2. Model setup

We used version 3.5.1 of the Weather Research and Forecasting model (WRF), a non-hydrostatic NWP model with terrain-following sigma coordinates (Skamarock et al., 2008). The model domain setup is shown in Fig. 2. The outer domain had 301 × 271 grid points with a horizontal resolution of 9 km, yielding a domain of 2709 km in the west–east direction and 2439 km in the south–north direction. The model time step in the outer domain was 45 s.
The two-way nested domains, d02 and d03, had grid resolutions of 3 km and 1 km and time steps of 15 s and 3 s, respectively. The extremely short time step of 3 s was necessary to avoid numerical instabilities in the simulation. As an additional effort to avoid instabilities, we smoothed the terrain with two passes of the smooth_desmooth option in the simulation that included all three domains. The 301 × 271 grid points of domain d02 resulted in a domain size of 903 km × 813 km, while domain d03 had an extension of 211 km × 211 km. All three domains had 70 vertical levels with the model top at 50 hPa. The initial and boundary conditions for the outer domain were taken from the ERA-Interim reanalysis produced by the European Centre for Medium-Range Weather Forecasts (ECMWF) (Dee et al., 2011). The outer boundary conditions and sea surface temperatures were updated every 6 h during the simulation period.

Fig. 2. Model domain setup for the WRF simulations. The domains d01, d02 and d03 have horizontal grid resolutions of 9 km, 3 km and 1 km, respectively.

We used the following physical parametrisation schemes: the Kain–Fritsch cumulus scheme (Kain, 2004), the Thompson microphysics scheme (Thompson et al., 2004, 2008), the MYJ planetary boundary layer scheme (Janjić, 2000), the NOAH land surface model (Chen and Dudhia, 2001) and the Dudhia shortwave (Dudhia, 1989) and RRTM longwave (Mlawer et al., 1997) radiation schemes. The cumulus scheme was only used in the outer 9 km domain, as the 3 km and 1 km domains are at a convection-permitting resolution. (Note that we also ran an experiment with the cumulus scheme disabled in the 9 km domain, and the results were virtually indistinguishable from the ones for the control run.) Table 2 gives a schematic overview of the parametrisations. Various combinations of these schemes have been used in other studies (e.g. Wang et al., 2014; Weckwerth et al., 2014; Mayer et al., 2015), and this particular set of physical parametrisations has shown reliable results in an earlier study of precipitation in complex terrain in western Norway (Barstad and Caroletti, 2013). The sensitivity to parametrisation schemes was not within the scope of this study and has not been examined further; it has been investigated in previous studies (e.g. Rögnvaldsson et al., 2011; Efstathiou et al., 2013; Pieri et al., 2015).

The model was initiated at 00:00 UTC on 24 October 2014, or in abbreviated form 24/00. Three different runs were performed, with two-way feedback when nests were present: one with all three domains (9 km, 3 km and 1 km), a second with the 9 km and 3 km domains and a third with just the 9 km domain. The first 30 h of the model runs were discarded as spin-up, resulting in a four-day analysis window from 25/06 to 29/06. When comparing model results with observations, we used the nearest four model grid points, weighted according to their distance to the location. We performed different sensitivity tests to find the optimal spectral nudging settings (Von Storch et al., 2000; Omrani et al., 2012, 2015) for the simulation of precipitation during the flooding event. The best model representation of the single-location precipitation was found to be the one with the standard WRF relaxation time of 1 h and nudging of wavelengths above 677 km zonally and 609 km meridionally in the outer domain only. We chose this model run for the dynamical investigation in Section 3.4.
A more detailed description of the selection process, based on multiple sensitivity simulations, can be found in Pontoppidan (2015).

3. Extreme flooding event October 2014

3.1. Synoptic situation

At the end of October 2014, south-western coastal Norway was exposed to considerable amounts of precipitation, resulting in widespread flooding. One of the hardest-hit areas was located along the River Vosso, and the Voss area experienced a severe flooding event on 28 October. The official stations operated by MET Norway in the area reported three-day total precipitation amounts of 249 mm for Jonshøgdi (station number 50310, M1 in Fig. 1), 133 mm for Evanger (station number 51440, M2) and 111 mm for Vossevangen (station number 51530, M3) between 26/06 and 29/06. The synoptic situation a few days before the flooding event was characterised by the passage of multiple frontal systems with partly heavy precipitation. The analysis from 28/00 (Fig. 3) shows the centre of a low-pressure system over the Barents Sea and ongoing cyclogenesis over the experiment area. The frontal zones advected warm and moist air masses from the tropics towards western Norway. This is also evident in Fig. 4, which presents the specific humidity at 850 hPa from the ERA-Interim reanalysis for the same time.

Fig. 3. Surface analysis chart from the UK Meteorological Office, for 28 October 2014 00 UTC. Reproduced with kind permission of the Met Office.

Fig. 4. Specific humidity (g kg$^{-1}$) at 850 hPa at 00 UTC on 28 October (coloured contours) and the 850 hPa wind (arrows). Data from the ERA-Interim reanalysis.

Two days before the flooding event, on 26 October, a low-pressure system was centred NW of Norway. The associated fronts passed over western Norway and caused considerable amounts of precipitation during the day, especially around noon. A cold front passed the area at 27/00, temporarily advecting drier air and causing a relatively dry period after the frontal passage. At the same time a disturbance over Scotland developed and moved towards Norway, leaving western Norway in the warm sector of an intensifying low-pressure system, with again large amounts of precipitation from 27/12 to 28/17. The associated cold front passed the Bergen area in the afternoon and the precipitation intensity behind it decreased. As a result of several days of more or less continuous rainfall, the flood peaked in the Voss area in the early evening of 28 October.

3.2. Observed and simulated precipitation

The observed precipitation during the four days prior to the flood peak is shown in Fig. 5a, with the HOBO gauges shown as coloured lines and the accumulated daily values from the MET Norway stations as black diamonds. The HOBO measurements clearly show different phases of precipitation intensity, manifested by the varying slope of the curves, related to the synoptic situation and development described above. Two distinct heavy precipitation periods are evident (all of 26 October and 27/12–28/12), separated by a dry period of about 12 h. The spatial variability among the stations was large, with an overall observed range between 340 mm (at P5 and station 47610) and approximately 20 mm (at station 29400). The span of the HOBO rain gauge values agrees with the observed precipitation range of the MET Norway stations, with P5 being amongst the stations with the highest precipitation amounts and P1 in the lower part. The very low values are not captured by the HOBO gauges.

Fig. 5.
The observed (a) and simulated (b) accumulated precipitation amounts during the days before the flooding event, from 25 October 06 UTC to 29 October 2014 06 UTC. The observations of the HOBO rain gauges are given as solid coloured lines and the corresponding results from the interpolated 1 km model simulations as dashed lines. Data from the MET Norway stations are indicated by the black diamonds.

The simulated precipitation from the 1 km model run is shown in Fig. 5b. The agreement between simulated and observed precipitation is evident. The model represented the variability, in terms of both temporal and spatial precipitation distribution, remarkably well. It showed, however, a slight tendency to underestimate the precipitation amounts during the first 24 h. The total spatial variability also seems correctly represented, with the MET Norway stations spanning approximately the observed range and both P1 and P5 close to the observed minimum and maximum values. The variability and timing at the intermediate stations were also captured adequately. The two intense precipitation periods were well represented with respect to timing, and the intermediate dry period was clearly reproduced in the simulation.

3.3. Comparison of model resolutions

Figure 6 presents a comparison of the total four-day accumulated precipitation from 25/06 to 29/06 in the observations and simulations. For each station, the columns show the WRF simulations (9 km, 3 km and 1 km) and the observed precipitation amount. The observations were well replicated in the model simulations; one exception was station 25830, for which all the simulations substantially overestimated the precipitation.

Fig. 6. Comparison of four-day accumulated precipitation from 25 October 06 UTC till 29 October 06 UTC from the WRF 9 km, 3 km and 1 km model output, using the interpolated grid point, and the observations. There is one set of bars for each station and the labels on the horizontal axis show the station ID.

Figure 7 shows a Taylor diagram of the basic statistics for the model runs, marked as coloured circles. The calculations are based on the simulated total four-day precipitation at the 54 stations. The observed standard deviation amongst the 54 stations was 76.4 mm, as marked with a black asterisk in the diagram. We notice that the coarse-grid model run at 9 km underestimated the standard deviation, here representing the spatial variability amongst the stations, whereas this variability was slightly overestimated in the 3 km run, with 81.4 mm, and again slightly higher in the 1 km run, with 86.6 mm. In terms of root mean square (rms) difference, the 9 km run scored best, and for the correlation, which represents a spatial correlation (though only based on 54 stations), all runs were above 0.8, the best being the 1 km run, with the 3 km run only slightly below.

Fig. 7. A Taylor diagram with the 9 km, 3 km and 1 km simulations marked as circles and the observations marked as a black star for reference.

3.4. Dynamics

To study the underlying dynamical and physical processes, we investigated the vertical structure of the atmosphere in the model simulations. In accordance with the dominant inflow direction during the case study, we defined a SW to NE oriented cross section through the inner domain, in close vicinity to the stations P1, P3, P5, P7 and 51470. The cross section is depicted as a red line in Fig. 1. For the following discussion, we selected the output at three model times representative of the dominant phases of the event:
one during the first heavy precipitation period at 26/18, a second during the dry period at 27/06 and a third during the second heavy precipitation period at 28/06 (shown in Fig. 5a).

A series of cross sections of vertical velocity and potential temperature from the 1 km resolution run are shown in Fig. 8a–c. The air mass approached the coast as a level, non-turbulent flow. When it impinged on orography higher than a few hundred meters, gravity waves formed. The gravity waves were present at all the selected times, though with slightly lower intensity at 28/06. The potential temperature showed a clear terrain-induced displacement, diminishing only slightly with altitude.

Fig. 8. Cross sections (above the red line in Fig. 1) of vertical velocities (a–c) with potential temperature contoured in black lines, specific humidity (d–f), liquid water content (g–i) and the sum of liquid and ice water content (j–l) from the 1 km resolution run, at three selected times: 26/18, 27/06 and 28/06 (in the left, middle and right column, respectively).

Figure 9a–c shows the effect of different grid resolutions on the representation of gravity waves at 28/06. The 9 km grid spacing had fewer wave cells with significantly lower intensity, and the related vertical velocities ranged between $-1.0$ m s$^{-1}$ and 1.7 m s$^{-1}$. The 3 km and 1 km resolutions had similar cell structures and vertical velocity ranges of $-3.5$ m s$^{-1}$ to 3.1 m s$^{-1}$ and $-5.5$ m s$^{-1}$ to 3.4 m s$^{-1}$, respectively.

Fig. 9. Cross sections (above the red line in Fig. 1) of vertical velocities (a–c), specific humidity (d–f), liquid water content (g–i) and the sum of liquid and ice water content (j–l) at time 28/06 for the 9 km, 3 km and 1 km runs (in the left, middle and right column, respectively).

Cross sections of specific humidity at the three selected times are shown in Fig. 8d–f. The moisture content varied throughout the period, with a minimum in the dry period and a maximum at the last time shown, 28/06. Vertical displacements of drier air were evident downstream of the large mountains at all times. The major displacements caused by the large mountains were detectable throughout the lower 5 km, whereas smaller hills only caused displacement in the lower few hundred meters. The main difference in the humidity distribution between the two precipitation episodes was the considerably thicker layer of high specific humidity during the second phase (28/06), exceeding 6 g kg$^{-1}$ in the lowest 2 km of the atmosphere. During the first phase (26/18) this value only occurred in the lowest few hundred meters. The effect of the grid sizes shown in Fig. 9d–f seemed limited. The 9 km run was able to resolve the overall specific humidity at this time (28/06) and was quite similar to both the 3 km and 1 km runs on a large scale. The orography did, however, affect the smaller-scale specific humidity over the complex terrain.

Liquid water content (LWC) and the sum of ice water content (IWC) and LWC for the three selected times are shown in Fig. 8g–i and 8j–l. At 26/18 the LWC clearly increased over elevated orography and reached its absolute maximum over the first massive barrier crest at 5.9°E. The spillover effect was detectable as the high LWC values continued downslope. Similar features, but with less intensity, were in play at the barriers further downstream. The large areal inhomogeneity in the observed precipitation during this phase was likely related to the distinct differences in LWC.
Around the final time the LWC values similarly increased at the first major barrier at 5.9°E, but the LWC signals were more diffuse over the remaining terrain features, distributing the precipitation more evenly. A reason for this difference may be found in Fig. 8j–l, which shows that the first precipitation phase (26/18) was nearly unaffected by ice particles, having a single vertical maximum of LWC. On the other hand, the final precipitating period (28/06) had large amounts of ice particles, which created a second vertical maximum of LWC and IWC. The large and rather homogeneous distribution of ice particles aloft could be a potential seeder cloud for the cloud layers below. As a consequence of this seeding, the droplets over a large area would grow faster, fully in accordance with the observations of increased precipitation amounts and decreased horizontal variability.

The effects of different horizontal resolutions on the LWC and the sum of LWC and IWC are shown in Fig. 9g–i and 9j–l, respectively. The 9 km resolution lacked a sufficient representation of the high LWC and IWC amounts in general. This resulted in a LWC$+$IWC maximum of 0.7 g kg$^{-1}$. The 3 km and 1 km resolution simulations had generally higher values, with maxima of 1.5 g kg$^{-1}$ and 1.8 g kg$^{-1}$, respectively. The spillover effect is detected as increased amounts of LWC and IWC immediately downslope of hill crests in the 3 km and 1 km runs. The spillover effect seems to be absent in the 9 km run.

The horizontal distribution of accumulated model precipitation during the four-day period is presented in Fig. 10. The HOBO stations are marked on the map with circles and the MET Norway stations with squares (see Fig. 1), all filled with colours corresponding to the observed precipitation during the period. The 9 km run was unable to simulate the high precipitation amounts (Fig. 10a) and seems inadequate for further hydrological modelling. The 3 km (Fig. 10b) and 1 km (Fig. 10c) runs simulated higher rainfall and higher variability, agreeing better with the observations. In the western part of the domains over the North Sea, the precipitation fields were in general homogeneous, and the absolute amounts were relatively low. The synoptic-scale forced ascent gave increased precipitation amounts closer to the coastline, and further inland the horizontal inhomogeneity was enhanced in all the simulations. For the 3 km and 1 km domains this inhomogeneity increased substantially, and small confined areas of accumulated simulated precipitation well above 600 mm can be discerned in the southern part of the area. The station P5 is located at the edge of such an area, situated at the first major terrain barrier in the flow direction. Within a radius of 5 km from this station the accumulated precipitation varied by as much as 300 mm during the four days.

Fig. 10. Accumulated precipitation (25/06–29/06 October) in part of the 9 km grid (a). The circles correspond to the HOBO stations, squares are stations from MET Norway. The inner parts of the markers show the observed accumulated precipitation amounts in the period. The other panels show the same for part of the 3 km domain in (b) and for the 1 km domain in (c).

The valley to the NE of P12 and 51530 received considerably less precipitation than the steep and elevated terrain surrounding it. This area is located approximately 100 km from the coast in the SW flow direction and situated in the synoptic-scale evaporation zone in the lee of the mountainous Hamlagrø plateau.
The reduced precipitation observed there (P12), which has been informally confirmed by locals, is likely linked to the location of the station, and the enhanced precipitation amounts around it were probably caused by smaller-scale orographic features. In the south of our study area, the model simulations also indicated several other precipitation hot spots. Of particular interest during the period investigated here was the area of enhanced precipitation at 60°N, 6.5°E. It covered large parts of the catchment of the Opo River, which was also severely affected by the flooding.

In order to identify important wavelengths, we performed a spectral analysis of the model terrain and the humidity. The 850-hPa level of the 26/18 cross section of specific humidity shown in Fig. 8d–f was analysed using a discrete cosine transform (Denis et al., 2002). Figure 11a shows the variance for specific humidity and model orography for the 1 km run. Correspondingly, the 3 km and 9 km simulations are shown in Fig. 11b and Fig. 11c. Since wavelengths shorter than twice the grid spacing are unresolved, the 9 km domain was unable to resolve wavelengths shorter than 18 km. For the 1 km simulation, both the terrain and the specific humidity variance followed the same pattern of increasing values from the shortest resolved wavelength and upwards. The peak was reached around wavelengths of 6 km. The 3 km simulation agreed well with the 1 km run at resolved wavelengths. The similarity between the specific humidity spectra and the orography spectra, at least for the 3 km and 1 km simulations, indicates a link between the two. Similar results were found when analysing vertical velocity, LWC and IWC at 27/06 and 28/06, although this is not shown here.

Fig. 11. The upper panel (a) shows the variance spectrum of the terrain and the specific humidity cross sections (pressure level 850 hPa) shown in Fig. 8 from the 1 km simulation. The middle panel (b) shows the same for the 3 km simulation, and the lower panel (c) for the 9 km simulation.

4. Summary and discussion

In late October 2014, western Norway experienced several days of heavy precipitation. As a field campaign with HOBO RG2-M rain gauges in the Bergen–Voss area was conducted during this period, it represents a unique opportunity to investigate the ability of the WRF model to reproduce extreme precipitation in complex terrain. Here we used the model to simulate a period of four days prior to the flood (25–29 October, both 06 UTC). Overall, the high-resolution simulations (3 km and 1 km) agreed well with the observations. The rainfall during the first 24 h was slightly underestimated, but towards the end of the integration time the total accumulated precipitation amount at each station was well captured. The simulation also reproduced the observed horizontal precipitation distribution and the timing of the two precipitating periods quite well.

Our investigation of the dynamics during the flooding event revealed several interesting features. The simulated strong gravity wave activity in the early stages of the event, together with moderate humidity levels and a near-absence of ice particles, corresponded with the observed large precipitation variability between the stations. Later on, during the observed dry period, the simulations had slightly less wave activity, considerably drier air and a more stable atmosphere.
During the last period of heavy precipitation, the model had weaker gravity waves, but very high humidity and large amounts of homogeneously distributed ice particles. In this period the distribution of the precipitation was more homogeneous than during the first precipitating interval. We suggest that this difference in homogeneity was caused by two effects: the slightly weaker gravity wave activity and a strong, homogeneous seeder effect from the ice particles during the second precipitating period. This emphasises the significant influence of these effects, i.e. orographic modification, on the horizontal precipitation distribution, and thereby the importance of a grid spacing that ensures these features are resolved satisfactorily.

It is very costly to run NWP models at a higher resolution than necessary. This is particularly important when performing multi-year downscaling experiments. We therefore paid particular attention to the results at different resolutions. Our study indicated that dynamical downscaling experiments are required to simulate fine-scale variability. Here even the 9 km grid spacing seems insufficient because of its considerably lower variability compared to the observations. Hydrological models depend on a correct distribution of precipitation into catchments and an accurate representation of soil runoff. This is crucial to address local flooding problems in a realistic manner, and we have shown that high grid resolution is necessary to fulfil these requirements.

In our simulations the largest differences were found between the 9 km and 3 km runs. The 9 km run lacked the observed spatial variability and seemed partly unable to represent important dynamical features such as gravity waves. However, only marginal improvements were found when decreasing the grid spacing further from 3 km to 1 km. These findings are in qualitative agreement with several previous studies of precipitation in other mountainous regions (e.g. Richard et al., 2007; Rögnvaldsson et al., 2007; Pieri et al., 2015). We suggest that the complexity of the terrain in the area is important for the results. The extent of the mountainous Hamlagrø plateau in the area of interest is approximately 50 km from SW to NE. This plateau and the surrounding valleys were only slightly better represented at 1 km compared to 3 km, due to the necessary smoothing of the 1 km terrain.

Our spectral analysis shows that the shortest resolved wavelengths, both in the 3 km and 1 km runs, had significant variance. In addition, there appears to have been a link between the orography and the humidity. On these rather short length scales, horizontal differences of up to 300 mm in accumulated four-day precipitation were found, enhancing our belief in the importance of resolving such short length scales. We suggest that the poor representation of the terrain in the 9 km simulation led to an insufficient representation of the gravity waves, LWC and IWC, and that this was an important reason for the less accurate precipitation distribution investigated here. Our results indicate that the important wavelengths in this geographical area were sufficiently resolved in the 3 km run. These results agree with other studies that show reduced model improvement below certain grid thresholds on wider barriers (e.g. Grubišić et al., 2005; Rögnvaldsson et al., 2007; Ikeda et al., 2010). The ideal grid size, however, seems to depend on the complexity of the terrain in the region and the purpose of the investigation.
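For readers who want to reproduce this kind of analysis, the following is a minimal one-dimensional sketch of a DCT-based variance spectrum in the spirit of Denis et al. (2002). The terrain profile, domain length and hill wavelength below are purely illustrative assumptions, not data from this study:

```python
# Minimal 1-D variance spectrum via the discrete cosine transform
# (after Denis et al., 2002); the terrain here is synthetic.
import numpy as np
from scipy.fft import dct

dx = 1.0                 # grid spacing in km (as in the 1 km run)
n = 512                  # points along a hypothetical cross section
x = np.arange(n) * dx

# Synthetic terrain: one broad ridge plus small hills of ~6 km wavelength.
terrain = (800.0 * np.exp(-((x - 250.0) / 60.0) ** 2)
           + 150.0 * np.cos(2.0 * np.pi * x / 6.0))

coeffs = dct(terrain, type=2, norm='ortho')

# Coefficient m >= 1 corresponds to a wavelength of 2*n*dx/m, so the
# shortest representable wavelength is 2*dx (18 km on a 9 km grid).
m = np.arange(1, n)
wavelength = 2.0 * n * dx / m
variance = coeffs[1:] ** 2 / n       # per-mode variance contribution

# The small hills dominate the short-wavelength end of the spectrum.
short = wavelength < 20.0
peak = wavelength[short][np.argmax(variance[short])]
print("dominant short wavelength: %.1f km" % peak)   # close to 6 km here
```

Because no variance can appear below twice the grid spacing, a spectrum computed this way makes the resolution limits of the 9 km, 3 km and 1 km runs directly visible, as in Fig. 11.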
We believe that the results from this case study can be generalised to the frontal precipitation that dominates western Norway; however, the shorter length scales investigated here are limited by the grid size, as length scales shorter than roughly 10 times the grid size are not always fully resolved. Further studies covering longer time periods, with very high-resolution terrain data, may reveal whether additional value can be gained by resolving even shorter length scales in complex terrain. As such, the results presented here are important in the context of regional downscaling in coastal areas with complex terrain.

Acknowledgments

The authors wish to thank the two anonymous reviewers and Ólafur Rögnvaldsson for useful comments on the manuscript, and Anak Bhandari for his technical support during the field campaign. In addition, Ingebjørg Aarvik, Marco Häberle, Iris Hestnes and Trine Jonassen assisted in the preparation of the campaign. The Norwegian Research Council, through the NOTUR project, made supercomputing resources on a Cray XE6m-200 computer at Parallab at the University of Bergen available. The preparation of the manuscript was financially supported by the Geophysical Institute and the Faculty of Mathematics and Natural Sciences at the University of Bergen under the ‘smådriftsmidler’ scheme. Kolstad’s work and part of Pontoppidan’s work were funded through the Research Council of Norway’s HordaKlim and R3 projects (grant numbers 245403 and 255397).

Disclosure statement

No potential conflict of interest was reported by the authors.

References

1. Ban, N., Schmidli, J. and Schär, C. 2014. Evaluation of the convection-resolving regional climate modeling approach in decade-long simulations. J. Geophys. Res. Atmos. 119(13), 7889–7907. DOI: https://doi.org/10.1002/2014JD021478.
2. Barstad, I. and Caroletti, G. N. 2013. Orographic precipitation across an island in southern Norway: model evaluation of time-step precipitation. Q. J. R. Meteorol. Soc. 139(675), 1555–1565. DOI: https://doi.org/10.1002/qj.v139.675.
3. Barstad, I., Sorteberg, A., Flatøy, F. and Déqué, M. 2009. Precipitation, temperature and wind in Norway: dynamical downscaling of ERA40. Clim. Dyn. 33(6), 769–776. DOI: https://doi.org/10.1007/s00382-008-0476-5.
4. Bergeron, T. 1949. The problem of artificial control of rainfall on the globe: II. The coastal orographic maxima of precipitation in autumn and winter. Tellus 1(3), 15–32. DOI: https://doi.org/10.1111/tus.1949.1.issue-3.
5. Chan, S. C., Kendon, E. J., Fowler, H. J., Blenkinsop, S., Ferro, C. A. T. and co-authors. 2013. Does increasing the spatial resolution of a regional climate model improve the simulated daily precipitation? Clim. Dyn. 41(5–6), 1475–1495. DOI: https://doi.org/10.1007/s00382-012-1568-9.
6. Chen, F. and Dudhia, J. 2001. Coupling an advanced land surface-hydrology model with the Penn State-NCAR MM5 modeling system. Part II: preliminary model validation. Mon. Weather Rev. 129(4), 569–585. DOI: https://doi.org/10.1175/1520-0493(2001)129<0569:CAALSH>2.0.CO;2.
7. Colle, B. A., Wolfe, J. B., Steenburgh, W. J., Kingsmill, D. E., Cox, J. A. W. and co-authors. 2005. High-resolution simulations and microphysical validation of an orographic precipitation event over the Wasatch mountains during IPEX IOP3. Mon. Weather Rev. 133(10), 2947–2971. DOI: https://doi.org/10.1175/MWR3017.1.
8. Dee, D. P., Uppala, S. M., Simmons, A. J., Berrisford, P.
and Poli, P. and co-authors. 2011. The ERA-Interim reanalysis: configuration and performance of the data assimilation system. Q. J. R. Meteorol. Soc. 137(656), 553–597. DOI: https://doi.org/10.1002/qj.828.
9. Denis, B., Côté, J. and Laprise, R. 2002. Spectral decomposition of two-dimensional atmospheric fields on limited-area domains using the Discrete Cosine Transform (DCT). Mon. Weather Rev. 130(7), 1812–1829. DOI: https://doi.org/10.1175/1520-0493(2002)130<1812:SDOTDA>2.0.CO;2.
10. Di Luca, A., De Elía, R. and Laprise, R. 2012. Potential for added value in precipitation simulated by high-resolution nested Regional Climate Models and observations. Clim. Dyn. 38(5–6), 1229–1247. DOI: https://doi.org/10.1007/s00382-011-1068-3.
11. Dudhia, J. 1989. Numerical study of convection observed during the winter monsoon experiment using a mesoscale two-dimensional model. J. Atmos. Sci. 46(20), 3077–3107. DOI: https://doi.org/10.1175/1520-0469(1989)046<3077:NSOCOD>2.0.CO;2.
12. Efstathiou, G. A., Zoumakis, N. M., Melas, D., Lolis, C. J. and Kassomenos, P. 2013. Sensitivity of WRF to boundary layer parameterizations in simulating a heavy rainfall event using different microphysical schemes. Effect on large-scale processes. Atmos. Res. 132–133, 125–143. DOI: https://doi.org/10.1016/j.atmosres.2013.05.004.
13. Grubišić, V., Vellore, R. K. and Huggins, A. W. 2005. Quantitative precipitation forecasting of wintertime storms in the Sierra Nevada: sensitivity to the microphysical parameterization and horizontal resolution. Mon. Weather Rev. 133(10), 2834–2859. DOI: https://doi.org/10.1175/MWR3004.1.
14. Habib, E., Lee, G., Kim, D. and Ciach, G. J. 2010. Ground-based direct measurement. In: Rainfall: State of the Science (eds. Testik, F. Y. and Gebremichael, M.), Vol. 191. American Geophysical Union, Washington, DC.
15. Hanssen-Bauer, I. and Førland, E. 2000. Temperature and precipitation variations in Norway 1900–1994 and their links to atmospheric circulation. Int. J. Climatol. 20(14), 1693–1708. DOI: https://doi.org/10.1002/1097-0088(20001130)20:14<1693::AID-JOC567>3.0.CO;2-7.
16. Heikkilä, U., Sandvik, A. and Sorteberg, A. 2011. Dynamical downscaling of ERA-40 in complex terrain using the WRF regional climate model. Clim. Dyn. 37(7–8), 1551–1564. DOI: https://doi.org/10.1007/s00382-010-0928-6.
17. Houze Jr., R. A. 2012. Orographic effects on precipitating clouds. Rev. Geophys. 50. DOI: https://doi.org/10.1029/2011RG000365.
18. Ikeda, K., Rasmussen, R., Liu, C., Gochis, D., Yates, D. and co-authors. 2010. Simulation of seasonal snowfall over Colorado. Atmos. Res. 97(4), 462–477. DOI: https://doi.org/10.1016/j.atmosres.2010.04.010.
19. Janjić, Z. I. 2000. Comments on “Development and evaluation of a convection scheme for use in climate models”. J. Atmos. Sci. 57(21), 3686–3686. DOI: https://doi.org/10.1175/1520-0469(2000)057<3686:CODAEO>2.0.CO;2.
20. Jiang, Q. and Smith, R. B. 2003. Cloud timescales and orographic precipitation. J. Atmos. Sci. 60(13), 1543–1559. DOI: https://doi.org/10.1175/2995.1.
21. Kain, J. S. 2004. The Kain-Fritsch convective parameterization: an update. J. Appl. Meteorol. 43(1), 170–181. DOI: https://doi.org/10.1175/1520-0450(2004)043<0170:TKCPAU>2.0.CO;2.
22. Kay, A. L., Rudd, A. C., Davies, H. N., Kendon, E. J. and Jones, R. G. 2015.
Use of very high resolution climate model data for hydrological modelling: baseline performance and future flood changes. Clim. Change 133(2), 193–208. DOI: https://doi.org/10.1007/s10584-015-1455-6.
23. Mayer, S., Maule, C. F., Sobolowski, S., Christensen, O. B., Danielsen Sørup, H. J. and co-authors. 2015. Identifying added value in high-resolution climate simulations over Scandinavia. Tellus A 67(24941), 1–18. DOI: https://doi.org/10.3402/tellusa.v67.24941.
24. Mekonnen, G. B., Matula, S., Doležal, F. and Fišák, J. 2015. Adjustment to rainfall measurement undercatch with a tipping-bucket rain gauge using ground-level manual gauges. Meteorol. Atmos. Phys. 127(3), 241–256. DOI: https://doi.org/10.1007/s00703-014-0355-z.
25. MET Norway. 2015. Accessed 12 February 2015. Online at: http://sharki.oslo.dnmi.no/portal/page?_pageid=73,39035,73_39049&_dad=portal&_schema=PORTAL
26. Miglietta, M. and Buzzi, A. 2001. A numerical study of moist stratified flows over isolated topography. Tellus A 53(4), 481–499. DOI: https://doi.org/10.1111/tea.2001.53.issue-4.
27. Miglietta, M. and Buzzi, A. 2004. A numerical study of moist stratified flow regimes over isolated topography. Q. J. R. Meteorol. Soc. 130(600), 1749–1770. DOI: https://doi.org/10.1256/qj.02.225.
28. Mlawer, E. J., Taubman, S. J., Brown, P. D., Iacono, M. J. and Clough, S. A. 1997. Radiative transfer for inhomogeneous atmospheres: RRTM, a validated correlated-k model for the longwave. J. Geophys. Res. 102(D14), 16663–16682. DOI: https://doi.org/10.1029/97JD00237.
29. Omrani, H., Drobinski, P. and Dubos, T. 2012. Optimal nudging strategies in regional climate modelling: investigation in a big-brother experiment over the European and Mediterranean regions. Clim. Dyn. 41(9–10), 2451–2470. DOI: https://doi.org/10.1007/s00382-012-1615-6.
30. Omrani, H., Drobinski, P. and Dubos, T. 2015. Using nudging to improve global-regional dynamic consistency in limited-area climate modeling: what should we nudge? Clim. Dyn. 44(5–6), 1627–1644. DOI: https://doi.org/10.1007/s00382-014-2453-5.
31. Onset. 2001. Data Logging Rain Gauge Manual RG2 and RG2-M. Technical Report. Onset Computer Corporation, Bourne.
32. Pieri, A. B., Von Hardenberg, J., Parodi, A. and Provenzale, A. 2015. Sensitivity of precipitation statistics to resolution, microphysics, and convective parameterization: a case study with the high-resolution WRF climate model over Europe. J. Hydrometeorol. 16(4), 1857–1872. DOI: https://doi.org/10.1175/JHM-D-14-0221.1.
33. Pontoppidan, M. 2015. Fine scale distribution of precipitation in the Voss area. Master thesis. University of Bergen. Online at: http://bora.uib.no/handle/1956/10428
34. Richard, E., Buzzi, A. and Zängl, G. 2007. Quantitative precipitation forecasting in the Alps: the advances achieved by the Mesoscale Alpine Programme. Q. J. R. Meteorol. Soc. 133(625), 831–846. DOI: https://doi.org/10.1002/(ISSN)1477-870X.
35. Roe, G. H. 2005. Orographic precipitation. Annu. Rev. Earth Planet. Sci. 33(1), 645–671. DOI: https://doi.org/10.1146/annurev.earth.33.092203.122541.
36. Rögnvaldsson, Ó., Bao, J.-W., Ágústsson, H. and Ólafsson, H. 2011. Downslope windstorm in Iceland WRF/MM5 model comparison. Atmos. Chem. Phys. 11(1), 103–120. DOI: https://doi.org/10.5194/acp-11-103-2011.
37. Rögnvaldsson, Ó., Bao, J. W. and Ólafsson, H. 2007.
Sensitivity simulations of orographic precipitation with MM5 and comparison with observations in Iceland during the Reykjanes Experiment. Meteorol. Zeitschrift 16(1), 87–98. DOI: https://doi.org/10.1127/0941-2948/2007/0181.
38. Rutledge, S. A. and Hobbs, P. V. 1983. The mesoscale and microscale structure and organization of clouds and precipitation in midlatitude cyclones. VIII: a model for the “seeder-feeder” process in warm-frontal rainbands. J. Atmos. Sci. 40(5), 1185–1206. DOI: https://doi.org/10.1175/1520-0469(1983)040<1185:TMAMSA>2.0.CO;2.
39. Sevruk, B., Ondrás, M. and Chvíla, B. 2009. The WMO precipitation measurement intercomparisons. Atmos. Res. 92(3), 376–380. DOI: https://doi.org/10.1016/j.atmosres.2009.01.016.
40. Sinclair, M. R., Wratt, D. S., Henderson, R. D. and Gray, W. R. 1997. Factors affecting the distribution and spillover of precipitation in the Southern Alps of New Zealand – a case study. J. Appl. Meteorol. 36(5), 428–442. DOI: https://doi.org/10.1175/1520-0450(1997)036<0428:FATDAS>2.0.CO;2.
41. Skamarock, W. C., Klemp, J. B., Dudhia, J., Gill, D. O., Barker, D. M. and co-authors. 2008. A description of the Advanced Research WRF version 3. Technical Report (June), 113 pp.
42. Smith, A., Bates, P., Freer, J. and Wetterhall, F. 2014. Investigating the application of climate models in flood projection across the UK. Hydrol. Process. 28(5), 2810–2823. DOI: https://doi.org/10.1002/hyp.v28.5.
43. Smith, R. B. 2003. A linear upslope-time-delay model for orographic precipitation. J. Hydrol. 282(1–4), 2–9. DOI: https://doi.org/10.1016/S0022-1694(03)00248-8.
44. Smith, S. A., Vosper, S. B. and Field, P. R. 2015. Sensitivity of orographic precipitation enhancement to horizontal resolution in the operational Met Office weather forecasts. Meteorol. Appl. 24(2012), 14–24. DOI: https://doi.org/10.1002/met.1352.
45. Stohl, A., Forster, C. and Sodemann, H. 2008. Remote sources of water vapor forming precipitation on the Norwegian west coast at 60°N – a tale of hurricanes and an atmospheric river. J. Geophys. Res. Atmos. 113, D05102. DOI: https://doi.org/10.1029/2007JD009006.
46. Tachikawa, T., Kaku, M., Iwasaki, A. and Gesch, D. 2011. ASTER Global Digital Elevation Model Version 2 – summary of validation results. Technical report, METI & NASA, 26 pp.
47. Thompson, G., Field, P. R., Rasmussen, R. M. and Hall, W. D. 2008. Explicit forecasts of winter precipitation using an improved bulk microphysics scheme. Part II: implementation of a new snow parameterization. Mon. Weather Rev. 136(12), 5095–5115. DOI: https://doi.org/10.1175/2008MWR2387.1.
48. Thompson, G., Rasmussen, R. M. and Manning, K. 2004. Explicit forecasts of winter precipitation using an improved bulk microphysics scheme. Part I: description and sensitivity analysis. Mon. Weather Rev. 132(2), 519–542. DOI: https://doi.org/10.1175/1520-0493(2004)132<0519:EFOWPU>2.0.CO;2.
49. Torma, C., Giorgi, F. and Coppola, E. 2015. Added value of regional climate modeling over areas characterized by complex terrain – precipitation over the Alps. J. Geophys. Res. Atmos. 120(9), 3957–3972. DOI: https://doi.org/10.1002/2014JD022781.
50. Tramblay, Y., Ruelland, D., Somot, S., Bouaicha, R. and Servat, E. 2013. High-resolution Med-CORDEX regional climate model simulations for hydrological impact studies: a first evaluation of the ALADIN-Climate model in Morocco. Hydrol. Earth Syst. Sci.
17(10), 3721–3739. DOI: https://doi.org/10.5194/hess-17-3721-2013.
51. Von Storch, H., Langenberg, H. and Feser, F. 2000. A spectral nudging technique for dynamical downscaling purposes. Mon. Weather Rev. 128(10), 3664–3673. DOI: https://doi.org/10.1175/1520-0493(2000)128<3664:ASNTFD>2.0.CO;2.
52. Wang, C., Gao, S., Liang, L., Deng, D. and Gong, H. 2014. Multi-scale characteristics of moisture transport during a rainstorm process in North China. Atmos. Res. 145–146, 189–204. DOI: https://doi.org/10.1016/j.atmosres.2014.04.008.
53. Warner, T. T. 2011. Numerical Weather and Climate Prediction. Cambridge University Press, New York.
54. Weckwerth, T. M., Bennett, L. J., Jay Miller, L., Van Baelen, J., Di Girolamo, P. and co-authors. 2014. An observational and modeling study of the processes leading to deep, moist convection in complex terrain. Mon. Weather Rev. 142(8), 2687–2708. DOI: https://doi.org/10.1175/MWR-D-13-00216.1.
55. Wilson, C. B., Valdes, J. B. and Rodriguez-Iturbe, I. 1979. On the influence of the spatial distribution of rainfall on storm runoff. Water Resour. Res. 15(2), 321–328. DOI: https://doi.org/10.1029/WR015i002p00321.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 9, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8045081496238708, "perplexity": 3442.5298790046472}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00417.warc.gz"}
https://jonathanfrech.wordpress.com/2016/09/03/collatz-conjecture/
# Collatz Conjecture

The Collatz conjecture states that every positive integer $k>0$ will — if you iteratively set $k$ to $f(k)$ — result in $1$ (function shown beneath). The graph beneath shows the path length of numbers from $1$ to $10\,000$. In this range $6171$ is the number with the most steps, $261$.

$f(k)={\begin{cases}\frac{k}{2}&{\text{if }}k \mod 2 = 0\\3 \cdot k+1&{\text{if }}k \mod 2 = 1\end{cases}}$

# Python 2.7.7 Code

# Pygame 1.9.1 (for Python 2.7.7)
# Jonathan Frech, 2nd of September, 2016

# import
import pygame, os

# number of iterations of f needed to reach 1
def pathlen(k):
    l = 0
    while k != 1:
        k = [k/2, 3*k+1][k%2]  # halve if even, otherwise 3k+1
        l += 1
    return l

# create data (data[0] corresponds to k = 1)
data = []
maxx = 10**4
for _ in range(1, maxx+1):
    data.append(pathlen(_))
maxy = max(data)

# graph size, surface and scale factor
width, height = 1080, 720
graph = pygame.Surface([width, height])
scalex, scaley = float(width)/maxx, float(height)/maxy

# render data
for _ in range(0, len(data)):
    pygame.draw.circle(graph, [255, 0, 0], [int(_*scalex), height-int(data[_]*scaley)], 4)

# save graph
pygame.image.save(graph, os.getcwd() + "/collatz.png")

# get the number with the largest step count together with its step count
# (shift the index by one, since data[0] belongs to k = 1)
#print data.index(maxy)+1, "->", maxy
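Since many trajectories merge and share long tails, caching already-computed path lengths speeds the computation up considerably. A minimal Python 3 sketch of the same search with memoisation (not part of the original post; the recursive `pathlen` re-implements the loop above):

```python
# Memoised Collatz path lengths (Python 3 sketch).
from functools import lru_cache

@lru_cache(maxsize=None)
def pathlen(k):
    # number of Collatz steps from k down to 1
    if k == 1:
        return 0
    return 1 + pathlen(k // 2 if k % 2 == 0 else 3 * k + 1)

lengths = {k: pathlen(k) for k in range(1, 10**4 + 1)}
k_max = max(lengths, key=lengths.get)
print(k_max, "->", lengths[k_max])  # 6171 -> 261
```

The cache turns the shared suffix of each trajectory into a constant-time lookup, so every value is only ever computed once.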
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 9, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2454952746629715, "perplexity": 20967.06119696595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690016.68/warc/CC-MAIN-20170924115333-20170924135333-00141.warc.gz"}
http://jcepm.org/articles/xml/oWAg/
Research Article. Journal of Construction Engineering and Project Management. December 2019. 1–15. https://doi.org/10.6106/JCEPM.2019.9.4.001

## INTRODUCTION

Construction partnering, whether formal or informal, is an exercise in delivering shared project goals through building relationships of mutual trust [2]. The partnering concept became necessary in the construction industry as stakeholders looked for more collaborative ways of working [24]. In the past two decades, state Departments of Transportation (DOTs) and public sector transportation agencies turned to a nonbinding form of partnering to improve the adversarial atmosphere in which most public projects were delivered under low-bid design-bid-build (DBB) procurement [15]. With everyone “fed up” with the litigious nature of the industry, partnering represented an opportunity for owners, designers, contractors, subcontractors, and suppliers to maximize their individual abilities in a synergistic arena [24]. Partnering aligns each party’s business objectives by utilizing team-building tools and fostering an early understanding of the specific challenges of the project. After two decades of use, the results of partnering have generally been positive [1].

Beginning in the 1990s, state DOTs also realized the need to accelerate project delivery due to the increasing deterioration of the highway system and began experimenting with alternative project delivery methods (APDMs) [20]. The aim was to meet aggressive schedules and improve project performance by increasing the level of integration and collaboration between the owner, the designer, and the contractor [20]. APDMs are widely used by transportation agencies today. The fundamental principle of APDMs such as construction manager/general contractor (CMGC) is to improve project performance by bringing together all project parties early in the project and providing a collaborative environment for them.

Both partnering and the APDMs improve efficiency and project performance through enhanced communication, coordination and collaboration among stakeholders. With the advent of these APDMs, the salient question has been whether the collaborative nature of the APDMs has affected how partnering is being used by state DOTs. In order to strike a balance between the two, Ernzen et al. [6] argued that there is a need to adjust the fundamental structure of partnering to accommodate the change in the contract brought about by the APDMs. A report by the American Association of State Highway and Transportation Officials (AASHTO) stipulated that implementing partnering in the APDMs requires a shift in institutional business culture, which can create discomfort among those who must deliver a project using an APDM for the first time. Further, the report also revealed that a number of state DOTs, including North Dakota and Oregon, stopped using formal partnering after implementing it because they failed to make a compelling business case for the invested resources and time.
However, some of these state DOTs have institutionalized the principles of partnering as routine business practices [1]. As APDMs inherently increase project integration and collaboration, partnering can provide a forum in which this alignment of goals can be achieved. The integration of partnering in APDMs can also build teams within which good business relationships form the foundation from which crises can be averted or resolved, and the project can be delivered as planned [1]. It is, therefore, important for state DOTs to explore these benefits by developing effective strategies to successfully integrate partnering in the APDMs. Given that there are no empirical studies exploring the integration of partnering in transportation projects delivered by the APDMs, the aim of this study was to fill this gap by investigating the use of partnering on CMGC projects through a review of documents from state DOTs and the Federal Highway Administration (FHWA), and data from CMGC case studies. A CMGC partnering model was also developed that can be used by DOTs to effectively and seamlessly integrate partnering in CMGC. The significance of this study is in expanding the knowledge on partnering and CMGC integration and providing empirical evidence on successful integration using real CMGC projects.

## LITERATURE REVIEW

Partnering in the Transportation Industry

Partnering has been defined differently depending on the industry. However, in construction, the concept of partnering is described as a generic term for a management approach to align project goals [3]. In transportation, the definitions from the Arizona Department of Transportation (AzDOT) and Ohio Department of Transportation (OhDOT) can be synthesized as orchestrated collaborative teamwork by all stakeholders to establish an environment of mutual trust, open communication, cooperation and teamwork in achieving mutually agreed upon goals and objectives [4, 21]. A simple review of these and other definitions and descriptions of partnering reveals certain common essential characteristics: shared interests, mutual goals, commitment, teamwork, trust, problem solving, and a synergistic relationship.

Traditional partnering became firmly established in the United States in the 1990s and has been broadly used by many state and federal agencies with significant reported benefits [16]. A study by Gransberg et al. [14] on partnering, with data from more than 400 Texas Department of Transportation (TxDOT) projects, revealed that partnered projects outperformed non-partnered projects in virtually every category if they were awarded at a price above $5 million. Another study, conducted by Rogge et al. [22] for the Oregon Department of Transportation (OrDOT), showed that respondents feel partnering improves communication, trust, and teamwork. Additionally, AASHTO [1] reported that state DOTs are using partnering to solve problems collaboratively, increase work efficiency, implement innovative products, provide services that exceed customer expectations, and manage project risks collaboratively. Thus, the significance of partnering is no longer in doubt. However, in order to reap the benefits of partnering, DOTs and state agencies need to evaluate their fundamental business practices to adapt to partnering’s principles and assess whether their members have embraced the values associated with partnering [1]. The performance of partnered projects can be measured in terms of either tangible or intangible attributes [17].
According to AASHTO [1], the business case for partnering has mostly relied on quantitative tangible attributes such as cost, time, safety and quality. However, in their research, Kereri and Harper [18] identified eight intangible attributes commonly used by state DOTs in partnered projects. These include early involvement of key participants, joint decision making, jointly developed goals, advanced communication and information tools, pre-agreed conflict resolution, team building activities, and continuous workshopping [18].

Today, most state DOTs require the implementation of partnering on construction projects in excess of $10 million and make it optional for projects over $1 million in value [16]. A study conducted by Rogge et al. [22] reported that sixteen states use formal criteria for making decisions about whether partnering should be used on a specific project. Further, while partnering is optional on projects over $1 million at the request of the contractor, for projects over $25 million a mandatory “Training in Partnering Concepts” session is given to both the State and the contractor. According to Hannon and Zhang [16], the general chain of partnering events includes: (1) the facilitator conducts a one-day training event or workshop for all project team members; (2) a date and location is then set for the ‘formal’ partnering session or workshop and key stakeholders are invited, where a project ‘Charter’ is the result/deliverable of the first session; (3) a schedule of ‘follow-up’ periodic partnering meetings, typically three to four months apart and with a duration of one-half day, is created to assess the metrics of the project goals originally stated in the Charter; and (4) a ‘Close-Out’ partnering session is scheduled and conducted for the purpose of reflection and documenting ‘lessons learned’.

Construction Manager/General Contractor Delivery Method

The CMGC project delivery method is fast becoming popular for accelerating the delivery of highway projects [25]. CMGC is an integrated team approach consisting of the owner; the designer, who might be an in-house engineer; and the at-risk construction manager. The CMGC contract has two main phases: (1) preconstruction services and (2) construction [12]. Figure I illustrates the structure of the CMGC method with the two phases. The DOT hires the construction manager (CM) during the preconstruction phase and authorizes the CM to provide input during the project design. The CM generally assists with cost information, value engineering, risk management and constructability [13]. After design development is complete or reaches a substantial percentage (60% to 90%), the same CM becomes the general contractor (GC) and enters into a contract with the DOT to construct the project [11, 23].

##### FIGURE I. CMGC STRUCTURE

The FHWA Every Day Counts program is encouraging state DOTs to adopt CMGC as a tool to deliver badly needed rapid renewal projects [25]. This is partly because, apart from fast-tracking projects, the CMGC delivery method has several benefits, which include stakeholder integration and improvement of project performance. The CMGC method allows the DOTs to deliver projects that reduce costly change orders, decrease risk, optimize the construction schedule and minimize impact to the traveling public. The CMGC delivery method is being used for transportation projects with sensitive schedules and potential constructability challenges that require special qualifications and extraordinary contractor cooperation, such as those in busy urban areas [9].
Other projects that are a good fit for the CMGC method are those that have public involvement or include right-of-way or utility issues that could affect the overall schedule.

Partnering in Construction Manager/General Contractor Delivery Method

The CMGC delivery method by its design requires collaboration between the owner, the design engineer, and the CM, especially during the preconstruction phase [25]. This collaboration may form the basis for partnering. However, it is important to note that this collaboration may not extend to the construction phase, as the CM assumes the general contractor’s position and hence reverts to a relationship seen in a typical traditional delivery system [1]. The argument has been made that since the CMGC contractor will be working closely with the designer and the owner during the preconstruction phase, there is no need for formal partnering, as the nature of the contract is one that promotes healthy relationships and collaborative business practices [1]. While this may be true, those desirable outcomes do not happen automatically when the contracts are signed. To foster and continue partnering throughout the project, partnering must be consciously established and be part of the CMGC method.

The collaborative concept of CMGC includes early involvement of key participants, selection as a team, joint decision making, early planning, open communication and information sharing, pre-agreed dispute resolution methods, and team building activities. All of these form part of the formal partnering process, providing the opportunity to establish it during the preconstruction phase of the CMGC method. The selection of the CMGC contractor is typically through best-value selection, which removes the requirement to award to the lowest bidder [5]. This approach should set in motion the strategic relationships that will produce positive outcomes for both the DOT and the contractor. Although partnering can be a voluntary system of working cooperatively, some DOTs such as OhDOT, OrDOT and the Nevada Department of Transportation (NDOT) have embraced partnering as a way of doing business, making prospective contractors aware of the use of partnering on the project through the requests for qualifications or requests for proposals (RFQs/RFPs) [4, 22]. With this expectation, both the contractor and the DOT would commit themselves to the partnering requirements within CMGC.

According to AASHTO [1], partnering should be established during the preconstruction phase aimed at achieving the following objectives:

•A mutual understanding of the scope of the preconstruction services to be provided by the contractor.
•Establishing an agreed methodology for the contractor to furnish priced design alternatives as required by the owner and/or its designer during the design period.
•A design issue escalation ladder for resolving professional differences of opinion and conflicts between design preferences and constructability.
•Establishing the protocol for negotiating the final construction cost/GMP and, if applicable, the role of the independent cost estimator during those negotiations.
•Setting the ground rules for contingency ownership and management during both preconstruction and construction.

The quality of the relationships developed during preconstruction will determine the level of partnering intensity necessary during the construction phase.
However, to sustain the partnership spirit through the construction phase, it is recommended that a second partnering effort be made before the start of construction [1].

## RESEARCH METHODOLOGY

The aim of this paper was to investigate the use of partnering in CMGC project delivery in transportation. In order to achieve this aim, a two-step methodology was used. First, an intensive systematic literature review was conducted using published articles and RFPs/RFQs of CMGC transportation projects. Second, a case study analysis was conducted on three CMGC projects.

Review of Literature and Solicitation Documents

The first step was to identify journals, conferences, databases and websites that may contain relevant material for this review. The search was conducted on databases such as ScienceOpen, SCOPUS, Google Scholar, and EBSCO. Keywords such as “partnering”, “construction manager/general contractor”, “construction manager at risk” and “collaboration” were used to identify articles from the databases. The search was further limited to articles relevant to transportation. A total of 51 articles and CMGC reports were reviewed for this study. Since this research is focused on transportation, 32 CMGC RFQs/RFPs from multiple DOTs, including Ohio, Nevada, Texas, Colorado, California, Wisconsin, Arizona, Florida, Michigan, Minnesota, Oregon and Utah, were collected and analyzed. The literature review was very important in this study because it provided a basis for understanding the state of practice of partnering with CMGC among the DOTs.

Case Studies

The second step was to collect data from three projects used as case studies in order to understand how partnering is being implemented in CMGC projects. Data from these projects was gathered through a review of documents and interviews with project participants. The different parties involved in these projects, from the owner (DOT) to the contractor, subcontractors, and consultants, were contacted through emails and follow-up phone calls to ask about their willingness and availability to participate in the research. A total of 20 participants for case study 1, 18 for case study 2 and six for case study 3 responded to our emails and phone calls expressing willingness to participate in the study. However, only 15 for case study 1 and 13 for case study 2 could be reached for the interviews. Interviews were conducted over the phone, lasted approximately 45–60 minutes, and followed a structured questionnaire; participants responded to the questions based on their experiences on the project and observations of meetings or events relevant to the project under study. Although the focus was more on the intangible attributes of the projects, the questionnaire included questions that helped better understand the projects in terms of tangible attributes as well. The questions were thus divided into two main sections: (1) tangible attributes (general information, project information, cost factors, and time factors) and (2) intangible attributes. During the interviews, the participants were asked if they had more information regarding partnering that they would like to share. They were also asked to attach partnering documents for more insight on the subject. The tangible data collected enabled the description of the case studies and analysis of their quantitative performance. Table I shows the three projects used in this research as case studies.

TABLE I.
CASE STUDY PROJECTS

| Project | Agency | Project cost | Cost growth | Schedule | Time growth |
|---|---|---|---|---|---|
| Case 1: Veterans Memorial Tunnel | CDOT | $55 million | -5.56% | 24 months | 0.00% |
| Case 2: Pecos over I-70 | CDOT | $25.5 million | 7.02% | 12 months | -10.10% |
| Case 3: The Winona Bridge Rehabilitation | MnDOT | $145.9 million | 0.00% | 60 months | -12.5% |

The cost growth and time growth metrics were calculated using equation (1) and equation (2), respectively:

$$Cost\;growth=\frac{(Final\;contract\;amount)-(Original\;contract\;amount)}{(Original\;contract\;amount)}$$ Eq. (1)

$$Time\;growth=\frac{(Days\;charged)-(Total\;days\;allowed+Additional\;days\;granted)}{(Total\;days\;allowed+Additional\;days\;granted)}$$ Eq. (2)

Where: Days charged = actual contract duration; Total days allowed = original contract duration. As an illustration, the 7.02% cost growth of Case 2 corresponds to a final contract amount of roughly $27.3 million against the original $25.5 million.

Case Study 1 – Colorado DOT Veterans Memorial Tunnel

This westbound I-70 twin tunnels project widened the westbound bore of the tunnel in Idaho Springs to about 53 feet. This widening was to accommodate a third lane on the highway. The project used partnering tools to enhance collaborative working relationships. A hired external facilitator led the partnering program. An initial partnering workshop was conducted for the Colorado Department of Transportation (CDOT) personnel and the CMGC contractor before the start of the project. Partnering follow-up sessions were conducted every six months. In addition, three partnering progress meetings were conducted. By using the CMGC project delivery method, key project participants were selected as a team and were involved early in the project. There was joint decision-making, financial transparency among key participants, use of collaborative multi-party agreements, jointly developed goals, and intensified early planning. In addition, the team had pre-agreed dispute resolution methods, was co-located and conducted team building activities.

Case Study 2 – Colorado DOT Pecos over I-70

This CMGC project included replacing the existing poor bridge structure on Pecos Street over I-70 to improve traffic operations at the Pecos/I-70 interchanges. The project scope included replacing the Pecos structure, installing roundabout-type intersections, and a pedestrian bridge structure spanning I-70. Although the project did not require formal partnering, a two-day partnering workshop was conducted, at which a project charter was signed by CDOT and the contractor. The contractor was expected to adhere to all partnering requirements throughout the project. Partnering tools synonymous with CMGC were evident in this project. These include: early involvement of key participants, selection as a team, joint decision making, collaboration, intensified early planning, advanced communication and information sharing tools, pre-agreed dispute resolution methods, team building activities and co-location of the team.

Case Study 3 – The Winona Bridge Rehabilitation

This project was the Minnesota Department of Transportation (MnDOT)’s first CMGC project. The project consisted of the construction of a 450-foot main-span bridge over a commercial navigation channel together with rehabilitation of the existing bridge. It also includes a deck-level sidewalk, lit by LED accents, which provides pedestrians with access across the Mississippi River as well as a view of the river.
Through collaboration and partnering between the different parties involved in this project, there was a great reduction in the complexity of dealing with multiple parties within the project. Community participation, involvement and feedback were easily incorporated into the project through partnering. Partnering also allowed for earlier engagement of all the parties in the project, and through the CMGC engagement, risk was reduced.

## FINDINGS AND DISCUSSIONS

The objective of this paper was to investigate the use of partnering on CMGC projects with the aim of developing a partnering model that can be used by DOTs to effectively and seamlessly integrate partnering in CMGC. Through a review of documents from state DOTs and FHWA, and data from CMGC case studies, the following findings were noted and are discussed in the subsequent sub-sections.

Current State of CMGC and Partnering Practices

At the Federal level, CMGC is listed as one of the FHWA’s Every Day Counts programs that the agency is advocating as an accelerated project delivery method based on innovation, aimed at reducing project cost while enhancing safety and environmental protection [8]. The FHWA recognized the use of innovative contracting methods, or what became commonly known as alternative contracting methods, under Special Experimental Projects No. 14 (SEP-14) [7, 8]. Furthermore, under SEP-14, FHWA required that state DOTs submit projects that they intended to undertake using alternative contracting methods for approval. However, after the passage of the Moving Ahead for Progress in the 21st Century Act (MAP-21), SEP-14 is no longer required of DOTs that intend to use the CMGC project delivery method. In addition, the FHWA does not have any regulations that currently govern the use of the CMGC delivery method [9, 19].

At the state level, there are efforts by individual states to recognize and incorporate CMGC as a project delivery method. This has either been institutionalized through legislation or used as an experiment. Through the review of literature and the CMGC documents, it was found that only 14 states have enabling legislation to use CMGC in their transportation projects. However, more states use CMGC under SEP-14, where the FHWA allowed them to experiment with innovative contracting methods [8, 10]. In terms of partnering, there is limited information in state DOT documents that shows procedures for the usage of partnering with CMGC projects. The 32 CMGC RFPs/RFQs analyzed showed that only five required formal partnering. Therefore, it was important to focus on case studies where CMGC was used by DOTs in order to analyze the partnering efforts employed and ascertain the necessity of using partnering with CMGC projects. In doing so, it will be easier to make a strong business case for the worth of using partnering in CMGC projects.

The low number of DOTs that require partnering in CMGC projects largely suggests that most state DOTs are relying on the inherent nature of the CMGC contract to promote healthy relationships and collaborative business practices, as a result of the CMGC contractor working closely with the designer and the DOT during the preconstruction phase. However, according to AASHTO [1], the collaborative nature may not automatically produce the desired outcomes if partnering is not consciously established. This assertion points to the need for a structured means of managing and directing CMGC partnered projects.
Generally, this research has revealed that partnering in the CMGC project delivery method has two main phases: preconstruction and construction. These two main phases are essential in establishing partnering agreements between the different parties involved. In addition, the case studies revealed that in some instances there is personnel changeover between these two phases, as some personnel leave the project after preconstruction and new ones come in at the construction phase. This can hamper the progress and intensity of partnering during the construction phase. This is consistent with the findings of AASHTO. However, the findings of this research further revealed the dwindling fortunes of partnering efforts in CMGC partnered projects as the construction project progresses.

Table II shows the comparison of partnering attributes to CMGC attributes as a project delivery method for the three case studies. The analysis shows that there is an overlap between CMGC characteristics and partnering attributes. The difference is that CMGC is a project delivery method that involves procuring and engaging key parties early on in the project. This involves two separate contracts, where the first contract targets the construction manager’s input during the preconstruction phase while the second covers construction after completion of design and preparation of construction documents [10, 11]. Furthermore, in the CMGC delivery method, parties are selected based on qualifications, past experience or through best-value procedures [11]. By its very nature, then, the preconstruction phase may be more fragmented, and there is a need for the team to create and foster better working relationships at the very start of the project. However, as the project progresses into the construction phase, the CMGC characteristics that foster close working relationships become more important.

TABLE II. COMPARING CMGC AS A PDM AND PARTNERING STRATEGIES/ATTRIBUTES FOR EACH CASE

Strategy | Attributes | Case 1 | Case 2 | Case 3
Partnering:
Early Involvement of Key Participants x x
Joint Decision Making x x x
Jointly Developed Goals x x x
Advanced Communication and Information Tools x x
Pre-Agreed Conflict Resolution x x x
Team Building Activities x x x
External Team Building Expertise x x
Continuous Workshopping x x
CMGC as a PDM:
Two contracts (architect/engineer and contractor) x x x
CM selected based on qualifications and fees x x x
Some of the construction risks transferred to the GC x x x
Open book costing strategy x x x
Cost of the project is flexible x
Subcontractors are reassigned to the CM x x x
Risks can push the CM not to act as owner's agent
Contractor is involved early in the project x x x
“x” represents the application of such a strategy in a given project.

Preconstruction Partnering

During preconstruction, the owner can utilize either an in-house or an external design team. In either case, a partnering effort is required in order to create collaborative working relationships. From the case studies, it was revealed that, for public projects, DOT personnel usually are not bound under any contract to perform within the project, and in most cases they work on multiple projects at the same time. Therefore, if DOTs intend to utilize an in-house design team, it is important that they hold an internal meeting with the prospective personnel to be involved in the CMGC project prior to a preconstruction partnering meeting.
A review of DOT partnering documents, in addition to the case studies, revealed that a meeting between the DOT staff prior to the initial partnering is essential in setting the tone and the framework for resolving any conflicts between the CMGC project and the other projects that they may be handling. Both California and Utah, which have successfully delivered CMGC projects, have recommended that this meeting with internal staff and the design team is essential, whether in the form of formal or informal partnering. In general, whether with an in-house or an external design team, preconstruction partnering also needs to include the other project parties, including the CMGC contractor and the owner (DOT) or the owner’s representative. It is expected that in this initial partnering meeting the expectations of the project design are discussed, as well as the intended achievements, the role of the contractor during preconstruction and the approach that will be taken to interface with the CMGC’s preconstruction staff.

Construction Partnering

During the construction phase, partnering is essentially used to bring on board those parties that were not part of the preconstruction partnering, as well as to continue the partnering spirit. Typically, the parties involved in construction partnering include:

•Owner’s organization personnel, including resident and/or project engineers and/or inspectors, construction quality assurance personnel, safety personnel, contract administration, and operations personnel.
•The design team, including construction administration and quality oversight personnel.
•CMGC contractor personnel, including the construction project manager, superintendent, quality assurance, safety personnel, contract administration and logistics personnel, and key subcontractors.

The study revealed that partnering may become less important as the project progresses into the construction phase, especially where there is not much personnel changeover among key project parties. This is because there is an overlap between partnering attributes and CMGC characteristics, which renders partnering redundant and makes it very difficult to build a strong business case at this point. When transitioning from the preconstruction to the construction phase in a CMGC delivered project, there is a need to assess the personnel changeover. If there is a significant change in the personnel, then the project parties will have to revise the partnering charter to accommodate the new personnel and involve them for the continuity of the partnering efforts. The quality of relationships at this phase will depend on the quality of relationships formed during the preconstruction phase as well as the personnel changeover. The findings reveal that the number of personnel at this phase who did not participate in the preconstruction partnering will dictate the intensity of partnering. It is, therefore, essential that continuous partnering efforts such as partnering sessions, workshops, and/or team building activities be held to foster team relationships with the new personnel.

Proposed CMGC Partnering Framework

Partnering provides the momentum for CMGC to deliver on the advantages it presents. An effective partnering strategy will help CMGC delivered projects achieve the innovation needed to meet tight project timelines and enhance project delivery performance. Partnering should, therefore, be used to complement CMGC project delivery.
Based on the information gathered from the literature review, DOT construction documents, and the case studies, the authors propose a framework model for CMGC partnering, as shown in Figure II. By streamlining partnering practices with those of CMGC, this model effectively overcomes the current shortfall in which overlap renders partnering redundant and makes it difficult to build a strong business case for it. Furthermore, by separating the preconstruction and construction phases in the executive partnering agreement, this model will help project team members and partnering facilitators avoid duplication of efforts in a bid to foster relationships within the team. Other potential benefits of this model are further explained through the different stages presented by the model.

##### FIGURE II. PROPOSED CMGC PARTNERING MODEL

Partnering Formation

Through the operational terms in the CMGC delivery method, there is early involvement of key project parties, which is also an attribute of the partnering strategy. Therefore, there is a need for a partnering agreement between all the parties that are involved in preconstruction. This process of partnering formation will take a form similar to that of general partnering, and parties will sign a partnering charter. At this point, the DOT will decide how prepared it is to handle the partnering process, what expertise is available internally, and whether it will involve a third party to facilitate the process. Also, DOT guidelines on which projects can use partnering are reviewed, and then a detailed plan on how partnering will be implemented is developed.

Executing Partnering Agreement

This stage starts when the contract has been awarded and key project parties have been selected. In executing the partnering agreement, the process in the CMGC method differs greatly from the general model of partnering presented earlier in the paper. Here, partnering is divided into two sections: the preconstruction and construction phases. This is to take into consideration the personnel changeover from preconstruction to construction. Essentially, this stage starts with the initial partnering workshop/meeting; follow-up meetings are conducted thereafter. At this stage, progress reviews are conducted at every meeting, and corrective actions for any project occurrences are taken. Team building activities are also undertaken here in order to foster team working relationships.

Measurement and Performance

The proposed framework provides a platform on which the benefits of partnering to the project can be quantified. This process occurs as long as the project is ongoing and the partnering agreement is in force. Traditionally, project performance has been measured in terms of quantitative attributes, referred to in the model as tangibles, which include time, cost, and quality. More recently, more qualitative measures of project performance, also referred to as intangible attributes, have emerged [17]. Periodic cost/schedule reviews are conducted together with other performance metrics such as quality and safety. Other intangible benefits of partnering include trust, communication, and information sharing, among others highlighted earlier in the paper.

Feedback and Continuous Improvement

Once the partnering process has been completed, the executing agency (DOT) needs to document the feedback from the project parties, challenges, and lessons learned.
This is important for institutional knowledge and for showing areas of improvement for future projects. It can be used by the agency in developing its documents or building up a database for future projects, as well as by upper management in tracking the overall performance of the partnering program.

## CONCLUSION

The objective of this paper was to investigate the use of partnering with the CMGC project delivery method. The research involved an extensive literature review, analysis of state DOT CMGC solicitation documents, and review of three CMGC case studies. The study found that there is limited information in state DOT documents showing procedures for the use of partnering on CMGC projects. In addition, the document review revealed that very few DOTs require partnering in CMGC projects, suggesting that state DOTs are largely relying on the inherent nature of the CMGC contract to promote healthy relationships and collaborative business practices.

Based on the findings of this paper, the authors conclude that partnering is an important component of CMGC projects at the preconstruction stage. The benefit of using partnering in CMGC during the preconstruction phase is that collaboration is established by bringing together key project parties early in the project, and not just the CMGC contractor and the DOT representatives. At this point, the parties reach a consensus on partnering formation together with the process of establishing project cost. The DOT has the responsibility of ensuring that key project parties continuously cooperate with one another by promoting partnering activities such as partnering sessions, partnering workshops, and team building activities, using either an internal or an external partnering facilitator.

The study also revealed that partnering may become less important as the project progresses into the construction phase, especially where there is not much personnel changeover among key project parties. This is because the overlap between partnering attributes and CMGC characteristics renders partnering redundant, making it very difficult to build a strong business case for it at this point. When transitioning from the preconstruction to the construction phase in a CMGC delivered project, there is a need to assess the personnel changeover. If there is a significant change in personnel, the project parties will have to revise the partnering charter to accommodate the new personnel and involve them for the continuity of the partnering efforts.

Further, this paper contributes to the body of knowledge by specifically examining the overlap between partnering and CMGC and by developing a framework that practitioners can use to direct partnering in CMGC projects. Through the extensive analysis of state DOT construction documents, the literature, and three case study projects, this paper has presented a proposed partnering framework for CMGC partnered projects. However, this paper is limited to three case study projects from two DOTs. It is, therefore, recommended that further investigation be conducted using more case studies across the United States. The data collected and analyzed could then give more insight into the use of partnering in CMGC projects and possibly extend the proposed framework.

## References

1. AASHTO, AASHTO Partnering Handbook, 2nd edition. Washington, DC: American Association of State Highway and Transportation Officials (AASHTO), 2018.
2. Abudayyeh, O., "Partnering: A Team Building Approach to Quality Construction Management," Journal of Management in Engineering, vol. 10, no. 6, pp. 50-62, 1994. doi:10.1061/(ASCE)9742-597X(1994)10:6(26)
3. Bayliss, R., Cheung, S.-O., Suen, H., and Wong, S.-P., "Effective Partnering Tools in Construction: A Case Study on MTRC TKE Contract 604 in Hong Kong," International Journal of Project Management, vol. 22, pp. 253-263, 2004. doi:10.1016/S0263-7863(03)00069-3
4. Brown, D. K., ODOT Partnering Handbook. Grand Rapids, OH: Ohio Department of Transportation, 2000. http://www.dot.state.oh.us/Divisions/ConstructionMgt/Admin/Partnering/handbook.pdf (accessed May 4, 2019).
5. Ellicott, M. and Grajek, K., "Best Value Contracting Criteria," Cost Engineering, AACE, pp. 31-34, 1997.
6. Ernzen, J., Murdough, G., and Drecksel, D., "Partnering on a design-build project: Making the three-way love affair work," Transportation Research Record: Journal of the Transportation Research Board, no. 1712, pp. 202-211, 2000. doi:10.3141/1712-24
7.
8. FHWA, Design-Build Effectiveness Study. USDOT, Federal Highway Administration, January 2006. https://www.fhwa.dot.gov/reports/designbuild/designbuild.htm (accessed July 21, 2018).
9.
10. FHWA, Construction Manager/General Contractor (CM/GC). USDOT, Federal Highway Administration, June 6, 2017. https://www.fhwa.dot.gov/construction/contracts/acm/cmgc.cfm (accessed July 20, 2018).
11. FHWA, Construction Program Guide: Construction Manager/General Contractor Project Delivery. U.S. Department of Transportation, Federal Highway Administration. https://www.fhwa.dot.gov/construction/cqit/cm.cfm (accessed June 27, 2017).
12. Gransberg, D. and Shane, J., Construction Manager-at-Risk Project Delivery for Highway Programs. Washington, DC: National Cooperative Highway Research Program, 2010. doi:10.17226/14350
13. Gransberg, D. and Shane, J., "Defining Best Value for Construction Manager/General Contractor Projects: The CMGC Learning Curve," Journal of Management in Engineering, vol. 31, no. 4, 2015. doi:10.1061/(ASCE)ME.1943-5479.0000275
14. Gransberg, D., Dillion, W., Reynolds, L., and Boyd, J., "Quantitative Analysis of Partnered Project Performance," Journal of Construction Engineering and Management, vol. 125, no. 3, pp. 161-166, 1999. doi:10.1061/(ASCE)0733-9364(1999)125:3(161)
15. Hancher, D. and Haggard, R., In Search of Partnering Excellence. Austin, TX: Construction Industry Institute, 1991.
16. Hannon, J. and Zhang, F., "A New Methodology for Partnering Transportation Projects," in Advances in Human Factors, Sustainable Urban Planning and Infrastructure (AHFE 2017), Advances in Intelligent Systems and Computing. Cham: Springer, pp. 318-325, 2017. doi:10.1007/978-3-319-60450-3_31
17. Kereri, J. O., "A comparison of project party relationships in design-bid-build and design-build project delivery methods," Journal of Architecture, Engineering and Construction, vol. 6, no. 4, pp. 26-32, 2017. doi:10.7492/IJAEC.2017.021
18. Kereri, J. O. and Harper, C. M., "Intangible attributes of partnering: A case study analysis," in Annual Meeting of the Transportation Research Board (TRB). Washington, DC: Transportation Research Board, 2019.
19. Minnesota Department of Transportation, Hwy 43 Bridge. www.dot.state.mn.us/winonabridge/work-package5-costs.html (accessed July 19, 2019).
20. Murdough, G., Drecksel, D., Sharp, G., and Ernzen, J., "Performance in the Project Trailer: A Partnering Evaluation Tool," Transportation Research Record: Journal of the Transportation Research Board, no. 1994, pp. 26-34, 2007. doi:10.3141/1994-04
21. Opie, B., Webb, L., and Dudzik, D., Partnering Process Survey Report 2016. Phoenix: Arizona Department of Transportation, 2016. https://www.azdot.gov/docs/default-source/partnering/partnering-process-survey-report-2016.pdf?sfvrsn=2 (accessed July 28, 2018).
22. Rogge, D., Griffith, D., and Hutchins, W., Improving the Effectiveness of Partnering. Salem, OR: Oregon Department of Transportation, 2002.
23. Schierholz, J., Gransberg, D., and McMinimee, J., "Benefits and Challenges of Implementing Construction Manager/General Contractor Project Delivery: The View from the Field," in Transportation Research Board 91st Annual Meeting. Washington, DC: The National Academies of Sciences, Engineering, and Medicine, pp. 50-62, 2012.
24. Warne, T., Partnering for Success. New York, NY: American Society of Civil Engineers, 1994.
25. West, N., Gransberg, D., and McMinimee, J., "Effective Tools for Projects Delivered by Construction Manager-General Contractor Method," Transportation Research Record: Journal of the Transportation Research Board, no. 2268, 2015. doi:10.3141/2268-05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.370357871055603, "perplexity": 5163.914009480742}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146647.82/warc/CC-MAIN-20200227033058-20200227063058-00424.warc.gz"}
https://www.ideals.illinois.edu/handle/2142/97050
## Files in this item

- 989065.pptx (7 MB): Presentation (Microsoft PowerPoint 2007)
- 2695.pdf (18 kB): Abstract (PDF)

## Description

Title: CONFORMER-SPECIFIC IR SPECTROSCOPY OF LASER-DESORBED SULFONAMIDE DRUGS: TAUTOMERIC AND CONFORMATIONAL PREFERENCES OF SULFANILAMIDE AND ITS DERIVATIVES
Author(s): Uhlemann, Thomas
Contributor(s): Müller, Christian W.; Seidel, Sebastian
Subject(s): Conformers, isomers, chirality, stereochemistry
Abstract: Molecules containing the sulfonamide group R$^{1}$-SO$_{2}$-NHR$^{2}$ have a longstanding history as antimicrobial agents. Even though nowadays they are not commonly used in treating humans anymore, they continue to be studied as effective inhibitors of metalloenzyme carbonic anhydrases. These enzymes are important targets for a variety of diseases, such as, for instance, breast cancer, glaucoma, and obesity. Here we present the results of our laser desorption single-conformation UV and IR study of sulfanilamide (NH$_{2}$-Ph-SO$_{2}$-NHR, R=H), a variety of singly substituted derivatives, and their monohydrated complexes. Depending on the substituent, the sulfonamide group can adopt either an amino or an imino tautomeric form. The form prevalent in the crystal is not necessarily also the tautomeric form we identified in the molecular beam after laser desorbing the sample. Furthermore, we explored the effect of complexation with a single water molecule on the tautomeric and conformational preferences of the sulfonamides. Our conformer-specific IR spectra in the NH and OH stretch region (3200--3750 cm$^{-1}$) suggest that the intra- and intermolecular interactions governing the structures of the monomers and water complexes are surprisingly diverse. We have undertaken both Quantum Theory of Atoms in Molecules (QTAIM) and Interacting Quantum Atoms (IQA) analyses of calculated electron densities to quantitatively characterize the nature and strengths of the intra- and intermolecular interactions prevalent in the monomer and water complex structures.
Issue Date: 6/21/2017
Publisher: International Symposium on Molecular Spectroscopy
Citation Info: APS
Genre: CONFERENCE PAPER/PRESENTATION
Type: Text
Language: English
URI: http://hdl.handle.net/2142/97050
DOI: 10.15278/isms.2017.WC10
Date Available in IDEALS: 2017-07-27; 2018-01-29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6140156984329224, "perplexity": 11863.788513892607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867046.34/warc/CC-MAIN-20180525063346-20180525083346-00559.warc.gz"}
http://mitpdev.mit.edu/journal/10.1162/neco.1993.5.3.392
## Neural Computation

May 1993, Vol. 5, No. 3, Pages 392-401 (doi: 10.1162/neco.1993.5.3.392)
© 1993 Massachusetts Institute of Technology

Arbitrary Elastic Topologies and Ocular Dominance

Abstract

The elastic net, which has been used to produce accounts of the formation of topology-preserving maps and ocular dominance (OD) stripes, embodies a nearest-neighbor topology. A Hebbian account of OD is not so restricted, and indeed makes the prediction that the width of the stripes depends on the nature of the (more general) neighborhood relations. Elastic and Hebbian accounts have recently been unified, raising a question about their different determiners of stripe widths. This paper considers this issue, and demonstrates theoretically that it is possible to use more general topologies in the elastic net, including those effectively adopted in the Hebbian model.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8015604615211487, "perplexity": 2391.554207786738}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150308.48/warc/CC-MAIN-20210724191957-20210724221957-00666.warc.gz"}
http://bittooth.blogspot.com/2011/06/florida-combined-temperatures.html
## Sunday, June 26, 2011

### Florida combined temperatures

There are 22 USHCN stations in Florida, Apalachicola to Titusville, and 5 GISS stations on the list. The latter are in Miami, West Palm Beach, Orlando, Jacksonville, and Tampa. Of these, West Palm Beach and Orlando only have data since 1948.

Location of the USHCN network stations in Florida

As a matter of interest, I was in Jacksonville about a month ago, talking with someone who used to have an orange orchard but who had seen it die with the falling temperatures in winter, and the gradual movement south of the practical temperature for growing citrus crops. That is borne out by the fall in temperatures in Jacksonville. (Incidentally, there have been three stations in Jacksonville; the data are from the one remaining.)

The decline in temperatures in Jacksonville, FL since 1940 (GISS)

Going further down the state to Orlando, the temperature could be read as declining in recent years (remember that it is the winter temperatures that kill the oranges).

Average annual temperatures for Orlando, FL (GISS). Note the truncated years.

Yet when one gets down to the toe of the state, there is an increase in temperature which has been reasonably consistent, over perhaps as long as the last century.

Average annual temperature for Miami, FL (GISS)

When the temperature for the state as a whole is examined, the GISS average would suggest that the temperature rose at the rate of 1.6 deg F/century. The USHCN data would lower the rate to 0.89 deg F per century. The difference between the two sets of data has been growing over that interval.

Difference between the GISS average and that of the USHCN network (homogenized data) over the last 115 years

When one uses the Time of Observation corrected (TOBS) raw data, the temperature rise is about 1 degree over the measured interval.

Getting the population of the different communities ran into a little difficulty at Federal Point, and I had to revert to Google Earth to find where the station was. It turns out to be near St Augustine, on Pine Island, with the nearest community being Hastings, which has a population of 756.

Location of the USHCN station at Federal Point (Google Earth)

And then there is Perrine, which has an East Perrine, a West Perrine, and a Kendall-Perrine. Again relying on Google Earth shows that the station is right next to the Miami Equestrian Center, just north of SW 200th St, which would, I suppose, make it West Perrine, though in fact the station is closer to Richmond Heights (which has a population of 9,210). It is all part of southern Miami.

Location of the USHCN station at Perrine, FL

Florida is 500 miles long and 160 miles wide, stretching from 79.8 deg W to 87.62 deg W, and from 24.5 deg N to 31 deg N. The latitude of the center of the state is 28.13 deg N; the average latitude of the GISS stations is 27.86 deg N, and of the USHCN network 28.3 deg N. The elevation of the state ranges from sea level to 105 m, with a mean of 30 m. The average for the GISS stations is 12 m, and for the USHCN stations 17 m.

Looking at the effect of these factors on the measured temperatures, that of latitude is the most obvious. Incidentally, the regression coefficient drops from 0.94 with the TOBS data to 0.89 when the homogenized data are used. For longitude the correlation is, for once, better with that parameter than with elevation.
As for elevation:

Average temperature as a function of elevation for the USHCN stations in Florida

The regression is about the same for both TOBS and homogenized data; the correlation may be weakened by the number of places that are close to the shore. (One reason to start thinking about multiple regression plots.)

In terms of population, the average USHCN station is surrounded by 98,624 folk, while that of the GISS stations is 385,182, which, if the correlation were with 0.6 x log(pop), would explain about 0.4 deg of the 1.66 deg average difference between the GISS and USHCN stations. (The difference in longitude would provide another 0.5 deg.) However, for the state there is no good correlation between population and temperature. (Consider the growth of Jacksonville even as the temperature has fallen.)

And finally, the difference between the raw data and that homogenized in the USHCN series: rather an odd shape for this state.
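For readers who want to try the multiple-regression idea mentioned above, here is a minimal sketch in Python. The five station rows are made-up values for illustration only (the post's actual station table is not reproduced here); only the form of the fit, temperature against latitude, elevation, and a 0.6 x log10(population) urban term, follows the discussion.

```python
import numpy as np

# Hypothetical station rows: latitude (deg N), elevation (m),
# population, and mean annual temperature (deg F). Illustrative only.
lat  = np.array([30.7, 29.2, 28.1, 26.6, 25.8])
elev = np.array([20.0, 15.0, 30.0,  5.0,  3.0])
pop  = np.array([8.0e5, 5.0e4, 2.4e5, 9.0e4, 4.0e5])
temp = np.array([68.0, 70.5, 72.0, 74.5, 76.0])

# Design matrix: intercept, latitude, elevation, 0.6*log10(pop)
X = np.column_stack([np.ones_like(lat), lat, elev, 0.6 * np.log10(pop)])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)

for name, c in zip(["intercept", "latitude", "elevation", "0.6*log10(pop)"], coef):
    print(f"{name:>15s}: {c:+.3f}")
```

With real station data, the sign and size of the latitude coefficient should dominate, as the single-factor plots above suggest.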
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8164548277854919, "perplexity": 2229.1037729622744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701153213/warc/CC-MAIN-20130516104553-00076-ip-10-60-113-184.ec2.internal.warc.gz"}
https://mdatools.com/docs/pca-model-complexity.html
## Model complexity

The complexity of a PCA model is first of all associated with selecting the proper (optimal) number of components. The optimal number should explain the systematic variation of the data points and leave the random variation uncaptured. Traditionally, the number of components is selected by investigating eigenvalues or by looking at a plot of residual or explained variance. However, this is quite a complicated issue, and the result of the selection depends very much on the quality of the data and the purpose the PCA model is built for. More details can be found in this paper.

In mdatools there are several additional instruments, both to select the proper number of components and to see if, for example, new data (a test set or a new set of measurements) are well fitted by the model.

The first tool is the use of degrees of freedom for the orthogonal and score distances, which we mentioned earlier. A simple plot, where the number of degrees of freedom is plotted against the number of components in the PCA model, can often reveal the presence of overfitting very clearly: the DoF value for the orthogonal distance, $$N_q$$, jumps up significantly. The code and its outcome below show how to make such a plot separately for each distance and for both. The first plot in the figure is the conventional plot with the variance explained by each component. The model is made for Simdata, where it is known that the optimal number of components is two.

```r
data(simdata)
Xc = simdata$spectra.c   # calibration spectra
Xt = simdata$spectra.t   # test spectra

m = pca(Xc, 6)           # PCA model with 6 components

par(mfrow = c(2, 2))
plotVariance(m, type = "h", show.labels = TRUE)  # explained variance
plotQDoF(m)      # DoF for orthogonal distance, Nq
plotT2DoF(m)     # DoF for score distance, Nh
plotDistDoF(m)   # both DoF plots together
```

As you can see from the plots, the second component explains less than 1% of the variance and could be considered non-significant. However, the plot for $$N_q$$ shows a clear break at $$A = 3$$, indicating that both the first and second PCs are important. The $$N_h$$ values in this case do not provide any useful information.

In case outliers are present, it can be useful to investigate the plot for $$N_q$$ using both classical and robust estimation methods, as shown in the example below.

```r
m = pca(Xc, 6)
par(mfrow = c(1, 2))
plotQDoF(m, main = "DoF plot (classic)")
m = setDistanceLimits(m, lim.type = "ddrobust")   # switch to robust limits
plotQDoF(m, main = "DoF plot (robust)")
```

In this case both plots demonstrate quite similar behavior; however, if they look different, it can be an indicator of the presence of outliers. The DoF plots work only if the data driven method is used for computing the critical limits.

Another way to see how many components are optimal in the model, in case you have a test set or just a new set of measurements you want to use the model with, is to employ the Extreme plot. The plot shows the number of extreme values for different $$\alpha$$. Imagine that you make several distance plots with different $$\alpha$$ values and count how many objects the model finds as extremes. On the other hand, you know that the expected value is $$\alpha I$$. If you plot the number of extreme values vs. the expected number, you get the Extreme plot. If the model captures the systematic variation for both the calibration and the test set, the points on this plot will lie within a confidence ellipse shown using light blue color. However, if the model is overfitted, the points get outside it. Below you can see two Extreme plots made for the same data as in the previous example. The left plot shows results for the calibration set with the number of components in the PCA model equal to 2 and 3. The right plot shows the results for the test set.
```r
Xc = simdata$spectra.c
Xt = simdata$spectra.t

m = pca(Xc, 6, x.test = Xt)   # model with a test set attached

par(mfrow = c(1, 2))
plotExtreme(m, comp = 2:3, main = "Extreme plot (cal)")
plotExtreme(m, comp = 2:3, res = m$res$test, main = "Extreme plot (test)")
```

As you can see, in the case of the calibration set the points lie within the confidence ellipse for both $$A = 2$$ and $$A = 3$$. However, for the test set the picture is quite different. In the case of $$A = 2$$ most of the points are inside the interval, but for $$A = 3$$ all of them are clearly outside. We hope these new tools will make the use of PCA more efficient.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6323820352554321, "perplexity": 676.3059825331826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107894203.73/warc/CC-MAIN-20201027140911-20201027170911-00575.warc.gz"}
https://tex.stackexchange.com/questions/219912/build-a-box-that-starts-after-next-line-break
# Build a box that starts after next line break

I have reason to insert notes in-text in boxes. I defined the following command for doing so:

```latex
\newcommand{\Lnote}[1]{%
  \textsuperscript{\footnotesize{\thelnote}}
  \begin{center}
    \fbox{\parbox{0.7\textwidth}{\textsuperscript{\footnotesize{\thelnote}}#1}}
  \end{center}
  \stepcounter{lnote}
}
```

This does most of what I want: the notes are numbered by the lnote counter, etc. However, I would like it if the box containing the note didn't break the text, but rather occurred at the next line break. That is, if I wrote this:

```latex
Some really long thing that takes up more than one
line.\Lnote{Something that should be noted.} You know,
something that keeps on going on and on.
```

I would like it to not go:

Some really long thing that takes up more than one line. [Boxed in ``Something that should be noted.''] You know, something that keeps on going on and on.

But rather:

Some really long thing that takes up more than one line. You know, something that keeps on [Boxed in ``Something that should be noted.''] going on and on.

(Note: I've omitted the numbering in the examples of what I don't want and what I do want. But I do want numbering, just like it is in the command as it already is. The important part is that I always want the boxed note to come at the next line break, not right where the note is noted, though I do want the lnote number to occur at the spot where the \Lnote command is invoked.)

• You can use \vadjust to insert material after the current line. – Ulrike Fischer Dec 29 '14 at 21:14

Use the \vadjust feature, plus some more low-level constructions.

```latex
\documentclass{article}
\usepackage{lipsum}

\makeatletter
\newcommand\Lnote[1]{%
  \@bsphack
  \vadjust{% ships the boxed note out after the current line
    \nopagebreak
    \smallskip
    \moveright1cm\hbox{%
      \fbox{\parbox{\dimexpr\textwidth-2cm}{#1}}%
    }%
  }%
  \@esphack
}
\makeatother

\begin{document}
Some really long thing that takes up more than one line.
\Lnote{Boxed in ``Something that should be noted.'' \lipsum[3]}
You know, something that keeps on going on and on.
\end{document}
```

I use the \tabto* macro with the \TabPrevPos of the tabto package to get to the middle of the line and back again. In the midst of that, I use a bottom-center lap \bclap of the stackengine package to make the box. In this way, the subsequent text may continue on the line prior to the box, using the normal flow of LaTeX line breaking. Here I implement it as \Lnote{}.

EDITED to mimic the appearance laid out by the OP (superscript counter, \parbox of .7\textwidth)

```latex
\documentclass{article}
\usepackage{tabto,stackengine,lipsum}
\newcounter{lnote}
\newcommand\Lnote[1]{%
  \stepcounter{lnote}%
  \tabto*{.5\textwidth}%
  \textsuperscript{\footnotesize{\thelnote}}%
  % bottom-center lap of the framed note, per the description above
  \bclap{\fbox{\parbox[t]{0.7\textwidth}{#1}}}%
  \tabto*{\TabPrevPos}%
}

\begin{document}
Some really long thing that takes up more than one line.
\Lnote{Boxed in ``Something that should be noted.'' \lipsum[4]}
You know, something that keeps on going on and on.
\Lnote{Next one}
\lipsum[1]
\end{document}
```

You can do that very simply with the generic package insbox, and more precisely with its \InsertBoxC command. Everything is then automatically centred, so you don't have to use a center environment in the definition of \Lnote:

```latex
\documentclass[11pt]{article}
\usepackage[utf8]{inputenc}
\input{insbox}
\newcounter{lnote}
\newcommand{\Lnote}[1]{%
  \stepcounter{lnote}\vspace*{-\fboxsep}
  \fbox{\parbox{0.7\textwidth}{\textsuperscript{\footnotesize{\thelnote}}#1}}
}

\begin{document}
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec varius dapibus
metus eget ultrices.
Nulla sagittis mauris rutrum, blandit augue eget, laoreet augue. Phasellus enim
odio, sagittis in mi sed, fringilla mollis odio. Phasellus quis purus ultricies,
tempor purus at, tempus quam. Donec ultricies, ligula ac pretium porttitor, nibh nunc
%
\InsertBoxC{\Lnote{Boxed in ``Something that should be noted.''}}%
Integer eros nibh, cursus at est sed, volutpat tristique justo. Donec ornare
facilisis lorem, id feugiat elit pellentesque at. Nulla odio mauris, luctus sed
faucibus id, dignissim dictum velit. Morbi vehicula velit at massa tristique rhoncus.
\end{document}
```
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8119146823883057, "perplexity": 4404.91360046304}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038056325.1/warc/CC-MAIN-20210416100222-20210416130222-00007.warc.gz"}
https://www.physicsforums.com/threads/difference-equation.93693/
Difference equation

1. Oct 12, 2005

### Benny

Hi, I've been working on a difference equation and I just can't get the answer. Can someone check my working?

$$w_{n + 1} = 2w_n + 1$$

w_1 = 2w_0 + 1
w_2 = 2w_1 + 1 = 2(2w_0 + 1) + 1 = 2^2w_0 + 1 + 2^1

$$\Rightarrow w_n = 2^n w_0 + \sum\limits_{i = 0}^{n - 1} {2^i } = 2^n w_0 + \sum\limits_{i = 0}^n {2^i } - 2^n = 2^n w_0 + \frac{{1 - 2^{n + 1} }}{{1 - 2}} - 2^n$$

$$w_n = 2^n w_0 ' + 2^{n + 1} - 1 - 2^n = 2^n \left( {w_0 ' - 1} \right) + 2^{n + 1} - 1$$

...I have written w_0 with a dash so as to enable me to get a 'nicer' looking answer. It is a little ambiguous, but hopefully people understand what I've done. I've simply taken 2^n as a common factor of two of the terms so that I get 2^n multiplied by something. In the next line I replace that 'thing' by w_0.

$$w_n = 2^n w_0 + 2^{n + 1} - 1$$

I have used a primed w_0 so that I could get an answer which resembles the book's. The book's answer is the same as mine except that where I have a negative one, it has a negative two. I don't know where I'm going wrong. Can someone help me out?

Last edited: Oct 12, 2005

2. Oct 12, 2005

### arildno

Your answer is incorrect, since your formula predicts

$$w_{0}=2^{0}w_{0}+2-1=w_{0}+1$$

Similarly

$$w_{1}=2w_{0}+2^{2}-1=2w_{0}+3$$

You have correctly found:

$$w_{n}=2^{n}w_{0}+2^{n+1}-1-2^{n}$$

Rewrite this as follows:

$$2^{n}w_{0}+2^{n+1}-1-2^{n}=w_{0}2^{n}+2^{n}(2-1)-1=w_{0}2^{n}+2^{n}-1=2^{n}(w_{0}+1)-1$$

3. Oct 12, 2005

### Benny

Thanks for the help, but I still don't understand how the book got

$$w_n = 2^{n + 1} - 2 + 2^n v_0$$

(I've typed the answer exactly as it is given, with the v_0 and not the w_0.) Is my corrected answer (the one you included in your reply) somehow equivalent to the book's answer? Or is it possible to get 'different' general solutions depending on the solution procedure?

Last edited: Oct 12, 2005

4. Oct 12, 2005

### arildno

Your difference equation says that

$$w_{1}=2w_{0}+1$$

but their formula says:

$$w_{1}=2^{2}-2+2w_{0}=2+2w_{0}$$

Last edited: Oct 12, 2005

5. Oct 12, 2005

### Benny

Hmm... I know the answers in books are rarely wrong, and since it is so rare for an error to be in there, I just assumed that their answer had to be correct. Thanks for clearing that up.
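A quick way to confirm arildno's closed form, $$w_n = 2^n(w_0 + 1) - 1$$, is to substitute it back into the recurrence:

$$w_{n+1} = 2^{n+1}(w_0 + 1) - 1 = 2\left[2^n(w_0 + 1) - 1\right] + 1 = 2w_n + 1$$

and to check the starting value: at n = 0 it gives $$w_0 = 2^0(w_0 + 1) - 1 = w_0$$, as required. The book's formula fails both checks, which is exactly how arildno exposed the error.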
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9210748076438904, "perplexity": 706.8176920272372}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105455.37/warc/CC-MAIN-20170819143637-20170819163637-00347.warc.gz"}
https://www.math.sissa.it/publications?s=year&amp%3Bamp%3Bo=asc&amp%3Bf%5Bauthor%5D=328&o=desc&f%5Bauthor%5D=22
Publications

66 results. Filters: Author is Andrea Malchiodi

2016

- Existence and non-existence results for the SU(3) singular Toda system on compact surfaces. Journal of Functional Analysis 2016;270:3750-3807. Available from: http://www.sciencedirect.com/science/article/pii/S0022123615004942
- Symmetry properties of some solutions to some semilinear elliptic equations. Annali della Scuola Normale Superiore di Pisa, Classe di Scienze 2016;16:1209-1234.

2013

- An improved geometric inequality via vanishing moments, with applications to singular Liouville equations. Communications in Mathematical Physics 322, no. 2 (2013): 415-452. Available from: http://hdl.handle.net/1963/6561
- A variational analysis of the Toda system on compact surfaces. Communications on Pure and Applied Mathematics, Volume 66, Issue 3, March 2013, Pages 332-371. Available from: http://hdl.handle.net/1963/6558

2012

- A Codazzi-like equation and the singular set for C1 smooth surfaces in the Heisenberg group. Journal für die Reine und Angewandte Mathematik, Issue 671, October 2012, Pages 131-198. Available from: http://hdl.handle.net/1963/6556
- Non-uniqueness results for critical metrics of regularized determinants in four dimensions. Communications in Mathematical Physics, Volume 315, Issue 1, September 2012, Pages 1-37. Available from: http://hdl.handle.net/1963/6559
- Weighted barycentric sets and singular Liouville equations on compact surfaces. Journal of Functional Analysis 262 (2012) 409-450. Available from: http://hdl.handle.net/1963/5218

2011

- Axial symmetry of some steady state solutions to nonlinear Schrödinger equations. Proc. Amer. Math. Soc. 139 (2011), 1023-1032. Available from: http://hdl.handle.net/1963/4100
- A class of existence results for the singular Liouville equation. Comptes Rendus Mathematique 349 (2011) 161-166. Available from: http://hdl.handle.net/1963/5793
- Critical points of the Moser-Trudinger functional. SISSA; 2011. Available from: http://hdl.handle.net/1963/4592
- New improved Moser-Trudinger inequalities and singular Liouville equations on compact surfaces. Geometric and Functional Analysis 21 (2011) 1196-1217. Available from: http://hdl.handle.net/1963/4099
- Supercritical conformal metrics on surfaces with conical singularities. Int. Math. Res. Notices (2011) 2011(24): 5625-5643. Available from: http://hdl.handle.net/1963/4095

2010

- Concentration of solutions for some singularly perturbed mixed problems: Asymptotics of minimal energy solutions. Ann. Inst. H. Poincaré Anal. Non Linéaire 27 (2010) 37-56. Available from: http://hdl.handle.net/1963/3409
- Concentration of solutions for some singularly perturbed mixed problems. Part I: existence results. Arch. Ration. Mech. Anal. 196 (2010) 907-950. Available from: http://hdl.handle.net/1963/3406

2009

- Malchiodi A. Some new entire solutions of semilinear elliptic equations on Rn. Adv. Math. 221 (2009) 1843-1909. Available from: http://hdl.handle.net/1963/3645

2008

- Malchiodi A. Concentrating solutions of some singularly perturbed elliptic equations. Front. Math. China 3 (2008) 239-252. Available from: http://hdl.handle.net/1963/2657
- Malchiodi A. Entire solutions of autonomous equations on Rn with nontrivial asymptotics. Atti Accad. Naz. Lincei Cl. Sci. Fis. Mat. Natur. Rend. Lincei (9) Mat. Appl. 19 (2008) 65-72. Available from: http://hdl.handle.net/1963/2640
- Existence of conformal metrics with constant $Q$-curvature. Ann. of Math. 168 (2008) 813-858. Available from: http://hdl.handle.net/1963/2308
- Malchiodi A. Morse theory and a scalar field equation on compact surfaces. Adv. Differential Equations 13 (2008) 1109-1129. Available from: http://hdl.handle.net/1963/3531
- Malchiodi A. Topological methods for an elliptic equation with exponential nonlinearities. Discrete Contin. Dyn. Syst. 21 (2008) 277-294. Available from: http://hdl.handle.net/1963/2594
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1502925604581833, "perplexity": 1893.0261977847254}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141193221.49/warc/CC-MAIN-20201127131802-20201127161802-00542.warc.gz"}
https://socratic.org/questions/how-do-you-translate-the-product-of-a-number-and-three-into-a-mathematical-expre
Algebra Topics

# How do you translate "the product of a number and three" into a mathematical expression?

Jun 5, 2018

$3 n$

#### Explanation:

"The product of a number and three" should be $3 n$, where n is the number.

In the answer to the question, you would have to explain it something like this: Let n be the number. The product then is $3 n$. The reason for spelling this out is that you could also let m stand for the number, in which case the answer would be $3 m$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9324216246604919, "perplexity": 426.2399087493751}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703500028.5/warc/CC-MAIN-20210116044418-20210116074418-00772.warc.gz"}
https://www.spectroscopyonline.com/view/spectrometers-elemental-spectrochemical-analysis-part-i-basic-spectrometer
# Spectrometers for Elemental Spectrochemical Analysis, Part I: The Basic Spectrometer

Spectroscopy, 01-01-2010

An overview of the instrumentation used in elemental spectrochemical analysis.

A spectrometer consists of four basic modules: an excitation source, a dispersing element, a detector, and a read-out system. Sir Isaac Newton (1642-1727) showed that the white light from the sun could be dispersed ("spread out") into a continuous series of colors. His apparatus was essentially the first spectroscope, consisting of an aperture or opening to define a light beam, a lens, a prism, and a screen (1). In this simplest of spectrometers, the sun provides the excitation source, the prism is the dispersing element, and the eye is the detector of the spectrum on the screen. In this case the "readout" is provided by the brain.

Figure 1

In emission spectroscopy, the excitation source excites the electrons of the sample atoms. In optical emission, the electrons transit to higher energy states and emit light on their return to lower energy levels. In X-ray fluorescence, an inner-shell electron is ejected, an outer-shell electron "drops down" to occupy this vacancy, and an X-ray is emitted. The dispersing system "spreads out" the light given off at various wavelengths. The detector measures the light intensity at the various wavelengths of interest. The read-out system provides a visual indication of the amount of light at these wavelengths.

### The Excitation Source

There are many different ways to supply energy to a sample so that the atoms of the sample are induced to give off their characteristic radiation. The earliest method used was a simple flame. In 1826, the English inventor Talbot studied the color changes of flames when different salts were introduced. For example, the chlorides of lithium, sodium, and potassium produce red, orange, and lilac colors, respectively. The German scientists Bunsen and Kirchhoff furthered the work of Talbot in the 1860s and were the first to realize that spectral lines are associated with elements and not molecules. Figure 2 (2) shows their spectroscope, with a Bunsen burner providing the excitation. Note the other spectroscope modules: "D" is the Bunsen burner (excitation source), "E" shows a sample stand, "F" is the prism (dispersing element), and "C" is the viewer for detection and read-out.

Figure 2

Other excitation sources to be discussed in this series are arc/spark, X-ray excitation, the "hollow cathode lamp" used in atomic absorption, glow discharge, and the inductively coupled plasma. In this introductory article, we will focus on the various dispersing systems and detectors and then show how the modules are put together into a functioning spectrometer.

### The Dispersing System

The first dispersing system was the simple prism. The prism functions by refraction, with the different light wavelengths "bending" by different amounts. The grating was introduced in 1814 by Joseph von Fraunhofer for astronomical studies. In the X-ray region of the electromagnetic (EM) spectrum, the first dispersing element was a crystal. We should note that sometimes the dispersing system and detector modules are one and the same, as in the case of the semiconductor detectors used in energy-dispersive X-ray fluorescence (EDXRF).
Also, in optical emission–absorption spectrometers, the dispersing system plus detector often is called simply the "optical system" and sometimes the spectrometer, although this last is a misnomer, as that term applies to the complete instrument.

#### The Grating

The grating is an optical component that disperses light. It is composed of many parallel, equally spaced slits or indentations (grooves). Just how many grooves the grating has is an important measure of its ability to disperse light. This quantity is measured in grooves per millimeter or grooves per inch (N). The distance (d) between successive grooves is called the "grating constant," and is simply the reciprocal of N: that is, d = 1/N. Typical gratings used in modern optical emission spectrometers have 1800, 2400, or 3600 grooves/mm.

There are two basic types of gratings: the transmission grating, where the light goes through the slits of the grating, and the reflection grating, either plane or concave, where light is reflected from the grooved grating surface. The principle of operation is the same for both types.

#### The Simple Grating Equation

The grating works due to the interference of light waves. Consider the configuration shown in Figure 3, with two slits of a transmission grating, light entering from the left, parallel to the optic axis, and a screen some distance away to the right. The light waves impinging on the small openings spread out in circular wavefronts (a phenomenon called "diffraction"). Now, for light from the two slits to reinforce each other (add) so that we see light on the screen, the distance Y must be some (integral) multiple of the wavelength of light. That is, Y = ± nλ, where λ is the light wavelength and n is an integer. If we assume the light waves are one wavelength apart, n = 1, then Y = ± λ.

Figure 3

Notice that the triangle with sides X and B is similar to the triangle with the sides Y and d. Therefore, from the law of similar triangles (geometry), we have the relationship X/B = Y/d. That is, side X is to side B as side Y is to side d. Because Y = λ and d = 1/N, we find by substitution that

X = B N λ     [1]

where X is the distance of the image on the screen from the optic axis, B is approximately equal to the distance from the grating to the screen (given the dimensions involved), N is the number of grooves per mm on the grating, and λ is the wavelength of light. This is a very simple grating equation for the special case of light input perpendicular to the grating.

What does this grating equation tell us? First, that the distance X is dependent upon the wavelength of light. That is, the greater the wavelength, the greater the distance X from the optic axis, or the angle θ. Stated another way, different wavelengths of light will appear at different locations on the screen. Second, the greater the value of N, the number of grooves, the greater is the distance X or the angle θ. This means that the more grooves per millimeter the grating has, the greater the dispersion or separation of the light on the screen.

Using equation 1, we can calculate where a particular light wavelength (that is, spectral line) will appear on the screen, given the number of grooves per millimeter on the grating (N) and the distance from the grating to the screen (B). We can then place some sort of light-intensity measuring device (detector) at this location. The light measured then will be related to the element producing this particular spectral line at the particular light wavelength.
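To make equation 1 concrete, here is a small Python sketch (not part of the original article; the grating density, the grating-to-screen distance, and the wavelengths are assumed values chosen for illustration) that locates two first-order spectral lines on the screen:

```python
# First-order line positions from the simple grating equation X = B * N * lambda
# (normal incidence, small-angle approximation). All parameter values below are
# assumed for illustration only.

N = 600e3   # grating grooves per metre (600 grooves/mm, assumed)
B = 0.5     # grating-to-screen distance in metres (assumed)

lines = {"sodium D": 589.0e-9, "hydrogen H-alpha": 656.3e-9}  # wavelengths in metres

for name, wavelength in lines.items():
    X = B * N * wavelength  # displacement from the optic axis, in metres
    print(f"{name}: X = {100 * X:.1f} cm")
```

As equation 1 predicts, the longer-wavelength line lands farther from the optic axis, and doubling the groove density would double the separation of the two lines on the screen.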
The variable X is a measure of the dispersion or spread of the light wavelengths, and B is related to the focal length of the optical system. Note that the light which appears at location X in Figure 3 also will appear an equal distance (–X) below the center line (optic axis) in the drawing. Furthermore, if the light from the two slits had been separated by two light wavelengths (n = 2), there would appear another spot of light on the screen at a distance 2X from the center line or optic axis. This second "spot" is less bright than the first, however. This process continues for n = 3, 4, and so forth, with each succeeding light "spot" where the two waves from the slits interfere constructively becoming less intense. The integer n is referred to as the spectral order. In general, measurement of light in "first order" (n = 1) is preferable because it is more intense than succeeding spectral orders. Further details on the grating equation are included in Appendix I.

#### The Crystal

The dispersion of X-rays from crystals follows essentially the same law as that of light from a grating. This is to be expected, since they are both electromagnetic waves. In 1912, the German physicist Max von Laue (1879–1960) suggested using crystals to diffract X-rays. Later that same year, two German physicists, Walter Friedrich and Paul Knipping, acting on his suggestion, showed the diffraction of X-rays in zinc-blende (sphalerite). The incident X-rays striking the atoms give reflected radiation that may be reinforced by the reflections from lower lattice planes (Figure 4). The requirement for positive reinforcement is that the extra path taken by the beam reflected from the lower plane of atoms (highlighted in red) must equal some integer number of wavelengths, just as in the case of the grating. That is, nλ = 2d sin θ, where θ is the angle of incidence of the X-ray beam (measured from the crystal planes), d is the distance between planes, and n is an integer. This condition is known as Bragg's law. (Note: That this condition is equivalent to the grating equation may be seen by setting φ = θ in equation 2 of Appendix I.)

Figure 4

### The Detector

The first detector for visible light radiation was the eye. This was followed by photographic film, which produced a permanent record of the spectrum. In the early 1940s, the photomultiplier tube was developed. With the invention of the transistor in 1947 began the development of semiconductor detectors. The photodiode was the first, followed by the linear diode array (LDA) and various other solid-state devices. The nature of the detector determines the terminology used in reference to the instrument:

• Spectroscope: Eye as detector
• Spectrograph: Photographic film as detector
• Spectrometer: Photomultiplier tube or semiconductor detector

#### The Photomultiplier Tube (Optical)

In the case of atomic emission and absorption instrumentation, photomultiplier tubes (PMT) have long been popular. A PMT is essentially a vacuum tube with a cathode and multiple anodes (called "dynodes"), as shown in Figure 5. A power supply of approximately 1000 V is required. This is divided up among the dynodes in, for example, 100-V increments, depending upon the number of dynodes.

Figure 5

When a photon strikes the cathode, an electron is emitted. This process is called the photoelectric effect, first elucidated by Einstein in 1905. The negatively charged electron is accelerated to the first dynode, which is at a positive 100-V potential. Upon striking this dynode, additional electrons are ejected.
These are then accelerated to the second dynode at a positive 200-V potential, emitting more electrons. These now impact the third dynode, at a positive 300 V, producing still more electrons. This process continues until the final dynode (also called the "anode" in this case) has collected a huge number of electrons resulting from the initial impinging photon. It should be clear why the PMT is also referred to as an "electron multiplier." The spectral sensitivity of the PMT depends upon the coating material of the cathode and the window material. The appropriate sensitivity for the wavelength region of interest must be selected accordingly (see the manufacturer's specifications for details). There is a new generation of these detectors, called "channel photomultipliers" (CPM), that promise both higher gain and lower noise.

#### Proportional Counter (X-Ray)

In X-ray fluorescence spectrometers, one of the first detectors was the proportional counter, one of the various types of ionization chambers, which also include the Geiger counter (3). Shown in the diagram of Figure 6, it consists of a gas-filled tube, which is the cathode (–), and a thin wire in the center making up the anode (+). A high-voltage source (~2000 V) is connected across the anode and cathode. Normally no current flows. However, when an incoming X-ray enters the tube, it produces ionization in the gas, allowing a current to flow through the circuit and producing an output signal.

Figure 6

#### Solid-State Detectors (Optical)

Solid-state detectors have gained popularity in both optical emission and X-ray instruments in the past several decades. These are referred to collectively as charge-transfer devices (CTDs), which include such solid-state detectors as LDAs, charge-coupled devices (CCDs), and charge-injection devices (CIDs). These offer distinct advantages inasmuch as they can measure multiple parts of the spectrum at the same time, which allows the simultaneous measurement of multiple spectral lines of the analyte and permits real-time background measurements.

The LDA (Figure 7) is essentially a chain of photodiodes. These devices are common in barcode readers, for example. The photodiode is a semiconductor diode that can detect light and convert it into an electrical voltage or current. One may think of a photodiode as a light-emitting diode (LED) in reverse.

Figure 7

When a photon of sufficient energy strikes the diode, it produces an electron-hole pair — that is, an electron and a positively charged electron hole. If a voltage is applied across the diode, then the electrons will move to the positive junction and the "holes" to the negative. This generates an electrical current which can be accurately measured.

The currently popular semiconductor detectors, CCDs and CIDs, are more sophisticated. They allow detection of light in a two-dimensional array, so the readout electronics are more complicated. One might think of them as a collection of coupled, side-by-side LDAs.

#### Solid-State Detectors (X-Ray)

In the X-ray region, the silicon PIN (positive–intrinsic–negative) and more recent silicon drift detectors (SDDs) obviate the need for a crystal dispersing element (for certain applications) and provide complete spectral information. These are photodiodes sensitive to the X-ray region of the EM spectrum.

Figure 8

Figure 8 shows a pictorial diagram of this detector. (FET stands for field-effect transistor, a signal amplifier.) The schematic diagram of Figure 9 shows the internal workings.
The large central "intrinsic" region is a charge-depleted silicon slab sandwiched between the P (positive anode) and N (negative cathode) layers of the diode.

Figure 9

The incoming X-rays interact with the silicon atoms such that one electron-hole pair is produced for every 3.6 eV. A voltage is applied across the diode so that when an incoming X-ray produces ionization in the silicon region, a charge is immediately transferred. The detector is thermoelectrically cooled to about –25 °C to decrease the leakage current and thus reduce noise. The lower background provided by the reduced noise enhances performance by enabling lower limits of detection.

### The Read-Out System

The function of the read-out system is to take the electrical signal from the detector and provide an indication of the intensity of the spectral lines of interest. A schematic diagram of a computer-based readout system is shown in Figure 10. The signal from the detector is amplified and integrated (added) over the complete measurement time. The total is sent to an analog-to-digital converter, which provides a digital output that the computer can work with. The computer then compares this intensity signal to calibration data stored in memory to provide an output in the form of concentrations of the elements present.

Figure 10

For X-ray systems using the crystal dispersing element (called wavelength-dispersive X-ray fluorescence, or WDXRF), the readout system is essentially the same as shown in Figure 10. For X-ray spectrometers using semiconductor detectors (EDXRF), the charge from the detector is converted to a voltage signal, which is proportional to the energy of the incoming X-ray. These voltages are then sorted by a multichannel analyzer (MCA) before being fed to the computer.

The readout for the spectrometer must be such that the counts (that is, the number of photons collected within a given period of time) or the calculated concentration is displayed and stored along with the sample identification. Depending upon the actual instrument, it may be possible to recalculate concentrations on-line if necessary (that is, if the sample was analyzed using the incorrect analytical program); otherwise, off-line (manual) calculations might be required. The calculated concentrations are obtained by comparing the obtained signal (photon counts) with predetermined calibration curves.

### Spectrometer Systems

How are these four spectrometer modules integrated to provide a complete spectrometer system? The EM radiation generated by the excitation source proceeds to the optical system (dispersing element plus detectors), and the detector signals are analyzed by the read-out system. The optical system itself consists of three elements:

• light input from the excitation source,
• the dispersing element (grating, crystal), and
• the detector with output to the readout system.

The light path from the plasma generated by the optical emission or absorption excitation sources is either direct or by fiber-optic cable. The direct light path is simply a tube that generally is evacuated or filled with argon gas. At the end of the light path is the entrance slit. This is simply a thin (about 10–20 μm) vertical slit that makes a thin vertical line image of the light from the excitation source. The light from the entrance slit proceeds to the dispersing element. The "light path" for X-rays is from the X-ray tube or radioisotope excitation source to the sample and then to the crystal or semiconductor detector, as shown in the schematic of Figure 11.
Figure 11

#### The Rowland Circle

One very common means of assembling the entrance slit, grating, and detector is through the use of what is called the Rowland circle, after the American physicist Henry A. Rowland, who developed it in the late 19th century. This means that the entrance slit, grating, and detectors lie on the circumference of a circle, the "Rowland" circle (4). (Historical note: Henry Rowland [1848–1901], a professor of physics at Johns Hopkins University from 1875 on, was the first to develop the [mechanical] means of ruling [inscribing the lines on] the concave grating. He also did experimental work resulting in accurate determinations of the value of the ohm and the mechanical equivalent of heat.)

The radius of curvature of the Rowland circle is one half that of the concave grating, so that everything is in focus at the circumference where the detectors are placed. That is, the images of the entrance slit are brought into focus somewhere on the circle, and here we place detectors. This configuration is shown in Figure 12.

Figure 12

One correction must be made to the above in the case of PMTs. The detectors are not actually placed on the circumference of the Rowland circle. Their cathode-sensitive light-input entrances are too large and would receive the light input from many wavelengths simultaneously. Therefore, in front of the detectors, and on the Rowland circle, are placed exit slits: generally 25–150 μm vertical slits, like the entrance slit, that block out most light wavelengths other than the one of interest. The various types of solid-state detectors, with arrays of very-small-dimension semiconductor light-sensitive diodes that provide the simultaneous measurement of multiple spectral lines, do not require the use of these exit slits.

There are other optical systems (for example, the Littrow, Ebert, Wadsworth, and echelle configurations), but for many modern spectrometers, the Rowland design is probably the most common. This configuration often is called the "Paschen-Runge mount."

#### Polychromators Versus Monochromators

There are two very basic types of optical systems: the polychromator, with many detectors set at fixed wavelengths, and the monochromator, with one (or a few) detectors set up in a manner such that many wavelengths may be scanned.

The polychromator is a simultaneous optical system with a fixed set of spectral lines. Such systems are designed for fast results on routine analytical tasks. This is the type of optical system used in the vast majority of spark spectrometers.

The monochromator is a sequential optical system, which allows scanning from wavelength to wavelength. This scanning of wavelengths is usually accomplished by moving the detector along the focal curve or by moving the grating. The monochromator provides greater flexibility for the chemist or operator, but at the expense of speed of analysis.

Optical emission spectrometers with one or more polychromators and a monochromator are currently available and so, in many ways, combine the best of both worlds: speed and flexibility. Such systems are particularly of interest for research laboratories and independent testing services, where a great variety of analytical tasks can be encountered.

The introduction of solid-state detectors, with their intrinsic ability to carry out multiple-wavelength detection, has gradually been replacing the conventional combination of mono- and polychromators traditionally used with PMTs.
This is accomplished by simply placing these CTDs at the end of the optical path, either as a pair of two-dimensional arrays or as a set of linear arrays, depending upon the optical arrangement, providing simultaneous measurement of the whole spectrum in integration times of around 1 min.

### Conclusions

The various modules of a spectrometer have been addressed:

• An excitation source provides energy to excite the atoms of the sample.
• Light-dispersing elements have been discussed.
• Various detectors for the emitted radiation have been covered.
• A simple model of the readout system has been presented.
• Finally, we have shown how these modules are connected into a complete spectrometer system.

### Appendix I: The Grating Equation

Refer back to Figure 3. We will start with the relationship X/B = Y/d. For any right triangle, trigonometry defines the sine of the angle θ as the side opposite the angle (X) divided by the hypotenuse (B). That is, sin θ = X/B. Substituting this relation and the necessary condition for maximum light (Y = ± nλ) into the first equation above, we find

sin θ = ± nNλ

where the plus or minus simply means that the maxima (and minima) of light are symmetric about the optic axis. This is called the grating equation for the special case where the incoming light is at 90° to the grating.

The more general grating equation includes the sine of the angle of incidence of the light to the grating, with respect to the grating normal. The grating normal is a line drawn perpendicular to the surface of the grating. Since this is the form used in almost all optical systems, it is presented below:

sin θ + sin φ = ± nNλ     [2]

where φ is the angle of incidence of the light. Where this angle is zero, that is, the incoming light is perpendicular to the grating or parallel to the grating normal, we have sin(0) = 0. Then the general form of the grating equation reduces to that given by the first equation above.

### Acknowledgment

The author would like to thank Carlos Coutinho for several helpful discussions.

Volker Thomsen is a consultant in spectrochemical analysis and lives in Atibaia, SP, Brazil. He can be reached at vbet1951@uol.com.br.

### References

(1) V. Thomsen, Spectroscopy 21(10), 32–42 (2006).

(2) This illustration is from their 1860 paper in Annalen der Physik. It is available online at http://en.wikipedia.org/wiki/Gustav_Kirchhoff.

(3) G.F. Knoll, Radiation Detection and Measurement, 3rd Edition (John Wiley & Sons, Hoboken, New Jersey, 2000).

(4) V. Thomsen, Modern Spectrochemical Analysis of Metals: An Introduction for Users of Arc/Spark Instrumentation (ASM International, Materials Park, Ohio, 1996).
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8412654399871826, "perplexity": 1077.285332670415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057796.87/warc/CC-MAIN-20210926022920-20210926052920-00625.warc.gz"}
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=DBSHBB_2008_v45n3_699
HOMOLOGY OF THE GAUGE GROUP OF EXCEPTIONAL LIE GROUP G2

Choi, Young-Gi

Abstract: We study homology of the gauge group associated with the principal $G_2$ bundle over the four-sphere using the Eilenberg-Moore spectral sequence and the Serre spectral sequence with the aid of homology and cohomology operations.

Keywords: exceptional Lie group $G_2$; gauge group; iterated loop space; Dyer-Lashof operation; Serre spectral sequence

Language: English
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 2, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7395085096359253, "perplexity": 584.7921596993856}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698540909.75/warc/CC-MAIN-20161202170900-00326-ip-10-31-129-80.ec2.internal.warc.gz"}
https://problemsolving.io/should-i-take-this-course.html
# Should I take this course?

If you are not sure whether you should take this course, go through the following programming problems. If you feel comfortable solving them with Python, then you probably don't need this course. If you have any doubt, please drop us an email.

## Problem 1 (easy)

Write a program that consumes a sequence of numbers representing daily rainfall amounts as entered by a user. The sequence may contain the number -999 indicating the end of the data of interest. Print the average of the non-negative values in the sequence up to the first -999 (if it shows up). There may be negative numbers other than -999 in the sequence.

## Problem 2 (easy-medium)

Write a function called is_abecedarian that returns True if the letters in a word appear in alphabetical order (double letters are ok). How many abecedarian words are there in the words.txt file?

Source: Think Python - Wikibooks

## Problem 3 (medium)

The Ackermann function, A(m, n), is defined:

    A(m, n) = n + 1                     if m = 0
    A(m, n) = A(m - 1, 1)               if m > 0 and n = 0
    A(m, n) = A(m - 1, A(m, n - 1))     if m > 0 and n > 0

Write a function named ack(m, n) that evaluates Ackermann's function.

## Problem 4 (medium)

Write a function find_anagrams(words_path) that returns a list of all the lists of words that are anagrams in the words.txt file from Problem 2. Here is an example of what the return value might look like:

    [
        ['deltas', 'desalt', 'lasted', 'salted', 'slated', 'staled'],
        ['retainers', 'ternaries'],
        ['generating', 'greatening'],
        ['resmelts', 'smelters', 'termless'],
    ]

Source: Think Python - Wikibooks
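For readers who want to gauge their comfort level, here is a minimal Python sketch of one possible approach to Problems 2 and 3 (these are illustrative solutions, not the course's official ones; the words.txt file is assumed to sit in the working directory):

```python
from functools import lru_cache

def is_abecedarian(word):
    """Return True if the letters in `word` appear in alphabetical order."""
    return all(a <= b for a, b in zip(word, word[1:]))

def count_abecedarian(path="words.txt"):
    """Count abecedarian words in a newline-separated word list."""
    with open(path) as f:
        return sum(is_abecedarian(line.strip()) for line in f)

@lru_cache(maxsize=None)
def ack(m, n):
    """Evaluate Ackermann's function A(m, n) directly from its definition."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

print(ack(3, 4))  # 125
```

The memoization on ack is optional but keeps small cases fast; beware that Ackermann's function grows so quickly that anything much beyond m = 3 will overwhelm any computer.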
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41033732891082764, "perplexity": 1881.0169710005198}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578605555.73/warc/CC-MAIN-20190423154842-20190423180842-00319.warc.gz"}
https://www.gradesaver.com/textbooks/math/calculus/calculus-3rd-edition/chapter-3-differentiation-3-6-trigonometric-functions-exercises-page-140/38
## Calculus (3rd Edition) For $y=\sin x$, we have $$y'=\cos x, \quad y''=-\sin x$$ and hence $y''=-y$. For $y=\cos x$, we have $$y'=-\sin x, \quad y''=-\cos x$$ and hence $y''=-y$.
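As a quick check of these two derivative computations (a sketch using the SymPy library, not part of the textbook solution):

```python
import sympy as sp

x = sp.symbols('x')
for y in (sp.sin(x), sp.cos(x)):
    assert sp.diff(y, x, 2) == -y  # second derivative equals -y in both cases
print("y'' = -y holds for sin x and cos x")
```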
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9997974038124084, "perplexity": 60.89429639798814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141732696.67/warc/CC-MAIN-20201203190021-20201203220021-00272.warc.gz"}
http://www.komal.hu/verseny/feladat.cgi?a=feladat&f=C1035&l=en
C. 1035. In a mathematics competition, there were three problems. 56 participants solved at least one problem. 2 participants solved all problems. Out of those solving the second problem, 10 more solved the third problem than the first one. The number of those solving both of the first and second problems was 10 larger than the number of those solving the third problem only. All participants solving the first and third problems solved the second problem, too. There were 14 participants altogether who solved the first problem only or the second problem only. How many participants solved the third problem?

(5 points)

Deadline expired on 10 June 2010.

Solution (translated from the Hungarian; the variables label the regions of a Venn diagram in the original figure, which is not reproduced here). Among those who solved the second problem, 10 more solved the third than the first, so $\displaystyle m=l+10$. The competitors who solved both the first and the second problems were 10 more than those who solved only the third, so $\displaystyle k+2=z+10$. Everyone who solved both the first and the third problems also solved the second, so $\displaystyle l=0$. Those who solved only the first or only the second problem were 14 altogether, so $\displaystyle x+y=14$. Then

$\displaystyle x+z+8+14-x+0+2+10+z=56 \qquad \Rightarrow \qquad z=11.$

Therefore the third problem was solved by $z+10+2=23$ participants.

Statistics on problem C. 1035. 133 students sent a solution.
5 points: 106 students.
4 points: 10 students.
3 points: 12 students.
2 points: 2 students.
1 point: 1 student.
0 points: 1 student.
Unfair, not evaluated: 1 solution.

• Problems in Mathematics of KöMaL, May 2010 •
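A quick way to reproduce the final step of the solution (a sketch using SymPy, not part of the original KöMaL solution) is to solve the counting identity directly:

```python
import sympy as sp

x, z = sp.symbols('x z')
# The identity from the solution (with l = 0 already substituted); x cancels out.
eq = sp.Eq(x + z + 8 + (14 - x) + 0 + 2 + 10 + z, 56)
print(sp.solve(eq, z))  # [11], so the third problem was solved by 11 + 10 + 2 = 23
```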
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7079850435256958, "perplexity": 26318.815561710788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187828189.71/warc/CC-MAIN-20171024071819-20171024091819-00306.warc.gz"}
http://mathhelpforum.com/advanced-statistics/69349-probability-distribution-graph-greater-than-1-what-does-mean-print.html
# Probability distribution in graph greater than 1: what does that mean?

• January 21st 2009, 10:58 PM
1 Attachment(s)

Probability distribution in graph greater than 1: what does that mean?

This is a question in my review exercise: http://www.mathhelpforum.com/math-he...achmentid=9749

I noticed that at x=1, probability=2. Now what does that actually mean? I cannot conceive of a probability greater than one, and I have searched my book's examples in vain. Isn't probability supposed to be always lower than or equal to one? :help:

• January 21st 2009, 11:07 PM
mr fantastic

Quote:

What gets calculated is $\Pr(a \leq X \leq b)$, which is given by $\int_a^b f(x) \, dx$. The value of the density function $f(x)$ at a point is not itself a probability, so it can be greater than 1; it is this integral, the area under the curve, that can never exceed 1.
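To see a concrete case of a density exceeding 1 (a small NumPy/SciPy sketch added here for illustration): the uniform distribution on [0, 0.5] has f(x) = 2 everywhere on its support, yet all probabilities computed from it stay between 0 and 1.

```python
from scipy.stats import uniform

# Uniform distribution on [0, 0.5]: density f(x) = 1 / 0.5 = 2 on the support.
dist = uniform(loc=0.0, scale=0.5)

print(dist.pdf(0.25))                 # 2.0 -- a density value, NOT a probability
print(dist.cdf(0.5))                  # 1.0 -- total area under the curve is 1
print(dist.cdf(0.3) - dist.cdf(0.1))  # 0.4 -- an actual probability, <= 1
```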
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9230133891105652, "perplexity": 1462.402800879592}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276415.60/warc/CC-MAIN-20160524002116-00024-ip-10-185-217-139.ec2.internal.warc.gz"}
https://www.ias.ac.in/describe/article/pram/058/01/0091-0100
# Strong anisotropy in the low temperature Compton profiles of electron momentum distribution in α-Ga metal

## Fulltext

https://www.ias.ac.in/article/fulltext/pram/058/01/0091-0100

## Keywords

Strong anisotropy; Compton profiles; α-gallium; low temperature

## Abstract

Compton profiles of the momentum distribution of conduction electrons in the orthorhombic phase of α-Ga metal at low temperature are calculated in the band model for the three crystallographic directions (100), (010), and (001). Unlike the results at room temperature, previously reported by Lengeler, Lasser and Mair, the present results show strong anisotropy in the Compton profiles, with the momentum distribution along the (001) direction being substantially different from the other two directions. While experimental data on Compton profiles at low temperatures are not available for comparison with theory, the resistivity data in α-Ga at low temperature strongly support this anisotropic behaviour. Besides, the electronic heat capacity constant γ, available from both experiment and the present calculation, suggests that the conduction electron distribution at low temperature in the orthorhombic phase is markedly different from the free-electron-like distribution at room temperature, thus lending additional support to the anisotropic behaviour of the Compton profiles. It would be good to have Compton profile data from experiment at low temperature for direct comparison with theory. It is hoped that the present work will stimulate enough interest in that direction.

## Author Affiliations

1. Department of Physics, Chikiti Mahavidyalaya, Chikiti - 761 010, India
2. Department of Physics, Berhampur University, Berhampur - 760 007, India
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8229922652244568, "perplexity": 3046.4526990210966}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178362741.28/warc/CC-MAIN-20210301151825-20210301181825-00512.warc.gz"}
https://online.stat.psu.edu/stat480/book/export/html/688
# 7.5 - Types of Log Messages

When you are trying to write programs that work, you'll no doubt encounter some messages in the log window that you'll need to interpret to figure out what SAS is trying to tell you. In this section, we investigate three different kinds of messages — errors, warnings, and notes — that SAS displays in the log window.

## Example 7.7: ERROR Messages

In general, when SAS displays ERROR messages in your log window — in red as illustrated — your program will not run because it contains some kind of syntax error or spelling mistake. The following code causes SAS to print an ERROR message in the log window:

```
DATA one;
    input A B C;
    DATALINES;
1 2 3
4 5 6
7 8 9
;
RUN;
PROC PRINT / DATA = one;
RUN;
```

First, launch and run the program, and then look in the log window to see the ERROR message that SAS displays as a result of the inappropriately placed forward slash (/) in the PROC PRINT statement.

## Example 7.8

The location of an error is typically easy to find, because it is usually underlined, but it is often tricky trying to figure out the source of the error. Sometimes what is wrong in the program is not what is underlined in the log window but something else earlier in the program. The following program illustrates such an event:

```
DATA one;
    INPUT A B C
    DATALINES;
1 2 3
4 5 6
7 8 9
;
RUN;
```

First, review the program and note that the problem with the code is that the INPUT statement is missing its required semi-colon (;). Then, launch and run the program, and look in the log window to see the ERROR message that the code produces.

You should see that SAS underlines the 1 in the first data line rather than the end of the INPUT statement. The moral of the story here is to not only look at what SAS underlines but also at the few lines of code immediately preceding the underlined statement.

## Example 7.9: WARNING Messages

When SAS displays WARNING messages in your log window — in green as illustrated — your program will typically run. A warning may mean, however, that SAS has done something that you didn't intend. It is for this reason that you'll always want to check the log window after submitting a program to make sure that it doesn't contain WARNING messages about the execution of your program. The following code results in SAS printing a WARNING message in the log window:

```
DATA example2;
    IMPUT a b c;
    DATALINES;
112 234 345
115 367 .
190 110 111
;
RUN;
```

As you can see by the red-colored font displayed in the (Enhanced) Program Editor, the keyword INPUT has been incorrectly typed as IMPUT. If you don't catch the misspelling in the Program Editor, SAS will, whenever possible, attempt to correct your spelling of certain keywords. In these cases, SAS prints a WARNING message in the log window to alert you to how it interpreted your program in order to get it to run. Launch and run the program, and then look in the log window to see the WARNING message that the code produces. Note, too, that in spite of the WARNING message, SAS is still able to complete the DATA step by changing the spelling of IMPUT to INPUT.

## Example 7.10: NOTE Messages

NOTE messages, which are displayed in blue as illustrated, are less straightforward than either warnings or errors. Sometimes notes just give you information, like telling you the execution time of each step in your program. Sometimes, however, a NOTE can indicate a problem with the way SAS executed your program.
The following code results in SAS printing a NOTE in the log window:

```
DATA example2;
    INPUT a b c;
    DATALINES;
112 234 345
115 367
190 110 111
;
RUN;
```

Launch and run the program, and then look in the log window to see the NOTE that the code produces. You should see that SAS appropriately warns you that it went to a new line when the INPUT statement didn't find the third data value in the second data line. Incidentally, you can correct this problem either by adding the following line of code just before the INPUT statement:

```
INFILE DATALINES MISSOVER;
```

or by adding a missing value (.) to the end of the second data line.

## Example 7.11

Beware that not every NOTE that appears in the log window is a problem. The following code is an example in which SAS going to a new line is exactly what is wanted:

```
DATA example2;
    INPUT a b c;
    DATALINES;
101
111
118
215
620
910
;
RUN;
PROC PRINT data = example2;
RUN;
```

| Obs | a   | b   | c   |
|-----|-----|-----|-----|
| 1   | 101 | 111 | 118 |
| 2   | 215 | 620 | 910 |

In this case, the programmer purposefully entered one data value in each record. As long as this is what the programmer intended, SAS will go to a new line in each case and thereby read in 2 observations with 3 variables. Launch and run the program. Then review the output to understand how SAS read in the data, and review the log window to see that the NOTE about SAS going to a new line when the INPUT statement reached past the end of a line is just what the doctor ordered.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1554134339094162, "perplexity": 1192.9627706328456}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662543797.61/warc/CC-MAIN-20220522032543-20220522062543-00450.warc.gz"}
https://export.arxiv.org/abs/2108.00859?context=eess
eess

# Title: Spatio-temporal estimation of wind speed and wind power using machine learning: predictions, uncertainty and technical potential

Abstract: The growth of wind generation capacities in the past decades has shown that wind energy can contribute to the energy transition in many parts of the world. Being highly variable and complex to model, the quantification of the spatio-temporal variation of wind power and the related uncertainty is highly relevant for energy planners. Machine Learning has become a popular tool to perform wind-speed and power predictions. However, the existing approaches have several limitations. These include (i) insufficient consideration of spatio-temporal correlations in wind-speed data, (ii) a lack of existing methodologies to quantify the uncertainty of wind speed prediction and its propagation to the wind-power estimation, and (iii) a focus on less than hourly frequencies. To overcome these limitations, we introduce a framework to reconstruct a spatio-temporal field on a regular grid from irregularly distributed wind-speed measurements. After decomposing data into temporally referenced basis functions and their corresponding spatially distributed coefficients, the latter are spatially modelled using Extreme Learning Machines. Estimates of both model and prediction uncertainties, and of their propagation after the transformation of wind speed into wind power, are then provided without any assumptions on distribution patterns of the data. The methodology is applied to the study of hourly wind power potential on a grid of $250\times 250$ m$^2$ for turbines of 100 meters hub height in Switzerland, generating the first dataset of its type for the country. The potential wind power generation is combined with the available area for wind turbine installations to yield an estimate of the technical potential for wind power in Switzerland. The wind power estimate presented here represents an important input for planners to support the design of future energy systems with increased wind power generation.

Comments: 45 pages, 21 figures
Subjects: Signal Processing (eess.SP); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Data Analysis, Statistics and Probability (physics.data-an); Applications (stat.AP)
MSC classes: 68T99, 68T37, 62H11
ACM classes: J.2; I.2; G.3
Cite as: arXiv:2108.00859 [eess.SP] (or arXiv:2108.00859v1 [eess.SP] for this version)

## Submission history

From: Federico Amato
[v1] Thu, 29 Jul 2021 09:52:36 GMT (13338kb,D)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7576080560684204, "perplexity": 1831.7469248605016}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585290.83/warc/CC-MAIN-20211019233130-20211020023130-00156.warc.gz"}
https://www.skybluetrades.net/blog/2014/01/2014-01-20-data-analysis-fft-12/index.html
# Haskell FFT 12: Optimisation Part 2

In the last article, we did some basic optimisation of our FFT code. In this article, we're going to look at ways of reordering the recursive Cooley-Tukey FFT decomposition to make things more efficient. In order to make this work well, we're going to need more straight line transform "codelets". We'll start by looking at our $N=256$ example in detail, then we'll develop a general approach.

## Experiments with N=256

So far, we've been using a fixed scheme for decomposing the overall Fourier matrix for our transforms, splitting out single prime factors one at a time in increasing order of size, ending up with a base transform of a prime number length, which we then process using either a specialised straight line codelet, or Rader's algorithm. However, there's nothing in the generalised Danielson-Lanczos recursion step that we're using that requires us to use prime factors (we can build the necessary $I+D$ matrices for any factor size), and there's nothing that constrains the order in which we process factors of our input length. For the $N=256$ example, we've been decomposing the input length as $2 \times 2 \times 2 \times 2 \times 2 \times 2 \times 2 \times 2$, but there's no reason why we couldn't decompose it as $16 \times 16$, $4 \times 8 \times 8$ or any other ordering of factors that multiply to give 256.

There are several things that could make one of these other choices of factorisations give a faster FFT:

1. We could have a specialised straight line transform for $N=16$ say, which would make the $16 \times 16$ factorisation more attractive;
2. We could save work by doing fewer recursive calls: there's going to be less overhead in doing a single Danielson-Lanczos step for the $16 \times 16$ decomposition than in doing seven steps for the $2 \times 2 \times 2 \times 2 \times 2 \times 2 \times 2 \times 2$ decomposition;
3. We could take advantage of special structure in the Danielson-Lanczos $I+D$ matrices for certain factor sizes (we won't do this here, but it's a possibility if we ever develop specialised "twiddlets" as well as "bottomlets");
4. One factorisation might result in better cache performance than another (this obviously is more relevant for larger transform sizes).

In order to take advantage of different factorisations in the $N=256$ case, it will be useful to have specialised straight line base transforms for various powers-of-two input lengths. I've implemented these specialised transforms for $N=2, 4, 8, 16, 32, 64$ (again just converting the FFTW codelets to Haskell). For $N = 256 = 2^8$, we can think about transform plans that make use of any of these base transforms: if we use the $N=64$ base transform, we account for 6 of those 8 factors of two, leaving us with 2 factors of two to deal with via Danielson-Lanczos steps. We can treat these as a $2 \times 2$ decomposition, or as a single step of size 4. Similarly, if we use the $N=32$ base transform, we have 3 factors of 2 left over to deal with using Danielson-Lanczos steps, which we can treat as one of $\{ 2 \times 2 \times 2, 2 \times 4, 4 \times 2, 8 \}$; note that the order of factors may be relevant here: it's quite possible that a $2 \times 4$ decomposition could have different performance behaviour than a $4 \times 2$ decomposition.

For each base transform size, we thus have a number of decomposition possibilities for dealing with the "left over" factors of two. If there are $m$ left over factors, there are $2^{m-1}$ possible decompositions.
To see why this is so, note that this decomposition problem is isomorphic to the combinatorial compositions problem, the determination of the number of ways of writing an integer $m$ as the sum of a sequence of strictly positive integers (where the order of the sequence matters, making this distinct from the partition problem). For instance, for $m=3$, we can write $3=1+1+1$, $3=2+1$, $3=1+2$, $3=3$, isomorphic to writing $8=2^1 \times 2^1 \times 2^1$, $8=2^2 \times 2^1$, $8=2^1 \times 2^2$, $8=2^3$. The composition problem can be solved by thinking of writing down a row of $m$ ones, with $m-1$ gaps (shown as boxes in the figure). If we now assign each box either a comma (",") or a plus sign ("+"), we end up with a unique composition. Since there are $m-1$ gaps and two possibilities for each gap, there are $2^{m-1}$ total compositions.

For $N=256$, we can determine the possible compositions of the left over factors for each possible choice of base transform. In the following figure, possible FFT plans for $N=256$ are shown, given the availability of base transforms of sizes 2, 4, 8, 16, 32, 64. Each dot represents a factor of two (eight for each instance: $256=2^8$). The factors for the base transform are surrounded by a frame, and factors that are treated together in a Danielson-Lanczos step are joined by a line (i.e. three dots joined by a line represent a Danielson-Lanczos step of size 8):

In total, there are 126 possibilities:

| Base | $m$ | $2^{m-1}$ |
|------|-----|-----------|
| 64   | 2   | 2         |
| 32   | 3   | 4         |
| 16   | 4   | 8         |
| 8    | 5   | 16        |
| 4    | 6   | 32        |
| 2    | 7   | 64        |
| **Total** |   | **126**   |

It's not too hard to derive the number of plans for an arbitrary power-of-two input vector length. Suppose $N=2^M$ and we have base transforms for all $2^i$, $i = 1, 2, \dots, B$. Then, if $M > B$, the total number of plans $P$ is

$$P = \sum_{b=1}^B 2^{M-b-1} = 2^{M-1} \sum_{b=1}^B 2^{-b} = 2^{M-1} \frac{2^B-1}{2^B} = 2^{M-B-1} (2^B - 1).$$

For $N=256$ and bases up to $N=64=2^6$, we have $M=8$, $B=6$ and $P = 2 (2^6 - 1) = 126$. It turns out not to be much harder to find a general expression for the number of plans in the more complex mixed-radix case, as we'll see below.

A priori, we can't say much about which of these factorisations is going to give us the best FFT performance, but we can measure the performance, using the same benchmarking approach that we've been taking all along. We need some code to generate the factorisations and to produce a value of our Plan type from this information, but given these things we can measure the execution time of the $N=256$ FFT using each of the 126 plans shown above. We'd expect plans using larger base transforms to do well, since these straight line base transforms are highly optimised, but it's not obvious what choice of factorisation for the "left over" factors is going to be faster. Here are the execution times from the fastest and slowest 20 plans out of the 126 (for comparison, the execution time using the $2 \times 2 \times 2 \times 2 \times 2 \times 2 \times 2 \times 2$ plan we've been using up to now is about 145 µs; it is the top entry in the right hand column here):

Two things stand out from these results. First, the fastest transforms are those using the largest base transforms ($N=64$ or $N=32$). This isn't too much of a surprise, since these plans offload the largest proportion of the FFT processing to optimised straight line code. Conversely, the slowest plans are those that use the smallest ($N=2$ or $N=4$) base transforms. Further, the slowest of the slow transforms appear to be those that use the largest Danielson-Lanczos steps.
For example, the overall slowest plan (by a factor of about two in time) uses an $N=2$ base transform and a single Danielson-Lanczos step of size 128. These larger Danielson-Lanczos steps could be more efficient if we had specialised "twiddlets" for them, but as things currently stand, they do a lot of wasted work and so are less efficient than a series of smaller decompositions. Exactly where the trade-off between larger and smaller Danielson-Lanczos steps lies isn't immediately obvious. For example, the fastest plans using an $N=16$ base transform use $4 \times 4$, $2 \times 2 \times 4$ and $4 \times 2 \times 2$ Danielson-Lanczos steps for the other factors, but the time differences are small, perhaps not even larger than the margin of error in measurement.

We can get a further idea of how this trade-off works by looking at the best plans for $N=1024$. The figure here shows the best 40 plans for this input length (for comparison, the execution time for the "standard" $N=1024$ transform from the earlier articles is about 550 µs):

Again we see that the fastest FFT plans use the largest specialised base transforms along with a combination of smallish Danielson-Lanczos steps to form the overall transform. In particular, we see that none of the fastest plans involve Danielson-Lanczos steps of sizes greater than eight. The situation here is simpler than for the general mixed-radix case that we'll deal with next, but these results give some idea of two approaches to use for a heuristic to choose reasonable plans to test for a given problem size:

1. Try the largest few base transforms that are available;
2. Use relatively small Danielson-Lanczos steps.

Within these constraints, the choice of which plan is best should be settled empirically by benchmarking examples for each plan.

Now let's think about the general mixed-radix case. In terms of base transforms, for larger prime factors we can use Rader's algorithm (which should now be faster, since we've improved the performance of the powers-of-two transforms that are used for the necessary convolution), but it would be useful to have some other specialised straight line base transforms available too. I've now implemented specialised transforms for all $N$ up to 16, plus 20, 25, 32 and 64. This is the same set of specialised base transforms used in the standard distribution of FFTW (and indeed, the versions I'm using are just Haskell translations of the FFTW C codelets).

As possible base transforms on which to build a full FFT, we should consider any of the specialised base transform sizes that are factors of $N$, and we should also consider transforms using Rader's algorithm for any prime factors greater than 13 (the largest prime for which we have a specialised base transform). We should never consider the naïve DFT algorithm for a base transform, since it's almost certain that there will be a better way, now that we have specialised transforms for a range of small prime and other sizes (remember the $O(N^2)$ scaling of the naïve DFT!).

Once we've selected a base transform size (which we'll call $N_b$ in what follows) we need to decide what to do with the "left over" factors. Suppose that the prime factorisation of the input length $N$ can be written as

$$N = f_1^{m_1} f_2^{m_2} \dots f_D^{m_D} N_b$$

where the $f_i$, $i = 1, 2, \dots, D$ are unique prime factors and the $m_i$ are "multiplicities", i.e. the number of powers of each unique factor included in the left over part of $N$.
Let's think about the number of FFT plans that we can build in this case. First of all, we need to decide on an order for the factors. Ignoring the issue of duplicate factors, there are $N_D = m_1 + m_2 + \dots + m_D$ factors and the total number of possible orderings of these is just the factorial of this number. To take account of duplicate factors, we need to divide this overall factorial by the factorials of the individual multiplicities. The total number of distinct orderings of factors is thus

$$\frac{N_D!}{\prod_{i=1}^D m_i!}.$$

Having chosen an ordering for the factors, we then need to compute the number of distinct ways to compose adjacent prime factors to give composite factors. This is entirely analogous to the situation in the $2^N$ case, and we can use the same result. One might thus think that the total number of plans for this choice of base transform is

$$\frac{N_D!}{\prod_{i=1}^D m_i!} 2^{N_D - 1}.$$

However, because it's possible for different compositions of different factor permutations to result in the same plan, the number is somewhat less than this. (As an example, think of $12 = 2 \times 2 \times 3$. If we use a base transform of size 2, we have two distinct permutations of the remaining factors, i.e. $(2,3)$ and $(3,2)$. Each of these has two compositions: $(2,3)$ and $(6)$ for the first, $(3,2)$ and $(6)$ for the second. Because of the commutativity of ordinary multiplication, we get a plan with a single size-6 Danielson-Lanczos step from each permutation.)

In order to get an accurate count of the number of plans for different values of $N$, we need to generate the plans to check for this kind of overlap. This requires a little bit of sneakiness. For a given $N$, we need to determine the possible base transforms, then we need to generate all distinct permutations of the "left over" factors to use for calculating compositions. We need to retain the unique result vectors of factors (some prime, some composite) to use as the sizes for Danielson-Lanczos steps.

The most difficult part is the generation of the permutations: because we may have duplicates in the list of factors, we don't want to use a standard permutation algorithm. If we did, for the factors $2^{10}$ we would generate $10!$ permutations (all the same, because all of our factors are identical) instead of the single distinct permutation! Wikipedia is our friend here. There's a simple algorithm to generate all distinct permutations of a multiset in lexicographic order, starting from the "sorted" permutation, i.e. the one with all the entries in ascending numerical order. It goes like this, for a sequence $a_j$, $1 \leq j \leq n$:

1. Find the largest index $k$ such that $a_k < a_{k+1}$. If no such index exists, the permutation is the last permutation.
2. Find the largest index $l$ such that $a_k < a_l$.
3. Swap the value of $a_k$ with that of $a_l$.
4. Reverse the sequence from $a_{k+1}$ up to and including the final element $a_n$.

Short and sweet! Here's some (not very clever or efficient, but good enough) code (here, Data.Vector is imported, and Data.List is imported qualified as L):
```haskell
-- One step of multiset permutation algorithm.
permStep :: Vector Int -> Maybe (Vector Int)
permStep v = if null ks
             then Nothing
             else let k = maximum ks
                      l = maximum $ filter (\i -> v!k < v!i) $ enumFromN 0 n
                  in Just $ revEnd k (swap k l)
  where n = length v
        ks = filter (\i -> v!i < v!(i+1)) $ enumFromN 0 (n-1)
        swap a b = generate n $ \i -> if i == a then v!b
                                      else if i == b then v!a
                                           else v!i
        revEnd f vv = generate n $ \i -> if i <= f then vv!i else vv!(n - i + f)

-- Generate all distinct multiset permutations in lexicographic order:
-- input must be the "sorted" permutation.
allPerms :: Vector Int -> [Vector Int]
allPerms idp = idp : L.unfoldr step idp
  where step v = case permStep v of
          Nothing -> Nothing
          Just p  -> Just (p, p)
```

It's more or less a direct Haskell transcription of the algorithm from the Wikipedia page. I'm sure it could be sped up and cleaned up a lot, but it works well enough for this application. (There's also the Johnson-Trotter loopless permutation algorithm, described in Chapter 28 of Richard Bird's Pearls of Functional Algorithm Design, which looks monstrously clever and deserves some study. It's probably overkill for now, though, since the code above is plenty quick enough for what we need.)

Given a way to generate all the permutations for each set of "left over" factors, we can calculate the compositions of each and retain the distinct ones for counting (we just use a Set from Data.Set to hold the results to maintain distinctness). We can do this for each possible base transform for a given $N$ and sum them to get the total number of possible FFT plans. This plot shows the number of plans available for each input vector length in the range 8–1024 (using the full range of specialised base transforms that I've implemented).

All prime input lengths have only a single possible plan, some larger highly factorisable input lengths have thousands of possible plans, and most input lengths have a number of plans lying somewhere between these two extremes. The primary challenge in selecting a good plan from this range of possibilities is to choose a heuristic that requires us to measure the performance of only a few plans: out of the hundreds or thousands of possibilities for a given input vector length, most plans will be duds. We need to pick out a handful of likely candidate plans for benchmarking.

In order to develop some heuristic plan selection rules, we'll do some benchmarking experiments, as we did for the $N=256$ case in the last section. We'll look at some input vector lengths that have lots of possible plans, some that have only a few possible plans, and some that lie in the middle range:
| Size | Factors | No. of plans |
|------|---------|--------------|
| 960  | $2^6 \times 3 \times 5$ | 2400 |
| 800  | $2^5 \times 5^2$ | 516 |
| 512  | $2^9$ | 252 |
| 216  | $2^3 \times 3^3$ | 230 |
| 378  | $2 \times 3^3 \times 7$ | 109 |
| 208  | $2^4 \times 13$ | 40 |
| 232  | $2^3 \times 29$ | 16 |
| 238  | $2 \times 7 \times 17$ | 10 |
| 92   | $2^2 \times 23$ | 6 |
| 338  | $2 \times 13^2$ | 5 |
| 1003 | $17 \times 59$ | 2 |
| 115  | $5 \times 23$ | 2 |

For each of these problem sizes, the tables below show the execution times for the fastest ten plans, or for all possible plans if there are fewer than ten. Each plan is written as its Danielson-Lanczos steps followed by the base transform size after the colon, with the base transform size highlighted in bold.

**$N=960$** (235.25 µs)

| Plan | Time |
|------|------|
| 3 × 2 × 5 : **32** | 115.13 µs |
| 3 × 5 : **64** | 116.19 µs |
| 3 × 5 × 2 : **32** | 117.68 µs |
| 2 × 5 × 3 : **32** | 117.89 µs |
| 5 × 3 : **64** | 118.49 µs |
| 2 × 3 × 5 : **32** | 118.99 µs |
| 5 × 2 × 3 : **32** | 119.04 µs |
| 5 × 3 × 2 : **32** | 119.29 µs |
| 6 × 5 : **32** | 121.05 µs |
| 5 × 6 : **32** | 124.21 µs |

**$N=800$** (192.05 µs)

| Plan | Time |
|------|------|
| 5 × 5 : **32** | 94.64 µs |
| 2 × 4 × 4 : **25** | 101.56 µs |
| 4 × 2 × 4 : **25** | 101.67 µs |
| 2 × 2 × 2 × 4 : **25** | 104.10 µs |
| 4 × 4 × 2 : **25** | 104.20 µs |
| 2 × 2 × 4 × 2 : **25** | 104.56 µs |
| 2 × 4 × 2 × 2 : **25** | 106.56 µs |
| 4 × 2 × 2 × 2 : **25** | 106.61 µs |
| 2 × 2 × 2 × 2 × 2 : **25** | 107.50 µs |
| 2 × 5 × 4 : **20** | 108.47 µs |

**$N=512$** (278.94 µs)

| Plan | Time |
|------|------|
| 4 × 4 : **32** | 55.10 µs |
| 2 × 4 : **64** | 55.63 µs |
| 4 × 2 : **64** | 55.66 µs |
| 2 × 2 × 4 : **32** | 56.01 µs |
| 2 × 2 × 2 : **64** | 56.65 µs |
| 4 × 2 × 2 : **32** | 57.00 µs |
| 2 × 4 × 2 : **32** | 57.34 µs |
| 2 × 2 × 2 × 2 : **32** | 58.71 µs |
| 8 : **64** | 60.27 µs |
| 2 × 8 : **32** | 62.38 µs |

**$N=216$** (69.19 µs)

| Plan | Time |
|------|------|
| 3 × 6 : **12** | 31.10 µs |
| 2 × 3 × 3 : **12** | 31.18 µs |
| 3 × 2 × 3 : **12** | 31.51 µs |
| 6 × 3 : **12** | 31.63 µs |
| 3 × 3 × 2 : **12** | 32.60 µs |
| 2 × 9 : **12** | 35.79 µs |
| 3 × 2 × 4 : **9** | 36.03 µs |
| 6 × 4 : **9** | 36.67 µs |
| 2 × 3 × 4 : **9** | 37.00 µs |
| 9 × 2 : **12** | 37.04 µs |

**$N=378$** (71.52 µs)

| Plan | Time |
|------|------|
| 3 × 3 × 3 : **14** | 52.16 µs |
| 9 × 3 : **14** | 59.54 µs |
| 3 × 9 : **14** | 60.95 µs |
| 3 × 2 × 7 : **9** | 68.10 µs |
| 2 × 3 × 7 : **9** | 68.44 µs |
| 2 × 7 × 3 : **9** | 68.74 µs |
| 7 × 6 : **9** | 69.49 µs |
| 6 × 7 : **9** | 69.73 µs |
| 7 × 2 × 3 : **9** | 69.83 µs |
| 7 × 3 × 2 : **9** | 70.76 µs |

**$N=208$** (32.85 µs)

| Plan | Time |
|------|------|
| 4 × 4 : **13** | 28.97 µs |
| 2 × 2 × 4 : **13** | 30.31 µs |
| 2 × 4 × 2 : **13** | 31.70 µs |
| 4 × 2 × 2 : **13** | 32.04 µs |
| 2 × 8 : **13** | 33.21 µs |
| 2 × 2 × 2 × 2 : **13** | 33.58 µs |
| 8 × 2 : **13** | 34.26 µs |
| 13 : **16** | 35.31 µs |
| 16 : **13** | 45.04 µs |
| 2 × 13 : **8** | 47.74 µs |

**$N=232$** (584.14 µs)

| Plan | Time |
|------|------|
| 29 : **8** | 87.52 µs |
| 29 × 2 : **4** | 113.47 µs |
| 2 × 29 : **4** | 136.21 µs |
| 29 × 4 : **2** | 142.41 µs |
| 29 × 2 × 2 : **2** | 159.00 µs |
| 2 × 29 × 2 : **2** | 185.15 µs |
| 2 × 2 × 29 : **2** | 229.41 µs |
| 4 × 29 : **2** | 231.43 µs |
| 58 : **4** | 254.22 µs |
| 58 × 2 : **2** | 294.84 µs |

**$N=238$** (316.93 µs)

| Plan | Time |
|------|------|
| 17 : **14** | 51.85 µs |
| 17 × 2 : **7** | 69.44 µs |
| 2 × 17 : **7** | 69.68 µs |
| 34 : **7** | 109.80 µs |
| 17 × 7 : **2** | 127.19 µs |
| 7 × 17 : **2** | 160.65 µs |
| 2 × 7 : **17** | 327.17 µs |
| 7 × 2 : **17** | 329.79 µs |
| 14 : **17** | 339.61 µs |
| 119 : **2** | 823.30 µs |

**$N=92$** (285.60 µs)

| Plan | Time |
|------|------|
| 23 : **4** | 43.65 µs |
| 23 × 2 : **2** | 65.04 µs |
| 2 × 23 : **2** | 76.00 µs |
| 46 : **2** | 125.16 µs |
| 4 : **23** | 295.37 µs |
| 2 × 2 : **23** | 299.11 µs |

**$N=338$** (67.50 µs)

| Plan | Time |
|------|------|
| 13 × 2 : **13** | 66.91 µs |
| 2 × 13 : **13** | 67.30 µs |
| 26 : **13** | 106.41 µs |
| 13 × 13 : **2** | 212.18 µs |
| 169 : **2** | 1701.35 µs |

**$N=1003$** (2413.41 µs)

| Plan | Time |
|------|------|
| 59 : **17** | 1843.99 µs |
| 17 : **59** | 2538.96 µs |

**$N=115$** (362.84 µs)

| Plan | Time |
|------|------|
| 23 : **5** | 46.50 µs |
| 5 : **23** | 373.16 µs |

It's important to note that we don't need to use these results to identify the best plan, just a reasonable set of plans to test empirically. When a call is made to the plan function, we're going to use Criterion to benchmark a number of likely looking plans and we'll take the fastest one. It doesn't matter if this benchmarking takes a while (a few seconds, say) since we'll only need to do it once for each input length (we'll eventually save the best plan away in a "wisdom" file so that we can get at it immediately if a plan for the same input length is requested later on). If some transform takes about 200 µs on average, then we can run 5,000 tests in one second. Since Criterion collects 100 samples per benchmark to get reasonable timing statistics, we could test about 50 plans in a second. Let's aim to come up with a heuristic that yields 20-50 plans for a given input length, test them all, then pick the best one.
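As a quick sanity check on that time budget, here's a throwaway helper (hypothetical, not part of the library):

```haskell
-- Rough number of candidate plans testable per second, given the mean
-- transform time in microseconds and the benchmark samples per plan.
plansPerSecond :: Double -> Int -> Int
plansPerSecond meanMicros samples =
  floor (1.0e6 / (meanMicros * fromIntegral samples))

-- plansPerSecond 200 100 == 50, matching the estimate above.
```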
So, based on these results and the earlier results for the powers-of-two case, what appear to be good heuristics for plan selection?

• First, make use of the larger specialised base transforms. In both the $N=256$ and $N=1024$ cases, the fastest plans all made use of the size-64 or size-32 base transforms, and likewise in the general experiments shown above, the faster plans tend to make use of the larger base transforms.

• Specialised base transforms are generally faster than using Rader's algorithm for a larger prime factor as the base transform. For example, for $N = 92 = 2^2 \times 23$, plans using the size-4 or size-2 base transforms are faster than those using Rader's algorithm for the factor of 23.

• Using multiple Danielson-Lanczos steps of smaller sizes is generally faster than trying to fold everything into one big composite step. For example, the results for the $N=256$ case show that the slowest plans tend to be those using the largest Danielson-Lanczos steps.

• In cases where there's no choice but to use Rader's algorithm for the base transform, it can make a difference to use a base transform size that doesn't need any padding for the Rader algorithm convolution. For example, for $N = 1003 = 17 \times 59$, using a base transform of size 17 is significantly faster than using a base transform of size 59, presumably because, for the size-17 transform, Rader's algorithm requires us to perform a convolution of size 16, and no zero padding is needed, while for the size-59 transform, the convolution requires padding of an intermediate vector to length 128 (the smallest power of two that works).

Based on this, here's a set of heuristics for choosing plans to test:

1. Determine the usable base transforms for the given $N$: the specialised base transforms in descending order of size, followed by any other prime base transforms, also in descending order of size, but with transform sizes of the form $2^m+1$ before any others (i.e. those that don't require any padding for the Rader's algorithm convolution).
2. For each base transform, generate all the plans that make use of Danielson-Lanczos steps for the remaining "left over" factors that are "small", in the sense that they involve only one, two or three factors at a time (i.e. don't bother generating plans that use Danielson-Lanczos steps of larger sizes).
3. Limit the number of generated plans to around 50 so that the benchmarking step doesn't take too long.
4. Benchmark all the generated plans and select the fastest!

This approach may not find the optimal plan every time, but it should have a relatively good chance, and it won't require the benchmarking of too many plans. There is one aspect of all this that would require revisiting if we had specialised "twiddlets" for the Danielson-Lanczos steps. In that case, we would want to pick plans where we could use a large specialised base transform and make use of the largest possible specialised Danielson-Lanczos "twiddlets". For the moment, since we don't have such specialised machinery for the Danielson-Lanczos steps, the approach above should be a good choice.

## Implementation

The code described here is the pre-release-3 version in the GitHub repository. Generation of the candidate plans for a given input size is basically a question of determining which base transforms are usable, then generating all of the permutations and combinations of the "left over" factors for each base transform.
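One small building block used by the ordering code below is a power-of-two test, for deciding whether a Rader convolution needs padding. The library has its own isPow2; a minimal sketch of what it must do:

```haskell
import Data.Bits ((.&.))

-- True for positive powers of two: exactly one bit set.
isPow2 :: Int -> Bool
isPow2 n = n > 0 && n .&. (n - 1) == 0
```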
The plans are sorted according to the heuristic ordering described above and we take the first 50 or so for benchmarking. For the base transforms, we define a helper type called BaseType to represent base transforms and to allow us to define an Ord instance for the heuristic ordering of the base transforms. We also define a newtype wrapper, SPlan, to wrap the whole of a plan definition, again so that we can write a custom Ord instance based on the heuristic ordering of plans:

```haskell
-- | Base transform type with heuristic ordering.
data BaseType = Special Int
              | Rader Int
              deriving (Eq, Show)

-- | Newtype wrapper for custom sorting.
newtype SPlan = SPlan (BaseType, Vector Int) deriving (Eq, Show)

-- | Base transform size.
bSize :: BaseType -> Int
bSize (Special b) = b
bSize (Rader b) = b

-- | Heuristic ordering for base transform types: special bases come
-- first, then prime bases using Rader's algorithm, ordered according
-- to whether padding is needed for the Rader convolution.
instance Ord BaseType where
  compare (Special _) (Rader _) = LT
  compare (Rader _) (Special _) = GT
  compare (Special s1) (Special s2) = compare s1 s2
  compare (Rader r1) (Rader r2) =
    case (isPow2 $ r1 - 1, isPow2 $ r2 - 1) of
      (True, True)   -> compare r1 r2
      (True, False)  -> compare r1 (2 * r2)
      (False, True)  -> compare (2 * r1) r2
      (False, False) -> compare r1 r2

-- | Heuristic ordering for full plans, based first on base type, then
-- on the maximum size of Danielson-Lanczos step.
instance Ord SPlan where
  compare (SPlan (b1, fs1)) (SPlan (b2, fs2)) = case compare b1 b2 of
    LT -> LT
    EQ -> compare (maximum fs2) (maximum fs1)
    GT -> GT
```

Here's the code for the generation of candidate plans:

```haskell
-- | Generate test plans for a given input size, sorted in heuristic
-- order.
testPlans :: Int -> Int -> [(Int, Vector Int)]
testPlans n nplans =
  L.take nplans $ L.map clean $ L.sortBy (comparing Down) $ P.concatMap doone bs
  where vfs = allFactors n
        bs = usableBases n vfs
        doone b = basePlans n vfs b
        clean (SPlan (b, fs)) = (bSize b, fs)

-- | List plans from a single base.
basePlans :: Int -> Vector Int -> BaseType -> [SPlan]
basePlans n vfs bt =
  if null lfs
  then [SPlan (bt, empty)]
  else P.map (\v -> SPlan (bt, v)) $ leftOvers lfs
  where lfs = fromList $ (toList vfs) \\ (toList $ allFactors b)
        b = bSize bt

-- | Produce all distinct permutations and compositions constructable
-- from a given list of factors.
leftOvers :: Vector Int -> [Vector Int]
leftOvers fs =
  if null fs
  then []
  else S.toList $ L.foldl' go S.empty (multisetPerms fs)
  where n = length fs
        go fset perm = foldl' doone fset (enumFromN 0 (2^(n - 1)))
          where doone s i = S.insert (makeComp perm i) s

-- | Usable base transform sizes.
usableBases :: Int -> Vector Int -> [BaseType]
usableBases n fs = P.map Special bs P.++ P.map Rader ps
  where bs = toList $ filter ((== 0) . (n `mod`)) specialBaseSizes
        ps = toList $ filter isPrime $ filter (> maxPrimeSpecialBaseSize) fs
```

The main testPlans function calls usableBases to determine what base transforms can be used for a given input size, distinguishing between specialised straight-line transforms (the Special constructor of the BaseType type) and larger prime transforms that require the use of Rader's algorithm (the Rader constructor for BaseType). For each possible base transform, the basePlans function determines the "left over" factors of the input size and uses the leftOvers function to generate a list of possible Danielson-Lanczos steps, which are then bundled up with the base transform information into values of type SPlan.
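One helper used by leftOvers but not shown above is makeComp, which turns a permutation of the factors plus a bitmask over the gaps into a composition; this is exactly the comma-or-plus construction from the compositions argument earlier. The real definition isn't shown here, so this is a plausible sketch under that reading (a set bit merges the adjacent factors into one composite Danielson-Lanczos step):

```haskell
import Data.Bits (testBit)
import Data.Vector (Vector, fromList, toList)

-- Turn a permutation of factors and a bitmask over the m-1 gaps into a
-- composition: bit i set means "multiply across gap i", bit i clear
-- means "keep the adjacent factors separate".
makeComp :: Vector Int -> Int -> Vector Int
makeComp perm gaps = fromList (go (toList perm) 0)
  where go [] _ = []
        go [f] _ = [f]
        go (f1:f2:fs) i
          | testBit gaps i = go ((f1 * f2) : fs) (i + 1)
          | otherwise      = f1 : go (f2 : fs) (i + 1)
```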
The leftOvers function calculates all possible permutations of the multiset of left-over factors (using the multisetPerms function, which is essentially identical to the multiset permutation code shown earlier), and for each permutation calculates all possible compositions of the factors. Plans are collected into an intermediate Set to remove duplicates. The full set of candidate plans is then sorted in testPlans in descending order of desirability according to the planning heuristics, and the first nplans plans are returned for further processing.

There are several places where this code could be made more efficient (there's quite a bit of intermediate list construction, for instance), but there's really not much point in expending effort on it, since the planning step should only be done once before executing an FFT calculation multiple times using the same plan. In any case, the plan generation is relatively quick even for quite large input sizes.

The top-level code that drives the empirical planning process is shown here (the opening wis binding, reading any saved "wisdom", is described below):

```haskell
-- | Globally shared timing environment. (Not thread-safe...)
timingEnv :: IORef (Maybe Environment)
timingEnv = unsafePerformIO (newIORef Nothing)
{-# NOINLINE timingEnv #-}

-- | Plan calculation for a given problem size.
empiricalPlan :: Int -> IO Plan
empiricalPlan n = do
  wis <- readWisdom n
  case wis of
    Just p -> return $ planFromFactors n p
    Nothing -> do
      let ps = testPlans n nTestPlans
      withConfig (defaultConfig { cfgVerbosity = ljust Quiet
                                , cfgSamples = ljust 1 }) $ do
        menv <- liftIO $ readIORef timingEnv
        env <- case menv of
          Just e -> return e
          Nothing -> do
            meas <- measureEnvironment
            liftIO $ writeIORef timingEnv $ Just meas
            return meas
        let v = generate n (\i -> sin (2 * pi * fromIntegral i / 511) :+ 0)
        tps <- CM.forM ps $ \p -> do
          let pp = planFromFactors n p
          pptest <- case plBase pp of
            bpl@(RaderBase _ _ _ _ csz _) -> do
              cplan <- liftIO $ empiricalPlan csz
              return $ pp { plBase = bpl { raderConvPlan = cplan } }
            _ -> return pp
          ts <- runBenchmark env $ nf (execute pptest Forward) v
          return (sum ts / fromIntegral (length ts), p)
        let (rest, resp) = L.minimumBy (compare `on` fst) tps
        liftIO $ writeWisdom n resp
        let pret = planFromFactors n resp
        case plBase pret of
          bpl@(RaderBase _ _ _ _ csz _) -> do
            cplan <- liftIO $ empiricalPlan csz
            return $ pret { plBase = bpl { raderConvPlan = cplan } }
          _ -> return pret
```

We'll describe the "wisdom" stuff in a minute, but first let's look at the Nothing branch of the case expression at the top of the empiricalPlan function. The testPlans function we looked at above is used to generate a list of candidate plans and we then use Criterion to run benchmarks for each of these plans, choosing the plan with the best execution time.

There are a few wrinkles to make things a little more efficient. First, we have a global IORef that we use to store the Criterion timing environment information; this avoids repeated calls to measureEnvironment to determine the system clock resolution. Using an IORef in this way is not thread-safe, which is something we would have to fix to make this a production-ready library. We'll not worry about it for now. Second, we make Criterion collect only a single sample for each plan, just to make the benchmarking go quicker. If we have a number of plans that are within a few percent of each other in terms of their execution times, it probably doesn't matter too much exactly which one we choose, so there's not much to be gained from running lots of benchmarking tests to get accurate timing information.
In most cases, the differences between plans are large enough that we can easily identify the best plan from a single benchmark run. Finally, we have to do some slight messing around to fix up the plans for the convolution step in prime base transforms using Rader's algorithm.

From a user's point of view, all of this complexity is hidden behind the empiricalPlan function: you call this with your input size and you get a Plan back, one that's hopefully close to the optimal plan that we can generate.

Once the optimal plan for a given input size has been determined on a given machine, it won't change, since it depends only on fixed details of the machine architecture (processor, cache sizes and so on). Instead of doing the work of running the empirical benchmarking tests every time that empiricalPlan is called, we can thus save the planning results away in "wisdom" files for reuse later on. Here's the code to do this:

```haskell
-- | Read from wisdom for a given problem size.
readWisdom :: Int -> IO (Maybe (Int, Vector Int))
readWisdom n = do
  home <- getEnv "HOME"
  let wisf = home </> ".fft-plan" </> show n
  ex <- doesFileExist wisf
  case ex of
    False -> return Nothing
    True -> do
      wist <- readFile wisf
      let (wisb, wisfs) = read wist :: (Int, [Int])
      return $ Just (wisb, fromList wisfs)

-- | Write wisdom for a given problem size.
writeWisdom :: Int -> (Int, Vector Int) -> IO ()
writeWisdom n (b, fs) = do
  home <- getEnv "HOME"
  let wisd = home </> ".fft-plan"
      wisf = wisd </> show n
  createDirectoryIfMissing True wisd
  writeFile wisf $ show (b, toList fs) P.++ "\n"
```

We save one file per input size in a directory called ~/.fft-plan, writing and reading the (Int, Vector Int) planning information using Haskell's basic show and read capabilities. (For example, if the size-13 base with $4 \times 4$ Danielson-Lanczos steps wins for $N=208$, as in the results above, the file ~/.fft-plan/208 will contain just "(13,[4,4])".) Whenever empiricalPlan is called, we check to see if we have a wisdom file for the requested input size and only generate plans and run benchmarks if there is no wisdom. Conversely, when we benchmark and find an optimal plan, we save that information to a wisdom file for later.

## Benchmarking

We can measure the performance of the pre-release-3 code in the same way as we've done for earlier versions. Here's a view of the performance of this version of the code that should be pretty familiar by now, and here are the ratio plots showing the relative performance of the original unoptimised version of the code (pre-release-1), the current version (pre-release-3) and FFTW.

It appears that the empirical optimisation approach we've taken here has been quite successful. The "pre-release-3" tab above shows that for most input sizes in the range that we're benchmarking, our code is around 10 times slower than FFTW, and never more than 20 times slower. In the previous article, we saw that, for the pre-release-2 code version, most input lengths were between 40 and 100 times slower than FFTW. The "Speed-up" tab also shows that we've significantly increased the range of input lengths getting 50-fold or better speedups compared to the original unoptimised code.

Most of the remaining slower cases can be put down to our implementation of Rader's algorithm. When the prime factor handled by Rader's algorithm is not of the form $2^m+1$, allocation of a zero-padded vector is required for the convolution in Rader's algorithm. It ought to be possible to avoid this allocation and speed the code up a little, which should help with some of the slower cases (most of which are either prime lengths, or involve a comparatively large prime factor).
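To see where the padding comes from: a cyclic convolution of size $m$ done with zero-padded power-of-two FFTs needs a vector of length at least $2m-1$, rounded up to the next power of two. That's my reading of the size-59 example above, where $m = 58$ pads to 128; a sketch of that calculation under this assumption:

```haskell
-- Smallest power of two big enough for a zero-padded cyclic
-- convolution of size m (assuming the usual pad-to-at-least-2m-1
-- rule); for m = 58 this gives 128, as in the N = 1003 example.
raderPadSize :: Int -> Int
raderPadSize m = head [ p | p <- iterate (* 2) 1, p >= 2 * m - 1 ]
```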
In the next (and penultimate) article in this series, we’ll clear up this issue a little, play with some compiler flags and catalogue the remaining opportunities for optimisation.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5593200325965881, "perplexity": 1331.3702282213505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058415.93/warc/CC-MAIN-20210927090448-20210927120448-00207.warc.gz"}
http://betterexplained.com/articles/an-intuitive-introduction-to-limits/
# An Intuitive Introduction To Limits

Limits, the foundations of calculus, seem so artificial and weasely: "Let x approach 0, but not get there, yet we'll act like it's there…" Ugh. Here's how I learned to enjoy them:

• What is a limit? Our best prediction of a point we didn't observe.
• How do we make a prediction? Zoom into the neighboring points. If our prediction is always in-between neighboring points, no matter how much we zoom, that's our estimate.
• Why do we need limits? Math has "black hole" scenarios (dividing by zero, going to infinity), and limits give us a reasonable estimate.
• How do we know we're right? We don't. Our prediction, the limit, isn't required to match reality. But for most natural phenomena, it sure seems to.

Limits let us ask "What if?". If we can directly observe a function at a value (like x=0, or x growing infinitely), we don't need a prediction. The limit wonders, "If you can see everything except a single value, what do you think is there?". When our prediction is consistent and improves the closer we look, we feel confident in it. And if the function behaves smoothly, like most real-world functions do, the limit is where the missing point must be.

## Key Analogy: Predicting A Soccer Ball

Pretend you're watching a soccer game. Unfortunately, the connection is choppy: Ack! We missed what happened at 4:00. Even so, what's your prediction for the ball's position?

Easy. Just grab the neighboring instants (3:59 and 4:01) and predict the ball to be somewhere in-between. And… it works! Real-world objects don't teleport; they move through intermediate positions along their path from A to B. Our prediction is "At 4:00, the ball was between its position at 3:59 and 4:01". Not bad. With a slow-motion camera, we might even say "At 4:00, the ball was between its positions at 3:59.999 and 4:00.001". Our prediction is feeling solid. Can we articulate why?

• The predictions agree at increasing zoom levels. Imagine the 3:59-4:01 range was 9.9-10.1 meters, but after zooming into 3:59.999-4:00.001, the range widened to 9-12 meters. Uh oh! Zooming should narrow our estimate, not make it worse! Not every zoom level needs to be accurate (imagine seeing the game every 5 minutes), but to feel confident, there must be some threshold where subsequent zooms only strengthen our range estimate.

• The before-and-after agree. Imagine at 3:59 the ball was at 10 meters, rolling right, and at 4:01 it was at 50 meters, rolling left. What happened? We had a sudden jump (a camera change?) and now we can't pin down the ball's position. Which one had the ball at 4:00? This ambiguity shatters our ability to make a confident prediction.

With these requirements in place, we might say "At 4:00, the ball was at 10 meters. This estimate is confirmed by our initial zoom (3:59-4:01, which estimates 9.9 to 10.1 meters) and the following one (3:59.999-4:00.001, which estimates 9.999 to 10.001 meters)". Limits are a strategy for making confident predictions.

## Exploring The Intuition

Let's not bring out the math definitions just yet. What things, in the real world, do we want an accurate prediction for but can't easily measure?

What's the circumference of a circle? Finding pi "experimentally" is tough: bust out a string and a ruler?
We can't measure a shape with seemingly infinite sides, but we can wonder "Is there a predicted value for pi that is always accurate as we keep increasing the sides?" Archimedes figured out that pi had a range of

$\displaystyle{3 \frac{10}{71} < \pi < 3 \frac{1}{7} }$

using a process like this: it was the precursor to calculus. He determined that pi was a number that stayed between his ever-shrinking boundaries. Nowadays, we have modern limit definitions of pi.

What does perfectly continuous growth look like? e, one of my favorite numbers, can be defined like this:

$\displaystyle{e = \lim_{n\to\infty} \left( 1 + \frac{1}{n} \right)^n}$

We can't easily measure the result of infinitely-compounded growth. But, if we could make a prediction, is there a single rate that is ever-accurate? It seems to be around 2.71828…

Can we use simple shapes to measure complex ones? Circles and curves are tough to measure, but rectangles are easy. If we could use an infinite number of rectangles to simulate curved area, can we get a result that withstands infinite scrutiny? (Maybe we can find the area of a circle.)

Can we find the speed at an instant? Speed is funny: it needs a before-and-after measurement (distance traveled / time taken), but can't we have a speed at individual instants? Hrm. Limits help answer this conundrum: predict your speed when traveling to a neighboring instant. Then ask the "impossible question": what's your predicted speed when the gap to the neighboring instant is zero?

Note: The limit isn't a magic cure-all. We can't assume one exists, and there may not be an answer to every question. For example: Is the number of integers even or odd? The quantity is infinite, and neither the "even" nor "odd" prediction stays accurate as we count higher. No well-supported prediction exists.

For pi, e, and the foundations of calculus, smart minds did the proofs to determine that "Yes, our predicted values get more accurate the closer we look." Now I see why limits are so important: they're a stamp of approval on our predictions.

## The Math: The Formal Definition Of A Limit

Limits are well-supported predictions. Here's the official definition:

$\displaystyle{ \lim_{x \to c}f(x) = L }$ means for all real ε > 0 there exists a real δ > 0 such that for all x with 0 < |x − c| < δ, we have |f(x) − L| < ε

Let's translate it piece by piece:

| Math English | Human English |
|---|---|
| $\lim_{x \to c}f(x) = L$ means | When we "strongly predict" that f(c) = L, we mean |
| for all real ε > 0 | for any error margin we want (+/- .1 meters) |
| there exists a real δ > 0 | there is a zoom level (+/- .1 seconds) |
| such that for all x with $0 < \lvert x − c \rvert < \delta$, we have $\lvert f(x) − L \rvert < \epsilon$ | where the prediction stays accurate to within the error margin |

There's a few subtleties here:

• The zoom level (delta, δ) is the function input, i.e. the time in the video
• The error margin (epsilon, ε) is the most the function output (the ball's position) can differ from our prediction throughout the entire zoom level
• The absolute value condition (0 < |x − c| < δ) means positive and negative offsets must work, and we're skipping the black hole itself (when |x – c| = 0).

We can't evaluate the black hole input, but we can say "Except for the missing point, the entire zoom level confirms the prediction f(c) = L." And because f(c) = L holds for any error margin we can find, we feel confident.

Could we have multiple predictions? Imagine we predicted L1 and L2 for f(c).
There's some difference between them (call it .1), therefore there's some error margin (.01) that would reveal the more accurate one. Every function output in the range can't be within .01 of both predictions. We either have a single, infinitely-accurate prediction, or we don't.

Yes, we can get cute and ask for the "left hand limit" (prediction from before the event) and the "right hand limit" (prediction from after the event), but we only have a real limit when they agree.

A function is continuous when it always matches the predicted value (and discontinuous if not):

$\displaystyle{\lim_{x \to c}{f(x)} = f(c)}$

Calculus typically studies continuous functions, playing the game "We're making predictions, but only because we know they'll be correct."

## The Math: Showing The Limit Exists

We have the requirements for a solid prediction. Questions asking you to "prove the limit exists" ask you to justify your estimate. For example: Prove the limit at x=2 exists for

$\displaystyle{f(x) = \frac{(2x+1)(x-2)}{(x - 2)}}$

The first check: do we even need a limit? Unfortunately, we do: just plugging in "x=2" means we have a division by zero. Drats. But intuitively, we see the same "zero" (x – 2) could be cancelled from the top and bottom. Here's how to dance this dangerous tango:

• Assume x is anywhere except 2 (It must be! We're making a prediction from the outside.)
• We can then cancel (x – 2) from the top and bottom, since it isn't zero.
• We're left with f(x) = 2x + 1. This function can be used outside the black hole.
• What does this simpler function predict? That f(2) = 2*2 + 1 = 5.

So f(2) = 5 is our prediction. But did you see the sneakiness? We pretended x wasn't 2 [to divide out (x-2)], then plugged in 2 after that troublesome item was gone! Think of it this way: we used the simple behavior from outside the event to predict the gnarly behavior at the event.

We can prove these shenanigans give a solid prediction, and that f(2) = 5 is infinitely accurate. For any accuracy threshold (ε), we need to find the "zoom range" (δ) where we stay within the given accuracy. For example, can we keep the estimate between +/- 1.0? Sure. We need to find out where

$\displaystyle{|f(x) - 5| < 1.0}$

so

\begin{align*} |2x + 1 - 5| &< 1.0 \\ |2x - 4| &< 1.0 \\ |2(x - 2)| &< 1.0 \\ 2|(x - 2)| &< 1.0 \\ |x - 2| &< 0.5 \end{align*}

In other words, x must stay within 0.5 of 2 to maintain the initial accuracy requirement of 1.0. Indeed, when x is between 1.5 and 2.5, f(x) goes from f(1.5) = 4 to f(2.5) = 6, staying +/- 1.0 from our predicted value of 5.

We can generalize to any error tolerance (ε) by plugging it in for 1.0 above. We get:

$\displaystyle{|x - 2| < 0.5 \cdot \epsilon}$

If our zoom level is δ = 0.5 ⋅ ε, we'll stay within the original error. If our error is 1.0, we need to zoom to 0.5; if it's 0.1, we need to zoom to 0.05.

This simple function was a convenient example. The idea is to start with the initial constraint (|f(x) – L| < ε), plug in f(x) and L, and solve for the distance away from the black-hole point (|x – c| < ?). It's often an exercise in algebra. Sometimes you're asked to simply find the limit (plug in 2 and get f(2) = 5); other times you're asked to prove a limit exists, i.e. crank through the epsilon-delta algebra.

## Flipping Zero and Infinity

Infinity, when used in a limit, means "grows without stopping". The symbol ∞ is no more a number than the sentence "grows without stopping" or "my supply of underpants is dwindling".
They are concepts, not numbers (for our level of math, Aleph me alone). When using ∞ in a limit, we're asking: "As x grows without stopping, can we make a prediction that remains accurate?". If there is a limit, it means the predicted value is always confirmed, no matter how far out we look.

But I still don't like infinity, because I can't see it. But I can see zero. With limits, you can rewrite

$\displaystyle{\lim_{x \to \infty}}$ as $\displaystyle{\lim_{\frac{1}{x} \to 0}}$

You can get sneaky and define y = 1/x, replace items in your formula, and then use

$\displaystyle{\lim_{y \to 0^+}}$

so it looks like a normal problem again! (Note from Tim in the comments: the limit is coming from the right, since x was going to positive infinity.) I prefer this arrangement, because I can see the location we're narrowing in on (we're always running out of paper when charting the infinite version).

## Why Aren't Limits Used More Often?

Imagine a kid who figured out that "putting a zero on the end" made a number 10x larger. Have 5? Write down "5" then "0", or 50. Have 100? Make it 1000. And so on. He didn't figure out why multiplication works, why this rule is justified… but, you've gotta admit, he sure can multiply by 10. Sure, there are some edge cases (Would 0 become "00"?), but it works pretty well.

The rules of calculus were discovered informally (by modern standards). Newton deduced that "the derivative of x^3 is 3x^2" without rigorous justification. Yet engines whirl and airplanes fly based on his unofficial results.

The calculus pedagogy mistake is creating a roadblock like "You must know Limits™ before appreciating calculus", when it's clear the inventors of calculus didn't. I'd prefer this progression:

• Calculus asks seemingly impossible questions: When can rectangles measure a curve? Can we detect instantaneous change?
• Limits give a strategy for answering "impossible" questions ("If you can make a prediction that withstands infinite scrutiny, we'll say it's ok.")
• They're a great tag-team: Calculus explores, limits verify.

We memorize shortcuts for the results we verified with limits (d/dx x^3 = 3x^2), just like we memorize shortcuts for the rules we verified with multiplication (adding a zero means times 10). But it's still nice to know why the shortcuts are justified.

Limits aren't the only tool for checking the answers to impossible questions; infinitesimals work too. The key is understanding what we're trying to predict, then learning the rules of making predictions. Happy math.

1. Joe says: Indeed, one of the great tragedies of mathematical education is that we teach calculus backwards. The epsilon-delta business of Cauchy and Weierstrass is, of course, key in the field of analysis. But high school and university students are there to learn calculus, not calculus of variations, right? For 150 years, we did quite well sticking with Leibniz's notion of infinitesimal quantities, a concept that's all but disappeared from modern calculus courses. (I hadn't heard of an 'infinitesimal' until I stumbled upon this site in the midst of my high school calculus course). Anyway, keep up the good work, Kalid.
Another excellent article. 2. Sean Roberts says: I love this website & its emails & how through this site I confirm that there are other people out there that find math to be magical. 3. Liana says: Hello, great post as always! May I ask a question? About limits in the indeterminate form 0/0, I can’t understand why algebraic manipulation works! Any insight will be welcomed! Thanks, Liana 4. kalid says: @Joe: Great point, thanks for the comment. Exactly, we teach high school calculus as if we’re hard-nosed theoreticians interested in the mechanics of how calculus is put together (a bit like learning organic chemistry to see how gasoline is combusted before taking driver’s ed). Happy you enjoyed the article. @Sean: Really appreciate it! @Liana: Great question. I’d like to do a follow-up on some of the subtleties about how to resolve indeterminate forms. In this example, my intuition is the points *outside* the black hole do not have any issue with (x – 2) [for example], so can divide it out easily. And we are actually using the surrounding points, not the the “black hole” itself, to make the estimate. 5. ashvini says: this was dynamite ! “The calculus pedagogy mistake is creating a roadblock like “You must know Limits™ before appreciating calculus”, when it’s clear the inventors of calculus didn’t” loved it. thanks a ton. when i was in junior school [1977], my village school teacher used a rope and sticks to explain pi. it used to be called ‘sulba sutra’ [string [as in rope and string] principles] in ancient india. indians didnt give much importance to rigorous proof. if something could be directly measured [like d/dt [x square] = 2x] then that was it! i wish more teachers would tech more ‘practical’ maths. string theory is my fav example of useless maths – 30 yrs of research funding sunk, without a SINGLE falsi-fiable result to show for, to sink ones teeth into. ashvini, new delhi india. 6. koushik says: hi, very good explanation indeed. More than your mathematical know how, what really matters is logical approach. But the beauty of this problem is that, the result turns out to be in mathematical form. 7. Excellent, finally! Well done Kalid. I remember my HS teacher butchering this 8. kalid says: @ashvini: Definitely — I need to experience an idea firsthand before I am truly comfortable with it. @koushik: Thanks for the comment; yep, limits give us a logical framework to make the best predictions possible. @Brit: Awesome, glad you liked it :). 9. Amrish says: Excellent work as always. One comment. “The error margin (epsilon, ε) is the function result, i.e. the position of the ball. “ I thought the error margin was the difference between the actual position of the ball and the predicted position of the ball. 10. kalid says: Whoops, I should clarify, thanks. The error margin is the maximum amount the points in the visible range are allowed to vary from your prediction. Every point in the zoom range must lie within the error margin for us to feel confident. 11. Thanks a lot! You know, this calculus stuff is not really in my syllabus but i have a big interest in physics and as we all know, it’s close to impossible to appreciate many higher level concepts of physics without a thorough knowledge of calculus and so i decided to go through those heavy books on this haunted topic and guess what, i derived some sort of half baked knowledge but a big thanks to you that i was finally able to understand the basics of calculus and make certain crucial amendments to my foundation. 12. 
Jar says: Tremendous explanation. The epsilon-delta concept is fascinating. Thanks.

13. gulrez says: wonderful explanation!!!!!! even a kid can understand limits through your article. but i have a question. math has black hole type scenarios like infinity, something divided by zero, etc. do irrational numbers also behave like limits, because we just approach them, not get them? i mean we just get closer to them, not precisely evaluate them. waiting for an article on limits and irrational numbers from the god of mathematical explanation (kalid sir)

14. kalid says: @Abhineet: Awesome, glad it's helping. Cementing the foundation for ideas is great. @Gulrez: Happy it worked. I'm not well versed in number theory, but irrational numbers (like e, pi) can be defined as limits, i.e. the result of some process that continues forever (after all, how many sides do you put on a shape to make it a circle?). I'd like to do more on this.

15. Majo says: What program do you use for illustrations? Thanks

16. Kalid says: Hi Majo, I use PowerPoint to make the diagrams.

17. Joe Morin says: In your epsilon-delta example, you have epsilon in units of distance (+/- 0.1 meters) and delta in units of time (+/- 0.1 seconds), so the units on one side of the inequality do not balance with the units on the other side. Since I teach physics and not math, this was confusing to me. Could you please explain? Otherwise I find your explanations extremely helpful and I plan to continue this series once I get past this obstacle. I remember working very hard at this in college very many years ago without truly understanding it, but now I'm on the verge of actually understanding it. Thanks, Joe

18. kalid says: Hi Joe, great question. Notice we actually have 2 separate inequalities, essentially: if time (the function input) is within a certain range, then distance (the function output) must be within a certain range. I.e., when 0 < |x − c| < δ, we have |f(x) − L| < ε. Time and distance are never compared directly, just corresponding times and distances. Time of x results in distance of f(x), but x and f(x) never appear in the same inequality. You're right though, it wouldn't make sense to compare them, and applying units is a good check to see if the variables have been mixed along the way. --- Whoops! I re-read what you wrote, and understand better. In the definition of the limit, the two quantities are not compared. But when comparing the conditions that make each meet its threshold, they could be. Here's how I see it: We are comparing inputs (seconds) and outputs (meters) and trying to equate them, not from a "units" perspective, but from an accuracy one. I.e., how does a "meter" of accuracy translate into seconds? (An accuracy of +/- 1 meter may require a time interval of +/- 0.1 seconds.) The "meter" and "second" aren't really the SI units anymore, they are inputs and outputs in a particular system [because in a different function, a meter of accuracy may require more seconds, or may not be possible at all if the function oscillates wildly]. In other words: what range of meters has the same accuracy as a given range of seconds? ("Ranges of precision" between the inputs and outputs can be compared, even if the units can't be.)

19. Joe Morin says: Okay. Thanks. I was confusing the variable "x" with position, since I use it so often that way; but in your example "x" is time and f(x) is the position. I worked it through using "t" for time, and I understand it now. Joe

20.
Tim says: Quick clarification/correction: In your Flipping Zero and Infinity section, you have an error. Since $\displaystyle{x\to +\infty}$, you must have that $\displaystyle{\frac{1}{x}\to 0}$ *from the right*, and thus $\displaystyle{y\to 0^+}$. Having $\displaystyle{x\to\infty}$ is a one-sided limit, but stating $\displaystyle{y\to 0}$ is a two-sided limit. This is a source of many an error on an AP exam…

21. kalid says: Hi Tim, that's an excellent addition, and something that would have tripped me up as well! Appreciate the note, I'll revise the article.

22. A Googler says: Hi Kalid. Nice article. You said the number of integers is neither odd nor even, but I guess it is clearly odd. How? Well, let the number of positive integers be x. Then the number of negatives is also x. There is one more integer remaining, 0. Thus the number of integers is 2x+1, which is clearly odd. So why do you say that we can't tell whether the number of integers is odd or even?

23. Dave says: Hi Kalid, I really enjoyed your gentle introduction to calculus and the finding pi articles. I just recently purchased your book as a token of appreciation. I have a question regarding the zoom levels; it was stated that: "The predictions agree at increasing zoom levels. Imagine the 3:59-4:01 range was 9.9-10.1 meters, but after zooming into 3:59.999-4:00.001, the range widened to 9-12 meters." I don't see how zooming in increased the range. If you're looking more precisely, then wouldn't the range be much smaller? Cheers, Dave

24. Kalid says: @A Googler: Good argument! In my head I was thinking about the number of positive integers, but being able to match them up like that might make the resolution more clear. (I'm still not sure if an infinite number is "allowed" to be even or odd, but if it stays odd as it "grows", maybe?). @Dave: Thanks for the support! Good point, it's not really possible that the range would *increase* as you zoomed. Imagine the range never diminishing though — things not getting more accurate at all as you zoomed in. Then your confidence/predictions wouldn't be greater as you looked closer, and you wouldn't feel comfortable in your predictions. Excellent feedback here!

25. Raifu says: Hi Kalid, what a great website you have, really enjoyed your article! Anyway, talking about limits I still have some questions: 1. Why do we need limits? Say that you have an equation which results in an indeterminate form 0/0. Then you make a simplification to find the limit. Since limit means "as x approaches…" then the result is not the exact answer, right? What confuses me is what advantage or benefit we get by knowing the limit. I mean we know the result cannot be determined, but we still insist on getting its limit? What for? Do you agree that a limit is not a certain answer, and if you do (or if it's true) this will lead us to my 2nd question. 2. If my concept (or mindset) about the limit as an uncertain thing is true, then derivatives and integrals are supposed to be uncertain things. Is that true? Please correct my mindset if it's wrong. These limit, derivative, and integral things are driving me crazy right now. I really want to understand the analogy, logic, mentality, etc. of these matters. Cheers

26. Diane Tran says: Thank you so much for taking the time to explain to people about math in ways that people can actually relate to. You are awesome and will go far! You should definitely think about teaching as a professor.

27. Anonymous says: What you do is brillllliant.
I'm a student in the eleventh grade, and I think your website is really opening up new avenues, and I've been here for exactly 15 minutes! I'm definitely coming here more, and wow. Hats off to you, dude. You should be very proud. :')

28. kalid says: A bit late, but here goes anyway. @Raifu: I think limits are useful, even for indeterminate forms like sin(x)/x, because we can get a reasonable idea for a starting point. Many situations begin at t=0, but if we have t as the denominator, we technically have an indeterminate form. But we "know" that the position at t=0 was valid, so limits give us a nice estimation of what it should be. Derivatives/integrals only work for functions that match their limit at every point, which are called continuous. You're correct though, there are functions which are ill-behaved (do not match their limits) and we can't use the regular calculus tools on them! (I'm not super familiar with this, I just know that if a function is discontinuous you need to be very careful, and work around the discontinuity, etc.) @Diane, Anonymous: Thanks so much! Glad you're enjoying the site. I'd like to keep exploring more avenues to explain things :).

29. Omer Abid says: Kalid, in a nutshell, what is the difference between the limit approach and the infinitesimal approach?

30. kalid says: Hi Omer, I'd say the main difference is that infinitesimals create a new class of numbers (which are too small to measure with our existing numbers), and limits stay within our number system (making a solid prediction about what happens if we *could* have our number disappear). Functionally, the results are the same, but I prefer infinitesimals because we can treat dy, dx, etc. as actual microscopic quantities (similar to physics). Technically, with limits, you're not allowed to separate dy/dx into variables (dy/dx is a shorthand for a larger limit).

31. Eric V says: Great post as always Kalid! I've seen a few indications here of the use of limits toward the task of making sense of some indeterminate forms (e.g. 0/0, but there are others). It may only be my unique myopia, but this use of limits has always been only in my periphery. I've always thought of limits as, well, to be honest, an essentially useless bit of formalism that we feel the need to put in our proofs lest the demagogues of Rigor chide us too harshly. But then I started to appreciate the role of Rigor… At least I have seen enough to try to fight the transformation to demagogue. Enough philosophy. In my mind there's always been a more natural method to attack the indeterminate forms in their varied flavors (0/0, x/0, inf/inf) and that is l'Hopital's rule. Let's just pretend that limits don't actually appear in l'Hopital's rule. I would like to see how you present this topic as I don't see it in the site guide. I see l'Hopital's rule answering the question better because it talks about rates of change (derivatives) instead of the limit talking about how close you can get to 'there' without actually getting there. Limits are like two cars playing chicken and assuming neither will turn. Then we ask what happens to the drivers at the point where they are infinitely close to crashing but not quite there. It just seemed to me a bunch of mathematic trickery. I say l'Hopital's rule (as I delude myself and ignore the limits in the definition) is more like two lemmings running toward a cliff with a trough tied between them and a marble in the trough. We then ask, which way will the marble roll as the lemmings approach oblivion?
I don't have to look really, really close but not quite there. I just ask which lemming runs to its death faster? If the lemming on the left has a higher rate of change then I can predict the marble will tilt left as the trough goes over the edge. If I have f(x)/g(x) and I know they both reach 0, I just ask which rushes toward 0 faster: the rate of change of f(x) with respect to x, or df/dx. I've seen a few repetitions of the idea that infinity is not a number, just a concept. I've heard the same from every instructor who writes it in an equation and insists it has all the properties of a number. I throw in this next little tidbit because I can't resist. It's a question I often pose to myself in an effort to gain more insight, though I usually just puzzle myself further: If I say 'You never reach infinity', is that not equivalent to saying 'You reach infinity at never'?

32. Tim McGrath says: Hi Kalid, I like infinitesimals too. Even if they're not traditional numbers, they don't bother me at all. To me they're just like infinite sums, like an infinite series that converges on a number or a point. And that's just what an integral is, an infinite sum, correct? So even though infinity is inherent in the notion of an infinite sum, it is acceptable to mathematicians because it doesn't go shooting off into the unknowable. In any case, I have a related question: Would it make you or the math community squeamish to consider a limit as an example of what might be called "finite infinity"? A "finite infinity," in my view, is simply a converging series, an infinity that has a finite limit. I ask because Emily Dickinson uses this very phrase, and I don't think she was taking poetic license. Though not well-versed in mathematics, she had an intuitive understanding of the subject. She knew, for example, that there is no difference between infinity and infinity minus one. And she knew as well as Cantor did that infinity comes in different sizes. So what do you think? Is "finite infinity," in your opinion, an acceptable mathematical term? Could I use it without embarrassing myself in a forum that might be read by math as well as poetry geeks? Thanks. –Tim

33. Anonymous says: Hi Kalid, I just found your site, and it's wonderful! Regarding the intuition of limit: after years of struggling with it, I came up with an insight that finally satisfied me. I would be very happy to hear your opinion. The epsilon/delta definition of limit formalizes the concept of gapless extension (or unbroken continuation): Let f be a function with a "black hole" at point c (using your nice analogy). We wish to extend its definition to include f(c) as well. Suppose we have a candidate value L for f(c). Now, if we can show there are no gaps between L and the neighboring values of c, then L is the "most consistent" candidate and we can happily define f(c) = L. This gaplessness is shown with the formal definition of limit: For any ε > 0 (i.e. for any potential gap between L and the neighboring values of the black hole) there is a δ > 0 (a neighborhood of c) such that for all 0 < |x − c| < δ we have |f(x) − L| < ε. Since this holds for any ε > 0, it follows that there are no gaps whatsoever between L and the neighboring values. So L is the perfect candidate for f(c) = L. Similar arguments can be presented for limits of successions, series, integrals etc… Now, why is gaplessness the best criterion to extend a set of known values to an unknown value? Because nature works that way, at least macroscopically.
A basic principle of natural philosophy is "Natura non facit saltus" (Nature does not proceed by jumps, http://en.wikipedia.org/wiki/Natura_non_facit_saltus). Therefore, by defining e.g. the instantaneous rate of change as the gapless extension of the Δy/Δx values to the case Δx = 0, we are working according to the (macroscopic) laws of nature, which would account for the incredible success of calculus. I would love to hear your opinion on the above. Again, thanks for your wonderful and enlightening site. – Max

34. Massimo Coletti says: Hi Kalid, sorry for posting twice, but the previous comment got somehow messed up…
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 17, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8223356008529663, "perplexity": 1287.1382204859872}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400380394.54/warc/CC-MAIN-20141119123300-00253-ip-10-235-23-156.ec2.internal.warc.gz"}
https://homework.cpm.org/category/CC/textbook/CC2/chapter/Ch8/lesson/8.3.1/problem/8-65
8-65. Josue called his father to say that he was almost home. He had traveled $61.5$ miles, which was $\frac{3}{4}$ of the way home. Write and solve an equation to calculate the total distance he will travel to get home.

Let a variable $x$ represent the total distance to home. If three fourths of the total distance, $x$, is $61.5$ miles, how can you represent this as an equation?

$\frac{3}{4}x=61.5$

What does $x$ equal?
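For completeness, a one-line check of the hint's equation (my addition, not part of the lesson):

```python
# Solving (3/4) * x = 61.5 for the total distance x.
distance_so_far = 61.5    # miles already traveled
fraction_done = 3 / 4     # portion of the trip completed
x = distance_so_far / fraction_done
print(x)                  # 82.0, so the whole trip home is 82 miles
```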
{"extraction_info": {"found_math": true, "script_math_tex": 7, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9116324186325073, "perplexity": 1130.9120736569416}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401582033.88/warc/CC-MAIN-20200927215009-20200928005009-00219.warc.gz"}
https://www.physicsforums.com/threads/maximum-and-minimum-of-a-function.511429/
# Homework Help: Maximum And Minimum of a function

1. Jul 2, 2011

### realism877

Function: x + x^(2/3)

I got 0 and -(8/27) as the critical numbers. Am I right?

I also want to know if this function has a max and a min.

Note: 2/3 is an exponent.

2. Jul 2, 2011

### micromass

Hi realism877!

Use the X2 (superscript) and X2 (subscript) buttons for displaying exponents.

Your critical points are correct. So can you tell me when the function is increasing and decreasing? This will give you information about possible maxima and minima.

3. Jul 2, 2011

### SammyS

Staff Emeritus

Hello realism877. (Use the 'X2' button above the "Advanced Message" box for exponents (superscripts).)

How did you get the critical numbers? What did you get for a first derivative?

x + x^(2/3) has relative extrema.

4. Jul 2, 2011

### realism877

The function is increasing (-infinity, -8/27) u (0, +infinity)

Decreasing (-8/27)

Absolute max = (-8/27, .1481)

Absolute min = (0, 0)

Are my findings correct?

5. Jul 2, 2011

### micromass

You mean that it's decreasing in (-8/27, 0), right?

These certainly are relative minima and maxima, but that doesn't make them absolute. The function increases after 0, so it might get very big. The function increases before -8/27, so it might get really small. The only way to know is by calculating the limits at $\pm \infty$ and seeing to where the function increases...

6. Jul 2, 2011

### realism877

Yes, (-8/27, 0) decreasing.

Are they local maxima and minima? The values I posted.

7. Jul 2, 2011

### micromass

The function increases before -8/27 and decreases after, so it's a local maximum. The same with the local minimum. But you'll need to do some more work to determine whether they are global maxima/minima.

8. Jul 2, 2011

### realism877

There are none. The function is (-infinity, +infinity).

Am I on to something?

9. Jul 2, 2011

### micromass

Yes, you are entirely correct! You might want to prove that the range is $(-\infty,+\infty)$...

10. Jul 2, 2011

### SammyS

Staff Emeritus

Since the critical point and thus the rel. min. at x=0 comes from the non-existence of the derivative at x=0, you might want to show (or state) that the function is continuous at x = 0 --- just for completeness.

11. Jul 2, 2011

### realism877

Thanks. I did the second derivative check, and I tried to get the critical numbers. It resulted in 1=0.

From my calculator it looks like there is concavity down. But how can I determine concavity algebraically?

12. Jul 2, 2011

### realism877

How do I show it?

13. Jul 2, 2011

### SammyS

Staff Emeritus

To be continuous at x=0: $\lim_{x\to 0^+} f(x) = \lim_{x\to 0^-} f(x) = f(0)$

For concavity: Isn't the second derivative negative everywhere, except at x = 0, where it does not exist?

14. Jul 2, 2011

### realism877

No, when I tried to get the 0s from the second derivative, I got 0=1.

How do I know where there is concavity if I don't have 0s to make a number line and check where it is positive or negative?

15. Jul 2, 2011

### ArcanaNoir

I don't know why you got 0=1, you shouldn't have. To determine concavity, use the second derivative and choose a point in each interval between critical points, including a point further left than your smallest critical point and a point further right than your largest critical point. Positive values mean concave up, negative concave down. If you're having trouble with the second derivative, try finding it again. If you can't get it, what do you have as the second derivative?

I just have to say I think this is a squirrely function...bends and corners... icky.

16.
Jul 2, 2011

### realism877

I'm on a mobile device, and I'm not near the paper where I did the work. I do remember not being able to solve for x.

Can someone verify for me that there are no 0s for the second derivative test?

17. Jul 2, 2011

### SammyS

Staff Emeritus

There are no zeros for the second derivative, but there is a critical point.

Look at the second derivative (when you get back to your working paper). It should be obvious that it is negative wherever it's defined.

18. Jul 3, 2011

### realism877

I don't understand. How can I get a critical point if I can't get a 0?

19. Jul 3, 2011

### ArcanaNoir

The second derivative test actually favors non-zero values. You're looking for positivity or negativity. When the second derivative = zero, it's a possible inflection point. Everywhere else, it's indicative of concavity. In this situation, it's all the same concavity everywhere.

20. Jul 3, 2011

### realism877

Where do I plug the values in?
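(Editorial aside: a small numerical spot-check of the thread's conclusions, using finite differences rather than the posters' algebra; the function and sample points below are my own choices.)

```python
# f(x) = x + x**(2/3) with the real cube root for x < 0.
# Expected: critical numbers at x = -8/27 (f' = 0) and x = 0 (f' undefined),
# local max value 4/27 ~ 0.1481, and f'' < 0 wherever it exists.
import math

def f(x):
    cbrt = math.copysign(abs(x) ** (1 / 3), x)   # real cube root
    return x + cbrt ** 2

def fprime(x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

print(fprime(-8 / 27))        # ~ 0: critical number at x = -8/27
print(f(-8 / 27))             # ~ 0.1481 = 4/27, the local max value
print(f(0))                   # 0, the local min (f' undefined there)
for x in (-1.0, -0.1, 0.1, 1.0):
    f2 = (f(x + 1e-4) - 2 * f(x) + f(x - 1e-4)) / 1e-8
    print(x, f2 < 0)          # True at every sample: concave down
```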
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8338850736618042, "perplexity": 2152.8449810958527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221212639.36/warc/CC-MAIN-20180817163057-20180817183057-00515.warc.gz"}
https://open.library.ubc.ca/cIRcle/collections/48630/items/1.0371948
# Open Collections

## BIRS Workshop Lecture Videos

### Mahler measure and the Vol-Det Conjecture

Champanerkar, Abhijit

#### Description

For a hyperbolic link in the 3-sphere, the hyperbolic volume of its complement is an interesting and well-studied geometric link invariant. Similarly, the determinant of a link is one of the oldest diagrammatic link invariants. In previous work we studied the asymptotic behavior of volume and determinant densities for alternating links, which led us to conjecture a surprisingly simple relationship between the volume and determinant of an alternating link, called the Vol-Det Conjecture. In this talk we outline an interesting method to prove the Vol-Det Conjecture for infinite families of alternating links using a variety of techniques from the theory of dimer models, Mahler measures of 2-variable polynomials, and the hyperbolic geometry of link complements in the thickened torus.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8248534798622131, "perplexity": 467.72090549139006}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103640328.37/warc/CC-MAIN-20220629150145-20220629180145-00077.warc.gz"}
http://mathhelpforum.com/advanced-applied-math/92713-kinematics-problem.html
# Math Help - Kinematics problem

1. ## Kinematics problem

- correct problem appears in the next post

2. To begin with I'll just make this question more readable and fill in the bits that have obviously been removed in the copy and paste. Also I've corrected any equations that I think haven't been written correctly. Let me know if I guess the missing bits incorrectly. I've highlighted the bits of text I added in red:

1. A pilot attempts to fly with constant speed $v$ from the point $P = (D, 0)$ on the $x$-axis to the origin $O = (0, 0)$. A wind blows with speed $w$ in the positive $y$ direction. The pilot is not familiar with vector addition and thinks the shortest path to $O$ is achieved by flying his plane so that it always points directly towards $O$.

(a) Show that the actual flight path of the plane (in Cartesian coordinates) is given by $y(x) = f(x) \sinh [g(x)]$ ; (1) where $f(x)$ and $g(x)$ are scalar functions of $x$ that are to be determined.

(b) Consider the three cases $w > v$, $w = v$, and $w < v$ separately, and determine in which cases the pilot actually reaches $O$.

(c) Show that the flight path of the plane (in polar coordinates) is given by the polar equation $r = r(\theta) = D ( \cos \theta )^{c_1} (1 + \sin \theta )^{c_2}$ ; (2) where $c_1$ and $c_2$ are constants (that depend on $v$ and $w$) that are to be determined.

(d) Show that the radial component of the acceleration $a_r = 0$ and determine the transverse component $a_t$ in terms of the polar coordinates $r$ and $\theta$ (at time $t$).

(e) Determine the flight path of the plane (in Cartesian coordinates) if the wind is blowing with a velocity $w$ in a direction that makes an angle $\alpha$ with the vertical (i.e., the $y$ axis).

I'll put up a solution in a little while.

3. It's late in the UK right now so I will do so tomorrow.

4. Hi! I have been struggling with this problem for quite some time so would also appreciate it if you would be able to post an answer ASAP or at least some clarification! Thanks for your assistance!

5. hey, yes the corrections are right. not sure why it didn't include them when i pasted the question. thanks for changing them

6. Here are the solutions (a) to (d):

(a) Spoiler:

The pilot's apparent (to himself) velocity is the relative velocity, $\mathbf{v'}$, given by $\mathbf{v'} = \mathbf{v} - \mathbf{w}$ where $\mathbf{v}$ is the actual velocity and $\mathbf{w} = w \hat{\mathbf{j}}$ is the wind velocity. Now, since he is always flying towards $O$ we have that $\mathbf{v'} = -v \hat{\mathbf{e}}_r = -\frac{v}{\sqrt{x^2 +y^2}} (x \hat{\mathbf{i}} + y\hat{\mathbf{j}})$ so

$\mathbf{v} = \mathbf{v'} + \mathbf{w} = -v \hat{\mathbf{e}}_r + w \hat{\mathbf{j}} = -\frac{v x}{\sqrt{x^2 +y^2}} \hat{\mathbf{i}} + \left(w -\frac{vy}{\sqrt{x^2 +y^2}} \right)\hat{\mathbf{j}}$.

Now, note that $v_x = \dot{x}$ and $v_y = \dot{y}$ so $\frac{dy}{dx} = \frac{v_y}{v_x} \Leftrightarrow \frac{dy}{dx} = \frac{y}{x} - \frac{w}{v} \sqrt{1+ \left(\frac{y}{x}\right)^2}$.
Now let $y = z x$ so $\frac{dy}{dx} = x \frac{dz}{dx} + z$ so our equation becomes $x \frac{dz}{dx} + z = z - \frac{w}{v} \sqrt{1 + z^2}$ $\Leftrightarrow x \frac{dz}{dx} = - \frac{w}{v} \sqrt{1 + z^2}$

Integrating and using the fact that at $x=D$, $y = 0$ we have $\int_{0}^{z} \frac{1}{\sqrt{1+{z'}^2}} \, \mathrm{d}z' = -\frac{w}{v} \int_{D}^{x} \frac{1}{x'}\, \mathrm{d}x'$ giving us $\sinh^{-1} z = \frac{w}{v} \ln \left(\frac{D}{x} \right)$, $\Leftrightarrow z = \sinh \left[ \frac{w}{v} \ln \left(\frac{D}{x} \right) \right]$, $\boxed{ \Leftrightarrow y = x \sinh \left[ \frac{w}{v} \ln \left(\frac{D}{x} \right) \right]}$.

(b) Spoiler:

If you recall the definition of the $\sinh$ function (in terms of exponentials) we can rewrite this as $y = \frac{x}{2} \left(\exp\left[\frac{w}{v} \ln \left(\frac{D}{x} \right) \right] - \exp\left[-\frac{w}{v} \ln \left(\frac{D}{x} \right) \right] \right) = \frac{x}{2} \left( \left(\frac{D}{x}\right)^{\frac{w}{v}} - \left( \frac{D}{x}\right)^{-\frac{w}{v}} \right) = \frac{1}{2} \left( D^{\frac{w}{v}} x^{1-\frac{w}{v}} - D^{-\frac{w}{v}} x^{1+ \frac{w}{v}} \right)$

Now what we're interested in is the behaviour of $y$ as $x \to 0$ since if $y \to 0$ then the pilot will arrive at $O$.

When $w > v$ then $1- \frac{w}{v} < 0$ so then as $x \to 0$, $y \to \infty$ so the pilot never arrives at $O$. This should be obvious since even if he were to head directly into the wind he would always be moving vertically away from the $x$-axis!

When $w < v$ then $1- \frac{w}{v} > 0$ so as $x \to 0$, $y \to 0$ also. Hence the pilot does in fact arrive at $O$ in this case.

When $w = v$ then $1- \frac{w}{v} = 0$ so we have that $y = \frac{1}{2} \left(D - D^{-1} x^2 \right)$ so at $x=0$ we have $y = \frac{D}{2}$ so again the pilot does not reach $O$.

(c) Spoiler:

Continuing from the previous part you can substitute for $x$ and $y$ using $x = r \cos \theta$ and $y = r \sin \theta$ so the equation from (b) becomes: $2 r \sin \theta = D^{\frac{w}{v}} (r \cos \theta)^{1-\frac{w}{v}} - D^{-\frac{w}{v}} (r \cos \theta)^{1+ \frac{w}{v}}$ then division by $r \cos \theta$ gives $2 \tan \theta = D^{\frac{w}{v}} (r \cos \theta)^{-\frac{w}{v}} - D^{-\frac{w}{v}} (r \cos \theta)^{\frac{w}{v}}$ and if we let $u = \left(\frac{r\cos \theta}{D}\right)^{\frac{w}{v}}$ we get the quadratic $u^2 + 2 \tan \theta \, u - 1 = 0$, which is easily solved as $u = - \tan \theta \pm \sec \theta$ but since $r = D$ when $\theta = 0$ then $u = \sec \theta -\tan \theta = \frac{1}{\cos \theta} (1- \sin \theta)$ but note that $1- \sin^2 \theta \equiv \cos^2 \theta$ $\Leftrightarrow (1-\sin \theta) (1+ \sin \theta) \equiv \cos ^2 \theta$ $\Leftrightarrow 1- \sin \theta = \cos^2 \theta (1+ \sin \theta)^{-1}$ so $u = \cos \theta (1+ \sin \theta)^{-1}$ and putting back in terms of $r$ we get $\boxed{r = D \left( \cos \theta \right)^{\frac{v}{w} -1} \left(1+ \sin \theta \right)^{-\frac{v}{w}}}$.

Note that had I used the substitution $u = \left(\frac{r\cos \theta}{D}\right)^{-\frac{w}{v}}$, solving the quadratic would have directly yielded an expression in terms of $(1 + \sin \theta)$ rather than having to use the identity. I did it this way because I felt people would naturally choose the positive exponent so wanted to show how it could be done.

An alternative method would be to convert the velocity in terms of polar coordinate vectors and relate to the general expression for velocity in polar coordinates to get a differential equation in terms of $r$ and $\theta$ and then solve.
(d) Spoiler:

Recall that we can express $\mathbf{v}$ as $\mathbf{v} = -v \hat{\mathbf{e}}_r + w \hat{\mathbf{j}}$ (don't worry about the mixed geometry just yet as it makes life easier) so then the acceleration is given by the derivative wrt $t$ of this, so we have $\mathbf{a} = \frac{d}{dt} \mathbf{v} = -v \dot{\theta} \hat{\mathbf{e}}_{\theta}$ since $\frac{d}{d \theta} \hat{\mathbf{e}}_{r} = \hat{\mathbf{e}}_{\theta}$, so $\mathbf{a} \cdot \hat{\mathbf{e}}_r = 0$, hence $a_r = 0$.

Now, from the above, we know that the transverse acceleration, $a_t$, is given by $a_t = -v \dot{\theta}$. Now note that in polar coordinates $\mathbf{v} = \dot{r} \, \hat{\mathbf{e}}_{r} + r \dot{\theta} \hat{\mathbf{e}}_{\theta}$ so if we express our $\mathbf{v}$ in polar coordinates it will give us $\dot{\theta}$ at time $t$. So to convert to polar coordinates recall that $\hat{\mathbf{j}} = \sin \theta \, \hat{\mathbf{e}}_{r} + \cos \theta \, \hat{\mathbf{e}}_{\theta}$ $\Rightarrow \mathbf{v} = \left(w \sin \theta - v \right) \, \hat{\mathbf{e}}_{r} + w \cos \theta \,\hat{\mathbf{e}}_{\theta}$, hence by comparison we have $r \dot{\theta} = w \cos \theta$ so $a_t = -v \dot{\theta} = -\frac{v w}{r} \cos \theta$.

$\boxed{a_t = -\frac{v w}{r} \cos \theta }$.

Will put up solution to (e) in a little while (going to have my supper!). If anyone else would like to put up the solution to (e) in the meantime they are welcome to!

7. I love ya~~~!!!

8. Thanks, I ended up figuring out a, b and c, but your solution to d helped a lot. Midterm soon but I feel much more prepared now.

9. ## Concluding part!

(e) Spoiler:

First let $\alpha$ be the angle that $\mathbf{w}$ makes with the $y$-axis (positive in the clockwise sense). The thing to notice about this problem is that if you were to rotate the $x$-$y$ coordinate system clockwise through an angle of $\alpha$, to the new $x'$-$y'$ coordinate system, the wind is then again parallel to the $y'$-axis, so we arrive back at our original problem with the only difference being that our initial starting point for the trajectory is now at ($D \cos \alpha$, $D \sin \alpha$). So we can treat this problem exactly the same as part (a) up to the part where the initial conditions come in, i.e. the limits of the integral in terms of $z$.

So at the starting point in the $x'$-$y'$ coordinate system we have $x' = D \cos \alpha$ and $z = y' / x' = \tan \alpha$ so the integral becomes $\int_{\tan \alpha}^{z} \frac{1}{\sqrt{1+{z'}^2}} \, \mathrm{d}z' = -\frac{w}{v} \int_{D \cos \alpha}^{x'} \frac{1}{x''}\, \mathrm{d}x''$ where $z'$ and $x''$ are the dummy variables of integration in place of $z$ and $x'$ respectively. From the integral we get:

$\sinh^{-1} z - \sinh^{-1} \left(\tan \alpha \right) = -\frac{w}{v} \ln \left(\frac{x'}{D \cos \alpha} \right)$.

Note this solution can be tidied up in various ways but I'll leave that to you. Finally, the solution is in terms of the rotated coordinates $x'$ and $y'$ so it needs to be put back in terms of $x$ and $y$ using the transformation equations $x' = x \cos \alpha - y \sin \alpha$, $y' = x \sin \alpha + y \cos \alpha$ and $z = \frac{y'}{x'} = \frac{x \sin \alpha + y \cos \alpha}{x \cos \alpha - y \sin \alpha}$.

The final result is then a big messy implicit equation in terms of $x$ and $y$ which I can't be bothered to late $\chi$ so I'll leave to you to work out from the above!
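(Editorial aside: as a sanity check on part (a), here is a short numerical sketch that integrates the ODE dy/dx = y/x - (w/v) sqrt(1 + (y/x)^2) forward and compares against the boxed closed form. The values of D, v, w below are arbitrary illustrative choices of mine.)

```python
# Euler-march the pursuit ODE from x = D toward x = 0 and compare with
# the closed form y = x * sinh((w/v) * ln(D/x)).
import math

D, v, w = 1.0, 2.0, 1.0            # w < v, so the pilot reaches O

def closed_form(x):
    return x * math.sinh((w / v) * math.log(D / x))

x, y = D, 0.0
dx = -1e-5                          # march from x = D down toward 0
while x + dx > 0.05:                # stop short of the singular point x = 0
    z = y / x
    y += (z - (w / v) * math.sqrt(1 + z * z)) * dx
    x += dx

print(y, closed_form(x))            # the two values should agree closely
```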
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 148, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9571272134780884, "perplexity": 210.18236300156184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657121798.11/warc/CC-MAIN-20140914011201-00326-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
http://www.wias-berlin.de/annual_report/2001/node74.html
## Applied mathematical finance

Cooperation with: A. Bachi (University of Twente, The Netherlands), B. Coffey (Merrill Lynch, New York, USA), J.P. Dogget (Risk Waters Group, London, UK), H. Föllmer, W. Härdle, U. Küchler, R. Stehle (Humboldt-Universität zu Berlin), H. Haaf (Münchener Rückversicherung AG, München), A.W. Heemink, H. van der Weide (Technische Universiteit, Delft, The Netherlands), P. Kloeden (Johann Wolfgang Goethe-Universität, Frankfurt am Main), J. Kremer, C. März, T. Sauder, T. Valette (Bankgesellschaft Berlin AG, Berlin), O. Kurbanmuradov (Turkmenian Academy of Sciences, Ashkhabad), M. Schweizer (Technische Universität Berlin/Universität München), G. Stahl (Bundesaufsichtsamt für das Kreditwesen), U. Wystup (Commerzbank AG, Frankfurt am Main)

Supported by: Bankgesellschaft Berlin AG, SWON Netherlands (Dutch Research Association), BMBF: "Effiziente Methoden zur Bestimmung von Risikomaßen" (Efficient methods for valuation of risk measures - 03SCM6B5)

Description: The project Applied mathematical finance of the Research Group Stochastic Algorithms and Nonparametric Statistics is concerned with the stochastic modeling of financial data, the valuation of derivative instruments (options), and risk management for banks. The implementation of the developed models and their application in practice is done in cooperation with financial institutions.

Since the Basel Committee's proposal for "An internal model-based approach to market risk capital requirements" (1995) was implemented in national laws, banks have been allowed to use internal models for estimating their market risk and have been able to compete in the innovation of risk management methodology. Since all banks are required to hold adequate capital reserves with regard to their outstanding risks, there has been a tremendous demand for risk management solutions. These problems of risk measurement and risk modeling are the subject of the BMBF project "Efficient methods for valuation of risk measures", which started in January 2001 in cooperation with and with support of the Bankgesellschaft Berlin AG.

Methods for the valuation of transition densities of diffusions, or more generally, stochastic differential equations are useful in financial modeling. In joint work with the projects "Inference for complex statistical models" and "Numerical methods for stochastic models" we discovered a general root-N consistent Monte Carlo estimator for a diffusion density ([16]). Within the SWON Netherlands project new progress has been made with respect to the unified modeling of stocks and interest rates. This was presented at RISK Europe 2001 (Paris) ([20]).

1. Risk management for financial institutions (S. Jaschke, O. Reiß, J. Schoenmakers, V. Spokoiny, J.-H. Zacharias-Langhans).

Although the basic principles of the evaluation of market risks are now more or less settled, in practice many thorny statistical and numerical issues remain to be solved. Specifically, the industry standard, the approximation of portfolio risk by the so-called "delta-gamma normal" approach, can be criticized because of the quadratic loss approximation and the Gaussian assumptions. Further, in the context of the "Basel II" consultations, fundamental questions arise in the area of Credit Risk Modeling.
In cooperation with Bankgesellschaft Berlin AG we work on a project concerning the problem of efficient valuation of complex financial instruments, for example American options and convertible bonds. For standard American options our objective was to increase the speed and accuracy of various algorithms, for example by Richardson extrapolation. Further we focused on the problem of how to incorporate credit risk in the valuation of highly complex instruments like, e.g., ASCOTs (Asset Swapped Convertible Option Transactions). The close cooperation with traders in the bank proved to be very fruitful in testing and comparing several models that combine credit and market risk.

In preparation of the lecture "Risk Management for Financial Institutions", given by Stefan Jaschke in the winter terms 2000/01 and 2001/02, an extensive review of the general literature on the subject was done. The practical implementation of an enterprise-wide risk management system needs an understanding of the economic, statistical, numerical, social, and information technology aspects of the problem. The insights gained from the study of the general literature allow one to assess not only the inner-mathematical relevancy, but also the practical relevancy of new ideas and open problems.

One of the problems that arose in the consulting with Bankgesellschaft Berlin led to a study of the Cornish-Fisher approximation in the context of delta-gamma-normal approximations ([10]). An overview of the approximation methods in the context of delta-gamma-normal models was given by [13]. The relation between coherent risk measures, valuation bounds, and certain classes of portfolio optimization problems was established in [12]. One of the key results is that coherent risk measures are essentially equivalent to generalized arbitrage bounds, also called "good deal bounds" by Cerny and Hodges. The results are economically general in the sense that they work for any cash stream spaces, be it in dynamic trading settings, one-step models, or deterministic cash streams. They are also mathematically general as they work in (possibly infinite-dimensional) linear spaces. The valuation theory seems to fill a gap between arbitrage valuation on the one hand and utility maximization (or equilibrium theory) on the other hand. Coherent valuation bounds strike a balance in that the bounds can be sharp enough to be useful in the practice of pricing and still be generic, i.e. somewhat independent of personal preferences, in the way many coherent risk measures are somewhat generic.

Coherent risk measures are so important because of the deficiencies of the currently used quantile-VaR, which is not coherent. These deficiencies of quantile-VaR as a risk measure are discussed and contrasted with the properties of coherent risk measures in [11], which was submitted to the Basel Committee in the consultation period of the "Basel II" proposal. Generalizations of coherent risk measures are currently being studied by S. Jaschke and P. Mathé.

In the context of the BMBF project "Efficient methods for valuation of risk measures" we concentrated on the problem of estimating the Value-at-Risk for large portfolios by full Monte Carlo valuation. In this respect we closely work together with the project "Numerical methods for stochastic models". In order to obtain fairly accurate results by this method in acceptable time, variance reduction techniques like importance sampling or stratified sampling have to be used ([7]).
To apply these techniques one typically needs some a priori estimate of the value to be determined. The industry-standard delta-gamma-normal approximation poses computational problems which demand careful analysis. We developed well-adapted algorithms for the generalized eigenvalue problem and for the Fourier inversion arising in this context.

2. Interest rate modeling, calibration, and pricing of non-standard derivatives (G.N. Milstein, O. Reiß, J. Schoenmakers).

Previously we established a conceptual approach of deriving parsimonious correlation structures suitable for the implementation in the LIBOR/EurIBOR market model, given (in its standard form under the terminal measure) by

$dL_i(t) = -L_i(t) \sum_{j=i+1}^{n} \frac{\delta_j L_j(t)}{1+\delta_j L_j(t)}\, \gamma_i(t)\cdot\gamma_j(t)\, dt + L_i(t)\, \gamma_i(t)\cdot dW(t),$

where the LIBOR/EurIBOR processes $L_i$ are defined in $[t_0, T_i]$, with $\delta_i$ being day count fractions and $\gamma_i$ deterministic volatility functions. Further, $W$ is a $d$-dimensional Wiener process under the so-called terminal measure $P_{T_n}$. By imposing additional constraints on a known ratio correlation structure, motivated by economically sensible assumptions concerning forward LIBOR/EurIBOR correlations, we obtain a semi-parametric framework of non-degenerate correlation structures from which we derive systematically low-parametric structures with, in principle, any desired number of parameters [14, 22, 23]. See (1) for an example correlation structure with three parameters, where m is the number of LIBORs involved.

As a result, such a correlation structure combined with a suitable parametrization of the norm of the deterministic LIBOR/EurIBOR volatility provides a parsimonious multi-factor model with a realistic correlation structure. This allows for stable simultaneous calibration to caps and swaptions via approximative swaption formulas. In the global markets the payment dates of swaps and caps are settled differently. In this respect we improved existing approximation methods for swaptions by taking this issue into account. Further we proposed the incorporation of a stabilizing penalty factor in the RMS objective function which prevents the calibration routine from running into degenerate parameter regions. By this penalty function calibration remains stable even if the market data set under consideration contains some internal misalignments. Within the thus constructed framework we carried out various calibration tests, which has led to new insights concerning the relationship between the cap and swap markets. Our results will be presented at Risk Europe 2002.

Within an economic context we study the concept of assets and interest rates in a unified model which is completely specified by the assets alone. This allows endogenous derivations of dynamic relations between assets and interest rates from global structural assumptions (homogeneity and some spherical symmetry) on the market. For instance, we obtained a relation (2) between the drift c and the volatility b0 of the short rate, the drift and the volatility of the stock index, and the correlation between short rate and index. We analyzed such relations further and studied connections among the numeraire portfolio (which is in fact the inverse of the pricing kernel), observable indices, interest rate dynamics, and risk premia. This research was presented at Risk Europe 2001 ([20]).

Within the framework of a risk management system it is necessary not only to value financial instruments but also to compute their derivatives, the so-called Greeks. Due to symmetry relations in a financial market or homogeneity relations of a financial product we obtained relations between the Greeks of a derivative.
These results can be used to avoid usually unstable numerical differentiations ([21]). In [15] we developed a Monte Carlo approach for computing option sensitivities. There we find these quantities by Monte Carlo simulation of a corresponding system of stochastic differential equations using weak solution schemes. It turns out that with one and the same control function a variance reduction can be achieved simultaneously for the claim value as well as for the deltas. Recently, we started to investigate Monte Carlo methods for the determination of exercise boundaries of certain American options. The idea is to extend an exercise boundary known up to a certain maturity time by a Monte Carlo procedure. In this procedure we utilize a more sophisticated algorithm for the simulation of stochastic differential equations in the neighborhood of a boundary ([17]).

References:

1. P. ARTZNER, F. DELBAEN, J.M. EBER, D. HEATH, Coherent measures of risk, Math. Finance, 9 (1998), pp. 203-228.
2. A. BRACE, D. GATAREK, M. MUSIELA, The market model of interest rate dynamics, Math. Finance, 7 (1997), pp. 127-155.
3. P. EMBRECHTS, C. KLÜPPELBERG, T. MIKOSCH, Modelling Extremal Events, Springer, Berlin, 1997.
4. P. EMBRECHTS, A. MCNEIL, D. STRAUMANN, Correlation: Pitfalls and alternatives, RISK Magazine, 1999.
5. J. FRANKE, W. HÄRDLE, G. STAHL, Measuring Risk in Complex Stochastic Systems, to appear in: Lecture Notes in Statist., Springer, Berlin.
6. P. GLASSERMAN, X. ZHAO, Arbitrage-free discretization of lognormal forward Libor and swap rate models, Finance Stoch., 4 (2000), pp. 35-68.
7. P. GLASSERMAN, P. HEIDELBERGER, P. SHAHABUDDIN, Importance sampling and stratification for Value-at-Risk, Proceedings of the Sixth International Conference on Computational Finance, MIT Press, Cambridge, Mass., 2000.
8. W. HÄRDLE, H. HERWARTZ, V. SPOKOINY, Multiple volatility modelling, in preparation.
9. F. JAMSHIDIAN, LIBOR and swap market models and measures, Finance Stoch., 1 (1997), pp. 293-330.
10. S. JASCHKE, The Cornish-Fisher-expansion in the context of delta-gamma-normal approximations, http://www.jaschke-net.de/papers/CoFi.Pdf , Disc. Paper 54, Humboldt-Universität zu Berlin, Sonderforschungsbereich 373, Berlin, 2001.
11. , Quantile-VaR is the wrong measure to quantify market risk for regulatory purposes, http://www.jaschke-net.de/papers/VaR-is-wrong.Pdf , Disc. Paper 55, Humboldt-Universität zu Berlin, Sonderforschungsbereich 373, Berlin, 2001.
12. S. JASCHKE, U. KÜCHLER, Coherent risk measures and good-deal bounds, Finance Stoch., 5 (2001), pp. 181-200.
13. S. JASCHKE, Y. YIANG, Approximating value at risk in conditional Gaussian models, to appear in: Applied Quantitative Finance, chapter 1, W. Härdle, T. Kleinow, G. Stahl, Eds., http://www.xplore-stat.de/ebooks/ebooks.html .
14. O. KURBANMURADOV, K.K. SABELFELD, J. SCHOENMAKERS, Lognormal random field approximations to LIBOR market models, WIAS Preprint no. 481, 1999, to appear in: J. Comput. Finance.
15. G.N. MILSTEIN, J. SCHOENMAKERS, Numerical construction of a hedging strategy against the multi-asset European claim, WIAS Preprint no. 507, 1999, to appear in: Stochastics Stochastics Rep.
16. G.N. MILSTEIN, J. SCHOENMAKERS, V. SPOKOINY, Transition density estimation for stochastic differential equations via forward-reverse representations, WIAS Preprint no. 680, 2001.
17. G.N. MILSTEIN, M.V. TRETYAKOV, Simulation of a space-time bounded diffusion, Ann. Appl. Probab., 9 (1999), pp. 732-779.
18. K.R. MILTERSEN, K. SANDMANN, D. SONDERMANN, Closed-form solutions for term structure derivatives with lognormal interest rates, J. Finance, 52 (1997), pp. 409-430.
19. R.B. NELSEN, An Introduction to Copulas, Springer, New York, 1999.
20. O. REISS, J. SCHOENMAKERS, M. SCHWEIZER, Endogenous interest rate dynamics in asset markets, WIAS Preprint no. 652, 2001.
21. O. REISS, U. WYSTUP, Computing option price sensitivities using homogeneity and other tricks, J. Derivatives, 9 (2001), pp. 41-53.
22. J. SCHOENMAKERS, B. COFFEY, LIBOR rate models, related derivatives and model calibration, WIAS Preprint no. 480, 1999.
23. , Stable implied calibration of a multi-factor LIBOR model via a semi-parametric correlation structure, WIAS Preprint no. 611, 2000.
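(Editorial aside: to make the delta-gamma-normal and Cornish-Fisher discussion above concrete, here is a minimal self-contained sketch. All portfolio inputs (delta, gamma, sigma) are hypothetical placeholders, and this illustrates the general technique only, not the project's actual implementation.)

```python
# One-step delta-gamma loss L = -(delta'X + 0.5 X'Gamma X), X ~ N(0, Sigma),
# with a skewness-corrected Cornish-Fisher quantile vs. a plain MC quantile.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
delta = np.array([1.0, -0.5])                   # hypothetical sensitivities
gamma = np.array([[0.2, 0.0], [0.0, -0.1]])     # hypothetical second order
sigma = np.array([[0.04, 0.01], [0.01, 0.09]])  # hypothetical factor cov.

X = rng.multivariate_normal(np.zeros(2), sigma, size=200_000)
loss = -(X @ delta + 0.5 * np.einsum('ni,ij,nj->n', X, gamma, X))

m, s = loss.mean(), loss.std()
g = ((loss - m) ** 3).mean() / s ** 3           # sample skewness
z = norm.ppf(0.99)
z_cf = z + (z ** 2 - 1) * g / 6                 # first Cornish-Fisher term
print("MC 99% VaR:            ", np.quantile(loss, 0.99))
print("Cornish-Fisher 99% VaR:", m + s * z_cf)
```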
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7965079545974731, "perplexity": 2478.0130880078123}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
https://thatsmaths.com/2022/03/10/hyperreals-and-nonstandard-analysis/?shared=email&msg=fail
### Hyperreals and Nonstandard Analysis Following the invention of calculus, serious concerns persisted about the mathematical integrity of the method of infinitesimals. Leibniz made liberal use of infinitesimals, with great effect, but his reasoning was felt to lack rigour. The Irish bishop George Berkeley criticised the assumptions underlying calculus, and his objections were not properly addressed for several centuries. In the 1800s, Bolzano, Cauchy and Weierstrass developed the ${\varepsilon}$${\delta}$ definition of limits and continuity, which allowed derivatives and integrals to be defined without recourse to infinitesimal quantities. In the ${\varepsilon}$${\delta}$ formalism, limits are defined by something like this: the limit of a sequence ${\{x_n\}}$ is ${x}$ iff, for any positive ${\varepsilon}$, however small, we can find a number ${N}$ such that, for ${n>N}$ we have ${|x_n - x| < \varepsilon}$. We can then define the derivative of a function in the familiar way: $\displaystyle f^\prime(x) = \lim_{\Delta x\rightarrow 0} \left(\frac{f(x+\Delta x)-f(x)}{\Delta x} \right) \,. \ \ \ \ \ (1)$ The ${\varepsilon}$${\delta}$ formalism has been a source of strife for generations of students of mathematics. Is there any way to avoid it? The Hyperreal Numbers ${\mathbb{R}^*}$ In the 1960s, Abraham Robinson showed that the familiar system of real numbers ${\mathbb{R}}$ can be extended to a (much) larger set called the hyperreals. The system of hyperreal numbers, denoted ${\mathbb{R}^*}$, contains all the real numbers ${\mathbb{R}}$ and also infinitesimal numbers and infinite numbers. One of the axioms used to define the hyperreals is that there exists (at least) one infinitesimal number, ${\varepsilon}$. The multiplicative inverse of an infinitesimal is infinite, and we usually denote ${1/\varepsilon}$ by ${\omega}$, so that ${\varepsilon\,\omega = 1}$. A number ${\varepsilon \in \mathbb{R}^*}$ is infinitesimal if it is smaller than every positive real number and larger than every negative real. The only real number that is infinitesimal is ${0}$. An infinite number is any element of ${\mathbb{R}^*}$ that is either greater than every real number or less than every real number. Microscope to look at monads [from Keisler, 2000]. The hyperreals are discussed in great detail in two books by H. Jerome Keisler, an elementary text introducing calculus using hyperreals (Keisler, 2000) and a more advanced text (Keisler, 2007). Both are freely available online. We can visualise the hyperreals by imagining a cloud around each real number ${x}$, consisting of elements of ${\mathbb{R}^*}$ that differ from ${x}$ by an infinitesimal. This cloud is called the monad of x, ${\mathrm{monad}(x)}$, a term introduced by Leibniz. Keisler used the technique of a microscope, zooming in on the neighbourhood of ${x}$. The idea is shown in the Figure above. Telescope to look at galaxies [from Keisler, 2000]. For each element of ${\mathbb{R}^*}$, we can consider the set of numbers that differ from ${x}$ by a finite quantity. This set is called the galaxy of ${x}$, ${\mathrm{galaxy}(x)}$. This idea is visualised in the Figure here: Much detail is provided in the books of Keisler. We list some important properties of ${\mathbb{R}^*}$: • The set monad(0) of infinitesimal elements is a subring of ${\mathbb{R}^*}$: sums, differences, and products of infinitesimals are infinitesimal. • Any two monads are equal or disjoint. 
• The set galaxy(0) of finite elements is a subring of ${\mathbb{R}^*}$: sums, differences, and products of finite elements are finite.
• Any two galaxies are either equal or disjoint.
• ${x}$ is infinite if and only if ${x^{-1}}$ is infinitesimal.
• ${\mathbb{R}^*}$ has positive and negative infinitesimals.
• ${\mathbb{R}^*}$ has positive and negative infinite elements.
• There are infinitely many infinitesimals and infinitely many galaxies in ${\mathbb{R}^*}$.
• The product of an infinitesimal and an infinite element may be infinitesimal, finite or infinite.

Nonstandard Analysis

For any real number ${x}$, the set ${\mathrm{monad}(x)}$ contains precisely one real number, ${x}$ itself. For any hyperreal ${y \in \mathrm{monad}(x)}$, we call ${x}$ the standard part of ${y}$, and write ${x = \mathrm{st}(y)}$. The standard part function rounds off each finite hyperreal to the nearest real.

Robinson used the term nonstandard analysis for his development of calculus using hyperreal numbers. He was able to define derivatives and integrals in a direct way. The standard part may be used to define the derivative:

$\displaystyle f^\prime(x) = \mathrm{st} \left(\frac{f(x+\Delta x)-f(x)}{\Delta x} \right) \,, \ \ \ \ \ (2)$

where ${\Delta x}$ is an infinitesimal. Similarly, the integral is defined as the standard part of a suitable infinite sum.

Robinson proved that the system of hyperreals is logically consistent iff the system of real numbers is consistent. This settled the centuries-old arguments about the logical soundness of arguments using infinitesimals.

The classic introduction to nonstandard analysis is Robinson's book Non-standard analysis (Robinson, 1996).

[There is much more to say. I hope to return soon to this topic.]

Sources

${\bullet}$ Keisler, H. Jerome, 2000: Elementary Calculus: An Infinitesimal Approach. On-line edition, revised January 2022. https://people.math.wisc.edu/~keisler/calc.html

${\bullet}$ Keisler, H. Jerome, 2007: Foundations of Infinitesimal Calculus. On-line Edition, https://people.math.wisc.edu/~keisler/foundations.html

${\bullet}$ Robinson, Abraham, 1996: Non-standard analysis, Princeton University Press, ISBN 978-0-6910-4490-3.

*    *    *
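(Editorial aside: dual numbers give a mechanical analogue of definition (2): evaluate f at x + ε and discard ε² terms, which plays the role of taking the standard part. This is a related but distinct construction, a truncated arithmetic rather than a model of the hyperreals, and the sketch below is my own illustration.)

```python
# Represent x + dx as a "dual number" a + b*eps with eps**2 = 0, evaluate
# f, and read the derivative off the eps-coefficient.
class Dual:
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b            # a + b*eps
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)  # eps^2 = 0
    __rmul__ = __mul__

def f(x):
    return x * x * x + 2 * x             # f(x) = x^3 + 2x, f'(x) = 3x^2 + 2

y = f(Dual(2.0, 1.0))                    # evaluate at x = 2 + eps
print(y.a, y.b)                          # 12.0 (value), 14.0 (derivative)
```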
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 53, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9912222027778625, "perplexity": 1911.3716627396213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103947269.55/warc/CC-MAIN-20220701220150-20220702010150-00507.warc.gz"}
http://math.stackexchange.com/questions/282065/does-black-have-a-winning-strategy-in-gomokufreestyle
# Does Black have a winning strategy in Gomoku (freestyle)?

Gomoku is actually a finite two-person game of perfect information. Moreover, if we consider a draw as a victory for White, then by Zermelo's theorem, exactly one of the two has a winning strategy, either Black or White. In other words, either Black is destined to win if he does not make any error, or White can at least force a draw. So my question is: which one, Black or White?

I have asked a similar question for Go; however, the answer for the 19$\times$19 board is still unknown, despite Black having more or less some advantages. For Gomoku, however, the story is different. A programmer asserted that Black has a winning strategy in Gomoku (freestyle). Moreover, (s)he announced (s)he had found this winning strategy and written a program named Gomoku Terminator which "completely terminated the gomoku game". Furthermore, (s)he claimed that the first person to beat the program would earn a bounty of $¥920000$ (about $€92000$). But no one has claimed this bounty since 2006, so there seems to be sufficient reason to believe (s)he is right.

But I still have a doubt: Do PCs nowadays have enough capability to calculate the whole game tree? Note that generalized Gomoku is PSPACE-complete. So another question arises: Does Gomoku Terminator (v1.22) really implement a winning strategy for Black?

-

The Wikipedia article you linked to states that L. Victor Allis showed in $1994$ that Black wins on a $15\times15$ board. The "Gomoku Terminator" site you link to has an image of the upper portion of a board with $15$ columns. Thus it seems that this program merely does what was known to be possible in $1994$, and this has no bearing on the open question of the $19\times19$ board. Allis' thesis states that Gomoku used to be played on $19\times19$ boards because that's the size of Go boards, but that the $15\times15$ board has now become the standard.
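(Editorial aside: to illustrate the Zermelo-style determinacy invoked in the question, here is a brute-force backward induction on the 3×3, three-in-a-row cousin of Gomoku. This is my own toy example; the real 15×15 game is far beyond such an approach.)

```python
# With draws counted for White (player O), exactly one side has a winning
# strategy. For tic-tac-toe the game value is a draw, so "White" can
# secure at least a draw and the value below comes out as -1.
from functools import lru_cache

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):          # +1: X (Black) forces a win, else -1
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if '.' not in board:
        return -1                  # draw counts as a win for White
    nxt = 'O' if player == 'X' else 'X'
    vals = [value(board[:i] + player + board[i+1:], nxt)
            for i, s in enumerate(board) if s == '.']
    return max(vals) if player == 'X' else min(vals)

print(value('.' * 9, 'X'))         # -1: Black cannot force a win here
```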
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5327128171920776, "perplexity": 793.5032709247478}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398445291.19/warc/CC-MAIN-20151124205405-00008-ip-10-71-132-137.ec2.internal.warc.gz"}
https://www.physicsoverflow.org/38491/solving-ricci-flow-equation-for-a-%242d%24-kahler-manifold
# Solving Ricci flow equation for a $2D$ Kahler manifold

For the $2D$ Kahler manifold, the Ricci flow equation (which is also a one-loop RG equation for the $\sigma$-model on this space) can be written in the form $\frac{\partial^2 \Phi}{\partial u^2}=\frac{\partial \Phi}{\partial u} \frac{\partial \Phi}{\partial \tau}$, where $\Phi$ is related to the conformal factor of the metric, $\Omega(u, \tau)$. The solution gives the behaviour of $\Omega$ as we move along the RG time $\tau$, giving the scale-dependence of the $\sigma$-model QFT.

The equation looks simple, so I suspect that it admits an explicit solution, but I can't find it.

This post imported from StackExchange Physics at 2017-02-16 08:55 (UTC), posted by SE-user Andrey Feldman

To be clear, you are explicitly and only asking for solutions of the PDE $\frac{\partial^2 \Phi}{\partial u^2}=\frac{\partial \Phi}{\partial u} \frac{\partial \Phi}{\partial \tau}$? This post imported from StackExchange Physics at 2017-02-16 08:55 (UTC), posted by SE-user Emilio Pisanty

@EmilioPisanty Yep. This post imported from StackExchange Physics at 2017-02-16 08:55 (UTC), posted by SE-user Andrey Feldman

An analytical solution exists if the boundary conditions allow a solution of the form $\omega (u, \tau)=a(\tau)+b(\tau) \psi(u)$ for $\psi(u)=\mathrm{exp} \left[ \pm \lambda u \right]$, $\mathrm{cosh} \left[\lambda u+A \right]$, $\mathrm{sinh} \left[\lambda u+A \right]$, or $\mathrm{cos} \left[\lambda u+A \right]$, where $\frac{1}{\omega}=\frac{\partial \Phi}{\partial u}$.
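(Editorial aside: as a quick check on the question's PDE, the separable ansatz Φ(u, τ) = (C/λ) e^{λu} + λτ, a special case of my own choosing that is consistent with the exponential ψ(u) in the comment above, can be verified symbolically.)

```python
# Verify that Phi(u, tau) = (C/lam)*exp(lam*u) + lam*tau satisfies
# Phi_uu = Phi_u * Phi_tau.
import sympy as sp

u, tau, C, lam = sp.symbols('u tau C lam')
Phi = (C / lam) * sp.exp(lam * u) + lam * tau

residual = sp.diff(Phi, u, 2) - sp.diff(Phi, u) * sp.diff(Phi, tau)
print(sp.simplify(residual))   # -> 0, so the ansatz solves the PDE
```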
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7653744220733643, "perplexity": 1301.284554620483}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257514.68/warc/CC-MAIN-20190524044320-20190524070320-00324.warc.gz"}
https://dsp.stackexchange.com/questions/26549/fourier-transform-series-dft-dfs-textbook-problem-simple
# Fourier Transform/Series DFT/DFS textbook problem (simple?) Suppose $x_c(t)$ is a periodic continuous time signal with period 1 ms and for which the Fourier series is \begin{align*} x_c(t) &= \sum\limits_{k=-9}^9 a_k e^{j(2000 \pi k t)} \\ \end{align*} The Fourier series coefficients $a_k$ are zero for $|k| > 9$. $x_c(t)$ is discretely sampled such that: \begin{align*} x[n] &= x_c\left(\frac{n}{6000}\right) \\ &= \sum\limits_{k=-9}^9 a_k e^{j(\pi k n/3)} \\ \end{align*} $x[n]$ is periodic with $N=6$ Question: Find the DFS coefficients of $x[n]$ in terms of $a_k$. My Work: The DFS coefficients of a periodic signal are: \begin{align*} W_N &= e^{-j(2\pi/N)} \\ X[k] &= \sum\limits_{n=0}^{N-1} x[n] W_N^{kn} \\ \end{align*} Changing the variable $k$ in $x[n]$ to $m$ to avoid conflict and combining yields: \begin{align*} X[k] &= \sum\limits_{n=0}^{N-1} \sum\limits_{m=-9}^9 a_m e^{j(\pi m n/3)} W_N^{kn} \\ X[k] &= \sum\limits_{n=0}^{N-1} \sum\limits_{m=-9}^9 a_m e^{j(\pi n/3 (m - k))} \\ \end{align*} I'm stumped on how to simplify or process this further. I suspect this is the wrong approach. The problem gives Fourier series coefficients of the continuous function, there should be a direct way to convert them to the discrete Fourier series coefficients. Textbook Answer: The answer given by the textbook is as follows. I am trying to figure out how to get to this answer. \begin{align*} X[k] &= 2\pi \begin{cases} a_0 + a_6 + a_{-6} & k = 0 \\ a_1 + a_7 + a_{-5} & k = 1 \\ a_2 + a_8 + a_{-4} & k = 2 \\ a_3 + a_9 + a_{-3} + a_{-9} & k = 3 \\ a_4 + a_{-2} + a_{-8} & k = 4 \\ a_5 + a_{-1} + a_{-7} & k = 5 \\ \end{cases} \end{align*} • Can you double-check if it is not $N$ instead of $2\pi$ in the $X[k]$ equation ? – Gilles Oct 21 '15 at 12:15 • The textbook solution for $X[k]$ absolutely has $2\pi$ not $N$. $N=6$, btw, so the variable wouldn't be necessary. – clay Oct 21 '15 at 13:18 $$x[n]=\sum_{k=-9}^9a_ke^{j2\pi kn/N}\tag{1}$$ with $N=6$ is correct. If you compare $(1)$ to the IDFT (or DFS) $$x[n]=\frac{1}{N}\sum_{k=0}^{N-1}X[k]e^{j2\pi kn/N}\tag{2}$$ you'll notice that in $(2)$ each term $e^{j2\pi kn/N}$ occurs only once for each value of $k$ (for fixed $n$), whereas in $(1)$, due to the periodicity of the complex exponential, each term $e^{j2\pi kn/N}$ occurs several times. E.g., for $k=0$ in $(2)$, in $(1)$ you get contributions for $k=0$, for $k=6$, and for $k=-6$ (because $N=6$). Consequently, comparing $(1)$ and $(2)$ gives $$X[0]=N(a_0+a_6+a_{-6})$$ For all other values of $k$, the procedure is completely analogous: search for indices $k$ in $(1)$ which have the same complex exponential term $e^{j2\pi kn/N}$. You basically have to keep adding and subtracting the number $N=6$ to the indices, as long you remain in the range $|k|\le 9$, so e.g. for $X[2]$ you get contributions for $k=2$, $k=2+6=8$, and $k=2-6=-4$, because they all have the same exponential term $e^{j4\pi n/N}$. PS: The term $2\pi$ in your textbook result doesn't make sense to me, it should be $N$, as already pointed out in a comment.
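(Editorial aside: a numerical confirmation of the aliasing argument in the answer: sample one period, take the length-6 DFT, and compare against N times the mod-6 aliased sums of the $a_k$. The random coefficients are my own test data.)

```python
# X[k] should equal N * sum of a_m over all m with m = k (mod 6), |m| <= 9.
import numpy as np

rng = np.random.default_rng(1)
a = {k: rng.normal() + 1j * rng.normal() for k in range(-9, 10)}

N = 6
n = np.arange(N)
x = sum(a[k] * np.exp(1j * np.pi * k * n / 3) for k in a)
X = np.fft.fft(x)                        # kernel exp(-j*2*pi*k*n/N)

for k in range(N):
    alias = sum(a[m] for m in range(-9, 10) if (m - k) % N == 0)
    print(k, np.allclose(X[k], N * alias))   # True for every k
```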
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.000008225440979, "perplexity": 401.54364255301937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141743438.76/warc/CC-MAIN-20201204193220-20201204223220-00017.warc.gz"}