Dataset columns:
column      type             min length    max length
url         stringlengths    14            2.42k
text        stringlengths    100           1.02M
date        stringlengths    19            19
metadata    stringlengths    1.06k         1.1k
http://www.antionline.com/showthread.php?272669-Writing-to-the-Bootsector-of-a-floppy&mode=hybrid
Writing to the Bootsector of a floppy I wrote this to write to the bootsector of a floppy. It uses a DLL and Java (the DLL does the dirty work, Java gives a command-line prompt). You're going to have to unzip it and configure the r.bat (right-click it and change the path in it) to locate the JRE. Most of the time it's in C:\Program Files\Java\jre1.5.0_07\bin\java.exe, but open your \Program Files\Java\ folder to find out; you can copy/paste its path. I don't have a floppy drive, so I have no clue how well this works (if at all).
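If you don't want to hunt for the java.exe path by hand, here is a small sketch (my own addition, not part of the original zip) that lists the usual candidates; the install locations are assumptions and may differ on your machine:

```python
import glob

# Look for java.exe in the usual JRE install locations (assumed layout).
patterns = [
    r"C:\Program Files\Java\*\bin\java.exe",
    r"C:\Program Files (x86)\Java\*\bin\java.exe",
]
for pattern in patterns:
    for path in glob.glob(pattern):
        print(path)  # paste one of these into r.bat
```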
2015-01-28 05:24:49
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8024712800979614, "perplexity": 4309.2241583154355}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122123549.67/warc/CC-MAIN-20150124175523-00108-ip-10-180-212-252.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/2085208/what-does-the-denominator-in-the-second-derivative-mean
# What does the denominator in the second derivative mean? This occurred to me a few days ago. We know that the derivative of a function $y=f(x)$ is $\frac{dy}{dx}$. This is because it represents how $y$ changes with $x$, which is the rate of change of $y$, or more specifically, the gradient of a function. Then the second derivative is the rate of change of the rate of change, or the rate of change of the gradient. Since a general rate of change is $\frac{d}{dx}$, the second derivative is $(\frac{d}{dx})(\frac{dy}{dx})$. Thus, the expanded form is $\frac{d^2y}{dx^2}$. My question is: is the denominator $d(x)^2$ or is it $(dx)^2$? Surely, it would be the latter, because when you expand $(\frac{d}{dx})(\frac{dy}{dx})$, the $(dx)(dx)$ would become $(dx)^2$. But then why is it never written with brackets? I'm sure that would confuse some people, and I only realised it myself when I started thinking about the second derivative properly, in terms of what it actually means. • I find that thinking of it as notation rather than a rigorous representation is best, but this is a very interesting question, and I'm excited to see the responses. – The Count Jan 5 '17 at 19:52 • Treating $dx$ as one symbol is quite common: the same happens when talking about infinitesimal quantities such as line elements. – Chappers Jan 5 '17 at 20:09 • Sidenote: under some interpretations, the chain rule says that $d(x^2)=2x\,dx$ (which is not what the second derivative is about). – πr8 Jan 5 '17 at 20:12 • Thank you everyone for your responses and for confirming my belief. @Mark S. I enjoyed reading your linked answer; it all made great sense and I now understand the whole concept a lot better. Thank you also pseudoeuclidean for the insight into how it is different from a typical algebraic expression; it now makes a lot more sense. – AkThao Jan 5 '17 at 21:02 ## 3 Answers You are completely correct. When we write $\frac{d^2y}{dx^2}$, we really mean to write $\frac{d^2y}{(dx)^2}$, but those parentheses make the denominator difficult to read and write. Mathematicians accept the form $\frac{d^2y}{dx^2}$ without question because it is not exactly an algebraic expression (although sometimes it can be treated as one); rather, it is a notation that represents the concept of finding the rate of change of the rate of change. • When $dx$ is used to denote an infinitesimal, either in Leibniz's slightly informal work or in nonstandard analysis (with an implied standard-part function), it pretty much is an algebraic expression. – Mark S. Jan 5 '17 at 21:37 You're right that it should really be $(dx)^2$ (if you were asking "why should that be?", see my answer here). I suspect the brackets/parentheses aren't written because Leibniz himself didn't write them when he first presented the notation, and everyone just followed his lead. • By the way, can't $(\mathrm dx)^2$ also be interpreted as the $2$-tensor field $\mathrm dx\otimes\mathrm dx$? – Maximilian Janisch Apr 9 at 12:47 • @Maximilian I'd have to review, but my gut says "in an integral, sure. But not in the fraction under discussion here." – Mark S. Apr 9 at 16:55 You are correct that $dx^2$ actually means $(dx)^2$. Or, even better, $(\mathrm{d}x)^2$ (note the non-italicized $\mathrm{d}$, because it is an operator, not a variable). However, if you are treating differentials as algebraic units, you can't use the standard notation of $\frac{d^2y}{dx^2}$.
If you actually take the derivative of $\frac{dy}{dx}$, you will notice that $\frac{dy}{dx}$ is a quotient, and differentiating it therefore requires the quotient rule. After simplifying everything, the second derivative written with algebraically-manipulable differentials is actually $\frac{d^2y}{dx^2} - \frac{dy}{dx}\frac{d^2x}{dx^2}$. Here, $d^2y$ is shorthand for $\mathrm{d}(\mathrm{d}(y))$ and $dx^2$ is shorthand for $(\mathrm{d}x)^2$. Using this notation, the differentials of the second derivative can be algebraically manipulated freely, while using the standard notation they cannot. The paper "Extending the Algebraic Manipulability of Differentials" explores this idea further.
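For readers who want the intermediate step, here is a quick sketch of the quotient-rule computation the answer alludes to (my own filling-in, treating $\mathrm{d}$ as an operator):
$$\mathrm{d}\!\left(\frac{\mathrm{d}y}{\mathrm{d}x}\right) = \frac{\mathrm{d}(\mathrm{d}y)\,\mathrm{d}x - \mathrm{d}y\,\mathrm{d}(\mathrm{d}x)}{(\mathrm{d}x)^2} = \frac{\mathrm{d}^2y}{\mathrm{d}x} - \frac{\mathrm{d}y\,\mathrm{d}^2x}{(\mathrm{d}x)^2},$$
and dividing through by $\mathrm{d}x$ gives
$$\frac{\mathrm{d}\!\left(\frac{\mathrm{d}y}{\mathrm{d}x}\right)}{\mathrm{d}x} = \frac{\mathrm{d}^2y}{(\mathrm{d}x)^2} - \frac{\mathrm{d}y}{\mathrm{d}x}\,\frac{\mathrm{d}^2x}{(\mathrm{d}x)^2},$$
which is exactly the expression above once $dx^2$ is read as $(\mathrm{d}x)^2$.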
2020-12-05 10:06:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 12, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.926381528377533, "perplexity": 281.0796999996736}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141747323.98/warc/CC-MAIN-20201205074417-20201205104417-00463.warc.gz"}
http://math.stackexchange.com/questions/393316/relationship-between-covariant-contravariant-basis-vectors
# Relationship between covariant/contravariant basis vectors I'm starting to learn some of the basics of covariant and contravariant vectors. I'm a little confused about the difference between a covariant and a contravariant basis vector. I know that the vector components transform using the metric tensor: $V^i = M^{ij} V_j$. How do I transform between the covariant basis vectors and contravariant basis vectors? - What is the background of your question? Unless I am missing something, this doesn't seem to be standard math notation, but rather some physics convention. –  Simon Markett May 16 '13 at 9:34 Yes, this is Einstein summation notation. The OP should be aware that notation and conventions differ here between mathematics and physics; the physics convention may be somewhat more convenient for computations but IMO physicists don't do a good job of explaining tensors. –  Qiaochu Yuan May 16 '13 at 22:48 Also I seem to recall that the standard convention for what gets called covariant and what gets called contravariant is "wrong" here (it doesn't agree with how the terms are used in category theory). So that's annoying too. –  Qiaochu Yuan May 16 '13 at 22:52 Covariant and contravariant bases are dual to one another and are physics nomenclature for constructs that arise in differential geometry. The problem here is that physicists often need to use differential geometry (for example, for relativity) long before they have seen a proper course on differential geometry. I will try to provide you with an extremely quick view of what contra/covariant bases are, and how you move between them. First, let $M$ be a smooth manifold. For our purposes, the most important part of being a smooth manifold is the Local Euclidean property, which says that for each point $p \in M$ there are open neighbourhoods $U \subseteq M, \tilde U \subseteq \mathbb R^n$ and a homeomorphism $\phi: U \to \tilde U$ for some $n$; that is, if we look closely at the manifold, it just looks like $\mathbb R^n$. If we denote $\phi$ component-wise as $\phi = (x^1,\ldots,x^n)$ then we can describe points $q \in U \subseteq M$ in terms of $(x^1(q),\ldots,x^n(q))$ in $\mathbb R^n$. The next step is to consider the tangent space to $M$ at the point $p$, denoted by $T_pM$. There are quite a few ways of defining this guy, my favourite being as derivations on $M$ through $p$, but for intuition's sake, think of $T_pM$ as the collection of all vectors through $p$ which are tangent to $M$. This is a vector space, and so has a dual space $T_p^*M$ called the cotangent space. The coordinates $(x^1,\ldots,x^n)$ we defined at $p$ induce a basis on the tangent space, and that basis is typically written as $\left\{ \left.\frac{\partial}{\partial x^1}\right|_p, \ldots, \left.\frac{\partial}{\partial x^n}\right|_p\right\}$, but for our purpose let's call it $\{ v_1,\ldots,v_n\}$. Similarly, there is an induced basis on the cotangent space (which turns out to be the dual basis), often denoted $\{dx^1,\ldots,dx^n\}$ but again, for our purposes, we will write $\{v^1,\ldots,v^n\}$. These are your covariant and contravariant bases, respectively. But you are now likely confused, as covariant vectors have subscripts and contravariant vectors have superscripts. This is because the Einstein summation convention says that an arbitrary vector $v= \sum_{i=1}^n a_i v^i$ should be written by omitting the summation sign, so that $v = a_iv^i$.
But the $v^i$ are in a sense assumed to be given, so the physicists shorten it further and just write $v = a_i$, and now you get the subscripts that you wanted. The same argument works for contravariant vectors. The next question is how to move between them. A priori, there is no reason to suspect this should even be possible: They live in completely different spaces! However, we can define a Riemannian metric on $M$ which, intuitively speaking, is a non-degenerate inner product on each tangent space. This is what the physicists call a metric (tensor). Non-degenerate inner products on finite dimensional vector spaces give a natural identification between the vector space and its dual. More precisely, if $\langle\cdot,\cdot\rangle: V \times V \to \mathbb R$ is such an inner product, then we can define an isomorphism $V \to V^*$ by $v \mapsto \langle v, \cdot \rangle$ (you can actually more generally define what are called musical isomorphisms). In local coordinates, our inner product can be written (in Einstein convention) as $g^{ij}v_i \otimes v_j$ (or just $g^{ij}$ if we omit the vectors), and hence the isomorphism above corresponds to $$v \mapsto \langle v, \cdot \rangle , \qquad a_i \mapsto g^{ij} a_i.$$ Again, out of laziness, we avoid carrying around the metric tensor and just write $a^j = g^{ij}a_i$, which is the concept of contraction (better known to mathematicians as the interior product). You can of course move in the other direction as well. In local coordinates, the metric tensor $g^{ij}$ looks like an invertible matrix. Let $g_{ij}$ be its inverse, in which case $a_j = g_{ij} a^i$. It is definitely a lot to take in, but if you understand the rigorous mathematics behind these things, the physics actually becomes significantly easier. - +1, for the musical isomorphism link (at least) –  Nikos M. Oct 31 '14 at 17:37
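A concrete finite-dimensional sketch of the contraction $a^j = g^{ij}a_i$ described above, in NumPy; the metric values are made up for illustration, and I follow the answer's convention that $g^{ij}$ denotes the metric components and $g_{ij}$ its inverse:

```python
import numpy as np

# A made-up symmetric, non-degenerate metric on a 3-dimensional space.
g = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 0.5],
              [0.0, 0.5, 1.0]])
g_inv = np.linalg.inv(g)

a_lower = np.array([1.0, -2.0, 3.0])   # components a_i

# Raise the index: a^j = g^{ij} a_i (metric contracted with the components).
a_upper = g @ a_lower

# Lower it again with the inverse: a_j = g_{ij} a^i; we recover the original.
assert np.allclose(g_inv @ a_upper, a_lower)
print(a_upper)
```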
2015-05-26 06:42:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9519466161727905, "perplexity": 180.61190641977907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928780.77/warc/CC-MAIN-20150521113208-00293-ip-10-180-206-219.ec2.internal.warc.gz"}
https://edurev.in/course/quiz/attempt/566_Test-Physics-Mock-9/d041af2a-23dc-4a46-a26f-b626905b7d9a
# Test: Physics Mock- 9 ## 45 Questions MCQ Test NEET Mock Test Series | Test: Physics Mock- 9 QUESTION: 1 Solution: QUESTION: 2 Solution: QUESTION: 3 ### Atomic mass number of an element is 232 and its atomic number is 90. The end product of this radioactive element is an isotope of lead. The number of alpha and beta particles emitted is Solution: QUESTION: 4 A small solid cylinder rolls up along a curved surface (fig.) with an initial velocity v. It will ascend up to a height 'h' equal to Solution: Since the cylinder is rolling, it has translational and rotational kinetic energies. The total kinetic energy is $K = \frac{1}{2}Mv^2 + \frac{1}{2}I\omega^2$, where $M$ is the mass, $I$ is the moment of inertia and $\omega$ is the angular velocity of the cylinder. At the highest point the kinetic energy will be zero, since the entire energy will have been converted into gravitational potential energy $Mgh$, where $h$ is the height. So we have $\frac{1}{2}Mv^2 + \frac{1}{2}I\omega^2 = Mgh$. The moment of inertia of the cylinder about its own axis is $I = \frac{1}{2}MR^2$, where $R$ is the radius of the cylinder. The angular velocity is $\omega = v/R$. Substituting these in the above equation: $\frac{1}{2}Mv^2 + \frac{1}{2}\cdot\frac{1}{2}MR^2\cdot\frac{v^2}{R^2} = Mgh$, or $\frac{3}{4}Mv^2 = Mgh$, from which $h = \frac{3v^2}{4g}$. QUESTION: 5 A bullet of mass m is fired into a block of wood of mass M which hangs on the end of a pendulum and gets embedded in it. When the bullet strikes the wooden block, the pendulum starts to swing with a maximum rise R. The velocity of the bullet is given by Solution: QUESTION: 6 The electrical resistance of a mercury column in a cylindrical container is R. When the same mercury is poured into another cylindrical container of twice the radius of cross-section, the resistance of the mercury column now is Solution: QUESTION: 7 When copper and silicon are cooled from 300 K to 60 K, the specific resistance Solution: QUESTION: 8 Which of the following statements is correct Solution: QUESTION: 9 A transformer is used to light a 100 W, 110 V lamp from a 220 V mains. If the mains current is 0.5 A, the efficiency of the transformer is approximately Solution: QUESTION: 10 The phase difference between the current and voltage at resonance is Solution: QUESTION: 11 An electric dipole consists of two opposite charges of magnitude ±q separated by 2a. When the dipole is placed in a uniform electric field E, to have minimum potential energy the dipole moment p makes which of the following angles with E Solution: QUESTION: 12 Five equal capacitors connected in series have a resultant capacitance of 4 $\mu$F. The total energy stored in these, when they are connected in parallel and charged to 400 V, is Solution: QUESTION: 13 A body of mass 2 kg is held pressed against a vertical wall by a force of 100 N. The coefficient of friction between the wall and the body is 0.3. The frictional force is equal to Solution: QUESTION: 14 The acceleration due to gravity at a height 1/10th of the radius of the earth above the earth's surface is 8.2 m s⁻².
Its value at a point at the same distance below the surface of the earth is Solution: QUESTION: 15 A geostationary satellite Solution: QUESTION: 16 The maximum wavelength of radiation emitted at 900 K is 4 μm. What will be the maximum wavelength of radiation emitted at 1200 K Solution: QUESTION: 17 A galvanometer having a resistance of 50 Ω gives a full scale deflection for a current of 0.05 A. The length in metres of a resistance wire of area of cross-section $2.97 \times 10^{-3}\ \text{cm}^2$ that can be used to convert the galvanometer into an ammeter which can read a maximum of 5 A current is [specific resistance of the wire $= 5 \times 10^{-7}\ \Omega\text{-m}$] Solution: QUESTION: 18 The pressure and temperature of two different gases are P and T, each having volume V. They are mixed keeping the same volume and temperature; the pressure of the mixture will be Solution: QUESTION: 19 A galvanometer having a resistance of 8 Ω is shunted by a wire of resistance 2 Ω. If the total current is 1 A, then the current passing through the shunt will be Solution: QUESTION: 20 A bar magnet of magnetic moment M is kept in a uniform magnetic field of strength B, making angle θ with its direction. The torque acting on it is Solution: QUESTION: 21 The moment of inertia of a uniform thin rod of length L and mass M about an axis passing through a point at a distance of L/3 from one of its ends and perpendicular to the rod is Solution: QUESTION: 22 In old age, arteries carrying blood in the human body become narrow, resulting in an increase in the blood pressure. This follows from Solution: QUESTION: 23 The acceleration of a particle is increasing linearly with time t as bt. The particle starts from the origin with an initial velocity v₀. The distance travelled by the particle in time t will be Solution: QUESTION: 24 Which of the following affects the elasticity of a substance? Solution: QUESTION: 25 A body is projected at such an angle that the horizontal range is three times the greatest height. The angle of projection is Solution: QUESTION: 26 A stone of mass 1 kg tied to the end of a 1 m long string is whirled in a horizontal circle with a uniform angular velocity of 2 rad s⁻¹. The tension in the string is Solution: QUESTION: 27 A parachutist of weight 'w' strikes the ground with his legs fixed and comes to rest with an upward acceleration of magnitude 3g. The force exerted on him by the ground during landing is Solution: QUESTION: 28 Two white dots are 1 mm apart on a black paper. They are viewed by an eye of pupil diameter 3 mm. Approximately, what is the maximum distance at which these dots can be resolved by the eye? (Take wavelength of light = 500 nm) Solution: QUESTION: 29 Two simple harmonic motions with the same frequency act on a particle at right angles, i.e., along the x and y-axes. If the two amplitudes are equal and the phase difference is π/2, the resultant motion will be Solution: QUESTION: 30 Range of a projectile is R when the angle of projection is 30°. Then, the value of the other angle of projection for the same range is Solution: QUESTION: 31 A plano-convex lens is made of material of refractive index 1.6. The radius of curvature of the curved surface is 60 cm.
The focal length of the lens is Solution: QUESTION: 32 If I, α and τ are the moment of inertia, angular acceleration and torque respectively of a body rotating about any axis with angular velocity ω, then Solution: QUESTION: 33 The dimensional formula for angular velocity is Solution: QUESTION: 34 Two forces, each of magnitude F, acting on a particle yield a resultant force of magnitude F. The angle between the forces is Solution: QUESTION: 35 Mercury does not wet glass, wood or iron because Solution: QUESTION: 36 Two wires with resistances R and 2R are connected in parallel; the ratio of the heat generated in 2R and R is Solution: QUESTION: 37 If R₁ and R₂ are respectively the filament resistances of a 200 watt bulb and a 100 watt bulb designed to operate on the same voltage, then Solution: QUESTION: 38 540 g of ice at 0 °C is mixed with 540 g of water at 80 °C. The final temperature of the mixture in °C will be Solution: QUESTION: 39 If heat of 110 J is added to a gaseous system and the change in internal energy is 40 J, then the amount of external work done is Solution: QUESTION: 40 The velocity v (in cm/sec) of a particle is given in terms of time t (in sec) by the relation $v = at + \frac{b}{t+c}$; the dimensions of a, b and c are Solution: QUESTION: 41 Length can not be measured by Solution: QUESTION: 42 In Young's double slit experiment, the width of the fringes obtained with light of wavelength 6000 Angstrom is 2.0 mm. The fringe width, if the entire apparatus is immersed in a liquid of refractive index 1.33, will be Solution: Fringe width in Young's double slit experiment: β = Dλ/d. When the apparatus is immersed in a liquid, only the wavelength of light (λ) changes. The wavelength of light in the liquid is λ' = λ/n, where n = refractive index of the medium. Initial fringe width: β = Dλ/d .....(i) Fringe width in the liquid: β' = Dλ'/d .....(ii) Dividing Equation (ii) by Equation (i), we get β'/β = λ'/λ = 1/n, so β' = β/n = 2.0 mm/1.33 ≈ 1.5 mm. QUESTION: 43 If the tension and diameter of a sonometer wire of fundamental frequency n are doubled and the density is halved, then its fundamental frequency will become Solution: QUESTION: 44 The potential energy of a weightless spring compressed by a distance a is proportional to Solution: QUESTION: 45 A shell of mass 200 g is ejected from a gun of mass 4 kg by an explosion that generates 1.05 kJ of energy. The initial velocity of the shell is: Solution:
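As a quick numerical check of the two worked solutions above (the rolling cylinder in Question 4 and the fringe width in Question 42), here is a small sketch; the cylinder's speed is a sample value of my own, the rest are the numbers given in the problems:

```python
import math

# Q4: rolling solid cylinder, energy conservation => h = 3*v^2/(4*g).
v, g = 2.0, 9.8          # sample speed in m/s; g in m/s^2
K = 0.5 * v**2 + 0.5 * (0.5 * 1.0) * (v / 1.0)**2   # per unit mass, R = 1 m
h = K / g
assert math.isclose(h, 3 * v**2 / (4 * g))

# Q42: fringe width scales with wavelength, so immersing the apparatus
# in a liquid of refractive index n divides the fringe width by n.
beta, n = 2.0, 1.33      # fringe width in mm, refractive index
print(round(beta / n, 2))  # ~1.5 mm
```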
2021-05-13 18:27:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.667548656463623, "perplexity": 947.7396606452029}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991943.36/warc/CC-MAIN-20210513173321-20210513203321-00639.warc.gz"}
https://astronomy.stackexchange.com/tags/luminosity/new
# Tag Info 0 It is the luminosity in a straight strip like this (figure omitted; credit: West 29 / CC BY-SA), where the center of the strip passes a distance $x$ from the center of the circle at closest approach. So starting from the center and moving out, each successive strip would have a progressively larger value of $x$, but $x$ is a constant for a given strip (with $x = ...
2020-08-12 07:11:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8979718089103699, "perplexity": 617.8578023532103}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738878.11/warc/CC-MAIN-20200812053726-20200812083726-00437.warc.gz"}
https://math.stackexchange.com/questions/1748789/primes-of-the-form-2p21-p-prime-have-h21-as-a-prime-divisor/1748806
# Primes of the form $(2p)^{2}+1$, $p$ prime, have $h^{2}+1$ as a prime divisor? I'm an undergraduate student and I usually ask questions here about things I'm struggling with in my academic mathematical studies, but this particular question is actually more like a curiosity. More specifically, I'm playing around with that "famous" problem of creating a sequence that only generates prime numbers. I know that many people have tried this and it is a very hard problem, but this is part of mathematics, right? I mean, attacking hard problems as a (yet) naive student at the very least makes you realize how difficult they actually are, and hopefully motivates further study on the issue. Feel free to say that I'm wasting my time, though. So, to the question: I realized that the numbers $N$ of the form $N=(2p)^{2}+1$, where $p$ is prime, are often prime numbers (at least for small $p$). This is sort of justified by the fact that $N$ will never be divisible by a prime $q\equiv 3\pmod 4$, since that would imply that $-1$ is a quadratic residue mod $q$. Hence, all its prime divisors are $\equiv 1\pmod 4$, and in particular, all of its divisors are of this form, since $1\cdot 1\equiv 1\pmod 4$. Now, I thought of that famous result that says that any prime $\equiv 1\pmod 4$ can be written as a sum of two squares of integers, and hence such a number $N$ would be of the form (in prime factorization) $$N=(a^{2}+b^{2})(c^{2}+d^{2})\dots(x^{2}+y^{2}).$$ My question (for now, since I've just started these investigations) is: Is it true that, when $N$ is not prime, one of these prime factors will always be of the form $h^{2}+1$? This is happening in my particular examples. I guess this is a hard question, and I'd appreciate any efforts, or references. Thanks. • I think it's always good for students like us to try to solve old problems even if we do not know the full solution because it gives us the skills to tackle new mathematical problems by ourselves which can be useful in mathematical research or when a complex problem in industry comes up that needs to be manipulated before it can be solved easily. In any case, I'm glad you asked this question because you'll interest the other people, like me, here on Math StackExchange, too, because this is an interesting problem. Hard problems like these are why I like this site so much. – Noble Mushtak Apr 18 '16 at 23:15 • Have you read about Goldbach's theorem? – N.S.JOHN Apr 24 '16 at 5:24 Your hypothesis has many examples that seem to work: $$p \in \{2, 5, 7, 11, 13, 19, 29, 31, 37, 41, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 101, 103, 109, 131, 139, 149, 151, 157, 163, 179, 181, 191, 193, 197, 199\}$$ All have $(2p)^2+1$ being prime or having a prime divisor of the form $h^2+1$ (according to a Python program I just made). However, your hypothesis does not work for all primes $p$. Consider $p=17$. In this case, $(2p)^2+1=1157=13\cdot 89$, yet neither $13$ nor $89$ can be written as $h^2+1$ for $h \in \Bbb{N}$. I wonder, though, if there is a condition on the prime $p$ for which this works. P.S. For those interested, here is my Python code. Sorry it's not commented; I put this together very quickly using previous code I had written in order to answer this problem as quickly as I could.
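The code itself did not survive in this copy of the page; what follows is a minimal reconstruction of the check described above, written with sympy, which is an assumption on my part since the author's actual program was not preserved:

```python
from math import isqrt
from sympy import isprime, factorint

def is_h2_plus_1(q):
    # True if q == h^2 + 1 for some natural number h.
    h = isqrt(q - 1)
    return h * h + 1 == q

for p in range(2, 1300):
    if not isprime(p):
        continue
    N = (2 * p) ** 2 + 1
    if isprime(N) or any(is_h2_plus_1(q) for q in factorint(N)):
        continue
    # Counterexample: N is composite and no prime factor is h^2 + 1.
    factors = " . ".join(str(q) for q in sorted(factorint(N)))
    print(p, N, "prime fac", factors)
```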
Some examples where no prime factors are $h^2 + 1$ (columns: $p$, $4p^2+1$, prime factorization):

p        4p^2+1      prime factorization
17       1157        13 · 89
23       2117        29 · 73
43       7397        13 · 569
97       37637       61 · 617
107      45797       41 · 1117
113      51077       13 · 3929
127      64517       149 · 433
137      75077       193 · 389
167      111557      281 · 397
173      119717      13 · 9209
227      206117      53 · 3889
263      276677      337 · 821
277      306917      13 · 23609
283      320357      457 · 701
307      376997      277 · 1361
313      391877      29 · 13513
347      481637      13 · 37049
353      498437      41 · 12157
383      586757      29 · 20233
397      630437      229 · 2753
433      749957      13 · 57689
443      784997      181 · 4337
463      857477      61 · 14057
467      872357      41 · 21277
487      948677      29 · 32713
503      1012037     13 · 77849
523      1094117     193 · 5669
557      1240997     29 · 42793
577      1331717     317 · 4201
607      1473797     13 · 73 · 1553
613      1503077     509 · 2953
617      1522757     421 · 3617
643      1653797     181 · 9137
673      1811717     29 · 62473
727      2114117     53 · 113 · 353
757      2292197     53 · 61 · 709
787      2477477     97 · 25541
823      2709317     13 · 208409
853      2910437     73 · 39869
857      2937797     1489 · 1973
863      2979077     53 · 56209
877      3076517     41 · 75037
907      3290597     89 · 36973
953      3632837     13 · 113 · 2473
977      3818117     229 · 16673
997      3976037     13 · 305849
1093     4778597     233 · 20509
1097     4813637     1721 · 2797
1117     4990757     269 · 18553
1123     5044517     41 · 61 · 2017
1153     5317637     13 · 97 · 4217
1223     5982917     1153 · 5189
1237     6120677     109 · 233 · 241
1283     6584357     13 · 137 · 3697

• Is $127$ the only Mersenne prime such that, if equal to $p$, it doesn't have a prime divisor of the form $h^2 + 1$? – Feeds Feb 19 '18 at 15:40 • Edit: No. The Mersenne prime $2^{31} - 1 = 2147483647$ is also part of your list. Notice that $127 = 2^7 - 1$ and we also have that $7 = 2^3 - 1$ and $31 = 2^5 - 1$. This is to say, they are Mersenne primes as well. – Feeds Feb 19 '18 at 15:49
2019-09-16 14:11:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.4293889105319977, "perplexity": 1369.6239646551974}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572744.7/warc/CC-MAIN-20190916135948-20190916161948-00004.warc.gz"}
https://math.paperswithcode.com/paper/riemannian-geometry-on-hom-rho-commutative
## Riemannian geometry on Hom-$ρ$-commutative algebras 29 Oct 2018  ·  Zahra Bagheri, Esmaeil Peyghan · Recently, concepts such as Hom-algebras, Hom-Lie algebras, Hom-Lie admissible algebras and Hom-coalgebras have been studied, and some classical properties of algebras and some geometric objects have been extended to them. In this paper, by recalling the concept of Hom-$\rho$-commutative algebras, we intend to develop some of the most classical constructions of Riemannian geometry, such as the metric, connection, torsion tensor and curvature tensor, on them; we also discuss differential operators and obtain some results of differential calculus using them. The notions of symplectic structures and Poisson structures are included, and an example of a $\rho$-Poisson bracket is given. # Categories Differential Geometry, Rings and Algebras
2021-10-27 07:51:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3268015384674072, "perplexity": 2286.9739072895513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588102.27/warc/CC-MAIN-20211027053727-20211027083727-00322.warc.gz"}
http://math.stackexchange.com/questions/855800/how-do-i-define-a-string-in-formal-language-by-means-of-a-definition-of-tuple
# How do I define a string in formal language by means of a definition of tuple? I'm constructing mathematical notions and definitions from the bottom of the mathematical structure. So whenever I learn or encounter new concepts, I try to define them step by step, without using any informally premised concepts, even if they look obvious (except primitive notions, of course). A string in a formal language is, informally speaking, a juxtaposition of finitely many symbols. I think, to define this term rigorously, we have to define order first. And I already defined the ordered pair, or more generally the tuple, which contains a concept of order. Roughly speaking, we can define an ordered pair by only adopting the set-element concept: $(a,b)=\{a,\{a,b\}\}$. I think this definition is the most foundational way to define order. So I tried to define string by means of this concept, but I am having a problem, as I already defined tuple. How do I define string in a very foundational way? - can't you do the same thing? $(a,b,c)=\{a,\{a,b\},\{a,b,c\}\}$ –  dREaM Jul 3 '14 at 21:26 @Bananarama I don't get your point. Of course I can do the same thing and also generalize to $n$-tuples. BTW, your definition is wrong if we use the inductive approach. –  Novice Jul 3 '14 at 22:00 Wait, is it just ok to think of a string as a tuple? For example, can we think of the string $math$ as $(m,a,t,h)$? Should I define $string$ $notation$, which removes every comma and parenthesis? Oh wait, isn't that the so-called $Polish$ $notation$? –  Novice Jul 3 '14 at 22:01 Is the Polish notation of a tuple a string?! –  Novice Jul 3 '14 at 22:04 @novice, I don't know, I'm merely giving my opinion, take a chill pill. –  dREaM Jul 3 '14 at 22:12 A word $w$ in the context of formal languages and automata theory is a sequence of symbols from an alphabet $\Sigma$: $$w \in \Sigma^I = \{ \varphi : I \to \Sigma \}$$ for some index set $I$. The usual set-theoretic modeling of sequences and maps applies. Symbol and alphabet are just different names for element and non-empty set. A set of words is called a language. The length of a word is $|w| = \mbox{card}(I)$. A special word is $\varepsilon$, the word of length $0$, the empty word. It is the neutral element of the concatenation operation $$\cdot : \Sigma^* \times \Sigma^* \to \Sigma^* \\ \cdot(u,v) = uv$$ with $|uv| = |u| + |v|$ and $uv$ having the first $|u|$ sequence elements like $u$ and the remaining $|v|$ like $v$, where I used the set of finite words $\Sigma^*$ $$\Sigma^* = \cup_{n \in \mathbb{N}_0} \Sigma^n$$ where $\Sigma^n$ is the set of words of length $n$ over $\Sigma$, while one can define infinite words as well (see Büchi automata). See Free Monoid for a more algebraic perspective. -
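To make the answer's definitions concrete, here is a small sketch (my own illustration, not from the thread) that models words over an alphabet as tuples, with the empty word as the empty tuple and concatenation as tuple juxtaposition:

```python
# Words over an alphabet modeled as tuples of symbols.
SIGMA = {"m", "a", "t", "h"}          # the alphabet: just a non-empty set

epsilon = ()                          # the empty word, of length 0

def concat(u, v):
    # Concatenation is tuple juxtaposition; |uv| = |u| + |v|.
    return u + v

w = ("m", "a", "t", "h")              # the word "math" as the 4-tuple (m,a,t,h)
assert all(s in SIGMA for s in w)     # w is a word over SIGMA
assert concat(w, epsilon) == w        # epsilon is neutral for concatenation
assert len(concat(w, w)) == len(w) + len(w)
print("".join(w))                     # flatten back to the usual string notation
```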
2015-09-01 16:40:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9486129879951477, "perplexity": 617.8319639548602}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645195983.63/warc/CC-MAIN-20150827031315-00055-ip-10-171-96-226.ec2.internal.warc.gz"}
https://analytixon.com/2019/07/11/finding-out-why-13/
The pivotal role that event correlation technology plays in today's applications has led to the emergence of different families of event correlation approaches with a multitude of specialized correlation semantics, including computation models that support the composition and extension of different semantics. However, type-safe embeddings of extensible and composable event patterns into statically-typed general-purpose programming languages have not been systematically explored so far. Event correlation technology has often adopted well-known and intuitive notations from database queries, for which approaches to type-safe embedding do exist. However, we argue in the paper that these approaches, which are essentially descendants of the work on monadic comprehensions, are not well-suited for event correlations and, thus, cannot without further ado be reused/re-purposed for embedding event patterns. To close this gap, we propose PolyJoin, a novel approach to type-safe embedding for fully polyvariadic event patterns with polymorphic correlation semantics. Our approach is based on a tagless final encoding with uncurried higher-order abstract syntax (HOAS) representation of event patterns with n variables, for arbitrary $n \in \mathbb{N}$. Thus, our embedding is defined in terms of the host language without code generation and exploits the host language type system to model and type check the type system of the pattern language. Hence, by construction it is impossible to define ill-typed patterns. We show that it is possible to have a purely library-level embedding of event patterns, in the familiar join query notation, which is not restricted to monads. PolyJoin is practical, type-safe and extensible. An implementation of it in pure multicore OCaml is readily usable. In statistical data assimilation (SDA) and supervised machine learning (ML), we wish to transfer information from observations to a model of the processes underlying those observations. For SDA, the model consists of a set of differential equations that describe the dynamics of a physical system. For ML, the model is usually constructed using other strategies. In this paper, we develop a systematic formulation based on Monte Carlo sampling to achieve such information transfer. Following the derivation of an appropriate target distribution, we present the formulation based on the standard Metropolis-Hastings (MH) procedure and the Hamiltonian Monte Carlo (HMC) method for performing the high dimensional integrals that appear. To the extensive literature on MH and HMC, we add (1) an annealing method using a hyperparameter that governs the precision of the model to identify and explore the highest probability regions of phase space dominating those integrals, and (2) a strategy for initializing the state space search. The efficacy of the proposed formulation is demonstrated using a nonlinear dynamical model with chaotic solutions widely used in geophysics. Library: Local Gaussian Process Model for Large-Scale Dynamic Computer Experiments (DynamicGP) Fits a localized GP model for dynamic computer experiments via singular value decomposition of the response matrix Y for large N (the number of observations) using the algorithm proposed by Zhang et al. (2018). The current version only supports 64-bit architecture. We analyze several formalizations of conditional probability and find a new one that encompasses all.
Our main result is that a preference relation on random quantities called a plausible preorder induces a coherent conditional expectation; and vice versa, that every coherent function can be extended to a conditional expectation induced by a plausible preorder. The advantages of our approach include a convenient justification of probability laws by the properties of plausible preorders, independence from probability interpretations, and the ability to extend conditional probability to any nonzero condition. In particular, if $C$ is a nonzero condition and $\Pr$ is coherent, then it can be extended so that $\Pr(0\mid C)=0$, $\Pr(C\mid C)=1$ and $\Pr(1\mid C)=1$, no matter whether $\Pr(C)$ is zero or whether it is defined. The connectivity of a network conveys information about the dependencies between nodes. We show that this information can be analyzed by measuring the uncertainty (and certainty) contained in paths along nodes and links in a network. Specifically, we derive from first principles a measure known as effective information and describe its behavior in common network models. Networks with higher effective information contain more information within the dependencies between nodes. We show how subgraphs of nodes can be grouped into macro-nodes, reducing the size of a network while increasing its effective information, a phenomenon known as causal emergence. We find that causal emergence is common in simulated and real networks across biological, social, informational, and technological domains. Ultimately, these results show that the emergence of higher scales in networks can be directly assessed, and that these higher scales offer a way to create certainty out of uncertainty. Neuroimaging datasets keep growing in size to address increasingly complex medical questions. However, even the largest datasets today alone are too small for training complex machine learning models. A potential solution is to increase sample size by pooling scans from several datasets. In this work, we combine 12,207 MRI scans from 15 studies and show that simple pooling is often ill-advised due to introducing various types of biases in the training data. First, we systematically define these biases. Second, we detect bias by experimentally showing that scans can be correctly assigned to their respective dataset with 73.3% accuracy. Finally, we propose to tell causal from confounding factors by quantifying the extent of confounding and causality in a single dataset using causal inference. We achieve this by finding the simplest graphical model in terms of Kolmogorov complexity. As Kolmogorov complexity is not directly computable, we employ the minimum description length to approximate it. We empirically show that our approach is able to estimate plausible causal relationships from real neuroimaging data.
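For readers unfamiliar with the effective information measure mentioned in the networks abstract above, here is a minimal sketch of one common formulation (Klein and Hoel's) on a directed network; the paper's own derivation may differ in details, and the example assumes every node has at least one out-link:

```python
import numpy as np

def effective_information(A):
    # A: adjacency matrix of a directed graph; each row is normalized into
    # the out-transition distribution W_i^out of node i.
    W = A / A.sum(axis=1, keepdims=True)
    avg_out = W.mean(axis=0)                       # <W_i^out>

    def H(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    # EI = H(<W_i^out>) - <H(W_i^out)>: certainty in the dependencies.
    return H(avg_out) - np.mean([H(row) for row in W])

# A directed 4-cycle: deterministic, maximally distinct out-distributions,
# so EI attains its maximum of log2(4) = 2 bits.
ring = np.roll(np.eye(4), 1, axis=1)
print(effective_information(ring))  # 2.0
```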
2019-07-21 05:22:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5777254104614258, "perplexity": 775.8090300685199}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526888.75/warc/CC-MAIN-20190721040545-20190721062545-00336.warc.gz"}
https://www.zbmath.org/?q=an%3A1453.62230
## Clustering and disjoint principal component analysis. (English) Zbl 1453.62230 Summary: A constrained principal component analysis, which aims at a simultaneous clustering of objects and a partitioning of variables, is proposed. The new methodology allows us to identify components with maximum variance, each one a linear combination of a subset of variables. All the subsets form a partition of the variables. Simultaneously, a partition of the objects is also computed, maximizing the between-cluster variance. The methodology is formulated in a semi-parametric least-squares framework as a quadratic mixed continuous and integer problem. An alternating least-squares algorithm is proposed to solve the clustering and disjoint PCA. Two applications are given to show the features of the methodology. ### MSC: 62-08 Computational methods for problems pertaining to statistics 62H30 Classification and discrimination; cluster analysis (statistical aspects) 62H25 Factor analysis and principal components; correspondence analysis ### References: [1] Cattell, R.B., The scree test for the number of factors, Multivariate Behavioral Research, 1, 245-276 (1966) [2] DeSarbo, W.S.; Jedidi, K.; Cool, K.; Schendel, D., Simultaneous multidimensional unfolding and cluster analysis: an investigation of strategic groups, Marketing Letters, 2, 129-146 (1990) [3] De Soete, G.; Carroll, J.D., K-means clustering in a low-dimensional Euclidean space, 212-219 [4] De Soete, G.; Heiser, W.J., A latent class unfolding model for analyzing single stimulus preference ratings, Psychometrika, 58, 545-565 (1993) · Zbl 0826.62098 [5] Gabriel, K.R., The biplot graphic display of matrices with application to principal component analysis, Biometrika, 58, 453-467 (1971) · Zbl 0228.62034 [6] Heiser, W.J., Clustering in low-dimensional space, 162-173 [7] Heiser, W.J.; Groenen, P.J.F., Cluster differences scaling with a within-clusters loss component and a fuzzy successive approximation strategy to avoid local minima, Psychometrika, 62, 63-83 (1997) · Zbl 0889.92037 [8] Kaiser, H.F., The varimax criterion for analytic rotation in factor analysis, Psychometrika, 23, 187-200 (1958) · Zbl 0095.33603 [9] Milligan, G.W.; Cooper, M., An examination of procedures for determining the number of clusters in a data set, Psychometrika, 50, 159-179 (1985) [10] Vichi, M.; Kiers, H.A.L., Factorial $k$-means analysis for two-way data, Computational Statistics and Data Analysis, 37, 49-64 (2001) · Zbl 1051.62056 [11] Vichi, M., Double $k$-means clustering for simultaneous classification of objects and variables, 43-52 [12] Vichi, M., Discrete and continuous models for two-way data, 139-147 (2002) [13] Vichi, M.; Rocci, R.; Kiers, H.A.L., Simultaneous component and clustering models for three-way data: within and between approaches, Journal of Classification, 24, 1, 71-98 (2007) · Zbl 1144.62045 [14] Vigneau, E.; Qannari, E.M., Clustering of variables around latent components: application to sensory analysis, Communications in Statistics, Simulation and Computation, 32, 4, 1131-1150 (2004) · Zbl 1100.62582 [15] Zou, H.; Hastie, T.; Tibshirani, R., Sparse principal component analysis, Journal of Computational and Graphical Statistics, 15, 2, 262-286 (2006) This reference list is based on information provided by the publisher or from digital mathematics libraries.
2022-07-05 21:00:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7249046564102173, "perplexity": 6685.3172628389675}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104628307.87/warc/CC-MAIN-20220705205356-20220705235356-00483.warc.gz"}
https://code-research.eu/en/if-snow-is-falling-at-a-rate-of-14-inch-every-30-minutes-how-much-snow-will-fall-in-4-12-hours.23013.html
Cleopatra202 # If snow is falling at a rate of 1/4 inch every 30 minutes, how much snow will fall in 4 1/2 hours? $30\ \text{min} = 0.5\ \text{h}$, $\frac{1}{4}\ \text{inch} = 0.25\ \text{inch}$, $4\frac{1}{2}\ \text{h} = 4.5\ \text{h}$. Since $0.25$ inch falls every $0.5$ h, $0.5$ inch falls every hour. Adding up four full hours and the remaining half hour: $$4 \times 0.5\ \text{inch} + 0.25\ \text{inch} = 2.25\ \text{inch} \quad\text{in}\quad 4 \times 1\ \text{h} + 0.5\ \text{h} = 4.5\ \text{h}$$ So $2.25$ inches of snow will fall in $4.5$ hours.
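The same arithmetic as a two-line check (my own addition):

```python
rate_per_hour = 0.25 / 0.5      # inches per hour
print(rate_per_hour * 4.5)      # 2.25 inches in 4.5 hours
```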
2021-11-29 06:10:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6035333275794983, "perplexity": 975.2508798831234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358688.35/warc/CC-MAIN-20211129044311-20211129074311-00089.warc.gz"}
https://mathhelpboards.com/threads/how-to-numerically-or-otherwise-solve-these-in-mathematica-or-matlab.1163/
# how to numerically (or otherwise) solve these in mathematica (or matlab) #### caffeinemachine ##### Well-known member MHB Math Scholar The values of t_1, t_2 and t_3 are to be found corresponding to the values of the other variables input by the user. I tried using "FindRoot" in Mathematica 7 but it says "solutions don't converge, perturb the initial values". I tried many sets of initial values but none worked. Is there a way to solve this?

cos(t_3)*l_3*cos(t_1)*cos(t_2)-cos(t_3)*l_3*sin(t_1)*cos(a_1)*sin(t_2)+sin(t_1)*sin(a_1)*sin(t_3)*l_3+cos(t_1)*l_1+l_7+cos(t_7)*l_6=0
cos(t_3)*l_3*cos(t_1)*cos(a_1)*sin(t_2)+cos(t_3)*l_3*sin(t_1)*cos(t_2)+sin(t_1)*l_1-cos(t_1)*sin(a_1)*sin(t_3)*l_3-sin(t_7)*cos(a_7)*l_6+s_7*sin(a_7)=0
sin(a_1)*sin(t_2)*cos(t_3)*l_3+cos(a_1)*sin(t_3)*l_3+sin(t_7)*sin(a_7)*l_6+s_7*cos(a_7)+s_1=0

#### CaptainBlack ##### Well-known member Does FindRoot attempt to find a numerical or symbolic solution? .. OK, Google can answer this one as: numerical. Have you forced FindRoot to not use symbolic computations as suggested in the documentation? Are you sure that there is a solution for the values of parameters you are interested in? It may be worth adding the constraints $$-\pi \le t_1,t_2,t_3 < \pi$$. My personal approach to this kind of problem is brute force: form an objective function equal to the sum of the squares of the left-hand sides, then employ a global optimisation method to find the minimum of the objective in the feasible cube. If the minimum is sufficiently small to suggest the presence of a zero nearby, use the minimising point as a start point in some more sophisticated solution method. CB #### caffeinemachine ##### Well-known member MHB Math Scholar Thank you CaptainBlack for your reply. These equations result from the analysis of a spatial 7R linkage. I can't be sure that there is a solution for the values of the parameters I am entering.
If you know about spatial mechanisms then you would also have sympathy with me. As for the inequality you mentioned in your post, frustratingly, Mathematica doesn't entertain inequalities in its methods like FindRoot, NSolve, etc., which are numerical solvers. Inequalities are allowed in "exact solvers". "Have you forced FindRoot to not use symbolic computations as suggested in the documentation?" I don't know what is meant by this. Can you give me the link to the documentation you are talking about? Sadly, I am not trained in the theory of numerical techniques at all, not at this point at least. #### BillSimpson ##### New member We have no information on the domain of your input variables, but for some values the minimum with respect to t1, t2, t3 of the sum of squared errors of your three equations is much greater than zero. Here is an example that searches for a set of parameters and then finds t1, t2, t3 that has a minimum squared error near zero.

In[17]:= While[True,
  a1 = RandomReal[{-Pi, Pi}]; a7 = RandomReal[{-Pi, Pi}];
  l1 = RandomReal[{-Pi, Pi}]; l3 = RandomReal[{-Pi, Pi}];
  l6 = RandomReal[{-Pi, Pi}]; l7 = RandomReal[{-Pi, Pi}];
  s1 = RandomReal[{-Pi, Pi}]; s7 = RandomReal[{-Pi, Pi}]; t7 = RandomReal[{-Pi, Pi}];
  v = NMinimize[
    (Cos[t3]*l3*Cos[t1]*Cos[t2] - Cos[t3]*l3*Sin[t1]*Cos[a1]*Sin[t2] +
       Sin[t1]*Sin[a1]*Sin[t3]*l3 + Cos[t1]*l1 + l7 + Cos[t7]*l6)^2 +
    (Cos[t3]*l3*Cos[t1]*Cos[a1]*Sin[t2] + Cos[t3]*l3*Sin[t1]*Cos[t2] +
       Sin[t1]*l1 - Cos[t1]*Sin[a1]*Sin[t3]*l3 - Sin[t7]*Cos[a7]*l6 + s7*Sin[a7])^2 +
    (Sin[a1]*Sin[t2]*Cos[t3]*l3 + Cos[a1]*Sin[t3]*l3 + Sin[t7]*Sin[a7]*l6 +
       s7*Cos[a7] + s1)^2,
    {t1, t2, t3}];
  If[First[v] < .001, Break[]]
];
{a1, a7, l1, l3, l6, l7, s1, s7, t7, v}

Out[18]= {0.9676285734957135, -0.6055832455969292, 0.7416052518248044,
  -2.4754030471760835, -0.139880741669415, -1.387680009683257,
  -2.5026792960158537, 0.3545824940403457, -2.7526143572135373,
  {6.001980115750534*^-32, {t1 -> -0.7680206685789617,
    t2 -> -1.5678943805744407, t3 -> -0.1648144297428493}}}

You can modify that to fix the values of some of your parameters if they are known, or to generate random values within a given range, or possibly even to include some of those parameters in the NMinimize search.
#### caffeinemachine

##### Well-known member

MHB Math Scholar
We have no information on the domain of your input variables, but for some values the minimum with respect to t1, t2, t3 of the sum of squared errors of your three equations is much greater than zero. Here is an example that searches for a set of parameters and then finds t1, t2, t3 with a minimum squared error near zero.

In[17]:= While[True,
  a1 = RandomReal[{-Pi, Pi}]; a7 = RandomReal[{-Pi, Pi}]; l1 = RandomReal[{-Pi, Pi}];
  l3 = RandomReal[{-Pi, Pi}]; l6 = RandomReal[{-Pi, Pi}]; l7 = RandomReal[{-Pi, Pi}];
  s1 = RandomReal[{-Pi, Pi}]; s7 = RandomReal[{-Pi, Pi}]; t7 = RandomReal[{-Pi, Pi}];
  v = NMinimize[
    (Cos[t3]*l3*Cos[t1]*Cos[t2] - Cos[t3]*l3*Sin[t1]*Cos[a1]*Sin[t2] + Sin[t1]*Sin[a1]*Sin[t3]*l3 + Cos[t1]*l1 + l7 + Cos[t7]*l6)^2 +
    (Cos[t3]*l3*Cos[t1]*Cos[a1]*Sin[t2] + Cos[t3]*l3*Sin[t1]*Cos[t2] + Sin[t1]*l1 - Cos[t1]*Sin[a1]*Sin[t3]*l3 - Sin[t7]*Cos[a7]*l6 + s7*Sin[a7])^2 +
    (Sin[a1]*Sin[t2]*Cos[t3]*l3 + Cos[a1]*Sin[t3]*l3 + Sin[t7]*Sin[a7]*l6 + s7*Cos[a7] + s1)^2,
    {t1, t2, t3}];
  If[First[v] < .001, Break[]]
];
{a1, a7, l1, l3, l6, l7, s1, s7, t7, v}

Out[18]= {0.9676285734957135, -0.6055832455969292, 0.7416052518248044, -2.4754030471760835, -0.139880741669415, -1.387680009683257, -2.5026792960158537, 0.3545824940403457, -2.7526143572135373, {6.001980115750534*^-32, {t1 -> -0.7680206685789617, t2 -> -1.5678943805744407, t3 -> -0.1648144297428493}}}

You can modify that to fix the values of some of your parameters if they are known, to generate random values within a given range, or possibly even to include some of those parameters in the NMinimize search.
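Tying the suggestions in this thread together: once the loop above exits with First[v] near zero, the minimising point can be handed to FindRoot as a start point, as CaptainBlack proposed. In this sketch eq1, eq2 and eq3 are placeholders for the three left-hand-side expressions with the found parameter values substituted:

start = Last[v];  (* the rules {t1 -> ..., t2 -> ..., t3 -> ...} returned by NMinimize *)
FindRoot[{eq1 == 0, eq2 == 0, eq3 == 0},
  {t1, t1 /. start}, {t2, t2 /. start}, {t3, t3 /. start}]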
2021-10-16 23:54:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6009842157363892, "perplexity": 783.1260683731823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585045.2/warc/CC-MAIN-20211016231019-20211017021019-00324.warc.gz"}
http://clay6.com/qa/365/prove-that-is-an-increasing-function-of-in-left-0-large-right-
# Prove that $y = \frac{4\sin\theta}{2+\cos\theta} - \theta$ is an increasing function of $\theta$ in $\left[0, \frac{\pi}{2}\right]$

This question has appeared in model paper 2012

Toolbox:
• A function $f(x)$ is said to be strictly increasing on $(a,b)$ if $x_1 < x_2 \Rightarrow f(x_1) < f(x_2)$ for all $x_1, x_2 \in (a,b)$
• If $x_1 < x_2 \Rightarrow f(x_1) > f(x_2)$ for all $x_1, x_2 \in (a,b)$, then $f(x)$ is said to be strictly decreasing on $(a,b)$
• A function $f(x)$ is said to be increasing (decreasing) on $[a,b]$ if it is increasing (decreasing) on $(a,b)$ and also at $x=a$ and $x=b$
• A sufficient condition for a differentiable function defined on $(a,b)$ to be strictly increasing on $(a,b)$ is that $f'(x) > 0$ for all $x \in (a,b)$
• A sufficient condition for a differentiable function defined on $(a,b)$ to be strictly decreasing on $(a,b)$ is that $f'(x) < 0$ for all $x \in (a,b)$

Step 1: Let $f(\theta) = \frac{4\sin\theta}{2+\cos\theta} - \theta$

Differentiating w.r.t. $\theta$ we get,
$f'(\theta) = \frac{(2+\cos\theta)\cdot 4\cos\theta - 4\sin\theta\,(0-\sin\theta)}{(2+\cos\theta)^2} - 1$
$\qquad = \frac{8\cos\theta + 4\cos^2\theta + 4\sin^2\theta}{(2+\cos\theta)^2} - 1$ (but $\sin^2\theta + \cos^2\theta = 1$)
$\qquad = \frac{8\cos\theta + 4}{(2+\cos\theta)^2} - 1$

Step 2: On simplifying we get,
$f'(\theta) = \frac{8\cos\theta + 4 - (2+\cos\theta)^2}{(2+\cos\theta)^2}$
$\qquad = \frac{4\cos\theta - \cos^2\theta}{(2+\cos\theta)^2}$
$\qquad = \frac{\cos\theta\,(4-\cos\theta)}{(2+\cos\theta)^2}$

For $\theta \in (0, \frac{\pi}{2})$ we have $\cos\theta > 0$, and $4 - \cos\theta > 0$ since $\cos\theta \le 1 < 4$; the denominator $(2+\cos\theta)^2$ is always positive. Hence $f'(\theta) > 0$ on $(0, \frac{\pi}{2})$, and so $y$ is an increasing function of $\theta$ on $\left[0, \frac{\pi}{2}\right]$.
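As a quick sanity check of the derivative computed above, the same calculation can be run in Mathematica (a sketch; the symbolic result may come back in an equivalent but differently arranged form):

f[t_] := 4 Sin[t]/(2 + Cos[t]) - t
Simplify[f'[t]]                        (* expect a form equivalent to Cos[t] (4 - Cos[t])/(2 + Cos[t])^2 *)
NMinimize[{f'[t], 0 <= t <= Pi/2}, t]  (* the minimum of f' on [0, Pi/2] should be 0, attained at t = Pi/2 *)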
2017-04-25 00:47:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9722570776939392, "perplexity": 211.84343735668676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120001.0/warc/CC-MAIN-20170423031200-00396-ip-10-145-167-34.ec2.internal.warc.gz"}
https://ltwork.net/co-che-trao-doi-khi-tao-phoi-la--14136624
Question: What is the mechanism of gas exchange in the lungs? ("Cơ chế trao đổi khí tại phổi là?") A treat bag has 2 green gumballs, 2 red gumballs, and 1 blue gumball. If you were to reach into the treat bag, write, as a fraction, the probability that you would randomly choose.... $G$ is the centroid of the triangle below. Given $CG = 13$, $CA = 24$, $FG = 5$, and $AD = 21$, find the segments below. Which of the following is a promise advertisers could make to promote healthy living using fitness equipment? "Use full body movements to improve your health." "Get into a bikini by this summer with our muscle toner machine." "Powerlift the heaviest weights in three complete workouts." "The jui... Write an equation and solve. Show the equation, your work in solving the equation, and the solution in your answer. Manuel went shopping and bought 3 pairs of pants. He also spent five dollars on a pair of socks. When he got home, he realized that he had spent a total of seventy-one dollars. What wa... Which atomic model has a positively charged nucleus and a negatively charged cloud of electrons giving the atom its size? Read the following excerpt from "Ellis Island" by Barbara Davis-Pyle. Then I smiled because all of the questions were over. The men asked Papa and Mama to read some Italian words from a book, and the official stamped our papers. Then he grinned and said in English, "Welcome to the United States of...
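As a worked sketch of the Manuel problem above (my own; the page itself gives no answers), letting $p$ be the price of one pair of pants:

$$3p + 5 = 71 \implies 3p = 66 \implies p = 22$$

so each pair of pants cost 22 dollars.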
2022-08-16 10:02:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39809322357177734, "perplexity": 4205.225480857738}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572286.44/warc/CC-MAIN-20220816090541-20220816120541-00402.warc.gz"}
https://artofproblemsolving.com/wiki/index.php?title=2002_AMC_12B_Problems/Problem_10&diff=27097&oldid=22980
# Difference between revisions of "2002 AMC 12B Problems/Problem 10"

## Problem

How many different integers can be expressed as the sum of three distinct members of the set $\{1,4,7,10,13,16,19\}$? $\mathrm{(A)}\ 13 \qquad\mathrm{(B)}\ 16 \qquad\mathrm{(C)}\ 24 \qquad\mathrm{(D)}\ 30 \qquad\mathrm{(E)}\ 35$

## Solution

Each number in the set is congruent to 1 modulo 3. Therefore, the sum of any three numbers is congruent to $3 \equiv 0 \pmod 3$, i.e., a multiple of 3. We can make all multiples of three between $1+4+7=12$ (the minimum sum) and $13+16+19=48$ (the maximum sum), inclusive: from any achievable sum below 48, replacing one chosen element with the next larger element not yet chosen increases the sum by exactly 3. There are $\frac{48}{3}-\frac{12}{3}+1=13 \Rightarrow \boxed{\mathrm{(A)}}$ integers we can form.
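A quick brute-force check in Python (my addition, not part of the wiki page) confirms the count:

```python
# Enumerate all 3-element subsets of the set and count the distinct sums.
from itertools import combinations

sums = {sum(c) for c in combinations([1, 4, 7, 10, 13, 16, 19], 3)}
print(len(sums))     # 13
print(sorted(sums))  # every multiple of 3 from 12 to 48
```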
2021-04-12 15:31:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31806430220603943, "perplexity": 405.08034481206766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038067870.12/warc/CC-MAIN-20210412144351-20210412174351-00128.warc.gz"}
https://en.m.wikipedia.org/wiki/Party-list_representation_in_the_House_of_Representatives_of_the_Philippines
# Party-list representation in the House of Representatives of the Philippines

Party-list representation in the House of Representatives of the Philippines refers to the system by which 20% of the members of the House of Representatives are elected. While the House is predominantly elected by a plurality voting system, known as a first-past-the-post system, party-list representatives are elected by a type of party-list proportional representation. The 1987 Constitution of the Philippines created the party-list system. Originally, the party-list was open to underrepresented community sectors or groups, including labor, peasant, urban poor, indigenous cultural, women, youth, and other such sectors as may be defined by law (except the religious sector). However, a 2013 Supreme Court decision clarified that the party-list is a system of proportional representation open to various kinds of groups and parties, and not an exercise exclusive to marginalized sectors. National parties or organizations and regional parties or organizations do not need to organize along sectoral lines and do not need to represent any marginalized and underrepresented sector.[1] The determination of which parties are allowed to participate, who their nominees should be, how the winners should be determined, and how seats are allocated to the winning parties has been controversial ever since the party-list election was first contested in 1998, and has resulted in several landmark COMELEC and Supreme Court cases. Party-list representatives are indirectly elected via a party-list election wherein the voter votes for the party and not for the party's nominees (closed list); the votes are then arranged in descending order, with the parties that won at least 2% of the national vote given one seat each, and additional seats determined by a formula dependent on the number of votes garnered by the party. No party wins more than three seats. If the number of sectoral representatives does not reach 20% of the total number of representatives in the House, parties that haven't won seats but garnered enough votes to place them among the top sectoral parties are given a seat each until the 57 seats are filled. A voter therefore has two parallel votes in House of Representatives elections: one for the district representative and one for the party-list representative(s). Neither vote affects the other. Party-list representation makes use of the tendency of proportional representation systems to favor single-issue parties, and applies that tendency to allow underrepresented sectors to represent themselves in the law-making process.

## Manner of election

### Constitution

The Constitution mandates that the sectoral representatives shall compose 20% of the House of Representatives. For three consecutive terms after the ratification of the constitution, one-half of the seats allocated to party-list representatives were filled "by selection or election."[2] For the 1987, 1992 and 1995 elections, the president appointed sectoral representatives, subject to confirmation by the Commission on Appointments, half of whose members are derived from the House of Representatives.

### Party-List System Act

| Election | Method | Legislative districts | 20% quota | Seats won | Underhang |
|---|---|---|---|---|---|
| 1998 | R.A. 7941 | 206 | 52 | 14 | 38 |
| 2001 | VFP | 205 | 51 | 14 | 37 |
| 2004 | VFP | 209 | 52 | 24 | 28 |
| 2007 | VFP | 218 | 54 | 22 | 32 |
| 2007 | BANAT | 218 | 54 | 53 | 1 |
| 2010 | BANAT | 229 | 57 | 56 | 1 |

On March 3, 1995, Republic Act No. 7941, or the Party-List System Act, was signed into law.
It mandated that "the state shall promote proportional representation in the election of representatives to the House of Representatives through a party-list system". The five political parties with the highest number of members at the start of the 10th Congress of the Philippines were banned from participating. Each voter can vote one party via closed list; votes are then tallied nationwide as one at-large district, with the number of sectoral representatives not to surpass 20% of the total number of representatives. The law provided that each party that has 2% of the national vote be entitled one seat each, and an additional seat for every 2% of the vote thereafter until a party has three seats. This means that a party can win the maximum three seats if it surpasses 6% of the national vote.[3] While the law was first used for the 1998 election, and several parties did meet the 2% quota during the succeeding elections, they did not fill up the required 20% allocation for party-list representatives of the constitution. Furthermore, the votes for parties that had more than 6% of the vote were considered wasted.[4] Ateneo de Manila University mathematics professor Felix Muga II said that "Any seat allocation formula that imposes a seat-capping mechanism on the party-list proportional representation voting system contradicts the social justice provision of the 1987 Constitution."[5] Any vacancy is filled by the person next in line on the list; in cases where a seated sectoral representative switches parties, that representative loses their seat and the person next in line on the list assumes the seat. ### Contestations #### Veterans Federation Party et al. vs. COMELEC Party-list results 2001: Note: Majority of the parties were disqualified after the election. 2004: 2007: 2010: Key: • Inner ring: Proportion of votes, excluding spoiled/invalid votes. • Gray: Parties that did not win seats. • Middle ring (2007 only): Proportion of seats won as per VFP vs. COMELEC. • Outer ring: Proportion of seats won (for 2007, this is the final allocation as per BANAT vs. COMELEC). • Black: Unfilled seats. In 2000, the Veterans Federation Party (VFP), the Akbayan! Citizens' Action Party and several other parties sued the COMELEC which led a case in the Supreme Court; the court ruling changed the way how the seats are allocated for the winning parties. In 1998, only 14 representatives were elected out of 13 winning parties, well short of the then 52 representatives needed to fill up 20% of the House. The so-called "Panganiban formula," named after Chief Justice Artemio Panganiban, calculates that the number of seats a party will win is dependent on the number of votes of the party with the highest number of votes.[6] The court maintained the four inviolable parameters: First, the twenty percent allocation – the combined number of all party-list congressmen shall not exceed twenty percent of the total membership of the House of Representatives, including those elected under the party list. Second, the two percent threshold – only those parties garnering a minimum of two percent of the total valid votes cast for the party-list system are “qualified” to have a seat in the House of Representatives; Third, the three-seat limit – each qualified party, regardless of the number of votes it actually obtained, is entitled to a maximum of three seats; that is, one “qualifying” and two additional seats. 
Fourth, proportional representation – the additional seats which a qualified party is entitled to shall be computed "in proportion to their total number of votes."

The court came up with the following procedure for determining how many seats a party wins. First, the party with the highest number of votes gets at least one seat. It can win additional seats for every 2% of the national vote until it reaches the three-seat limit. Therefore:

$$TP_s = 1 \text{ if } g \geq 0.02, \qquad TP_s = 2 \text{ if } g \geq 0.04, \qquad TP_s = 3 \text{ if } g \geq 0.06$$

where:

• $TP_s$ is the number of seats of the top party, and
• $g$ is the percentage of votes garnered by the sectoral organization.

The other parties surpassing the 2% threshold all automatically win one seat; additional seats are won according to the following formula:

$$S = \left(\frac{PV}{TP}\right) \times TP_s$$

where:

• $S$ is the number of seats,
• $PV$ is the votes for the party,
• $TP$ is the votes of the top party, and
• $TP_s$ is the number of seats of the top party.

The product, disregarding the decimals, is the number of additional seats for the party. Prior to adopting the "Panganiban formula," the court considered applying the Niemayer formula used in the allocation of seats in the German Bundestag. However, since R.A. 7941 limits the maximum number of seats for each party to three, imposes a 2% quota, and allows only 20% of the seats to be filled by the party list, the court instead devised the formula above to ensure that the 20% allocation for sectoral representatives would not be exceeded, the 2% threshold upheld, the three-seat limit enforced, and proportional representation respected.[7] The formula was first used in determining the result of the 2001 election, and was applied again in the 2004 elections. The use of this formula by the COMELEC was said by certain groups to "annihilate independent voices in the House," according to Akbayan representative Etta Rosales.[8] The court upheld this in subsequent cases, such as Partido ng Manggagawa vs. COMELEC and Citizens' Battle Against Corruption vs. COMELEC.[9] Panganiban in 2010 remarked in a lecture at the Ateneo Law School that "It's very complicated and there must be an easier formula to compute," adding that the party-list law has to be amended by Congress.[10]

#### BANAT vs. COMELEC

In 2007, another party-list group, the Barangay Association for National Advancement and Transparency (BANAT, now Barangay Natin!), sued the COMELEC for not proclaiming the full number of party-list representatives (they were not among those who were proclaimed winners). As with the other cases, the Supreme Court consolidated all the cases into one. The court ruled on April 21, 2009, that the 2% election threshold was unconstitutional, and stipulated that for every four legislative districts created, one seat for sectoral representatives should be created; this thereby increased the sectoral seats in the 14th Congress from 22 to 55. The Supreme Court, however, upheld the 3-seat cap.[11] To determine the number of seats for sectoral representatives, the formula is:

$$S = \left(\frac{D}{0.8}\right) \times 0.2$$

where:

• $S$ is the number of seats allocated for sectoral representation,
• $D$ is the total number of district representatives, and
• $D / 0.8$ is the total number of members of the House.
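As a quick illustration, here is a minimal Python sketch (mine, not from the article) of this seat-count computation, assuming the fractional seat is simply dropped, which is consistent with the 2010 figure of 57 seats used later in the article:

```python
def party_list_seats(districts: int) -> int:
    """Party-list seats under BANAT vs. COMELEC: 20% of the total House,
    where the district seats make up the remaining 80%."""
    total_house = districts / 0.8
    return int(total_house * 0.2)  # drop the fractional seat

print(party_list_seats(229))  # 57 -> matches the 2010 figure in the example below
```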
To get the first guaranteed seat, a sectoral party or organization should get at least 2% of the total votes cast in the party-list election. The formula for the quotient is:

$$g = \frac{P}{V}$$

where:

• $g$ is the percentage of votes garnered by the sectoral organization,
• $P$ is the total number of votes gained by the sectoral organization, and
• $V$ is the total number of votes cast in the party-list representation election.

Therefore:

$$R_1 = 1 \text{ if } g \geq 0.02$$

If the total number of guaranteed seats awarded is less than the total number of seats reserved for sectoral representatives ($S$), the unassigned seats will be awarded in the second round of seat allocation. To get the number of additional seats, this formula is followed:

$$R_2 = (S - T_1) \times g$$

where:

• $R_2$ is the total number of additional seats awarded to the sectoral organization,
• $S$ is the number of seats allocated for party-list representatives,
• $T_1$ is the total number of seats awarded ($R_1$) in the first round of seat allocation, and
• $g$ is the percentage of votes garnered by the sectoral organization.

Note: $R_2$ is taken as a whole integer; the fractional part is dropped.

If the total number of seats awarded after two rounds is still less than the total number of seats reserved for sectoral representatives ($S$), the remaining seats are assigned to the sectoral organizations next in rank (one seat per organization) whose $R_2$ result is 0, until all available seats are completely distributed:

$$T_3 = S - T_1 - T_2$$

where:

• $T_3$ is the total number of sectoral organizations next in rank (in Round 2) to be given one seat,
• $S$ is the number of seats allocated for party-list representatives,
• $T_1$ is the total number of seats awarded in the first round of seat allocation, and
• $T_2$ is the total number of seats awarded in the second round of seat allocation.

This is essentially a Hare quota, with the following exceptions:

• The 2% election threshold automatically awards parties one seat; this means that the total number of seats in dispute is the difference between the number of party-list seats and the number of parties that surpassed the threshold.
• The fractional remainder is disregarded. The seats that could have been distributed from the fractional remainders are given to parties whose quotas are less than 1 after the threshold.
• A party cannot win more than three seats.

With the large number of parties contesting, the share of the votes each party gets is small (in 2010, the party with the most votes, the Ako Bicol Political Party, won 5.20% of the vote), so the only way a party's votes can be wasted is if its quota after the threshold is 4 or more. This can be affected if several parties surpass the threshold (thus lessening the number of seats to be distributed), or if a party wins via a landslide. In 2010, AKB's quota after the threshold was 2.33, or, disregarding decimals, 2. This entitled it to 2 additional seats aside from the automatic 1 seat it won by surpassing the threshold.

Senator Joker Arroyo criticized the ruling of the Supreme Court, saying that the court "overreached itself and engaged in judicial legislation." Arroyo later compared parties with between "155,000 to 197,000 votes... a measly 1 percent to 1.24 percent of the votes" to a city, which needs a population of 250,000 or more to obtain its own legislative district.[12]
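Before the summary, here is a compact Python sketch of the rounds described above (my own reading of the procedure, not an official implementation; it follows the stated formulas and skips the sequential bookkeeping used in the Alagad example below):

```python
# Two guaranteed/proportional rounds plus a fill-up round, with the 3-seat cap.
def allocate(votes, seats):
    """votes: dict mapping party -> party-list votes; seats: seats to fill.
    Pass every contesting party so the vote total equals all valid votes."""
    total = sum(votes.values())
    share = {p: v / total for p, v in votes.items()}

    # Round 1: a guaranteed seat for each party with at least 2% of the vote.
    result = {p: 1 if share[p] >= 0.02 else 0 for p in votes}
    t1 = sum(result.values())

    # Round 2: additional seats, R2 = (S - T1) * g with the fractional part
    # dropped, subject to the three-seat cap.
    for p in votes:
        result[p] = min(3, result[p] + int((seats - t1) * share[p]))

    # Round 3: one seat each to the highest-ranked parties still at zero,
    # until every seat is distributed.
    remaining = seats - sum(result.values())
    for p in sorted(votes, key=votes.get, reverse=True):
        if remaining <= 0:
            break
        if result[p] == 0:
            result[p] = 1
            remaining -= 1
    return result
```

With the full 2010 returns, this should reproduce the article's example: AKB (5.20%) gets 3 seats, Akbayan (3.62%) gets 2, and sub-2% parties such as Alagad pick up single seats in the last round.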
#### Summary

| Method | First seat | Second seat | Third seat |
|---|---|---|---|
| R.A. 7941 | 2% of vote | 4% of vote | 6% of vote |
| VFP vs. COMELEC | 2% of the vote | Party with most votes: 4% of the vote. Other parties: the party's votes divided by the votes of the party with the most votes, multiplied by the number of seats of the top party; the product, disregarding decimals, is the number of additional seats. | Party with most votes: 6% of the vote. Other parties: same formula as for the second seat. |
| BANAT vs. COMELEC | 2% of the vote | Hare quota, without decimals, from the seats that are not yet allocated. If the quota has not been met, parties with less than 2% of the preferences get one seat each until the quota is met. | Same rule as the second seat, subject to the three-seat cap. |

#### Example

In 2010, there were 57 party-list seats being contested, with 29,311,294 valid votes cast and 12 parties having at least 2% of the vote. The Ako Bicol Political Party (AKB) topped the vote, receiving 1,524,006 votes or 5.20% of the vote.

• First round: $R_1 = 1$ since $0.0519\ldots \geq 0.02$.
• Second round: $R_2 = (57 - 12) \times 0.0519\ldots = 2.3397$; disregarding decimals, $R_2 = 2$.
• Both rounds: $S = 1 + 2 = 3$. Hence, AKB won three seats in the House of Representatives.

Akbayan Citizens' Action Party received 1,061,947 votes or 3.62% of the vote.

• First round: $R_1 = 1$ since $0.0362\ldots \geq 0.02$.
• Second round: $R_2 = (57 - 12) \times 0.0362\ldots = 1.6303$; disregarding decimals, $R_2 = 1$.
• Both rounds: $S = 1 + 1 = 2$. Hence, Akbayan won two seats in the House of Representatives.

Alagad received 227,281 votes, or 0.78% of the vote.

• First round: $R_1 = 0$ since $0.0078 < 0.02$.
• Second round: at this point, 35 seats have already been awarded, so $R_2 = (57 - 35) \times 0.0078\ldots = 0.1706$; disregarding decimals, $R_2 = 0$.
• Both rounds: $S = 0 + 0 = 0$. However, not all seats had been distributed; therefore, Alagad won one seat in the House of Representatives.

A much simpler understanding of the formula is as follows:

• The topnotcher, and on rare occasions the second-placed party, gets 3 seats.
• The other parties that got 2% or more of the valid votes get 2 seats each.
• The next 40 or so parties get 1 seat each.

## Issues concerning party-list group nominees

### Major parties' involvement

While the party-list system has been used by some sectors that have not been able to participate in government in order to have a voice in Congress, left-leaning party-list organizations have alleged that several parties were used as fronts by then-President Gloria Macapagal Arroyo's ruling administration to further its interests.
Parties such as 1-UTAK, purportedly representing transport groups, and PACYAW, which claims to advocate for athletes and sports personnel, have government officials for nominees.[13] The first nominee of Ang Galing Pinoy, for instance, a group claiming to represent security guards and tricycle drivers, was former Pampanga 2nd district representative Mikey Arroyo, the son of the former president; Arroyo won a seat through Ang Galing Pinoy in the 2010 election.[14]

### Connections with the CPP-NPA

Left-leaning parties in the Bagong Alyansang Makabayan (New Patriotic Alliance) bloc, including Bayan Muna (Nation First), Kabataan Party-list (Youth Party-list), GABRIELA Women's Party, and Anakpawis, have been criticized on the grounds that the personalities in these parties were merely pursuing "ideological objectives" within Congress to support the outlawed Communist Party of the Philippines' objective of overthrowing the ruling system through "bloody means."[15][16][17] In January 2021, President Rodrigo Duterte urged leaders of the Congress to abolish the party-list system, due to allegations that some parties, particularly the Makabayan bloc, were "sympathizers or connected" to the Communist Party of the Philippines and the New People's Army (NPA).[18]

### Ang Bagong Bayani-OFW Labor Party vs. COMELEC

In 2002, the Supreme Court ruled in Ang Bagong Bayani-OFW Labor Party vs. COMELEC that nominees "must be Filipino citizens belonging to marginalized and unrepresented sectors, organizations and parties, as the constitution intended to give genuine power to the people, not only by giving more law to those who have less in life, but more so by enabling them to become veritable lawmakers themselves."[19]

### BANAT vs. COMELEC

In the same BANAT vs. COMELEC case stated above, while the ponencia pointed out that neither the 1987 Constitution nor R.A. 7941 prohibits major political parties from participating in the party-list election, it was emphasized that they must do so by establishing or forming coalitions with sectoral organizations for electoral or political purposes. In fact, Associate Justice Antonio Carpio noted that "it is not necessary that the party-list organization's nominee 'wallow in poverty, destitution and infirmity' as there is no financial status required by the law."[20] This effectively allowed anyone to be nominated by a party participating in the party-list election. However, by a vote of 8–7, the Supreme Court still decided to continue disallowing major political parties from participating in the party-list elections, directly or indirectly.

## Qualification to the ballot

In Bagong Bayani-OFW Labor Party vs. COMELEC, the Supreme Court laid down the requirements by which groups can qualify for the ballot:[21]

1. The political party, sector, organization or coalition must represent marginalized and underrepresented groups.
2. The political party must show, however, that it represents the interests of the marginalized and underrepresented.
3. The religious sector may not be represented in the party-list system.
4. The party or organization must not be an adjunct of, or a project organized or an entity funded or assisted by, the government.
5. The party must not only comply with the requirements of the law; its nominees must likewise do so.
6. Not only must the candidate party or organization represent marginalized and underrepresented sectors; so also must its nominees.
7. The nominee must likewise be able to contribute to the formulation and enactment of appropriate legislation that will benefit the nation as a whole.

In Atong Paglaum vs. COMELEC, the Supreme Court ruled that the party-list system is not for sectoral parties only, but also for non-sectoral parties. The Supreme Court then laid down the basic parameters for which organizations can join:[21]

1. Three different groups may participate in the party-list system: (1) national parties or organizations, (2) regional parties or organizations, and (3) sectoral parties or organizations.
2. National parties or organizations and regional parties or organizations do not need to organize along sectoral lines and do not need to represent any "marginalized and underrepresented" sector.
3. Political parties can participate in party-list elections provided they register under the party-list system and do not field candidates in legislative district elections. A political party, whether major or not, that fields candidates in legislative district elections can participate in party-list elections only through its sectoral wing that can separately register under the party-list system. The sectoral wing is by itself an independent sectoral party, and is linked to a political party through a coalition.
4. Sectoral parties or organizations may either be "marginalized and underrepresented" or lacking in "well-defined political constituencies."
5. A majority of the members of sectoral parties or organizations that represent the "marginalized and underrepresented" must belong to the "marginalized and underrepresented" sector they represent. Similarly, a majority of the members of sectoral parties or organizations that lack "well-defined political constituencies" must belong to the sector they represent. The nominees of sectoral parties or organizations that represent the "marginalized and underrepresented," or that represent those who lack "well-defined political constituencies," either must belong to their respective sectors, or must have a track record of advocacy for their respective sectors. The nominees of national and regional parties or organizations must be bona-fide members of such parties or organizations.
6. National, regional, and sectoral parties or organizations shall not be disqualified if some of their nominees are disqualified, provided that they have at least one nominee who remains qualified.

## Results

| Year | Participating parties | Seats disputed | Seats won | Underhang | Losing parties' vote | Topnotcher | Topnotcher votes | % of valid votes | Topnotcher seats | Valid votes | % of total | Total voters | Turnout |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1998 | 123 | 51 | 14 | 37 | 63% | APEC | 503,487 | 5.50% | 2 | 9,155,309 | 31.26% | 29,285,775 | 69% |
| 2001 | 162 | 51 | 14 | 37 | 36% | Bayan Muna | 1,708,253 | 11.30% | 3 | 15,118,815 | 51.29% | 29,474,309 | 51% |
| 2004 | 66 | 52 | 28 | 24 | 35% | Bayan Muna | 1,203,305 | 9.46% | 3 | 13,241,974 | 39.52% | 33,510,092 | 40% |
| 2007 | 93 | 54 | 53 | 1 | 33% | Buhay | 1,169,338 | 7.30% | 3 | 15,950,900 | 48.63% | 32,800,054 | 49% |
| 2010 | 178 | 57 | 57 | 0 | 30% | Ako Bikol | 1,524,006 | 5.20% | 3 | 30,092,613 | 78.88% | 38,149,371 | 79% |
| 2013 | 112 | 58 | 56 | 2 | 25% | Buhay | 1,270,608 | 4.60% | 3 | 28,600,124 | 71.24% | 40,144,207 | 71% |
| 2016 | 115 | 59 | 59 | 0 | 22% | Ako Bikol | 1,664,975 | 5.14% | 3 | 32,377,841 | 71.98% | 44,980,362 | 72% |
| 2019 | 134 | 61 | 61 | 0 | 23% | ACT-CIS | 2,651,987 | 9.51% | 3 | 27,884,790 | 58.96% | 47,296,442 | 74% |
| 2022 | 177 | 63 | 55 | 0 | 29.57% | ACT-CIS | 2,111,091 | 5.74% | 3 | 36,802,064 | 65.61% | 56,095,234 | 83.07% |

## References

1. ^ "SC shakes up party list in new verdict". April 5, 2013.
2. ^ "CONSTITUTION – Article VI: LEGISLATIVE DEPARTMENT". Commission on Elections. August 10, 2009. Archived from the original on June 23, 2010.
Retrieved November 29, 2010. 3. ^ "REPUBLIC ACT No. 7941 AN ACT PROVIDING FOR THE ELECTION OF PARTY-LIST REPRESENTATIVES THROUGH THE PARTY-LIST SYSTEM, AND APPROPRIATING FUNDS THEREFOR". Commission on Elections. August 10, 2009. Archived from the original on June 17, 2010. Retrieved November 29, 2010. 4. ^ Rosario Braid, Florangel (November 9, 2010). "Should We Retain Party-List System?". Manila Bulletin. Retrieved November 29, 2010. 5. ^ Ordoñez, Elmer (May 2, 2009). "'Reductio ad absurdum'". The Manila Times. Archived from the original on August 11, 2011. Retrieved April 14, 2011. 6. ^ Dizon, Nikko (July 2, 2007). "Dilemma over partylist formula delays winners' proclamation". Philippine Daily Inquirer. Archived from the original on July 13, 2012. Retrieved November 29, 2010. 7. ^ "Veterans Federation Party et al. vs. COMELEC". Supreme Court of the Philippines. Archived from the original on April 1, 2012. Retrieved November 29, 2010. 8. ^ Dizon, Nikko (June 5, 2007). "Only Buhay may get three seats". Philippine Daily Inquirer. Archived from the original on October 9, 2012. Retrieved December 30, 2010. 9. ^ "Mechanism of proportional representation". Philippine Daily Inquirer. April 23, 2009. Archived from the original on October 9, 2012. Retrieved April 14, 2011. 10. ^ Legaspi, Anita (November 19, 2010). "Ex-SC chief sees urgent need to amend partylist law". GMANews.tv. Retrieved November 29, 2010. 11. ^ Panganiban, Artemio (April 25, 2009). "Party-list imponderables". Philippine Daily Inquirer. Archived from the original on April 27, 2009. Retrieved April 14, 2011. 12. ^ Arroyo, Joker (April 28, 2009). "Supreme Court and judicial legislation". Manila Bulletin. Retrieved April 14, 2011. 13. ^ Calonzo, Andreo (March 27, 2010). "'Arroyo to use party-list seats to win as House Speaker'". GMANews.tv. Retrieved April 8, 2010. 14. ^ "SC: It's final, Mikey can represent tricycle drivers in Congress". GMANews.tv. February 27, 2011. Retrieved April 14, 2011. 15. ^ "Disqualification of leftist party-list groups eyed". GMANews.tv. January 16, 2010. Retrieved April 14, 2011. 16. ^ Uy, Jocelyn R. (October 27, 2012). "Akbayan hits back, seeks ouster of Red party-listers". Philippine Daily Inquirer. Archived from the original on October 28, 2012. Retrieved January 12, 2021. The groups identified the party-list organizations Gabriela, Anakpawis, Alliance of Concerned Teachers, Kabataan and Katribu Indigenous Peoples Sectoral Party as some of the CPP fronts that “as part of their political struggle, to infiltrate, manipulate and exploit the country’s free and democratic institutions. 17. ^ Reganit, Jose Cielito (March 21, 2019). "Anti-communist group calls on voters to reject Makabayan bloc". Philippine News Agency. Archived from the original on March 23, 2019. Retrieved January 12, 2021. 18. ^ Terrazola, Vanne Elaine (January 7, 2021). "Abolish party-list system, Duterte urges Congress". Manila Bulletin. Archived from the original on January 8, 2021. Retrieved January 12, 2021. 19. ^ Panganiban, Artemio (May 6, 2007). "Another slap on the Comelec". Philippine Daily Inquirer. Archived from the original on March 26, 2012. Retrieved April 14, 2011. 20. ^ "BANAT vs COMELEC". Supreme Court of the Philippines. April 21, 2009. Retrieved May 8, 2014. 21. ^ a b Panganiban, Artemio V. (November 4, 2018). "Party-list, an experiment gone berserk". INQUIRER.net. Retrieved April 5, 2022. ## See also Methods of determining winners in party-list proportional representation:
2022-08-14 19:04:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 35, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3701435923576355, "perplexity": 6840.13346408105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572063.65/warc/CC-MAIN-20220814173832-20220814203832-00532.warc.gz"}
http://pubzone.org/pages/publications/showAuthor.do?userId=67.941
# Publications

Journal article Paul G. Spirakis, Leslie Ann Goldberg, Antonín Kucera, Giuseppe Persiano. Statement from EATCS President and vice Presidents about the recent US travel restrictions to foreigners. Bulletin of the EATCS 2017, Volume 121 (0) 2017 Journal article Andreas Galanis, Andreas Göbel 0001, Leslie Ann Goldberg, John Lapinskas, David Richerby. Amplifiers for the Moran Process. J. ACM 2017, Volume 64 (0) 2017 Conference paper Andreas Galanis, Leslie Ann Goldberg, Mark Jerrum. A Complexity Trichotomy for Approximately Counting List H-Colourings. TOCT 2016, Volume 9 (0) 2017 Conference paper Radu Curticapean, Holger Dell, Fedor V. Fomin, Leslie Ann Goldberg, John Lapinskas. A Fixed-Parameter Perspective on #BIS. CoRR 2017, Volume 0 (0) 2017 Journal article Andrei A. Bulatov, Leslie Ann Goldberg, Mark Jerrum, David Richerby, Stanislav Zivny. Functional clones and expressibility of partition functions. Theor. Comput. Sci. 2017, Volume 687 (0) 2017 Journal article Jacob Focke, Leslie Ann Goldberg, Stanislav Zivny. The Complexity of Counting Surjective Homomorphisms and Compactions. CoRR 2017, Volume 0 (0) 2017 Conference paper Andreas Galanis, Leslie Ann Goldberg, Kuan Yang. Approximating Partition Functions of Bounded-Degree Boolean Counting Constraint Satisfaction Problems. 44th International Colloquium on Automata, Languages, and Programming, ICALP 2017, July 10-14, 2017, Warsaw, Poland 2017 (0) 2017 Conference paper Andreas Galanis, Leslie Ann Goldberg, Daniel Stefankovic. Inapproximability of the Independent Set Polynomial Below the Shearer Threshold. 44th International Colloquium on Automata, Languages, and Programming, ICALP 2017, July 10-14, 2017, Warsaw, Poland 2017 (0) 2017 Conference paper Martin E. Dyer, Andreas Galanis, Leslie Ann Goldberg, Mark Jerrum, Eric Vigoda. Random Walks on Small World Networks. CoRR 2017, Volume 0 (0) 2017 Conference paper Leslie Ann Goldberg, Heng Guo 0001. The Complexity of Approximating complex-valued Ising and Tutte partition functions. Computational Complexity 2017, Volume 26 (0) 2017 Journal article Ivona Bezáková, Andreas Galanis, Leslie Ann Goldberg, Daniel Stefankovic. Inapproximability of the independent set polynomial in the complex plane. CoRR 2017, Volume 0 (0) 2017 Conference paper Andreas Galanis, Leslie Ann Goldberg. The complexity of approximately counting in 2-spin systems on k-uniform bounded-degree hypergraphs. Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2016, Arlington, VA, USA, January 10-12, 2016 2016 (0) 2016 Journal article Leslie Ann Goldberg, Mark Jerrum. A complexity trichotomy for approximately counting list H-colourings. CoRR 2016, Volume 0 (0) 2016 Journal article Leslie Ann Goldberg, Mark Jerrum. The complexity of counting locally maximal satisfying assignments of Boolean CSPs. Theor. Comput. Sci. 2016, Volume 634 (0) 2016 Conference paper Andreas Göbel 0001, Leslie Ann Goldberg, David Richerby. Counting Homomorphisms to Square-Free Graphs, Modulo 2. TOCT 2016, Volume 8 (0) 2016 Conference paper Leslie Ann Goldberg, Rob Gysel, John Lapinskas. Approximately counting locally-optimal structures. J. Comput. Syst. Sci. 2016, Volume 82 (0) 2016 Conference paper Jin-Yi Cai, Andreas Galanis, Leslie Ann Goldberg, Heng Guo 0001, Mark Jerrum, Daniel Stefankovic, Eric Vigoda. #BIS-hardness for 2-spin systems on bipartite bounded degree graphs in the tree non-uniqueness region. J. Comput. Syst. Sci.
2016, Volume 82 (0) 2016 Journal article Josep Díaz, Leslie Ann Goldberg, David Richerby, Maria J. Serna. Absorption time of the Moran process. Random Struct. Algorithms 2016, Volume 49 (0) 2016 Conference paper Andreas Galanis, Leslie Ann Goldberg, Mark Jerrum. Approximately Counting H-Colorings is $\#\mathrm{BIS}$-Hard. SIAM J. Comput. 2016, Volume 45 (0) 2016 Conference paper Andreas Galanis, Andreas Göbel 0001, Leslie Ann Goldberg, John Lapinskas, David Richerby. Amplifiers for the Moran Process. 43rd International Colloquium on Automata, Languages, and Programming, ICALP 2016, July 11-15, 2016, Rome, Italy 2016 (0) 2016 Conference paper Andreas Galanis, Leslie Ann Goldberg, Mark Jerrum. A Complexity Trichotomy for Approximately Counting List H-Colourings. 43rd International Colloquium on Automata, Languages, and Programming, ICALP 2016, July 11-15, 2016, Rome, Italy 2016 (0) 2016 Conference paper Ivona Bezáková, Andreas Galanis, Leslie Ann Goldberg, Heng Guo 0001, Daniel Stefankovic. Approximation via Correlation Decay When Strong Spatial Mixing Fails. 43rd International Colloquium on Automata, Languages, and Programming, ICALP 2016, July 11-15, 2016, Rome, Italy 2016 (0) 2016 Conference paper Andrei A. Bulatov, Leslie Ann Goldberg, Mark Jerrum, David Richerby, Stanislav Zivny. Functional Clones and Expressibility of Partition Functions. CoRR 2016, Volume 0 (0) 2016 Conference paper Martin E. Dyer, Leslie Ann Goldberg, David Richerby. Counting 4×4 matrix partitions of graphs. Discrete Applied Mathematics 2016, Volume 213 (0) 2016 Conference paper Andreas Galanis, Leslie Ann Goldberg, Kuan Yang. Approximating partition functions of bounded-degree Boolean counting Constraint Satisfaction Problems. CoRR 2016, Volume 0 (0) 2016 Conference paper Andreas Galanis, Leslie Ann Goldberg. The complexity of approximately counting in 2-spin systems on k-uniform bounded-degree hypergraphs. Inf. Comput. 2016, Volume 251 (0) 2016 Conference paper Leslie Ann Goldberg, John Lapinskas, Johannes Lengler, Florian Meier, Konstantinos Panagiotou, Pascal Pfister. Asymptotically Optimal Amplifiers for the Moran Process. CoRR 2016, Volume 0 (0) 2016 Conference paper Andreas Galanis, Leslie Ann Goldberg, Daniel Stefankovic. Inapproximability of the independent set polynomial below the Shearer threshold. CoRR 2016, Volume 0 (0) 2016 Journal article Leslie Ann Goldberg, Mark Jerrum, Colin McQuillan. Approximating the partition function of planar two-state spin systems. J. Comput. Syst. Sci. 2015, Volume 81 (0) 2015 Journal article Xi Chen, Martin E. Dyer, Leslie Ann Goldberg, Mark Jerrum, Pinyan Lu, Colin McQuillan, David Richerby. The complexity of approximating conservative counting CSPs. J. Comput. Syst. Sci. 2015, Volume 81 (0) 2015 Conference paper Andreas Göbel 0001, Leslie Ann Goldberg, David Richerby. Counting Homomorphisms to Square-Free Graphs, Modulo 2. CoRR 2015, Volume 0 (0) 2015 Journal article Andreas Galanis, Leslie Ann Goldberg, Mark Jerrum. Approximately Counting H-Colourings is #BIS-Hard. CoRR 2015, Volume 0 (0) 2015 Conference paper Andreas Galanis, Leslie Ann Goldberg. The complexity of approximately counting in 2-spin systems on $k$-uniform bounded-degree hypergraphs. CoRR 2015, Volume 0 (0) 2015 Conference paper Leslie Ann Goldberg. Evolutionary Dynamics on Graphs: Invited Talk. Proceedings of the 2015 ACM Conference on Foundations of Genetic Algorithms XIII, Aberystwyth, United Kingdom, January 17 - 20, 2015 2015 (0) 2015 Conference paper Leslie Ann Goldberg, Rob Gysel, John Lapinskas. 
Approximately Counting Locally-Optimal Structures. Automata, Languages, and Programming - 42nd International Colloquium, ICALP 2015, Kyoto, Japan, July 6-10, 2015, Proceedings, Part I 2015 (0) 2015 Conference paper Andreas Galanis, Leslie Ann Goldberg, Mark Jerrum. Approximately Counting H-Colourings is #BIS-Hard. Automata, Languages, and Programming - 42nd International Colloquium, ICALP 2015, Kyoto, Japan, July 6-10, 2015, Proceedings, Part I 2015 (0) 2015 Conference paper Andreas Göbel 0001, Leslie Ann Goldberg, David Richerby. Counting Homomorphisms to Square-Free Graphs, Modulo 2. Automata, Languages, and Programming - 42nd International Colloquium, ICALP 2015, Kyoto, Japan, July 6-10, 2015, Proceedings, Part I 2015 (0) 2015 Conference paper Andreas Göbel 0001, Leslie Ann Goldberg, Colin McQuillan, David Richerby, Tomoyuki Yamakami. Counting List Matrix Partitions of Graphs. SIAM J. Comput. 2015, Volume 44 (0) 2015 Journal article Leslie Ann Goldberg, Mark Jerrum. The complexity of Boolean #MaximalCSP. CoRR 2015, Volume 0 (0) 2015 Conference paper Andreas Galanis, Andreas Göbel 0001, Leslie Ann Goldberg, John Lapinskas, David Richerby. Amplifiers for the Moran Process. CoRR 2015, Volume 0 (0) 2015 Conference paper Ivona Bezáková, Andreas Galanis, Leslie Ann Goldberg, Heng Guo 0001, Daniel Stefankovic. Approximation via Correlation Decay when Strong Spatial Mixing Fails. CoRR 2015, Volume 0 (0) 2015 Conference paper Andreas Göbel 0001, Leslie Ann Goldberg, David Richerby. Counting Homomorphisms to Cactus Graphs Modulo 2. 31st International Symposium on Theoretical Aspects of Computer Science (STACS 2014), STACS 2014, March 5-8, 2014, Lyon, France 2014 (0) 2014 Journal article Josep Díaz, Leslie Ann Goldberg, George B. Mertzios, David Richerby, Maria J. Serna, Paul G. Spirakis. Approximating Fixation Probabilities in the Generalized Moran Process. Algorithmica 2014, Volume 69 (0) 2014 Conference paper Leslie Ann Goldberg, Mark Jerrum. The Complexity of Approximately Counting Tree Homomorphisms. TOCT 2014, Volume 6 (0) 2014 Journal article Martin E. Dyer, Leslie Ann Goldberg, David Richerby. Counting $4\times 4$ Matrix Partitions of Graphs. CoRR 2014, Volume 0 (0) 2014 Conference paper Josep Díaz, Leslie Ann Goldberg, David Richerby, Maria J. Serna. Absorption Time of the Moran Process. Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2014, September 4-6, 2014, Barcelona, Spain 2014 (0) 2014 Conference paper Jin-Yi Cai, Andreas Galanis, Leslie Ann Goldberg, Heng Guo, Mark Jerrum, Daniel Stefankovic, Eric Vigoda. #BIS-Hardness for 2-Spin Systems on Bipartite Bounded Degree Graphs in the Tree Non-uniqueness Region. Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2014, September 4-6, 2014, Barcelona, Spain 2014 (0) 2014 Conference paper Andreas Göbel 0001, Leslie Ann Goldberg, David Richerby. The complexity of counting homomorphisms to cactus graphs modulo 2. TOCT 2014, Volume 6 (0) 2014 Conference paper Andreas Göbel 0001, Leslie Ann Goldberg, Colin McQuillan, David Richerby, Tomoyuki Yamakami. Counting List Matrix Partitions of Graphs. IEEE 29th Conference on Computational Complexity, CCC 2014, Vancouver, BC, Canada, June 11-13, 2014 2014 (0) 2014 Conference paper Leslie Ann Goldberg, Heng Guo 0001. The complexity of approximating complex-valued Ising and Tutte partition functions with applications to quantum simulation.
CoRR 2014, Volume 0 (0) 2014 Journal article Leslie Ann Goldberg, Rob Gysel, John Lapinskas. Approximately counting locally-optimal structures. CoRR 2014, Volume 0 (0) 2014 Conference paper Leslie Ann Goldberg, Mark Jerrum. The Complexity of Computing the Sign of the Tutte Polynomial. SIAM J. Comput. 2014, Volume 43 (0) 2014 Journal article Leslie Ann Goldberg, Mark Jerrum. Approximating the Tutte polynomial of a binary matroid and other related combinatorial polynomials. J. Comput. Syst. Sci. 2013, Volume 79 (0) 2013 Journal article Benjamin Doerr, Leslie Ann Goldberg. Adaptive Drift Analysis. Algorithmica 2013, Volume 65 (0) 2013 Journal article Leslie Ann Goldberg, Paul W. Goldberg, Piotr Krysta, Carmine Ventre. Ranking Games that have Competitiveness-based Strategies CoRR 2013, Volume 0 (0) 2013 Journal article Leslie Ann Goldberg, Paul W. Goldberg, Piotr Krysta, Carmine Ventre. Ranking games that have competitiveness-based strategies. Theor. Comput. Sci. 2013, Volume 476 (0) 2013 Conference paper Xi Chen, Martin E. Dyer, Leslie Ann Goldberg, Mark Jerrum, Pinyan Lu, Colin McQuillan, David Richerby. The complexity of approximating conservative counting CSPs. 30th International Symposium on Theoretical Aspects of Computer Science, STACS 2013, February 27 - March 2, 2013, Kiel, Germany 2013 (0) 2013 Journal article Leslie Ann Goldberg, Mark Jerrum. The Complexity of Approximately Counting Tree Homomorphisms CoRR 2013, Volume 0 (0) 2013 Journal article Peter Bürgisser, Leslie Ann Goldberg, Mark Jerrum, Pascal Koiran. Computational Counting (Dagstuhl Seminar 13031). Dagstuhl Reports 2013, Volume 3 (0) 2013 Conference paper Andreas Göbel 0001, Leslie Ann Goldberg, Colin McQuillan, David Richerby, Tomoyuki Yamakami. Counting list matrix partitions of graphs. CoRR 2013, Volume 0 (0) 2013 Conference paper Leslie Ann Goldberg, Mark Jerrum. A Polynomial-Time Algorithm for Estimating the Partition Function of the Ferromagnetic Ising Model on a Regular Matroid. SIAM J. Comput. 2013, Volume 42 (0) 2013 Journal article Andreas Göbel 0001, Leslie Ann Goldberg, David Richerby. The Complexity of Counting Homomorphisms to Cactus Graphs Modulo 2. CoRR 2013, Volume 0 (0) 2013 Conference paper Andrei A. Bulatov, Martin E. Dyer, Leslie Ann Goldberg, Mark Jerrum, Colin McQuillan. The expressibility of functions on the boolean domain, with applications to counting CSPs. J. ACM 2013, Volume 60 (0) 2013 Journal article Jin-Yi Cai, Leslie Ann Goldberg, Heng Guo, Mark Jerrum. Approximating the Partition Function of Two-Spin Systems on Bipartite Graphs. CoRR 2013, Volume 0 (0) 2013 Journal article Josep Díaz, Leslie Ann Goldberg, David Richerby, Maria J. Serna. Absorption Time of the Moran Process. CoRR 2013, Volume 0 (0) 2013 Journal article Leslie Ann Goldberg. The EATCS Award 2014 - Call for nominations. Bulletin of the EATCS 2013, Volume 111 (0) 2013 Conference paper Andrei A. Bulatov, Martin E. Dyer, Leslie Ann Goldberg, Markus Jalsenius, Mark Jerrum, David Richerby. The complexity of weighted and unweighted #CSP. J. Comput. Syst. Sci. 2012, Volume 78 (0) 2012 Conference paper Andrei A. Bulatov, Martin E. Dyer, Leslie Ann Goldberg, Mark Jerrum. Log-supermodular functions, functional clones and counting CSPs. 29th International Symposium on Theoretical Aspects of Computer Science, STACS 2012, February 29th - March 3rd, 2012, Paris, France 2012 (0) 2012 Conference paper Prasad Chebolu, Leslie Ann Goldberg, Russell Martin. The complexity of approximately counting stable roommate assignments. J. Comput. Syst. Sci. 
2012, Volume 78 (0) 2012 Conference paper Leslie Ann Goldberg, Mark Jerrum. The Complexity of Computing the Sign of the Tutte Polynomial (and Consequent #P-hardness of Approximation). Automata, Languages, and Programming - 39th International Colloquium, ICALP 2012, Warwick, UK, July 9-13, 2012, Proceedings, Part I 2012 (0) 2012 Conference paper Leslie Ann Goldberg, Mark Jerrum. The Complexity of Computing the Sign of the Tutte Polynomial (and consequent #P-hardness of Approximation) CoRR 2012, Volume 0 (0) 2012 Conference paper Josep Díaz, Leslie Ann Goldberg, George B. Mertzios, David Richerby, Maria J. Serna, Paul G. Spirakis. Can Fixation be Guaranteed in the Generalized Moran Process? CoRR 2012, Volume 0 (0) 2012 Conference paper Leslie Ann Goldberg, Mark Jerrum, Colin McQuillan. Approximating the partition function of planar two-state spin systems CoRR 2012, Volume 0 (0) 2012 Conference paper Xi Chen, Martin E. Dyer, Leslie Ann Goldberg, Mark Jerrum, Pinyan Lu, Colin McQuillan, David Richerby. The complexity of approximating conservative counting CSPs CoRR 2012, Volume 0 (0) 2012 Conference paper Leslie Ann Goldberg, Mark Jerrum. Inapproximability of the Tutte polynomial of a planar graph. Computational Complexity 2012, Volume 21 (0) 2012 Journal article Prasad Chebolu, Leslie Ann Goldberg, Russell Martin. The complexity of approximately counting stable matchings. Theor. Comput. Sci. 2012, Volume 437 (0) 2012 Conference paper Leslie Ann Goldberg, Mark Jerrum. Approximating the partition function of the ferromagnetic potts model. J. ACM 2012, Volume 59 (0) 2012 Journal article Martin E. Dyer, Leslie Ann Goldberg, Markus Jalsenius, David Richerby. The complexity of approximating bounded-degree Boolean #CSP. Inf. Comput. 2012, Volume 220 (0) 2012 Conference paper Josep Díaz, Leslie Ann Goldberg, George B. Mertzios, David Richerby, Maria J. Serna, Paul G. Spirakis. Approximating fixation probabilities in the generalized Moran process. Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2012, Kyoto, Japan, January 17-19, 2012 2012 (0) 2012 Conference paper Benjamin Doerr, Leslie Ann Goldberg, Lorenz Minder, Thomas Sauerwald, Christian Scheideler. Stabilizing consensus with the power of two choices. SPAA 2011: Proceedings of the 23rd Annual ACM Symposium on Parallelism in Algorithms and Architectures, San Jose, CA, USA, June 4-6, 2011 (Co-located with FCRC 2011) 2011 (0) 2011 Conference paper Leslie Ann Goldberg, Mark Jerrum. A Polynomial-Time Algorithm for Estimating the Partition Function of the Ferromagnetic Ising Model on a Regular Matroid. Automata, Languages and Programming - 38th International Colloquium, ICALP 2011, Zurich, Switzerland, July 4-8, 2011, Proceedings, Part I 2011 (0) 2011 Journal article Benjamin Doerr, Leslie Ann Goldberg. Adaptive Drift Analysis CoRR 2011, Volume 0 (0) 2011 Journal article Andrei A. Bulatov, Martin E. Dyer, Leslie Ann Goldberg, Mark Jerrum. Log-supermodular functions, functional clones and counting CSPs CoRR 2011, Volume 0 (0) 2011 Journal article Leslie Ann Goldberg, Mark Jerrum. A Counterexample to rapid mixing of the Ge-Stefankovic Process CoRR 2011, Volume 0 (0) 2011 Conference paper Josep Díaz, Leslie Ann Goldberg, George B. Mertzios, David Richerby, Maria J. Serna, Paul G. Spirakis. Approximating Fixation Probabilities in the Generalized Moran Process CoRR 2011, Volume 0 (0) 2011 Journal article Andrei Bulatov, Martin E. Dyer, Leslie Ann Goldberg, Markus Jalsenius, Mark Jerrum, David Richerby. 
The complexity of weighted and unweighted #CSP CoRR 2010, Volume 0 (0) 2010 Conference paper Leslie Ann Goldberg, Paul W. Goldberg, Piotr Krysta, Carmine Ventre. Ranking games that have competitiveness-based strategies. Proceedings 11th ACM Conference on Electronic Commerce (EC-2010), Cambridge, Massachusetts, USA, June 7-11, 2010 2010 (0) 2010 Journal article Leslie Ann Goldberg, Mark Jerrum. Approximating the Tutte polynomial of a binary matroid and other related combinatorial polynomials CoRR 2010, Volume 0 (0) 2010 Conference paper Leslie Ann Goldberg, Mark Jerrum. Approximating the Partition Function of the Ferromagnetic Potts Model. Automata, Languages and Programming, 37th International Colloquium, ICALP 2010, Bordeaux, France, July 6-10, 2010, Proceedings, Part I 2010 (0) 2010 Conference paper Benjamin Doerr, Leslie Ann Goldberg, Lorenz Minder, Thomas Sauerwald, Christian Scheideler. Brief Announcement: Stabilizing Consensus with the Power of Two Choices. Distributed Computing, 24th International Symposium, DISC 2010, Cambridge, MA, USA, September 13-15, 2010. Proceedings 2010 (0) 2010 Conference paper Prasad Chebolu, Leslie Ann Goldberg, Russell Martin. The Complexity of Approximately Counting Stable Matchings. Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, 13th International Workshop, APPROX 2010, and 14th International Workshop, RANDOM 2010, Barcelona, Spain, September 1-3, 2010. Proceedings 2010 (0) 2010 Conference paper Benjamin Doerr, Leslie Ann Goldberg. Drift Analysis with Tail Bounds. Parallel Problem Solving from Nature - PPSN XI, 11th International Conference, Kraków, Poland, September 11-15, 2010, Proceedings, Part I 2010 (0) 2010 Conference paper Benjamin Doerr, Leslie Ann Goldberg. Adaptive Drift Analysis. Parallel Problem Solving from Nature - PPSN XI, 11th International Conference, Kraków, Poland, September 11-15, 2010, Proceedings, Part I 2010 (0) 2010 Journal article Leslie Ann Goldberg, Mark Jerrum. A polynomial-time algorithm for estimating the partition function of the ferromagnetic Ising model on a regular matroid CoRR 2010, Volume 0 (0) 2010 Conference paper Leslie Ann Goldberg, Mark Jerrum, Marek Karpinski. The mixing time of Glauber dynamics for coloring regular trees. Random Struct. Algorithms 2010, Volume 36 (0) 2010 Journal article Prasad Chebolu, Leslie Ann Goldberg, Russell Martin. The Complexity of Approximately Counting Stable Roommate Assignments CoRR 2010, Volume 0 (0) 2010 Conference paper Martin E. Dyer, Leslie Ann Goldberg, Mark Jerrum. A Complexity Dichotomy For Hypergraph Partition Functions. Computational Complexity 2010, Volume 19 (0) 2010 Conference paper Leslie Ann Goldberg, Martin Grohe, Mark Jerrum, Marc Thurley. A Complexity Dichotomy for Partition Functions with Mixed Signs. SIAM J. Comput. 2009, Volume 39 (0) 2010 Conference paper Peter Bürgisser, Leslie Ann Goldberg, Mark Jerrum. 10481 Abstracts Collection - Computational Counting. Computational Counting, 28.11. - 03.12.2010 2010 (0) 2010 Conference paper Peter Bürgisser, Leslie Ann Goldberg, Mark Jerrum. 10481 Executive Summary - Computational Counting. Computational Counting, 28.11. - 03.12.2010 2010 (0) 2010
2017-12-11 19:11:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5887017846107483, "perplexity": 13720.519825093073}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948513866.9/warc/CC-MAIN-20171211183649-20171211203649-00534.warc.gz"}
https://www.fiberoptics4sale.com/blogs/electromagnetic-optics/108667654-solution-path-for-electromagnetic-problems
# Solution Path for Electromagnetic Problems

There are quite a few fundamental equations in electromagnetic theory, and it can be confusing to decide which to apply. The chart below shows how to approach a problem: evaluate the physical situation, identify the boundary values and constraints, and then follow a clear path to a solution. Engineers can refer to this chart to clarify the steps and procedures.
2021-07-31 18:42:49
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8071907162666321, "perplexity": 645.8490311891624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154099.21/warc/CC-MAIN-20210731172305-20210731202305-00273.warc.gz"}
https://math.stackexchange.com/questions/1327435/iterating-until-a-diagram-commutes
Iterating until a diagram commutes

I'm coming across the following 'commuting' diagram a lot in my work, and I think it should have a neat categorical formulation. But I can't work it out for myself, and don't know what to google for. Maybe somebody can give me a pointer? Assume I have four functions/morphisms:

• $f : A \rightarrow B$.
• $down : A \rightarrow A'$.
• $up : B' \rightarrow B$.
• $g : A' \rightarrow A' \oplus B'$.

Here $\oplus$ is the disjoint sum of sets. I want to say the following diagram 'commutes': but it doesn't 'typecheck' because the codomain of $g$ doesn't match the domain of $up$. However, it does 'commute eventually' in the following sense. For all $a \in A$ the following holds. There exist $a_0, ..., a_{n-1} \in A'$ and $b' \in B'$ such that:

• $down(a) = a_0$,
• $g(a_0) = inl(a_1)$,
• $g(a_1) = inl(a_2)$,
• ...
• $g(a_{n-2}) = inl(a_{n-1})$,
• $g(a_{n-1}) = inr(b')$, and
• $up(b') = f(a)$.

Here $inl(a)$ means $a$ is in the 'left' part of the sum $A' \oplus B'$ while $inr(b)$ means $b$ is in the 'right' part of $A' \oplus B'$. In other words, for the diagram to 'commute', $g$ should be applied until it yields something of type $B'$. In my application, this will always eventually happen. If we were sloppy and ignored the $inl/inr$ we could write: $\forall a\in A. \exists n. f(a) = up ( g^n(down(a)))$.

What's a good categorical rendering of this? Some abstract conception of iteration or recursion? In which parts of mathematics do similar constructions crop up?

• So $g(\text{down}(A))$ is contained in $B'$? Jun 16, 2015 at 12:32
• @StefanHamcke Not necessarily. The 'transitive closure' $g^*$ applied to $down(A)$ is in $B'$. For each individual $a \in A'$, the number of times I need to apply $g$ before I arrive in $B'$ is typically different. Jun 16, 2015 at 12:38
• I see. Is this true for all points in $A'$, or just for those in the image of $A$? I mean, for each $a\in A'$, can you apply $g$ some finite number of times until you arrive in $B'$? Jun 16, 2015 at 12:46
• @StefanHamcke In my applications this is true for all of $A'$; it's very important that this is true. Jun 16, 2015 at 12:47

Since you're interested in considering objects that have elements, I'll assume we are working in $\mathbf{Sets}$, the category of sets and functions. From the function $g \colon A' \to A' \oplus B'$ and from $inr \colon B' \to A' \oplus B'$ you can obtain the function $\bar g \colon A' \oplus B' \to A' \oplus B'$. By definition this function is such that

• for every $a \in A'$ we have $\bar g(inl(a))=g(a)$;
• for every $b \in B'$ we have $\bar g(inr(b))=inr(b)$.

Now using your notation we have that $inl(a_1)=\bar g(inl(a_0))$, and an easy induction shows that for every $k=1,\dots,n-1$ we have that $inl(a_k)=\bar g^k(inl(a_0))$. So your requirement that the sequence of the $a_k$'s eventually falls into $B'$ can be expressed as $$\forall a \in A'\ \exists b \in B'\ \exists n \in \mathbb N\ \bar g^n(inl(a))=inr(b)\ .$$ This should imply that $B'$ is the colimit of the diagram $$A' \oplus B' \stackrel{\bar g}{\longrightarrow}A' \oplus B' \stackrel{\bar g}{\longrightarrow}A' \oplus B' \stackrel{\bar g}{\longrightarrow}A' \oplus B' \stackrel{\bar g}{\longrightarrow}A' \oplus B' \stackrel{\bar g}{\longrightarrow}A' \oplus B' \dots$$ where the colimit maps $(in^g_i \colon A' \oplus B' \to B')_{i \in \mathbb N}$ should send every $inl(a)$ to the eventual limit of the sequence $\{\bar g^n(inl(a))\}_{n \in \mathbb N}$ and every $inr(b)$ to $b$ (itself).
Now you could express your condition via the commutativity of the following diagram $$\require{AMScd} \begin{CD} A @>{f}>> B \\ @V{down}VV @A{up}AA\\ A' @>{in^g_1 \circ inl}>> B' \end{CD}$$ where $inl$ is the embedding of $A'$ into $A' \oplus B'$ while $in^g_1$ is the first structure map of the above mentioned colimit diagram $(A' \oplus B' \stackrel{in^g_i}{\longrightarrow} B')_{i \in \mathbb N}$. Hope this helps.
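For intuition, here is a minimal operational sketch of the "commute eventually" condition in Python. The tagged-pair encoding of $\oplus$ and all concrete names are illustrative only, not from the question:

def eventually(down, g, up, a, max_steps=10_000):
    # realizes up(g^n(down(a))) for the first n with g^n(down(a)) landing in B'
    x = down(a)
    for _ in range(max_steps):
        tag, x = g(x)              # g returns ('inl', a1) or ('inr', b1)
        if tag == 'inr':
            return up(x)
    raise RuntimeError("g never reached B'")

# toy instance: count down to zero in A' = naturals, then hand off to B' = strings
print(eventually(lambda a: a,
                 lambda n: ('inr', 'done') if n == 0 else ('inl', n - 1),
                 lambda b: b.upper(),
                 5))               # prints DONE after six applications of g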
2022-10-06 23:18:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9372554421424866, "perplexity": 226.1412672413113}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00013.warc.gz"}
http://openstudy.com/updates/50336d79e4b0d8ed9b49b505
## lilMissMindset: one more help please. Some difficult DEs which I can't figure out: dx + (y^2 x)dy = (x^2 y) dy (asked 2 years ago)

1. lilMissMindset: it is: dx + (y^2 *x)dy = (x^2 *y) dy
2. amistre64: I wonder: $\frac{dx}{dy}+xy^2=x^2y$, $x^{-2}x'+y^2x^{-1}=y$, $z=x^{-1}:~z^{-1}=x;~z^{2}=x^{-2};~-z^{-2}z'=x'$, $z^2(-z^{-2}z')+y^2z=y$, $-z'+y^2z=y$, $z'-y^2z=-y$. Just wondering, I've got no idea if it's right though.
3. amistre64: $e^{-\frac{1}{3}y^3}z'-y^2e^{-\frac{1}{3}y^3}z=-ye^{-\frac{1}{3}y^3}$, $e^{-\frac{1}{3}y^3}z=-\int ye^{-\frac{1}{3}y^3}dy$, $z=-~e^{\frac{1}{3}y^3}\int ye^{-\frac{1}{3}y^3}dy$
4. amistre64: I doubt that was right; unless we were supposed to go into the Gamma function :/
5. amistre64: any ideas on how to start, lilMiss?
6. lilMissMindset: sorry, poor internet connection here. Well, what I was thinking was to get the integrating factor of the equation.
7. lilMissMindset: just the same as what you did, I guess?
8. amistre64: the two dys have me iffy; I have never had to deal with them in that state.
9. amistre64: dx + (y^2 x)dy = (x^2 y) dy, so dx = (x^2 y - y^2 x)dy, so dx/dy = x^2 y - y^2 x
10. amistre64: http://www.wolframalpha.com/input/?i=dx%2Fdy+%3D+x%5E2y+-+y%5E2x ... I run into a gamma function at every turn :/
11. lilMissMindset: help! integrate this please: int. e^(-1/3 *y^3) dy
12. mukushla: I think there is no closed form for that integral. Another method for $M\text{d}x+N\text{d}y=0$ is multiplying by $x^a y^b$ and searching for $a$ and $b$ for which $\frac{\partial (x^a y^b M)}{\partial y}=\frac{\partial (x^a y^b N)}{\partial x}$, which turns the equation into an exact diff equation. Let me try it.
13. mukushla: no, it's not working. I'm wondering where these equations come from... this one and the previous one are very curious...
14. lilMissMindset: I already got this one, using Bernoulli's equation with respect to x. Are you familiar with that? My only problem is how I am going to integrate e^(-1/3 *y^3) dy.
15. mukushla: $\large \int e^{-\frac{y^3}{3}} dy$?
16. mukushla: but there is no closed form for this integral...
17. lilMissMindset: why? I'll try it in Wolfram Alpha.
18. mukushla:
19. lilMissMindset: :(
20. lilMissMindset: will I pass my midterm? :(
21. mukushla: sure you will... don't think about these 2 previous problems
22. mukushla: they were not normal
23. lilMissMindset: aw, thank you so much. Really, really, thank you for everything that you've done just so I could have answers. One medal isn't enough, I guess. :)
24. mukushla: welcome... :)
2014-08-31 10:20:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7731688022613525, "perplexity": 7524.961762088027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500959239.73/warc/CC-MAIN-20140820021559-00288-ip-10-180-136-8.ec2.internal.warc.gz"}
https://gateoverflow.in/44399/isro-2013-53
A CPU scheduling algorithm determines an order for the execution of its scheduled processes. Given 'n' processes to be scheduled on one processor, how many possible different schedules are there?

1. $n$
2. $n^{2}$
3. $n!$
4. $2^{n}$

(retagged | 2.5k views)

Selected answer (by Boss, 11.1k points): with preemption there are infinitely many possible schedules; without preemption there are $n!$, so the answer is option C.

Comment: In all cases it is n!.

Comment (+2): No, only for non-preemptive scheduling will it be $n!$, because the processor takes one process at a time and runs it to completion. So the 1st slot can be filled in $n$ ways, the 2nd in $(n-1)$ ways, the 3rd in $(n-2)$ ways, and so on; hence the $n$ processes can be ordered in $n!$ ways.

Comment: I didn't get why there can be infinitely many combinations when preemption is allowed, assuming the time quantum tends to 0. Also, if a process scheduled by the CPU is suspended, scheduling it again is the duty of the medium-term scheduler, not the long-term scheduler.

Answer (+1 vote, by 269 points): (C) $n!$
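A brute-force sanity check of the $n!$ count for non-preemptive schedules (Python, illustrative):

import math
from itertools import permutations

n = 4
schedules = list(permutations(range(n)))   # each non-preemptive schedule is one ordering
print(len(schedules), math.factorial(n))   # 24 24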
2020-01-25 03:12:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5171650052070618, "perplexity": 8635.511530398784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250628549.43/warc/CC-MAIN-20200125011232-20200125040232-00426.warc.gz"}
https://philosophy.stackexchange.com/questions/33342/can-i-say-that-extension-is-a-synonym-for-isomorphism
# Can I say that extension is a synonym for isomorphism?

I read in Necessity and Sufficiency that

For example, in graph theory a graph G is called bipartite if it is possible to assign to each of its vertices the color black or white in such a way that every edge of G has one endpoint of each color. And for any graph to be bipartite, it is a necessary and sufficient condition that it contain no odd-length cycles. Thus, discovering whether a graph has any odd cycles tells one whether it is bipartite and vice versa. A philosopher[5] might characterize this state of affairs thus: "Although the concepts of bipartiteness and absence of odd cycles differ in intension, they have identical extension.[6]

So, it looks like an extension of equality. Isomorphism also looks like an extension of equality. In programming languages and in math, I believe, we say that some objects are not equal (they are different instances) but can be assigned to the same equivalence class. That is, they are equivalent. Isomorphism says that you should use one equivalent object in one domain and the other in the other domain. For instance, I should use 1 mile in the UK and 1.609344 km in Europe, and I speak about bipartite graphs in the context of coloring and about odd-cycle-free graphs in some other context.

Can I say that isomorphic objects are extensionally equivalent, that is, that extension is basically an isomorphism? Both intensions and isomorphisms look to me like different aspects of the same entity, viewed from different angles.

• No. Extension is a set of objects satisfying a condition ("concept"); it is not an extension of equality. Coextensiveness (having the same extension) is an extension of equality for concepts; they may differ only by having a different intension ("meaning"). Isomorphism is an extension of equality for structures (sets with extra stuff like operations, relations, etc.), but structures, unlike concepts, have no extensions, so coextensiveness and isomorphism have little to do with each other. Even two structure concepts being coextensive is different from the corresponding structures being isomorphic. Mar 31, 2016 at 21:11

Extension and intension are qualities of concepts, and isomorphism is a quality of objects. The extension of a concept is the set of all objects falling under that concept: the extension of the concept "red" is the set of all red things. Two concepts that have the same extension therefore have the same set of objects as their extensions. An isomorphism is a structure-preserving mapping from one object onto another. A bipartite graph is a graph with no odd-length cycles; these are (mathematically) equivalent definitions. Two definitions are mathematically equivalent if they necessarily have the same extension. Definitions are not mathematical objects, and have no concept of isomorphism.

• I asked because I do not understand why you cannot treat concepts as objects, especially since such identification seems to be a basic tool in the theory of categories, which seems to deal with isomorphisms professionally. Thanks for making this distinction explicit. Mar 31, 2016 at 20:21

Your question deals with several concepts which have to be defined first:

• An isomorphism is a mathematical concept which relates two structured sets, e.g., two groups or two vector spaces. An isomorphism between two groups is a map f: (G,+) --> (H,+) which is bijective, such that f(x+y) = f(x) + f(y), and such that the same holds for the inverse map. Note that for groups the condition for the inverse map follows from the first property.
• The extension of a concept is the set of all objects to which the concept refers. E.g., the extension of the concept "human" is the set of all "humans". • The intension of a concept is the meaning of the concept. E.g., the concept "bipartite graph" has a different intension than the concept "graph without odd-length cycles". This can be easily seen from the definition of these concepts. According to these definitions extension is not a synonym for isomorphism. • I have drawn common ground between the concepts. You respond like you do not notice any. Is it fair ignorance? Furthermore, I do not get why do you get such complex definition of isomorphism. Here is a simple identification of set {1,2,3} with letters {A, B, C}. Mar 31, 2016 at 21:03 • @Valentin Tihomirov Indeed, I do not see much common ground between the three concepts. In particular, there is no relation between isomorphism and the pair (extension, intension). - The term isomorphism is generally used for structure preserving maps. Hence I assumed that both sets - more complex than in your example - have the additional structure of a group. Mar 31, 2016 at 21:12
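To make the opening example concrete: "2-colorable" and "has no odd cycle" can be implemented as genuinely different procedures (different intensions) and then checked to accept exactly the same graphs (identical extension). A small Python sketch, my own illustration, exhaustive over all graphs on 5 vertices (it uses the standard fact that an odd closed walk exists iff an odd cycle does):

import numpy as np
from itertools import combinations, product

n = 5
pairs = list(combinations(range(n), 2))

def two_colorable(edges):
    # "bipartite", straight from its definition: some black/white coloring works
    return any(all(c[u] != c[v] for u, v in edges)
               for c in product([0, 1], repeat=n))

def has_odd_closed_walk(edges):
    # a different procedure: trace(A^k) counts closed walks of length k
    A = np.zeros((n, n), dtype=np.int64)
    for u, v in edges:
        A[u, v] = A[v, u] = 1
    return any(np.trace(np.linalg.matrix_power(A, k)) > 0
               for k in range(3, n + 1, 2))

# identical extension: the two predicates agree on all 2^10 = 1024 graphs
assert all(two_colorable(e) == (not has_odd_closed_walk(e))
           for m in range(len(pairs) + 1)
           for e in combinations(pairs, m))
print("coextensive on every graph with 5 vertices")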
2022-09-27 20:49:16
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8175858855247498, "perplexity": 420.3058104202608}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00494.warc.gz"}
https://www.vedantu.com/question-answer/there-are-4-boys-and-4-girls-in-how-many-ways-class-11-maths-cbse-5efc6a6dc732a135b6b90c2e
QUESTION # There are 4 boys and 4 girls. In how many ways can they sit in a row so that the girls are not all together?

Hint: In this question, first arrange all 8 people (boys and girls) in such a way that all the girls are together. The total number of arrangements of the 8 people minus the number of arrangements with all the girls together gives the number of seatings in which the girls do not all sit together.

Given: 4 boys and 4 girls. We have to find the number of ways in which the four girls do not all sit together. As there are 8 boys and girls in total, the total number of ways to arrange them is 8!. Now consider the 4 girls as one group. The number of ways to arrange the 4 boys and 1 group of girls (containing all 4 girls) is the number of arrangements of the girls within their group times the number of arrangements of the 5 units (4 boys plus 1 group), hence $\Rightarrow 4! \times 5!$

So the number of seatings in which the girls are not all together is the difference between the total number of arrangements of the 8 people and the number of arrangements in which all the girls are together: $\Rightarrow 8! - \left( {4! \times 5!} \right)$

Now simplify this: $\Rightarrow \left( {8 \times 7 \times 6 \times 5 \times 4 \times 3 \times 2 \times 1} \right) - \left( {4 \times 3 \times 2 \times 1 \times 5 \times 4 \times 3 \times 2 \times 1} \right)$ $\Rightarrow 40320 - 2880 = 37440$

So this is the required number of seatings in which the girls are not all together.

Note – Since there were 8 people to be seated, the total number of arrangements was 8!, as they can be permuted amongst themselves. The trick of treating certain people as a single group helps in solving problems of this type: counting the arrangements where they sit together (as a group) gives, by subtraction, the arrangements where they do not all sit together.
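A brute-force check of the result (Python, illustrative): enumerate all 8! seatings and discard those where the four girls occupy consecutive seats.

from itertools import permutations

people = ['B1', 'B2', 'B3', 'B4', 'G1', 'G2', 'G3', 'G4']

def girls_all_together(seating):
    idx = [i for i, p in enumerate(seating) if p.startswith('G')]
    return max(idx) - min(idx) == 3      # the four girls occupy consecutive seats

count = sum(not girls_all_together(s) for s in permutations(people))
print(count)                             # 37440 = 8! - 5! * 4!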
2020-07-13 01:32:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5387976765632629, "perplexity": 247.61471834919908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657140746.69/warc/CC-MAIN-20200713002400-20200713032400-00176.warc.gz"}
http://mathhelpforum.com/math-topics/66849-quick-physics-torque-question-need-check-print.html
# Quick Physics torque question - Need a check • Jan 4th 2009, 02:50 PM Intrusion Quick Physics torque question - Need a check So I've gotten an answer, but my percent error was huge, which is why I want to make sure that I did it right. Can someone do this real quick and tell me what they get? http://i42.tinypic.com/dxk9wl.png [not drawn to scale] Thanks! • Jan 4th 2009, 03:03 PM skeeter net torque caused by the 10 gram mass ? (.010)(9.8)(.48) = .047 N-m I get x = 2.4 cm to balance the system using the 200 gram mass. • Jan 4th 2009, 03:07 PM Intrusion Thanks, that's what I got. I guess it's meant to have a huge percent error. What could be the cause of a 70% error? (the "actual" value should have been 8m with a 200 kg mass) • Jan 4th 2009, 03:08 PM Mush Quote: Originally Posted by Intrusion So I've gotten an answer, but my percent error was huge, which is why I want to make sure that I did it right. Can someone do this real quick and tell me what they get? http://i42.tinypic.com/dxk9wl.png [not drawn to scale] Thanks! $M_1 = F_1 \times d_1 = 0.48m \times 0.01(9.81) = 0.047Nm$ $\Sigma M = M_1+M_2 = 0.047-(x\times 0.2 \times 9.81) = 0$ Hence: $x = \frac{0.047}{0.2\times 9.81} = 0.024m$ • Jan 4th 2009, 03:08 PM RossBrons Since 200 is 10 x 20 It stands to reason that 48/20 = x Thus 2.4 cm EDIT --------------------------------- • Jan 4th 2009, 03:11 PM Mush Quote: Originally Posted by Intrusion Thanks, that's what I got. I guess it's meant to have a huge percent error. What could be the cause of a 70% error? (the "actual" value should have been 8m with a 200 kg mass) But it isn't a 200kg mass, it's a 200g mass. 8m is ridiculous compared to the tiny torque exerted by the 10g mass. • Jan 4th 2009, 03:13 PM Intrusion Quote: Originally Posted by Mush But it isn't a 200kg mass, it's a 200g mass. 8m is ridiculous compared to the tiny torque exerted by the 10g mass. Yeah, I meant a 200g mass, not kg. • Jan 4th 2009, 04:33 PM Mush Quote: Originally Posted by Intrusion Yeah, I meant a 200g mass, not kg. Even at that, 8m would produce far too much torque. 10g mass is placed 48cm away, to balance this a 200g mass would be a lot closer than 48cm... and as you know 8m is far from being closer than 48cm. • Jan 4th 2009, 05:24 PM Intrusion Alright, I'll have to enlighten my teacher then and let him know his "actual" answer is wrong. Thanks!
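For what it's worth, the arithmetic from the thread in a few lines of Python (variable names are mine; note that g cancels out of the balance, so 9.8 vs. 9.81 makes no difference to x):

g = 9.8                      # m/s^2
m1, d1 = 0.010, 0.48         # 10 g mass hung 48 cm from the pivot
torque = m1 * g * d1         # net torque, ~0.047 N-m
m2 = 0.200                   # 200 g balancing mass
x = torque / (m2 * g)        # = (m1 / m2) * d1 = 0.024 m = 2.4 cm
print(torque, x)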
2017-01-24 13:42:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8054708242416382, "perplexity": 2127.040733740922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00061-ip-10-171-10-70.ec2.internal.warc.gz"}
https://stats.libretexts.org/Bookshelves/Introductory_Statistics/Book%3A_Statistical_Thinking_for_the_21st_Century_(Poldrack)/27%3A_The_General_Linear_Model_in_R
# 27: The General Linear Model in R This page titled 27: The General Linear Model in R is shared under a CC BY-NC 2.0 license and was authored, remixed, and/or curated by Russell A. Poldrack via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2022-09-25 21:23:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9917744994163513, "perplexity": 808.7152581423659}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00215.warc.gz"}
https://www.biostars.org/p/65920/
Which Type Of Database Systems Are More Appropriate For Storing Information Extracted From Vcf Files

Asked 8.7 years ago by alex ▴ 230 • 12k views

Hi all, I am looking into a DBMS for database storage. We have analyzed VCF files. http://www.1000genomes.org/node/101 When I say analyzed, I mean data that has been aligned, with some other steps, but is still in VCF format. My question is: does anyone have experience with good database systems? Our problem is that when you choose an indexing scheme you may be indexing for one question, and when you go to ask a different question your query may not be optimized for it. We are looking at systems such as Key/Value, RDBMS, Graph, and BigTable. I only have experience with RDBMS systems but would be interested in hearing any and all experience with this problem. Thanks!

ngs warehousing vcf genomics

Comment: Somewhat similar, but that question deals with whether or not to import VCF. I am saying I am importing it into a DBMS and was wondering what system would be optimal.

Comment (+2): But isn't that still a very similar question? It may have the word "import" there, but importing also implies making use of a database system to represent VCF files. Your question as you pose it feels more nebulous. The evaluation of a database system depends solely on the queries that one wishes to perform, and from this question we don't know what your needs are.

Comment: True, but that is why I am trying to see what other people's needs have been in the past, and which database system worked for them and why. I am in a new division and have been asked to build out a VCF database for queries, so the questions that will be asked are not yet well known.

Comment: I think the answer here depends a lot on what sorts of questions you are asking. What do you want to get out of the data? VCF records have always seemed appropriate for a document store (e.g., Cassandra or the interesting-but-new HyperDex), but that depends entirely on the questions you want to ask of the data.

Answer (13 votes), 8.7 years ago, lh3 32k:

I agree with Pierre and Istvan that your question is essentially the same as the old question asked two years ago. Nonetheless, at that time, tabix had just arrived and there was no BCF2. 1000g VCF files were much smaller than they are now. I will update my comments here.

Neither VCF/BCF2 nor databases are good for everything. VCF/BCF2 is not flexible. The only query you can perform on a VCF, as of now, is to retrieve data from a genomic region. If you want to get a list of rs# with allele frequency below a threshold, you are out of luck - you have to parse the entire VCF, which is slow. On the other hand, generic RDBMS are usually not as efficient as specialized read-only binary formats. With a database, you would need dedicated hardware and non-trivial configuration for retrieving alignments/genotypes in a region, which can be done on a mediocre PC with NGS tools. That is why almost no one loads read sequences into an RDBMS in typical use cases. As to VCF, some preliminary tests done by my colleagues suggest that interval queries in a huge BCF2 on a standard computing node are much faster (over a couple of orders of magnitude, as I remember) than the same queries with MongoDB on a dedicated server with huge memory.

In my opinion, you should keep the static primary data in BCF2 or in tabix'd VCF, and keep meta information and summary statistics (e.g.
position, allele frequency and rs#) in an RDBMS. When you do not need to retrieve individual genotypes, you can query the database to get what you want. When you need to look up genotypes at a position or in a region, query BCF2/VCF. This way you get the performance of BCF2 and the flexibility of databases at the same time. A more efficient implementation would put the VCF/BCF2 virtual offset in the database, but that will be harder unless you are fairly familiar with the format and index.

On BCF2 vs. tabix'd VCF: BCF2 is by far faster to read because you save the time spent parsing VCF, the bottleneck. BCF2 also comes with a better index that has higher performance when a region contains many records. The downside of BCF2 is that it is not well supported. The C implementation is largely complete, but GATK only supports uncompressed BCF2; no perl/python/ruby bindings exist so far. Tabix'd VCF works fine for data at the scale of 1000g. If you do not retrieve and parse many records per query, tabix'd VCF may be a better choice for its wide support. Note that even BCF2 is not the right solution for up to 1 million samples. New and better solutions will emerge when we approach that scale.

Answer (10 votes), 8.7 years ago:

While in general I agree with Heng's points, we have found the use of Tabix and vcftools very useful for some analyses but a bit cumbersome for more intricate explorations of genetic variation. We find this to be especially true when one wants to put those variants in broader context through comparisons to many different genome annotations. Our solution, while not yet final, is a new tool that we have been developing called gemini. Our hope for gemini is for it to be used as a standard framework for exploring genetic variation in both family-based studies of disease and broader population genetic studies. The basic gist is that you load a VCF file into a Gemini database (SQLite is the backend, for portability and flexibility). As each variant is read from the VCF, it is annotated via comparisons to many different genome annotation files (e.g., ENCODE, dbSNP, ESP, UCSC, ClinVar, KEGG). Tabix is used to expedite the comparisons. The variants and the associated annotations are stored in a variants table. The attractive aspect to us is that the database framework itself is the API. Once the data is loaded, one can ask quite complex questions of one's data. With the help of Brad Chapman and Rory Kirchner, we have parallelized the loading step, which is, by far, the slowest aspect. To use multiple cores, one would do:

gemini load -v my.vcf --cores 20 my.db

One can also farm the work out to LSF, SGE, and Torque clusters. Once loaded, one can query the database via the command line:

gemini query -q "select chrom, start, end, ref, alt, \
                 aaf, hwe, in_dbsnp, is_lof, impact, num_het \
                 from variants" \
             my.db

Also, we represent sample genotype information in compressed numpy arrays that are stored as binary BLOB columns to minimize storage and allow scalability (i.e., as opposed to having a genotypes _table_ where the number of rows is N variants * M samples: bad). As such, we have extended the SQL framework to allow struct-like access to the genotype info. For example, the following query finds rare, LoF variants meeting an autosomal recessive inheritance model (note that the --gt-filter option uses Python, not SQL, syntax).
gemini query -q “select chrom, start, end, ref, alt, gene, impact, aaf, gts.kid
                 from variants
                 where in_dbsnp = 0 and aaf < 0.01 and in_omim = 1 and is_lof = 1”
             --gt-filter “gt_types.mom = HET and gt_types.kid = HOM_ALT”

There is also a Python interface allowing one to write custom Python scripts that interface with the Gemini DB. An example script can be found at: http://gist.github.com/arq5x/5236411. There are many built-in tools for things such as finding compound hets, de novo mutations, protein-protein interactions, pathway analysis, etc. Gemini scales rather well. We recently loaded the entire 1000 Genomes VCF (1092 samples) in 26 hours using 30 cores. The queries are surprisingly fast. While this scale is not our current focus (we are more focused on medical genetics), it is encouraging to see it work well with 100s of individuals. If interested, check out the documentation. Comments welcome.

Answer (4 votes), 8.7 years ago:

I wrote a tool to put a VCF in a SQLite db: http://code.google.com/p/variationtoolkit/wiki/Vcf2Sqlite I never used it because it's always faster & easier to parse the VCF from scratch with a command line, with a workflow engine (biologists use knime here), etc... The only database I would write is a db with the path to the tabix-indexed VCF.gz on the server and some links to the associated sample/project. In my bookmarks: http://www.lovd.nl/2.0/ "Leiden Open (source) Variation Database."

Answer (3 votes), 8.7 years ago:

What Heng describes in his answer is a hybrid, layered solution. Note that such hybrid solutions allow you to support more complex and varied queries (as compared to a single technology) at the expense of potentially increased complexity and data redundancy. For example, you might want to use a graph database to relate genes to diseases and pathways. You might use mongodb for storing gene information in a "document". You might store BCF2 in a relational database to allow arbitrary (but slow) queries and store those processed results into an hdf5 file as a data cube. In the end, the design is driven entirely by the queries and performance constraints. So, to answer your original question, there is no absolute optimal storage solution for VCF data.

Answer (3 votes), 8.7 years ago, Chris Cole ▴ 770:

I think this is a fair question. It's unfair to dismiss it so quickly. The issue (for me) with plain VCF files is that there's no information regarding the downstream consequence of a variant mutation. Adding this information via e.g. VEP is critical for understanding the biological significance of the important variants. Querying this in multiple text files is a PITA. I currently keep my data in MySQL to allow querying against amino acid changes, Polyphen predictions, etc. It's extremely simple (two tables) at the moment while we get an idea of the kind of questions we want to ask of the data. With that information in hand I can create a better-suited schema, if necessary. The only reason I've used MySQL is that I have experience with it. I believe that a NoSQL solution might work well, but I don't really have time to investigate that as an option.

Update: there's a new project called gemini which seems to be a good solution: http://gemini.readthedocs.org/en/latest/

Answer, 7.7 years ago, Amos ▴ 40:

My general inclination has been to follow the same workflow lh3 suggests in the accepted answer, but I stumbled on SciDB recently (fast selects, in-database calc) and was looking around to see if anyone has tried it.
There is a VCF loader available here: https://github.com/slottad/scidb-genotypes
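Following lh3's hybrid suggestion above (summary fields in an RDBMS, genotypes left in tabix'd VCF/BCF2), here is a minimal sketch of the database side using Python's built-in sqlite3. The table layout, file name, and the two rows are illustrative only:

import sqlite3

con = sqlite3.connect("variants_meta.db")
con.execute("""CREATE TABLE IF NOT EXISTS variants (
                   chrom TEXT, pos INTEGER, rsid TEXT, aaf REAL)""")
con.execute("CREATE INDEX IF NOT EXISTS idx_aaf ON variants(aaf)")

# in practice these rows would be parsed out of the VCF INFO fields
con.executemany("INSERT INTO variants VALUES (?, ?, ?, ?)",
                [("1", 10583, "rs58108140", 0.14),
                 ("1", 10611, "rs189107123", 0.02)])
con.commit()

# a query plain VCF can't answer without a full scan: variants below a frequency cutoff;
# genotype lookups for the hits then go back to the tabix'd VCF/BCF2 by chrom:pos
for chrom, pos, rsid in con.execute(
        "SELECT chrom, pos, rsid FROM variants WHERE aaf < 0.05"):
    print(chrom, pos, rsid)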
2021-12-04 02:05:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19863450527191162, "perplexity": 2261.6465340119025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362923.11/warc/CC-MAIN-20211204003045-20211204033045-00495.warc.gz"}
https://math.stackexchange.com/questions/2754828/why-line-passing-through-origin-in-the-intersection-of-three-planes-is-parallel
# Why is the line passing through the origin in the intersection of three planes parallel to the planes?

• There are three planes $P_1, P_2$ and $P_3$, and they all pass through the origin, so the system has infinitely many solutions.
• The normals $n_1, n_2$ and $n_3$ to these planes are coplanar.
• The line passing through the origin is perpendicular to the normal vectors but is parallel to the three planes.

Why is that line passing through the origin parallel to the three planes?
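One way to see it: a line is parallel to a plane exactly when its direction vector is perpendicular to the plane's normal, and the intersection line's direction is perpendicular to every $n_i$ since the line lies in each plane through the origin. A numeric illustration in Python/NumPy (the particular normals are my own example, chosen coplanar):

import numpy as np

n1 = np.array([1.0, 0.0, 0.0])
n2 = np.array([0.0, 1.0, 0.0])
n3 = n1 + 2 * n2                # coplanar with n1 and n2

d = np.cross(n1, n2)            # direction of the intersection line through the origin
print(d @ n1, d @ n2, d @ n3)   # all 0.0: d is perpendicular to every normal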
2019-07-20 01:32:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2214798778295517, "perplexity": 196.2314954457244}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526401.41/warc/CC-MAIN-20190720004131-20190720030131-00348.warc.gz"}
https://www.gamedev.net/blogs/blog/656-mjps-last-stand/
## DXSAS Controls Library

Already posted this in the XNA CC forums, but I figured "why not post it here too?"

Hey everyone. While working on a ModelViewer tool for my current project, I developed a handful of WinForms controls for editing the shader parameters of my material effects. I ended up making them use the DXSAS 1.0 UI Annotation (as documented here) so that I wouldn't have to change the existing annotations I'd added for XSI ModTool, and also so that they'd be somewhat useful outside of this one project. Anyway, I'd planned on making some sort of sample or article on this topic using the code I developed, but at the moment I've just got so much to do and I don't think I'll have time to get around to it. I decided I'd share the code here, in case anyone might find it useful to just plug into their project or to serve as a base for something else. I figure that if any of you guys are like me, you'd rather be optimizing shader code than spending time messing around with TextBox's. :P

So if anyone's interested, I've uploaded the code here. It's just the classes themselves, no library project or sample project or anything like that. You should be able to just add them to any project that has the XNA Framework assemblies referenced; it doesn't depend upon anything aside from that (and WinForms, obviously). If you want to see what they look like, there's a picture of my ModelViewer below. That's them in the panel on the right-hand side, right below the combo boxes for picking the MeshPart and the Effect. From the top to the bottom you've got a Slider, ColorPicker, Slider, Numeric, FilePicker, and ListPicker. As you can see they're nothing fancy at all, but they get the job done. If you have any questions, feel free to ask here or email me at the address listed in the ReadMe.txt inside the zip.

## HDR Rendering Sample

Well, I finished up my HDR Rendering Sample, and Rim posted it on xnainfo.com. He's still working on getting all the content stuff worked out for the site, so you'll have to make do with reading the write-up in .doc format. :P

## A Preview of My New Sample

Over the weekend I finally got around to coding up one of the samples I've been meaning to do: a full HDR pipeline in XNA that uses LogLuv encoding. It's pretty neat: it lets you switch between fp16/LogLuv, and also lets you switch on and off multisampling and linear filtering for downscaling, so you can see the results of linear filtering when LogLuv is used. Plus I don't think there are any samples out there that show you the basics of HDR with XNA, so I think it's something people would find useful. I haven't finished with my write-up yet, but I did take a sweet screenshot:

Look at the bloom on that teapot! And yes I know...teapots and Uffizi Cross have been done in just about every other Direct3D sample. What can I say, I'm not that creative.

## LogLuv Encoding for HDR

I originally posted this over on the xnainfo.com blog, but I've decided I like the entry so much that I'm going to shamelessly rip myself off. Enjoy!

Designing an effective and performant HDR implementation for my game's engine was a step that was complicated a bit by a few of the quirks of running XNA on the Xbox 360. As a quick refresher for those who aren't experts on the subject, HDR is most commonly implemented by rendering the scene to a floating-point buffer and then performing a tone-mapping pass to bring the colors back into the visible range.
Floating-point formats (like A16B16G16R16F, AKA HalfVector4) are used because their added precision and floating-point nature allow them to comfortably store linear RGB values in ranges beyond the [0,1] typically used for shader output to the backbuffer, which is crucial as HDR requires data with a wide dynamic range. They're also convenient, as values can be stored in the same format they're manipulated in by the shaders. Newer GPU's also support full texture filtering and alpha-blending with fp surfaces, which avoids the need for special-case handling of things like non-opaque geometry. However, as with most things, what's convenient is not always the best option. During planning, I came up with the following list of pro's and con's for various types of HDR implementations:

* Standard HDR, fp16 buffer
+ Very easy to integrate (no special work needed for the shaders)
+ Good precision
+ Support for blending on SM3.0+ PC GPU's
+ Allows for HDR bloom effects
- Double the bandwidth and storage requirements of R8G8B8A8
- Weak support for multi-sampling on SM3.0 GPU's (Nvidia NV40 and G70/G71 can't do it)
- Hardware filtering not available on ATI SM2.0 and SM3.0 GPU's
- No blending on the Xbox 360
- Requires double space in the framebuffer on the 360, which increases the number of tiles needed

* HDR with tone-mapping applied directly in the pixel shader (Valve-style)
+ Doesn't require output to an HDR format, no floating-point or encoding required
+ Multi-sampling and blending are supported, even on old hardware
- Can't do HDR bloom, since only an LDR image is available for post-processing
- Luminance can't be calculated directly, need to use fancy techniques to estimate it
- Increases shader complexity and combinations

* HDR using an encoded format
+ Allows for a standard tone-mapping chain
+ Allows for HDR bloom effects
+ Most formats offer a very wide dynamic range
+ Same bandwidth and storage as LDR rendering
+ Certain formats allow for multi-sampling and/or linear filtering with reasonable quality
- Alpha-blending usually isn't an option, since the alpha channel is used by most formats
- Linear filtering and multisampling usually aren't mathematically correct, although often the results are "good enough"
- Additional shader math needed for format conversions
- Adds complexity to shaders

My early prototyping used a standard tone-mapping chain and I didn't want to ditch that, nor did I want to move away from what I was comfortable with. This pretty much eliminated the second option for me off the bat...although I was unlikely to choose it anyway due to its other drawbacks (having nice HDR bloom was something I felt was an important part of the look I wanted for my game, and in my opinion Valve's method doesn't do a great job of determining average luminance). When I tried out the first method I found that it worked as well as it always did on the PC (I've used it before), but on the 360 it was another story. I'm not sure why exactly, but for some reason it simply does not like the HalfVector4 format. Performance was terrible, I couldn't blend, I got all kinds of strange rendering artifacts (entire lines of pixels missing), and I'd get bizarre exceptions if I enabled multisampling. Loads of fun, let me tell you. This left me with option #3. I wasn't a fan of this approach initially, as my original design plan called for things to be simple and straightforward whenever possible.
I didn't really want to have two versions of my material shaders to support encoding, nor did I want to integrate decoding into the other parts of the pipeline that needed it. But unfortunately, I wasn't really left with any other options after I found there were no plans to bring support for the 360's special fp10 backbuffer format to XNA (which would have conveniently solved my problems on the 360). So, I started doing my research.

Naturally, the first place I looked was at actual released commercial games. Why? Because usually when a technique is used in a shipped game, it means it's gone through the paces and has been determined to actually be feasible and practical in a game environment. Which of course naturally led me to consider NAO32. NAO32 is a format that gained some fame in the dev community when ex-Ninja Theory programmer Marco Salvi shared some details on the technique over on the beyond3D forums. Used in the game Heavenly Sword, it allowed for multisampling to be used in conjunction with HDR on a platform (PS3) whose GPU didn't support multisampling of floating-point surfaces (the RSX is heavily based on Nvidia G70). In this technique, color is stored in the LogLuv format using a standard R8G8B8A8 surface. Two components are used to store X and Y at 8-bit precision, and the other two are used to store the log of luminance at 16-bit precision. Having 16 bits for luminance allows a wide dynamic range to be stored in this format, and storing the log of the luminance allows for linear filtering in multisampling or texture sampling. Since he first explained it, other games have also used it, such as Naughty Dog's Uncharted. It's likely that it's been used in many other PS3 games as well.

My actual shader implementation was helped along quite a bit by Christer Ericson's blog post, which described how to derive optimized shader code for encoding RGB into the LogLuv format. Using his code as a starting point, I came up with the following HLSL code for encoding and decoding:

// M matrix, for encoding
const static float3x3 M = float3x3(
    0.2209, 0.3390, 0.4184,
    0.1138, 0.6780, 0.7319,
    0.0102, 0.1130, 0.2969);

// Inverse M matrix, for decoding
const static float3x3 InverseM = float3x3(
    6.0013, -2.700, -1.7995,
    -1.332, 3.1029, -5.7720,
    .3007, -1.088, 5.6268);

float4 LogLuvEncode(in float3 vRGB)
{
    float4 vResult;
    float3 Xp_Y_XYZp = mul(vRGB, M);
    Xp_Y_XYZp = max(Xp_Y_XYZp, float3(1e-6, 1e-6, 1e-6));
    vResult.xy = Xp_Y_XYZp.xy / Xp_Y_XYZp.z;
    float Le = 2 * log2(Xp_Y_XYZp.y) + 127;
    vResult.w = frac(Le);
    vResult.z = (Le - (floor(vResult.w * 255.0f)) / 255.0f) / 255.0f;
    return vResult;
}

float3 LogLuvDecode(in float4 vLogLuv)
{
    float Le = vLogLuv.z * 255 + vLogLuv.w;
    float3 Xp_Y_XYZp;
    Xp_Y_XYZp.y = exp2((Le - 127) / 2);
    Xp_Y_XYZp.z = Xp_Y_XYZp.y / vLogLuv.y;
    Xp_Y_XYZp.x = vLogLuv.x * Xp_Y_XYZp.z;
    float3 vRGB = mul(Xp_Y_XYZp, InverseM);
    return max(vRGB, 0);
}

Once I had this implemented and worked through a few small glitches, results were much improved in the 360 version of my game. Performance was much, much better, I could multi-sample again, and the results looked great. So while things didn't exactly work out in an ideal way, I'm pleased enough with the results.
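For reference, here is a rough CPU-side sketch of the same round trip in Python/NumPy (my own sanity check, not shader code; the 8-bit quantization of the stored channels is ignored, so the reconstruction is only approximate):

import numpy as np

M = np.array([[0.2209, 0.3390, 0.4184],
              [0.1138, 0.6780, 0.7319],
              [0.0102, 0.1130, 0.2969]])
InverseM = np.array([[ 6.0013, -2.700,  -1.7995],
                     [-1.332,   3.1029, -5.7720],
                     [ 0.3007, -1.088,   5.6268]])

def logluv_encode(rgb):
    Xp_Y_XYZp = np.maximum(rgb @ M, 1e-6)   # mul(vRGB, M) treats vRGB as a row vector
    x, y = Xp_Y_XYZp[:2] / Xp_Y_XYZp[2]
    Le = 2 * np.log2(Xp_Y_XYZp[1]) + 127    # log of luminance, split across z and w
    w = Le % 1.0
    z = (Le - np.floor(w * 255) / 255) / 255
    return np.array([x, y, z, w])

def logluv_decode(v):
    Le = v[2] * 255 + v[3]
    Y = 2.0 ** ((Le - 127) / 2)
    Z = Y / v[1]
    X = v[0] * Z
    return np.maximum(np.array([X, Y, Z]) @ InverseM, 0)

hdr = np.array([3.7, 0.25, 1.6])            # an out-of-[0,1] HDR color
print(logluv_decode(logluv_encode(hdr)))    # ~ [3.7, 0.25, 1.6]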
## Don't Cast Function Pointers (Unless You Really Know What You're Doing)

Yet another Evil Steve-esque journal entry that I can instantly whip out when needed, instead of typing out a detailed explanation.

*Note: Any information here only strictly applies to Microsoft Visual C++; I'm nowhere near experienced enough in any other compilers to comment on them. And as always, if anything is wrong please let me know so I can correct it. I'm much more interested in correctness than in pride. [smile]

**Extra Note: I compiled and ran the sample code on Visual C++ 2005; your results may differ for other versions. However, that should just serve to show you how unpredictable and nasty this stuff can be.

In my last real journal entry I mentioned how the For Beginners forum typically has all kinds of Bad Win32 Code floating around. Well, it doesn't just stop there...it's also brimming with Really Bad C++ Code, and even Completely Horrifying C++ Code. For this entry I'll be tackling something so scary it keeps me lying awake at night: function pointer casts in C++. Anybody who's used C++ for more than a month knows how dangerous casting can be, yet we still see it commonly used as a tool to "simply make the compiler shut up". Casting function pointers is even more dangerous, and we're going to talk about why.

At the lowest low-level, a function pointer is exactly that: a pointer to a function. Functions have an address in memory where they're located, and a function pointer contains that address. When you want to call the function pointed to by the function pointer, an x86 "call" instruction is used and execution starts at the address contained in the pointer. However, there's more to calling a function than simply running the code: there's also the function parameters, the return value, and other information that needs to be stuck on the stack (like the return address, so execution can return to the calling function). How exactly this happens is determined by the calling convention of a function. The calling convention specifies how parameters are passed to the function (usually on the stack), how the return value is passed back, how the "this" pointer is passed (in the case of C++ member functions), and how the stack is eventually cleaned up when the function is finished. This entry from Raymond Chen's blog has a good summary of the calling conventions used in 32-bit Windows. As Raymond puts it on his blog, a calling convention is a contract that defines exactly what happens when that function is called. Both sides (the caller and the callee) must agree to the contract and hold up their respective ends of the bargain in order for things to continue smoothly.

So it should be obvious by now that function pointers are more than just an address: they also specify the parameters, the return value, and the calling convention. When you create a function pointer, all of this information is contained in the pointer type. This is a Good Thing, because it means that the compiler can catch errors for you when you try to assign values to incompatible types. Take this code for instance, which generates a compilation error:

#include <iostream>
#include <conio.h>

using std::cout;

int DoSomething(int num)
{
    return num + 1;
}

int DoSomethingElse(int num, int* numPtr)
{
    int result = num + *numPtr;
    return result;
}

typedef int (*DoSomethingPtr)(int);
typedef int (*DoSomethingElsePtr)(int, int*);

int main()
{
    DoSomethingPtr fnPtr = DoSomethingElse;  // error: incompatible function pointer type
    int result = fnPtr(5);
    cout << result;
    getch();
    return 0;
}

Look at that, the compiler saved our butt.
We were trying to do something very bad! But of course, since this is C++ we're talking about, the compiler does not have the final say in what happens. If we want, we can say "shut up compiler, and do what I tell you" and it will happily oblige. So go ahead and change the first line of main to this:

DoSomethingPtr fnPtr = (DoSomethingPtr)DoSomethingElse;

and watch the compiler error magically vanish. But now try running the code, in debug mode first. And look at that, an access violation. Why did we get an access violation? Well that's easy: we called a function that expected two parameters on the stack. However, we were using a function pointer that only specified one parameter. In other words, we violated the contract on our end. The callee, however, dutifully followed the contract and popped two parameters off the stack. The second value on the stack happened to be NULL, which caused an exception to be thrown when we tried to dereference NULL. This is actually a pretty "nice" error. The exception happens right when we call the function, so naturally the first thing we'd do is go back to where we called the function and see what went wrong. So in the event of an accidentally erroneous cast, we'd figure it out pretty quickly.

But of course, that's not always the case. Try compiling and running in release mode. And look at that: no crash! However, that return value of "-1559444344" sure does look funky...clearly we weren't so lucky this time. Now instead of a nice informative crash, we have a function that just produces a completely bogus value. Maybe that value could be used for something immediately after and we'll notice it's bogus; maybe we won't notice until we've made 8 calculations based on it. Either way something down the line will get screwed up, and the chance that you'll trace it back to a bogus function pointer gets slimmer and slimmer every step of the way.

But wait...the fun doesn't end there. Casting problems can be more subtle than that...as well as more catastrophic. Let's try this nearly-identical program instead:

#include <iostream>
#include <conio.h>

using std::cout;

int __stdcall DoSomething(int num)
{
    return num + 1;
}

int __stdcall DoSomethingElse(int num, int* numPtr)
{
    int result = num + *numPtr;
    return result;
}

// note: no calling convention specified, so these default to __cdecl
typedef int (*DoSomethingPtr)(int);
typedef int (*DoSomethingElsePtr)(int, int*);

int main()
{
    DoSomethingPtr fnPtr = (DoSomethingPtr)DoSomething;
    int result = fnPtr(5);
    cout << result;
    getch();
    return 0;
}

Look at that, we're actually pointing our function pointer to the right function this time! This should work perfectly, right? Right? Go ahead and run it. And what do you know, it spits out the anticipated result! But now go ahead and press a key to let the program close up and....crash. A strange one too...access violation? At address 0x00000001? No source code available? What the heck code are we even executing? A look at the call stack shows that we're somehow executing in the middle of nowhere!

So how did this happen? Once again, we're crooks who violated the contract. The functions were declared with the calling convention __stdcall, which specifies that the function being called cleans up the stack. However, our function pointers were never given an explicit calling convention, which means they got the default (which is __cdecl). This meant we put our parameter and other stuff on the stack, we called the function, the function cleaned up the junk on the stack by popping it off, and then when we returned, the main function once again cleaned junk off the stack.
Except that since the junk had already been cleaned up, we instead completely bungled up our stack and wound up with an instruction pointer pointing to no-man's land. Beautiful. For those wondering, the correct way to declare the function pointers would be like this:

    typedef int (__stdcall *DoSomethingPtr)(int);
    typedef int (__stdcall *DoSomethingElsePtr)(int, int*);

And of course, the even smarter thing to do would have been to have no cast at all, since then the compiler would have caught our mistake and whacked us over the head for it.

By now I hope I've gotten my point across. If I haven't, my point is this: don't cast function pointers unless you're extremely careful about it, and you absolutely have no choice. Type safety exists for a reason: to save us from ourselves. Make use of it whenever you can.

EXTRA: On a somewhat related note, sometimes what you think is a function pointer isn't really a function pointer at all. For instance...what you get back when you pass GWLP_WNDPROC to GetWindowLongPtr. Yet another reason to be careful with function pointers.

## Posting WM_DESTROY is *not* how you destroy a window

In general, the For Beginners forum is positively rife with all kinds of Bad Win32 Code. I don't really fault the noobies for writing it...especially since chances are extremely good that they copied it from some crappy tutorial or used some outdated boilerplate code. Heck, for a long time the default Visual C++ Win32 code contained a function-pointer cast! Yikes.

The target of my ranting for this entry is code that posts the WM_DESTROY message as a mechanism for terminating the message loop and destroying the window. The problem is that WM_DESTROY is a notification, not a message you're supposed to be passing around. Notifications are messages that tell your program when something important has occurred...in this particular case, this "something important" is the fact that your window is being destroyed. In other words it's the Windows way of saying "hey, your window's about to be gonzo, so do whatever you have to do before we pull the plug!". This is made clear by the documentation, which explains the exact circumstances that cause your message handler to receive the message.

Now the important distinction here is that WM_DESTROY is *not* what's destroying your window, it's simply the natural reaction that occurs when a window is destroyed. It's the thunder to the lightning, if you will. Therefore it really doesn't make any sense to use it to destroy a window, since it doesn't destroy anything! It's just a notification, not a mechanism for destroying windows. Using the previously mentioned analogy, posting a WM_DESTROY message is a bit like making a thunderclap and expecting lightning to strike. The *actual* way of destroying a window is to call DestroyWindow and pass along the handle of the window.

The real problem here is that from the point of view of the noobie, doing this does work as expected....or at least it seems to. Typically their message handler will have something like this in it:

    case WM_DESTROY:
        PostQuitMessage(0);
        break;

Which means that WM_QUIT will get sent, and the user will break out of their message loop. And of course the main window is destroyed along with the program. The catch here is that it didn't go away because of anything the programmer explicitly requested; it happened as an automatic part of cleaning up the thread and the process (since, as many lazy programmers already know, Windows will clean up after you).
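To make the distinction concrete, here's a minimal sketch of the conventional shutdown flow. This is a hypothetical window procedure, not code from this journal, and it assumes the usual RegisterClass/CreateWindow setup exists elsewhere:

    LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg)
        {
        case WM_CLOSE:              // the user (or our own code) asked to close
            DestroyWindow(hWnd);    // *this* call is what actually destroys the window
            return 0;
        case WM_DESTROY:            // notification: the window is going away
            PostQuitMessage(0);     // react by ending the message loop
            return 0;
        }
        return DefWindowProc(hWnd, msg, wParam, lParam);
    }

DestroyWindow is the explicit request; WM_DESTROY is merely the echo that follows it.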
So you might be thinking now, "so what's the big deal then if it gets destroyed anyway?". Well it's really just about enforcing good programming practices. Leaking windows and handles is just plain bad practice, especially for programs that typically run for long periods of time and consume resources. Doing things in ways other than what the Windows API Documentation specifies is also a great way to make sure your app becomes one of the many Poorly Behaved Windows Programs that become the scourge of a user's PC once they upgrade to a new version of Windows. So the moral is: before you use something, read the documentation. In most cases, it should be pretty clear whether or not you're using it right.

## Working With Unicode in the Windows API

So the issue of Unicode and character sets is one that seems to come up quite a bit in the For Beginners forum (and elsewhere). Usually someone who is new to Windows programming will make a thread saying that the compiler barfs when it gets to their "MessageBox" call, and they have no idea how to deal with it. Therefore I've spent a lot of time explaining what their problem is and how to fix it, and usually this involves me explaining how the Windows API deals with strings. This happened again today and it resulted in me writing up an explanation that I rather like, so I've decided to post it here so that I can simply link people to it when necessary. I hope that those who read it find it useful, and get on their way with Windows programming. If there's anything hideously wrong or that you think could be added, please let me know. Also before or after you read this, you may want to consult the official documentation at MSDN regarding Unicode and character sets. All the information I've explained is in there, you may just have to go through more reading to get to it. You can also find some more good information on the topic in this blog entry by Joel Spolsky.

------------------------------------------------------------------------------

The Windows API supports two kinds of strings, each using its own character type. The first type is multi-byte strings, which are arrays of char's. With these strings each glyph can either be a single byte (char) or multiple bytes, and how the data is interpreted into glyphs depends on the ANSI code page being used. The "standard" code page for Windows in the US is windows-1252, known as "ANSI Latin 1; Western European". These strings are generally referred to as "ANSI" strings throughout the Windows documentation. The Windows headers typedef the type "char" to "CHAR", and also typedef pointers to strings as "LPSTR" and "LPCSTR" (the second being a pointer to a constant string). String literals for this type simply use quotations, like in this example:

    const char* ansiString = "This is an ANSI string!";

The second type of string is what is referred to as Unicode strings. There are several types of Unicode, but in the Windows API "Unicode" generally refers to UTF-16 encoding. UTF-16 uses (at least) two bytes per glyph, and therefore in C and C++ the strings are represented as arrays of the type wchar_t (which is two bytes in size, and therefore referred to as a "wide" character). Unicode is a worldwide standard, and supports glyphs from many languages with one standard code page (with multi-byte strings you'd have to use a different code page if you wanted something like kanji).
This is obviously a big improvement, which is why Microsoft encourages that all newly-written apps use Unicode exclusively (this is also why a new Visual C++ project defaults to Unicode). The Windows headers typedef the type "wchar_t" to "WCHAR", and also typedef pointers to Unicode strings as "LPWSTR" and "LPCWSTR". String literals for this type use quotations prefixed with an "L", like in this example:

    const wchar_t* unicodeString = L"This is a Unicode string!";

Okay, so I said that the Windows API supports both the old ANSI strings as well as Unicode strings. It does this through polymorphic types and by using macros for functions that take strings as parameters. Allow me to elaborate on the first part...

The Windows API defines a third character type, and consequently a third string type. This type is "TCHAR", and its definition looks something like this:

    #ifdef UNICODE
    typedef WCHAR TCHAR;
    #else
    typedef CHAR TCHAR;
    #endif

    typedef TCHAR* LPTSTR;
    typedef const TCHAR* LPCTSTR;

So as you can see here, how the TCHAR type is defined depends on whether the "UNICODE" macro is defined. In this way, the "UNICODE" macro becomes a sort of switch that lets you say "I'm going to be using Unicode strings, so make my TCHAR a wide character." And this is exactly what Visual C++ does when you set the project's "character set" to Unicode: it defines UNICODE for you. So what you get out of this is the ability to write code that can compile to use either ANSI strings or Unicode strings depending on a macro definition or a compiler setting. This ability is further aided by the TEXT() macro, which will produce either an ANSI or Unicode string literal:

    LPCTSTR tString = TEXT("This could be either an ANSI or Unicode string!");

Now that you know about TCHAR's, things might make a bit more sense if you look at the documentation for any Windows API function that accepts a string. For example, let's look at the documentation for MessageBox. The prototype shown on MSDN looks like this:

    int MessageBox(
        HWND hWnd,
        LPCTSTR lpText,
        LPCTSTR lpCaption,
        UINT uType
    );

As you can see, it asks for a string of TCHAR's. This makes sense, since your app could be using either character type and the API doesn't want to force either type on you. However there's a big problem with this: the functions that make up the Windows API are implemented as precompiled DLL's. Since TCHAR is resolved at compile-time, each function had to be compiled as either ANSI or Unicode. So how did MS get around this? They compiled both! See, the function prototype you see in the documentation isn't actually a prototype of any existing function. It's just a bunch of sugar to make things look nice for you when you're learning how a function works, and tells you how you should be using it. In actuality, every function that accepts strings has two versions: one with an "A" suffix that takes ANSI strings, and one with a "W" suffix that takes Unicode strings. When you call a function like MessageBox, you're actually calling a macro that's defined to one of its two versions depending on whether the UNICODE macro is defined. This means that the Windows headers have something that looks like this:

    WINUSERAPI
    int
    WINAPI
    MessageBoxA(
        __in_opt HWND hWnd,
        __in_opt LPCSTR lpText,
        __in_opt LPCSTR lpCaption,
        __in UINT uType);

    WINUSERAPI
    int
    WINAPI
    MessageBoxW(
        __in_opt HWND hWnd,
        __in_opt LPCWSTR lpText,
        __in_opt LPCWSTR lpCaption,
        __in UINT uType);

    #ifdef UNICODE
    #define MessageBox  MessageBoxW
    #else
    #define MessageBox  MessageBoxA
    #endif

Pretty tricky, eh?
With these macros, the ugliness of having two functions is kept reasonably transparent for the programmer (with the disadvantage of causing some confusion among Windows newbies). Of course these macros can be bypassed completely if you want, by simply calling one of the typed versions directly. This is important for programs that dynamically load functions from Windows DLL's at runtime, using LoadLibrary and GetProcAddress. Since macros like "MessageBox" don't actually exist in the DLL, you have to specify the name of one of the "real" functions.

Anyway, that's basically a summarized guide of how the Windows API handles Unicode. With this, you should be able to get started with using Windows API functions, or at least know what kinds of questions to ask when you need something cleared up on the issue.

The above refers specifically to how the Windows API handles strings. The Visual C++ C Run-Time library also supports its own _TCHAR type, which is defined in a manner similar to TCHAR except that it uses the _UNICODE macro. It also defines a _T() macro for string literals that functions in the same manner as TEXT(). String functions in the CRT also use the _UNICODE macro, so if you're using these you must remember to define _UNICODE in addition to UNICODE (Visual C++ will define both if you set the character set as Unicode).

If you use Standard C++ Library classes that work with strings, such as std::string and std::ifstream, and you want to use Unicode, you can use the wide-char versions. These classes have a w prefix, such as std::wstring and std::wifstream. There are no classes that use the TCHAR type, however if you'd like you can simply define a tstring or tifstream type yourself using the _UNICODE macro.

## Starfield Skybox Generator

Having never really found a decent starfield skybox texture or a program capable of generating one, I decided to make my own. It seemed like a fun project and a good opportunity to finally make use of DXUT (which I've found to be very useful), and it was definitely both of those things.

The end result is very very simple: all it does is randomly generate coordinates of stars on the surface of a sphere and then draw the stars as textured quads (using code borrowed from the particle engine of my renderer). The stars are all drawn to the six faces of a cubemap, which is then viewed, rotating, in the window. Generation is very fast, so you can change the parameters and view the results in real-time. Later I added in the ability to throw in "Space Objects", which are larger planets and galaxies. These are also drawn as textured quads.

Main drawbacks are:
- Since it's using a cubemap and a skybox, things get pretty ugly at the edges of cube faces. I may consider looking into some method of sphere-mapping, although from what I understand there can be some resolution issues with those.
- The app is capable of creating an HDR cubemap, but I have no HDR source textures for stars or planets.

Anyway it's not too bad for a day's work, and it will be useful if I ever actually make a simple game with my renderer. Here's a screenshot and the source code (uses DXUT and boost libraries):

http://members.gamedev.net/MJP/Starbox_DX9.zip
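As an aside, the one non-obvious trick such a generator needs is picking uniformly distributed points on a sphere. This is not the tool's actual code, just a sketch of one standard approach (a uniform z plus a uniform azimuth gives uniform area density):

    #include <cmath>
    #include <cstdlib>

    struct Vec3 { float x, y, z; };

    // Pick a uniformly distributed point on the unit sphere.
    // z is uniform in [-1, 1]; theta is a uniform azimuth in [0, 2*pi).
    Vec3 RandomPointOnUnitSphere()
    {
        float z     = 2.0f * std::rand() / RAND_MAX - 1.0f;
        float theta = 6.2831853f * std::rand() / RAND_MAX;
        float r     = std::sqrt(1.0f - z * z);
        Vec3 p = { r * std::cos(theta), r * std::sin(theta), z };
        return p;
    }

Scale each point by the skybox radius and you have star positions with no clustering at the poles, which is the artifact you get if you naively pick two uniform angles instead.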
2018-01-22 04:40:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.359669953584671, "perplexity": 2025.516510499463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890991.69/warc/CC-MAIN-20180122034327-20180122054327-00196.warc.gz"}
https://www.transtutors.com/questions/ue-wednesday-ssignimen-keap-the-highest-32-mpta-aa-aa-an-analysis-of-company-perform-2575123.htm
# An analysis of company performance using DuPont analysis

Please solve all parts completely with the information given. Thanks.

Walking down the hall of your office building with a sheaf of papers in his hand, your friend and colleague, Jason, stepped into your office and asked the following.

Jason: Do you have 10 or 15 minutes that you can spare?

You: I've got a meeting in an hour, but I don't want to start something new and then be interrupted by the meeting, so how can I help?

Jason: I've been reviewing the company's financial statements and looking for general ways to improve our performance in general, and the company's return on equity, or ROE, in particular. Anja, my new team leader, suggested that I start by using a DuPont analysis, and I'd like to run my numbers and conclusions by you, to see if I've missed anything. Here are the balance sheet and income statement data that Anja gave me, and here are my notes with my calculations. Could you start by making sure that my numbers are correct?

You: Give me a minute to look at these financial statements and to remember what I know about the DuPont analysis.

Income Statement Data:
Sales $16,000,000
Cost of goods sold 8,000,000
Gross profit 8,000,000
Operating expenses 4,000,000
EBIT 4,000,000
Interest expense 595,200
EBT 3,404,800
Taxes 1,191,680
Net income $2,213,120

Balance Sheet Data:
Cash $800,000
Accounts receivable 1,600,000
Inventories 2,400,000
Current assets 4,800,000
Net fixed assets 4,800,000
Total assets $9,600,000
Accounts payable 960,000
Accruals 320,000
Notes payable 1,280,000
Current liabilities 2,560,000
Long-term debt 3,680,000
Total liabilities 6,240,000
Common stock 840,000
Retained earnings 2,520,000
Total equity 3,360,000
Total debt and equity $9,600,000

If I remember correctly, the DuPont equation breaks down our ROE into three component ratios: the ______, the total asset turnover ratio, and the ______. And, according to my understanding of the DuPont equation and its calculation of ROE, the three ratios provide insights into the company's effectiveness in using the company's assets, and
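For reference (standard finance background, not part of the scraped problem statement): the three-factor DuPont identity decomposes ROE as

$$\text{ROE} \;=\; \underbrace{\frac{\text{Net income}}{\text{Sales}}}_{\text{net profit margin}} \;\times\; \underbrace{\frac{\text{Sales}}{\text{Total assets}}}_{\text{total asset turnover}} \;\times\; \underbrace{\frac{\text{Total assets}}{\text{Total equity}}}_{\text{equity multiplier}},$$

which with the figures above reduces to Net income / Total equity = 2,213,120 / 3,360,000, about 65.9%.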
2018-08-19 02:31:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31672582030296326, "perplexity": 2577.9174114942366}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221214538.44/warc/CC-MAIN-20180819012213-20180819032213-00247.warc.gz"}
https://www.12000.org/my_notes/faq/maple_faq/insu24.htm
#### advisor database for maple 6 (24.5.00)

##### Robert Israel

The new release is so far only for Maple 6: the Release 5 and Release 4 databases are unchanged so far (although some additions to them may be coming in the next few weeks). Notable additions to this release include the following functions:

csum - check convergence of a series
expands - symbolic version of expand
greatestroot and leastroot - approximate greatest or least root of an equation or expression on a real interval
PVInt - numerical principal value integral
quickplot and quickplot3d - fast 2-d and 3-d curve or point plotting of a list or hfarray

I appreciate hearing any comments on this project. You can access the Maple Advisor Database on-line, or download and install it on your own computer. The database consists of a library of Maple procedures, plus a database of Maple help pages in the following categories:

- advice on how to use and program Maple, containing answers to many of the common questions that users, especially students and other new users, have about Maple
- explanations of common error messages
- help pages for the procedures in the library
- work-arounds and fixes for bugs in Maple

##### Tom Casselman (1.6.00)

I have downloaded Robert Israel's Maple advisor database for both M5.1 and M6. How do I incorporate Bob's databases into my Maple 5 and Maple 6 programs? Bob was not able to help since he is not familiar with Macintosh systems. I am running OS 9.0.4.

##### Wolfgang Ziller (2.6.00)

I tried to install Robert Israel's advisor database on my laptop, which runs Windows 98SE, in both Maple V5.1 and V6, with only partial success. The only way I can get it to work is if I issue

    libname:=C:\\Program Files\\Maple 6/advisor,libname;

in a running Maple session ( / instead of // does not work at all). If I put the same command in the maple6.ini file, Maple does not recognize the change (libname gives the old libname only). Did anyone get this to work in Windows 98?

##### Robert Israel (5.6.00)

Don't put it in maple6.ini. That is a Windows initialization file, not the Maple initialization file. You want to put it in a file named "maple.ini". You probably don't have one yet if you haven't made it yourself. It should normally go into the directory that is current when you start Maple, or else in Maple's "bin.wnt" directory. You could produce it with any text editor, e.g. Notepad or Wordpad. If you wish, you can produce the file in Maple itself, as follows (this will put the file in the current directory, appending to any existing maple.ini file there). Note that the change won't take effect until you exit this Maple session.
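The commands themselves did not survive in this copy of the message. A sketch of what such a snippet could look like (this is a reconstruction, not Robert Israel's original commands; adjust the path to wherever the advisor library lives):

    # Append a libname line to maple.ini in the current directory.
    # Inside a Maple string, "\\\\" emits the two backslashes the ini file needs.
    fd := fopen("maple.ini", APPEND, TEXT):
    writeline(fd, "libname := `C:\\\\Program Files\\\\Maple 6/advisor`, libname:"):
    fclose(fd):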
2021-09-24 05:57:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7801250219345093, "perplexity": 381.1115691319633}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057504.60/warc/CC-MAIN-20210924050055-20210924080055-00168.warc.gz"}
http://docs.itascacg.com/flac3d700/common/sel/doc/manual/sel_manual/liners/fish/sel.liner_intrinsics/fish_sel.liner.shear.state.html
# struct.liner.shear.state

Syntax

i := struct.liner.shear.state(p,inode)

Get the yield state of the coupling spring at node inode (inode ∈ {1, 2, 3}) of a liner element. The return value i ∈ {1, 2, 3} denotes never yielded, yielding now, or yielded in the past, respectively.

Returns: i - yield state, where i ∈ {1, 2, 3} denotes never yielded, yielding now, or yielded in the past, respectively

Arguments:

p - a pointer to a liner element

inode - node number in the element, inode ∈ {1, 2, 3}
2020-12-05 08:37:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5405439138412476, "perplexity": 4562.281364146982}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141747323.98/warc/CC-MAIN-20201205074417-20201205104417-00224.warc.gz"}
https://www.eng-tips.com/faqs.cfm?fid=1147
# Engine & fuel engineering FAQ

## Diesel and Gaseous Fuels, DualFueling
by franzh
faq71-1147
Posted: 14 Aug 05

Dual fuel systems may be used with turbocharged Diesel applications. These engines have historically been used for:
* generators;
* irrigation pumps;
* farm tractors;
* over-the-road tractor-trailer rigs;
* utility and delivery vehicles;
* personally owned medium- to heavy-duty trucks used for either everyday driving or the occasional hauling.

Before we get into the system, I would like to dispel some myths about dual fueling.

First, you can do considerable damage to an engine if it is dual fueled to excess. In the past, it was not uncommon to see engines with welded or melted pistons, rings, cylinder heads, gaskets, valves, and cylinders! Hmm, done a few myself! What has happened is that the vehicle owner decided that "if a little bit of fuel is good, a lot is better!" If opening a little screw 1/8 turn is good, 1 full turn is better! It is notoriously simple to add lots of a gaseous fuel to a diesel engine and produce copious amounts of torque and horsepower. Dr. Diesel stated in his original manuscript the concept of adding "Erdgas" (Earth Gas, or Methane, or more commonly known, Natural Gas) to the incoming air stream. He found that adding (by volume) approximately 35% Methane to the air stream improved engine power by almost 50%. (Remember that these engines were produced in the early 1900's and diesel engine design was in its infancy.)

There are simple rules of thermodynamics; this is the most basic: it takes heat to produce power in any internal combustion engine, and it takes fuel and air to produce heat. The more heat produced, the more power is generated, up to the point of destruction caused by the inability of the engine to absorb or reject the heat properly. Since diesel engines operate in the "excess air mode", adding more fuel simply makes the combustion process hotter.

To dispel other rumors: Propane is NOT used as a catalyst to "more completely burn the fuel". A catalyst changes the chemical structure of the fuel, period. Modern diesel engines burn almost all of the fuel (in the 99% plus range, see below). Also, fuel is NOT wasted out of the exhaust. It is the engine's mechanical responsibility to take advantage of the burned fuel. Taking an exhaust analysis from a running engine shows the amount of unburned fuel to be in the region of less than .001%, pretty tiny!

Taking a scale of the amount of heat that is produced during combustion, and where it is used, shows some interesting things: in a diesel engine (not spark ignited), about 35% of the combustion energy actually goes to the rear wheels. The rest is consumed in the following ways, within reason, and with some flexibility due to engine and vehicle differences.
This is further broken down into three separate areas: actual brake engine power, thermal losses from radiation, and thermal losses from the exhaust. From these numbers, we then extrapolate these figures:
* 12% is radiated from the engine radiator;
* 10% is lost through the block through heat radiation;
* 45% is lost through the exhaust waste heat (a little less if the engine is turbocharged);
* about 5% of the energy is consumed by the process of combustion, the physical conversion of chemicals into gasses;
* about 10% is lost due to engine motoring friction losses: piston drag, camshaft bearings, lifters, crank drag, oil pump, water pump, valve and rocker arm friction, etc.

Unless a magical means of eliminating these values is discovered, you will NEVER see an engine produce much more than about 40% efficiency, from BTU's of raw heat from burning the fuel to usable power at the wheels.

The use of LPG or Natural Gas as a fumigation fuel does several things:

1. It displaces some of the air in the combustion chamber, requiring the use of a turbo to regain some of the air-mass density. I am not saying that you cannot fumigate a non-turbo or normally aspirated diesel engine, only that you will not obtain the full benefit if it is not turbocharged.

2. A gaseous fuel (LPG in this case) has different burn characteristics. Diesel has approximately 155,000 BTU's per gallon, LPG only 91,500 BTU's per liquid gallon, or only 62% of the raw heat energy of diesel. Diesel combusts by compression, and has a critical compression ratio beginning around 12:1, depending on the combustion chamber temperature (the hotter, the lower the compression needs to be to ignite the fuel). LPG also has a critical compression ratio, somewhere around 12:1 too, but its spike or combustion pressure rise time is MUCH quicker than diesel's. That's why you hear the distinctive rattle and see gray smoke when too much LPG is applied into a diesel air intake (the gray smoke is unburned fuel, and can be VERY combustible).

3. It is possible to pull another 20%+ power from a gen II 5.9 24-valve Cummins, but be VERY careful of turbo temperatures. Install a thermocouple BEFORE the turbo. Exhaust temps should NOT exceed 900 to 1000 degs; if they do, lean the fuel mixture, YES, LEAN the fuel mixture. One website shows a 4-wheel drive Ford smoking all 4 tires from a standing start, pulling a trailer! Impressive?

4. As a diesel engine powers up under full load, the diesel fuel begins to displace the air. LPG vapor is then admitted into the air stream, which further displaces air. The turbo is required to dump great amounts of air back in to regain the lost power. Remember that it's the air that makes the engine power, not dumping vast amounts of fuel in. (That's why the turbo is there: to pump air, not fuel!)

The goal is to develop a substitution fuel engine, or an engine that burns an alternative fuel other than the fuel which the engine was originally designed for. In this case, for an engine designed for diesel fuel, we now substitute approximately 20% LPG (Propane). I am well aware of competitive vehicle kits that offer driver-adjustable fuel systems that promise (at least verbally) horsepower gains of up to 40%, and gains of as much as 15% in fuel mileage, ostensibly by allowing the propane to act as a "catalyst"! Again, if you look at the rules of thermodynamics, to put it simply, "you can't get something for nothing". Consider: you must burn approximately 38% more propane than diesel to achieve the same power level.
This is why it is not that simple, just switching one fuel for another.

One other concept: diesel engines operate unthrottled; the principle of a diesel engine is that the engine speed and torque level is controlled by the amount of fuel injected or consumed during the intake stroke. Diesel fuel has a very wide air-to-fuel ratio, making it ideal for the diesel Compression Ignition (CI) principle. Diesel has a combustibility limit of from 5 to 35% air to fuel by weight. Any more than that and great clouds of brown and black smoke show up (unburned fuel). Any less than that and the engine will not power up. LPG has very precise lean and rich limits of 2.1 to 9.6% by weight. As it reaches the last 1% of that region, either lean or rich, the engine will lose the ability to run effectively.

Propane and Natural Gas have a narrow air-to-fuel ratio (by comparison to diesel); therefore in order to maintain an efficient combustion, the amount of air must be reduced to meet the ideal air-to-fuel ratio of the gaseous fuel (and gasoline too, for that matter). This ideal ratio is called the "Stoichiometric Ratio". The process of mixing the air with enough fuel to fully consume both during combustion describes the technical process of an efficient combustion.

Another thing to consider: modern vehicles have to pass emission tests in order to be sold in the US, and in many other parts of the world. If you alter any OEM design criteria, and it deteriorates from the certified emission production of that vehicle, you have voided the OEM warranty, and can incur tremendous fines and penalties from the EPA. Not one website offering these fumigation kits mentions anything about EPA testing or certification. If you exceed the original vehicle manufacturer's engine horsepower and torque design specifications, you void any warranty and will probably incur engine damage, turbocharger damage, cracked exhaust manifolds, etc.

I attended an automotive technician training conference in early 2002 where the attendees were told that when a diesel-powered vehicle was admitted into their repair facility, they were to examine the vehicle carefully for signs of a "recently removed" fumigation system, including:
* Tank mountings
* Engine induction system modifications
* Wiring splices
* Cooling system splices

...and if there were such signs, the warranty would be voided! One such website plainly states that their system is "easily removable if the vehicle must be taken to a dealership for warranty repair"!

Also, there are several methods of fumigating a diesel engine. Let's explore:

1) Installing a venturi just before the turbo. This method is very popular, and relatively accurate since it provides fuel directly proportional to the amount of air entering the engine (and also relative to the amount of turbo boost). Since the diesel engine is unthrottled, the same amount of air enters the engine (per crankshaft revolution) at idle as at full throttle, minus the additional air provided by the turbo. The venturi provides a vacuum signal, again proportional to engine load and RPM, to the pressure regulator / reducer / converter, which then sends fuel based on the amount of vacuum demand. Simple but relatively accurate. This is currently the most popular method.

2) Installing a spud pipe directly in the turbo inlet. This is the absolute in simplicity, but very inaccurate in fuel metering. The vacuum signal may be dependent on the presence and quality of the air filter element(!)
and can change drastically.

3) Installing a pressure regulator that meters propane vapor at about 3 psi ABOVE turbo pressure (the pressure regulator must be balanced against turbo pressure), which then bleeds fuel into the intake manifold, often directly at the intake ports. This system is very fast-reacting, and can dump copious amounts of fuel, and a lot of experimentation must be done to obtain the right fuel mixtures under loading. User-installed orifices are generally used to restrict the fuel flow. It is prone to over-fueling, but does work.

4) Electronic fuel injection. This is absolutely the best method of fuel metering, but is expensive, and each engine must be "mapped" for its fuel supply vs. engine load. Also, various sensors must be installed, such as a TPS sensor, air temp sensor, exhaust temp sensor, engine rpm, and manifold vacuum/pressure or a mass air flow sensor. Many modern fuel management systems can also present a pre-set fuel map that just uses pressure, rpm, and temperature, but the more input the management has, the better the outcome. At this time, there are just a few successful companies using this method, and they are currently only developing systems with Natural Gas, though research has been done using LPG.

I was recently given some of the evidence presented during a successful lawsuit. Disclosure laws prevent me from giving exact details (I don't know the exact details anyway), but the amount requested during the suit was USD $15,000 (the actual amount of damage), plus the amount paid to rent a replacement vehicle (about USD $4,000). The court awarded more than three times the amount requested in punitive damages because the defendant attempted to obscure the evidence by introducing other evidence not relevant to the case. The defendants included the installer, who was the "manufacturer" of the kit. The engine had approximately 1,000 miles on the conversion, and approximately 15,000 miles on the vehicle (2000 model).

The information provided is my own; any reference to existing documentation is coincidental.
2018-09-19 16:17:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40017345547676086, "perplexity": 2907.880688705558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156252.62/warc/CC-MAIN-20180919161420-20180919181420-00199.warc.gz"}
https://ledatascifi.github.io/ledatascifi-2023/content/05/05a_expanding.html
9.3. Expanding returns¶

9.3.1. The problem¶

You know the charts that show cumulative returns if you'd bought and held a stock since some long-ago date? Let's make one! This is called "expanding returns" because you get the total return from day 0 to day N, then from day 0 to day N+1, and so on; the window is expanding instead of having a fixed number of units or containing a specific increment of time.

We need a dataset with firm, date, and the daily return. Let's build it:

#!pip install pandas_datareader # uncomment and run this ONE TIME ONLY to install pandas data reader

import pandas as pd
import numpy as np
import pandas_datareader as pdr # you might need to install this (see above)
from datetime import datetime

# choose your firms and dates
stocks = ['SBUX','AAPL','MSFT']
start = datetime(1980, 1, 1)
end = datetime(2022, 7, 31)

Tip: The code in the next block is explained more thoroughly in handouts/factor_loading_simple.ipynb in the textbook repo, because that file prints the status of the data throughout. Looking at this might help.

# download stock prices
# here, from yahoo: not my fav source, but quick.
# we need to do some data manipulation to get the data ready
stock_prices = pdr.get_data_yahoo(stocks, start=start, end=end)
stock_prices = stock_prices.filter(like='Adj Close') # reduce to just columns with this in the name
stock_prices.columns = stocks # put their tickers as column names

# reformat from wide to long
stock_prices = stock_prices.stack().swaplevel().sort_index().reset_index()
stock_prices.columns = ['Firm','Date','Adj Close'] # column names assumed from the groupby('Firm') calls below

# add return var = pct_change() function compares to prior row
# EXCEPT: don't compare the first row of one firm with the last row of the prior firm!
# MAKE SURE YOU CREATE THE VARIABLES WITHIN EACH FIRM - use groupby
stock_prices['ret'] = stock_prices.groupby('Firm')['Adj Close'].pct_change()

    Firm        Date  Adj Close       ret
0   AAPL  1980-12-12   0.100040       NaN
1   AAPL  1980-12-15   0.094820 -0.052171
2   AAPL  1980-12-16   0.087861 -0.073398
3   AAPL  1980-12-17   0.090035  0.024751
4   AAPL  1980-12-18   0.092646  0.028992
5   AAPL  1980-12-19   0.098300  0.061029
6   AAPL  1980-12-22   0.103084  0.048670
7   AAPL  1980-12-23   0.107434  0.042199
8   AAPL  1980-12-24   0.113088  0.052628
9   AAPL  1980-12-26   0.123527  0.092310
10  AAPL  1980-12-29   0.125267  0.014083
11  AAPL  1980-12-30   0.122222 -0.024304
12  AAPL  1980-12-31   0.118743 -0.028468
13  AAPL  1981-01-02   0.120048  0.010988
14  AAPL  1981-01-05   0.117438 -0.021738

9.3.3. Getting the expanding returns¶

Notice that this dataset has the simple return for a period, not the gross return (defined here). To compute $$R_i[0,T]$$ for all firms $$i$$ and each time $$T$$ in the dataset, you're going to need to use groupby. You have two equivalent options from there:

1. For each firm, get the cumprod() of the gross return over its time series.

df.assign(R=1+df['r']).groupby('firm')['R'].cumprod()

2. For each firm, take the product of $$1+r$$ for all prior periods using the expanding window functionality.

df.groupby('firm')['r'].expanding().apply(lambda x: np.prod(1+x))

Which you choose is up to you, but in my testing the cumprod approach is 2.5x faster.

stock_prices['cumret'] = \
    (stock_prices
     .assign(ret=1+stock_prices['ret'])
     .groupby('Firm')
     ['ret']
     .cumprod()
    )

9.3.4. Plotting the total returns¶

If only we could turn back time.

(stock_prices.set_index('Date').groupby('Firm')['cumret']
 .plot(title="If you bought \$1 back when, you'd have this now",
       figsize=(6,5))
);
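In symbols, what both options compute is the expanding gross return (same $$R_i[0,T]$$ notation as above):

$$R_i[0,T] \;=\; \prod_{t=1}^{T}\left(1 + r_{i,t}\right),$$

evaluated at every $$T$$ in firm $$i$$'s time series, which is exactly the running product that cumprod produces.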
2022-09-29 11:55:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2836032211780548, "perplexity": 8576.71479867761}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00783.warc.gz"}
https://www.physicsforums.com/threads/mass-of-the-photon.816609/
# Mass of the photon

1. May 30, 2015

### akashpandey

According to Einstein, energy and mass are equivalent: mass can be converted into energy and energy can be converted into mass. A photon is a kind of energy particle whose velocity equals the velocity of light, so if we condense the energy of a photon we should get its mass. But photons are massless. Why?

2. May 30, 2015

### ande4jo

I'll try and answer it to the degree which I think I understand it. The idea is that mass actually has 2 parts. The first part is what the general population would call mass. This is intrinsic mass, which exists regardless of motion. The second part is mass increase due to motion. It seems that as a body increases speed its total mass increases. This makes sense because according to relativity, the speed of light is the maximum speed. The mechanism to make sure this speed limit is not violated is that the total mass necessarily needs to increase with speed. A photon can only exist when it is moving at the speed of light in a vacuum (or slower than the speed of light in other media). Thus a photon at rest is not possible. Because a photon at rest is no longer a photon, it necessarily cannot have intrinsic mass.

3. May 30, 2015

### ArmanCham

We can describe the total energy of a particle by $E^2=(pc)^2+(mc^2)^2$. A photon has no mass, so the photon's energy will be $E^2=(pc)^2$.

4. May 30, 2015

### Staff: Mentor

For the first few decades after Einstein put forth his relativity theory, most physicists also used this notion of mass. It started to fall out of favor around 1950, I think. Even Einstein in his later years said that people should use the intrinsic mass (a.k.a. invariant mass or "rest mass") and not the speed-dependent "relativistic mass." When I was a graduate student in the late 1970s and early 1980s in experimental particle physics, where we worked routinely with highly relativistic particles, none of us used "relativistic mass." Nowadays "relativistic mass" appears only in popularizations of relativity, and in introductory-level textbooks that haven't been updated.

5. May 30, 2015

### ande4jo

So if relativistic mass is not a concept that physicists use, what is the mechanism that ensures the speed of light is not violated? In other words, why does energy added to a body in motion not continue to increase its speed at the same rate?

6. May 30, 2015

### ArmanCham

The given energy will increase the speed, and the increase in speed will increase the object's energy. You are right: if you add energy to a moving object its speed will increase.

7. May 30, 2015

### ande4jo

It will not increase at the same rate. Thus a Joule of energy added to a body for 1 second may give the object an increase of speed of 500 miles per hour, but if we then add another Joule for another full second the next increase in speed will be less than the 500 mile per hour increase that we obtained the first time. A great way to explain this is to assume that the mass of the object must have increased with the first increase in speed. Thus relativistic mass explains this phenomenon, and you previously mentioned that relativistic mass is not used by physicists. If that is the case I am curious as to what physicists use to explain this phenomenon.

8.
May 30, 2015

### Staff: Mentor

First, relativistic mass is an outdated concept: http://math.ucr.edu/home/baez/physics/Relativity/SR/mass.html

But beyond that, Einstein's famous equation E=mc^2 does not say energy and mass are equivalent - it says mass is a form of energy, not that energy is a form of mass. It is a different type of energy to kinetic energy, electrical energy, chemical energy, thermodynamic energy etc. They are all forms of energy but they are not mass, nor is mass any of those forms of energy. Since energy is conserved, in principle it is possible to convert from one to the other, but they are not the same. Why this is was sorted out by a very great mathematician, Emmy Noether, in a justifiably famous theorem, Noether's Theorem, which she proved in response to a query by Hilbert about Einstein's theory: http://www.physics.ucla.edu/~cwp/articles/noether.asg/noether.html

Using that, proving E=mc^2 is a snap, and strikingly elegant: http://fma.if.usp.br/~amsilva/Livros/Zwiebach/chapter5.pdf

Just as an aside, there is a discussion at the moment on what energy is: I gave the modern answer, basically the one I outlined here, but for some reason it didn't garner much support - don't know why.

Thanks
Bill

Last edited: May 30, 2015

9. May 30, 2015

### ande4jo

Thanks for the details

10. May 31, 2015

### akashpandey

thanxx guysss.

11. May 31, 2015

### Staff: Mentor

This is discussed in our FAQ section at https://www.physicsforums.com/threads/do-photons-have-mass.511175/ [Broken]

Be aware that when you ask a question without bothering to look at the FAQ section or the other threads, you project the impression that you aren't really all that serious about learning. If it's not important enough to you to spend five minutes thinking about it, you shouldn't expect other people to spend a lot of time working on giving you a good answer either.

Last edited by a moderator: May 7, 2017
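A compact way to state the point raised in posts #5-#7 (standard special relativity, included here for reference rather than quoted from the thread): for a particle of invariant mass $m$,

$$E = \gamma m c^2, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}} \quad\Longrightarrow\quad v = c\,\sqrt{1 - \left(\frac{m c^2}{E}\right)^2},$$

so each additional Joule raises $E$ by the same amount but raises $v$ by less and less, with $v \to c$ only as $E \to \infty$. No speed-dependent mass is needed to express this.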
2018-07-23 10:39:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5034359097480774, "perplexity": 525.7081665656043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676596204.93/warc/CC-MAIN-20180723090751-20180723110751-00004.warc.gz"}
https://www.physicsforums.com/threads/heat-equation-in-the-first-quadrant.561981/
# Heat equation in the first quadrant.

1. Dec 21, 2011

### A_B

1. The problem statement, all variables and given/known data

Solve the heat equation $u_t=u_{xx}+u_{yy}$ for $t>0$ in the first quadrant of $\mathbb{R}^2$. The boundary conditions are $u(0,y,t)=u(x,0,t)=0$ and the initial temperature distribution is
$$f(x,y)= \begin{cases} 1 \;\;\;\; \text{in the square } \; 0<x<1; \; 0<y<1 \\ 0 \;\;\;\; \text{elsewhere} \end{cases}$$

3. The attempt at a solution

I can solve problems in finite domains, i.e. where enough boundary conditions are given to determine the countably infinite set of eigenvalues. This problem is on a semi-infinite domain, with insufficient boundary conditions to do so. I have come to understand that to solve such a problem, one replaces "the sum over eigenvalues" with an integral of the solutions obtained by separation of variables, over the (roots of the) eigenvalues. This is where I got (detailed derivations not typed out):

Separation of variables (solutions of the form $u=X(x)Y(y)T(t)$) gives the ODEs
$$\begin{cases} -\ddot{X} = \lambda_1 X \;\;\;\;\;\;\;\;\;\;\;\; X(0)=0\\ -\ddot{Y} = \lambda_2 Y \;\;\;\;\;\;\;\;\;\;\;\; Y(0)=0\\ \dot{T}=-(\lambda_1+\lambda_2)T \end{cases}$$
The "eigenvalues" can be shown to be all positive. So let $\lambda_1 = \gamma_1^2$ and $\lambda_2 = \gamma_2^2$, with $\gamma_1$ and $\gamma_2$ both positive. The solutions are
$$X(x)=\sin(\gamma_1 x)$$
$$Y(y)=\sin(\gamma_2 y)$$
$$T(t) = e^{-(\gamma_1^2 + \gamma_2^2)t}$$
So the solution for separated variables is
$$u(x,y,t)=\sin(\gamma_1 x)\sin(\gamma_2 y) e^{-(\gamma_1^2 + \gamma_2^2)t}$$
Since there are solutions for every positive $\gamma_1$ and $\gamma_2$, we can't determine eigenvalues like in regular problems. We write the final solution as a weighted integral of the solution for separated variables, with respect to the gammas:
$$u(x,y,t) = \int_0^\infty\int_0^\infty d\gamma_1 d\gamma_2 \, \alpha(\gamma_1, \gamma_2) \sin(\gamma_1 x)\sin(\gamma_2 y) e^{-(\gamma_1^2 + \gamma_2^2)t}$$
The function $\alpha(\gamma_1,\gamma_2)$ is determined by the initial conditions:
$$f(x,y) = u(x,y,0) = \int_0^\infty\int_0^\infty d\gamma_1 d\gamma_2 \, \alpha(\gamma_1, \gamma_2) \sin(\gamma_1 x)\sin(\gamma_2 y)$$
And here I'm stuck: how do I determine alpha?

Any help is much appreciated,
A_B

Last edited: Dec 21, 2011

2. Dec 24, 2011

### Thaakisfox

Fourier transform ;)

3. Dec 26, 2011

### A_B

I solved it. The last formula expresses f(x,y) as a double Fourier sine transform of α(γ_1, γ_2). So taking the inverse Fourier sine transform of f(x,y) twice gives α.

A_B
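For readers who want the last step spelled out (this is not in the thread itself; it follows from the standard Fourier sine-transform pair): with the given $f$,

$$\alpha(\gamma_1,\gamma_2) = \left(\frac{2}{\pi}\right)^{2}\int_0^\infty\!\!\int_0^\infty f(x,y)\,\sin(\gamma_1 x)\sin(\gamma_2 y)\,dx\,dy = \frac{4}{\pi^2}\,\frac{(1-\cos\gamma_1)(1-\cos\gamma_2)}{\gamma_1\,\gamma_2},$$

since $\int_0^1 \sin(\gamma x)\,dx = (1-\cos\gamma)/\gamma$.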
2017-09-24 07:17:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9143181443214417, "perplexity": 580.4443801569977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689897.78/warc/CC-MAIN-20170924062956-20170924082956-00296.warc.gz"}
https://slideplayer.com/slide/6970716/
# Chapter 8 Sampling Distributions: Mean and Proportion

## Presentation transcript:

Slide 1: Chapter 8 Sampling Distributions: Mean and Proportion

Slide 2: Goal of Statistical Analysis: Find Parameters of a Population from Statistics on a Sample. Random Sampling: every unit in the population has an equal chance to be chosen. The quality of all statistical analysis depends on the quality of the sample data.

Slide 3: Parameter: a number describing a population. Statistic: a number describing a sample. 1. A random sample should represent the population well, so sample statistics from a random sample should provide reasonable estimates of population parameters. 2. All sample statistics have some error in estimating population parameters. 3. If repeated samples are taken from a population and the same statistic (e.g. mean) is calculated from each sample, the statistics will vary, that is, they will have a distribution. 4. A larger sample provides more information than a smaller sample, so a statistic from a large sample should have less error than a statistic from a small sample.

Slide 4: Sampling distributions for: Mean (8.1), EVave (mean of a parameter in a population, or EVave); Proportion (8.2), EV% (percentage of a parameter in a population, or EV%).

Slide 5: 8.1 Distribution of the Sample Mean

Slide 6: Statistics such as x̄ are random variables, since their value varies from sample to sample. So they have probability distributions associated with them. In this chapter we focus on the shape, center and spread of statistics such as x̄.

Slide 7: The sampling distribution of a statistic is a probability distribution for all possible values of the statistic computed from a sample of size n. The sampling distribution of the sample mean is the probability distribution of all possible values of the random variable x̄ computed from a sample of size n from a population with mean μ and standard deviation σ.

Slide 8: Example 1: Sampling Distribution of the Sample Mean (Normal Population). The weights of pennies minted after 1982 are approximately normally distributed with mean 2.46 grams and standard deviation 0.02 grams. Approximate the sampling distribution of the sample mean by obtaining 200 simple random samples of size n = 5 from this population.

Slide 9: The data on the following slide represent the sample means for the 200 simple random samples of size n = 5. For example, the first sample of n = 5 had the following data: 2.493, 2.466, 2.473, 2.492, 2.471. Note: x̄ = 2.479 for this sample.

Slide 10: Sample Means for Samples of Size n = 5

Slide 11: The mean of the 200 sample means is 2.46, the same as the mean of the population. The standard deviation of the sample means is 0.0086, which is smaller than the standard deviation of the population. The next slide shows the histogram of the sample means.

Slide 12: (histogram of the sample means)

Slide 13: What role does n, the sample size, play in the standard deviation of the distribution of the sample mean?

Slide 14: What role does n, the sample size, play in the standard deviation of the distribution of the sample mean? As the size of the sample gets larger, we do not expect as much spread in the sample means, since larger observations will offset smaller observations.
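A quick check that isn't on the slides, but ties the simulation to the formula introduced next: with σ = 0.02 and n = 5,

$$\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}} = \frac{0.02}{\sqrt{5}} \approx 0.0089,$$

close to the 0.0086 observed across the 200 simulated sample means.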
Slide 15: The Mean and Standard Deviation of the Sampling Distribution of x̄. Suppose that a simple random sample of size n is drawn from a large population with mean μ and standard deviation σ. The sampling distribution of x̄ will have mean μx̄ = μ and standard deviation σx̄ = σ/√n. The standard deviation of the sampling distribution is called the standard error of the mean and is denoted σx̄.

Slide 16: Notation: the mean of the sample means is μx̄ = μ; the standard deviation of the sample means (often called the standard error of the mean) is σx̄ = σ/√n.

Slide 17: Sampling from Normal Populations

Slide 18: Example (Weight of pennies). The weights of pennies minted after 1982 are approximately normally distributed with mean 2.46 grams and standard deviation 0.02 grams. What is the probability that in a simple random sample of 10 pennies minted after 1982, we obtain a sample mean of at least 2.465 grams?

Slide 19: Solution: x̄ is normally distributed with mean 2.46 and standard deviation 0.02/√10 ≈ 0.0063. z = (2.465 - 2.46)/0.0063 ≈ 0.79, so P(Z > 0.79) = 1 - 0.7852 = 0.2148. On CALCULATOR: P(x̄ > 2.465) = normalcdf(2.465, 10^99, 2.46, 0.0063) = 0.2148

Slide 20: Another Example (Water Taxi; work on your own). Given that the population of passengers has normally distributed weights with a mean of 172 lb and a standard deviation of 29 lb, a) if one man is randomly selected, find the probability that his weight is greater than 175 lb; b) if 20 different men are randomly selected, find the probability that their mean weight is greater than 175 lb.

Slide 21: a) if one man is randomly selected, find the probability that his weight is greater than 175 lb. Using the table: z = (175 - 172)/29 ≈ 0.10. CALCULATOR: P(X > 175) = normalcdf(175, 10^99, 172, 29) ≈ 0.4602

Slide 22: b) if 20 different men are randomly selected, find the probability that their mean weight is greater than 175 lb. Using the table: z = (175 - 172)/(29/√20) ≈ 0.46. CALCULATOR: P(x̄ > 175) = normalcdf(175, 10^99, 172, 29/√20) ≈ 0.3228

Slide 23: Conclusion: a) for one randomly selected man, P(x > 175) = 0.4602; b) for the mean of 20 randomly selected men, P(x̄ > 175) = 0.3228. It is much easier for an individual to deviate from the mean than it is for a group of 20 to deviate from the mean.

Slide 24: Sampling from a Population that is not Normal

Slide 25: EXAMPLE: Sampling from a Population that is Not Normal. The following table and histogram give the probability distribution for rolling a fair die: each face 1 through 6 has relative frequency 0.1667; μ = 3.5, σ = 1.708. Note that the population distribution is NOT normal.

Slide 26: Estimate the sampling distribution of x̄ (the average of n tosses of the die) by obtaining 200 simple random samples of size n = 4 and calculating the sample mean for each of the 200 samples. Repeat for n = 10 and 30. Histograms of the sampling distribution of the sample mean for each sample size are given on the next slide.

Slides 27-29: (histograms of the sample means for n = 4, 10, and 30)

Slide 30: Central Limit Theorem. Given: the random variable x has a distribution (which may or may not be normal) with mean μ and standard deviation σ; simple random samples all of size n are selected from the population (the samples are selected so that all possible samples of the same size n have the same chance of being selected). Then: 1. The distribution of sample means x̄ will, as the sample size increases, approach a normal distribution. 2. The mean of the sample means is the population mean μ. 3.
Central Limit Theorem. Given: (1) the random variable $x$ has a distribution (which may or may not be normal) with mean $\mu$ and standard deviation $\sigma$; (2) simple random samples, all of size $n$, are selected from the population (the samples are selected so that all possible samples of the same size $n$ have the same chance of being selected). Then:
1. The distribution of sample means $\bar{x}$ will, as the sample size increases, approach a normal distribution.
2. The mean of the sample means is the population mean $\mu$.
3. The standard deviation of all sample means is $\sigma/\sqrt{n}$.

Key points: The mean of the sampling distribution is equal to the mean of the parent population, and the standard deviation of the sampling distribution of the sample mean is $\sigma/\sqrt{n}$ regardless of the sample size. The shape of the distribution of the sample mean becomes approximately normal as the sample size $n$ increases, regardless of the shape of the population: this is a result of the Central Limit Theorem.

Practical rules:
1. For samples of size $n$ larger than 30, the distribution of the sample means can be approximated reasonably well by a normal distribution. The approximation gets better as the sample size $n$ becomes larger.
2. If the original population is itself normally distributed, then the sample means will be normally distributed for any sample size $n$ (not just values of $n$ larger than 30).

Example (using the Central Limit Theorem): Suppose that the mean time for an oil change at a "10-minute oil change joint" is 11.4 minutes with a standard deviation of 3.2 minutes.
(a) If a random sample of $n = 35$ oil changes is selected, describe the sampling distribution of the sample mean. Solution: $\bar{x}$ is approximately normally distributed with mean 11.4 and standard error $3.2/\sqrt{35} \approx 0.5409$.
(b) If a random sample of $n = 35$ oil changes is selected, what is the probability the mean oil change time is less than 11 minutes? Solution: $z = (11 - 11.4)/0.5409 \approx -0.74$, so $P(Z < -0.74) = 0.23$.
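The normal-model calculations done above with tables or the calculator's normalcdf can be reproduced with SciPy. This is an added illustration, not from the slides:

```python
from math import sqrt
from scipy.stats import norm

# Pennies: P(xbar > 2.465) with mean 2.46, SE = 0.02/sqrt(10)
print(norm.sf(2.465, loc=2.46, scale=0.02 / sqrt(10)))   # ~0.215

# Water taxi: one man, then the mean of 20 men
print(norm.sf(175, loc=172, scale=29))                   # ~0.46
print(norm.sf(175, loc=172, scale=29 / sqrt(20)))        # ~0.32

# Oil change: P(xbar < 11) with mean 11.4, SE = 3.2/sqrt(35)
print(norm.cdf(11, loc=11.4, scale=3.2 / sqrt(35)))      # ~0.23
```

Small differences from the slide answers come from the slides rounding $z$ to two decimals before looking up the table.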
## 8.2 Distribution of the Sample Proportion

Point estimate of a population proportion: Suppose that a random sample of size $n$ is obtained from a population in which each individual either does or does not have a certain characteristic. The sample proportion, denoted $\hat{p}$ (read "p-hat"), is given by $\hat{p} = x/n$, where $x$ is the number of individuals in the sample with the specified characteristic. The sample proportion is a statistic that estimates the population proportion $p$.

Example 1 (computing a sample proportion): In a Quinnipiac University Poll conducted in May of 2008, 1,745 registered voters nationwide were asked whether they approved of the way George W. Bush was handling the economy; 349 responded "yes". Obtain a point estimate for the proportion of registered voters who approved. Solution: $\hat{p} = 349/1745 = 0.2$.

Example 2 (using simulation to describe the distribution of the sample proportion): According to a Time poll conducted in June of 2008, 42% of registered voters believed that gay and lesbian couples should be allowed to marry. Describe the sampling distribution of the sample proportion for samples of size $n = 10, 50, 100$. (Histograms of the simulated sampling distributions appear on the original slides.) Key points from Example 2:
- Shape: as the size of the sample $n$ increases, the shape of the sampling distribution of the sample proportion becomes approximately normal.
- Center: the mean of the sampling distribution of the sample proportion equals the population proportion $p$.
- Spread: the standard deviation of the sampling distribution of the sample proportion decreases as the sample size $n$ increases.

Sampling distribution of $\hat{p}$: For a simple random sample of size $n$ with population proportion $p$, the shape of the sampling distribution of $\hat{p}$ is approximately normal provided $np(1-p) \ge 10$; the mean of the sampling distribution of $\hat{p}$ is $p$; and the standard deviation is $\sqrt{p(1-p)/n}$. This model requires that the sampled values are independent; when sampling from finite populations, this assumption is verified by checking that the sample size $n$ is no more than 5% of the population size $N$ ($n \le 0.05N$). Regardless of whether $np(1-p) \ge 10$ or not, the mean of the sampling distribution of $\hat{p}$ is $p$, and the standard deviation is $\sqrt{p(1-p)/n}$.

Example 3: For the Time poll above ($p = 0.42$), suppose that we obtain a simple random sample of 50 voters and determine which believe that gay and lesbian couples should be allowed to marry. Describe the sampling distribution of the sample proportion. Solution: The sample of $n = 50$ is smaller than 5% of the population size (all registered voters in the U.S.), and $np(1-p) = 50(0.42)(0.58) = 12.18 \ge 10$. The sampling distribution of the sample proportion is therefore approximately normal with mean 0.42 and standard deviation $\sqrt{0.42 \cdot 0.58/50} \approx 0.0698$.

Example 4 (computing probabilities of a sample proportion): According to the Centers for Disease Control and Prevention, 18.8% of school-aged children, aged 6-11 years, were overweight in 2004.
(a) In a random sample of 90 school children aged 6-11 years, what is the probability that at least 19% are overweight? Solution: $n = 90$ is less than 5% of the population size and $np(1-p) = 90(0.188)(0.812) \approx 13.7 \ge 10$, so $\hat{p}$ is approximately normal with mean 0.188 and standard deviation $\sqrt{0.188 \cdot 0.812/90} \approx 0.0412$. Then $z = (0.19 - 0.188)/0.0412 \approx 0.05$, and $P(\hat{p} \ge 0.19) = P(Z > 0.05) = 1 - 0.5199 = 0.4801$; on the calculator, normalcdf(0.19, 10^99, 0.188, 0.0412) = 0.4801.
(b) Suppose in one random sample of 90 school children aged 6-11 years there were 24 overweight children. What might you conclude? Solution: $\hat{p} = 24/90 \approx 0.2667$, and $P(\hat{p} \ge 0.2667)$ = normalcdf(0.2667, 10^99, 0.188, 0.0412) = 0.028. We would only expect to see about 3 samples in 100 resulting in a sample proportion of 0.2667 or more, so this is an unusual sample if the true population proportion is 0.188.
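Example 4 in code, as an added sketch assuming SciPy (not part of the slides):

```python
from math import sqrt
from scipy.stats import norm

p, n = 0.188, 90
se = sqrt(p * (1 - p) / n)            # ~0.0412
print(n * p * (1 - p) >= 10)          # True: the normal approximation is justified

print(norm.sf(0.19, loc=p, scale=se))      # (a) ~0.48
print(norm.sf(24 / 90, loc=p, scale=se))   # (b) ~0.028 -> an unusual sample
```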
Next: practice problems with answers included (re-work them on your own). Problem 1: sample mean and probability for a sample mean. Problem 2: sample proportion and probability for a sample proportion.

Problem 1 (sample mean: flight search processing time): Consider a web application for a flight search. An investigator takes a sample of 100 flight searches and notes the web response time. Assume that the population average of ALL web searches is 15 sec with a standard deviation of 5 sec. Summary statistics calculated by statistical software (the MEANS procedure, analysis variable: time) for the sample of 100:

| N | Mean | Std Dev | Minimum | Maximum |
|---|---|---|---|---|
| 100 | 14.9955626 | 5.2117790 | 2.2461204 | 25.7383955 |

The estimated processing time is 14.99 seconds (the sample average). The standard error is $\sigma/\sqrt{n} = 5/\sqrt{100} = 0.5$.

What is the probability that in 100 flight searches, the average time to process the requests is less than 14 seconds? We can use the normal approximation: the sample average is normally distributed with mean 15 and standard deviation equal to the standard error, 0.5. P(x̄ < 14) = normalcdf(-10^99, 14, 15, 0.5) = 0.0228. There is only about a 2.3% chance that the average time to process 100 flight requests is less than 14 seconds.

Problem 2 (sample proportion: polygraph percentages): A study by a federal agency in 1983 concluded that polygraph (lie detector) tests given to truthful people have probability 0.2 of suggesting that the person is deceptive. A firm asks 20 job applicants about thefts from previous employers, using a polygraph to assess their truthfulness. All applicants were truthful. What is the chance that at least one will fail the test?

First compute the sample proportion and its standard error: the sample proportion has mean $p = 0.2$ (the same as the population proportion) and standard error $\sqrt{p(1-p)/n} = \sqrt{0.2 \cdot 0.8/20} \approx 0.09$. Thus the sample proportion is approximately normal with mean 0.2 and standard deviation 0.09. Next figure out what proportion 1 in 20 constitutes: $1/20 = 0.05$. So the probability is P(p̂ > 0.05) = normalcdf(0.05, 10^99, 0.2, 0.09) = 0.952. Conclusion: the chance that at least one applicant out of 20 will fail the polygraph test is 95.2%. That is extremely high!
2020-04-03 20:30:51
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8366737961769104, "perplexity": 641.5593521541884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370518622.65/warc/CC-MAIN-20200403190006-20200403220006-00407.warc.gz"}
https://www.physicsforums.com/threads/how-do-proportional-relationships-derive-physics-equations.855765/
# How do proportional relationships derive physics equations?

Tags:

1. Feb 5, 2016

### Rishabh Narula

In particular, f = Gm1m2/r^2? Sorry if my question sounds very irrelevant. If f is proportional to m1m2, it implies f = some constant times m1m2. Okay. At the same time, f is inversely proportional to r^2, so force = some other constant times 1/r^2. Okay. But in most places I see that what is done is they take the two proportional relationships with respect to f and say "THEY ARE COMBINED" into one, which is: f is proportional to m1m2 times 1/r^2, or f is proportional to m1m2/r^2. I'm just not getting how those two proportional relationships "combine" to give that one relationship, from which obviously the law of gravitation comes. Can you please explain why exactly two proportional relationships can be combined like that? Also, why doesn't it apply the same way with f proportional to m and f proportional to a, to give f proportional to ma and then f = k ma, rather than just f = ma? I really want to understand this. THANKS FOR YOUR TIME.

2. Feb 5, 2016

### davidhyte

So the only other alternative would be F = (some factor) x (Gm1m2/r^2). That factor could mean two things:
• it's a universal constant, in which case it's just a matter of the units of G;
• it's a quantity that depends on other physical properties such as time, substance, etc.

We haven't found any evidence of the latter. G really seems to be universal and constant. So I guess the answer to your question is that you need four propositions:
• the force is proportional to m1;
• the force is proportional to m2;
• the force is proportional to 1/r^2;
• the force isn't proportional to anything else.

Therefore F is proportional to m1m2/r^2 or, in other words, F = Gm1m2/r^2.

3. Feb 5, 2016

### HallsofIvy

Staff Emeritus

If I know that "f is proportional to h", then I know that, as long as everything else is held fixed, f is equal to a constant times h: f = Ch. Of course, a lot may be hidden in that "C": things that could be allowed to vary. If I also know that "f is inversely proportional to $r^2$", then I know that, everything except r being fixed, $f= \frac{K}{r^2}$. The two together are possible only if there is a "$\frac{1}{r^2}$" "hidden" in that "C" or, conversely, an "h" "hidden" in that "K". Replacing the "C" in f = Ch with $\frac{K}{r^2}$, we have $f= \frac{Kh}{r^2}$. Replacing the "K" in $f= \frac{K}{r^2}$ with Ch, we have $f= \frac{Ch}{r^2}$, exactly the same formula, differing only in what we call the "constant of proportionality".

4. Feb 5, 2016

### jbriggs444

f = k ma is correct. It is just that we have chosen our units of force, mass and acceleration so that k = 1 in those units. If you were to use pounds force, pounds mass and feet per second squared, then the corresponding k would be approximately 1/32.17 [32.17 is the standard acceleration of gravity, "g", expressed in feet per second squared].

5. Feb 6, 2016

### Rishabh Narula

Hmm. Very thankful to everyone who answered, but I think where I'm actually getting stuck is nothing high level. I'm just not getting why a proportional to b and a proportional to c implies a proportional to bc. I get that if b changes by a factor x then a changes by a factor x, and at the same time if c changes by y then that a.x would change by y, resulting in a net change of a into a.x.y. And yeah, when you see a proportional to bc it is evident that b, c getting changed by x, y makes a change by x.y. But I want some more derived proof, or something, I guess.
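A compact version of the proof the last post asks for, added here (not part of the thread), assuming $b$ and $c$ are the only quantities $a$ depends on and are nonzero:

```latex
\[
a \propto b \ (c\ \text{fixed}) \iff a(b,c) = \varphi(c)\,b,
\qquad
a \propto c \ (b\ \text{fixed}) \iff a(b,c) = \psi(b)\,c .
\]
\[
\varphi(c)\,b = \psi(b)\,c
\;\Longrightarrow\;
\frac{\varphi(c)}{c} = \frac{\psi(b)}{b} = K \ (\text{a constant})
\;\Longrightarrow\;
a = K\,b\,c .
\]
```

The middle step works because a function of $c$ alone can equal a function of $b$ alone for all $b$ and $c$ only if both are the same constant; that constant is what gets named $G$ (or $k$) once units are chosen.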
2017-08-23 12:07:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9078089594841003, "perplexity": 1118.8879809534437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886120194.50/warc/CC-MAIN-20170823113414-20170823133414-00227.warc.gz"}
https://ncatlab.org/nlab/show/Jan%20Zaanen
nLab Jan Zaanen

Selected writings

- Robert-Jan Slager, Andrej Mesaros, Vladimir Juricic, Jan Zaanen, The space group classification of topological band insulators, Nature Physics 9 98 (2013) [doi:10.1038/nphys2513, arXiv:1209.2610]

On strange metals, high $T_c$-superconductors and AdS/CMT duality:

category: people
2023-02-07 22:25:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 3, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42311668395996094, "perplexity": 13865.913901462356}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500641.25/warc/CC-MAIN-20230207201702-20230207231702-00175.warc.gz"}
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/1225/4/a/m/
# Properties Label 1225.4.a.m Level $1225$ Weight $4$ Character orbit 1225.a Self dual yes Analytic conductor $72.277$ Analytic rank $1$ Dimension $2$ CM no Inner twists $1$ # Related objects ## Newspace parameters Level: $$N$$ $$=$$ $$1225 = 5^{2} \cdot 7^{2}$$ Weight: $$k$$ $$=$$ $$4$$ Character orbit: $$[\chi]$$ $$=$$ 1225.a (trivial) ## Newform invariants Self dual: yes Analytic conductor: $$72.2773397570$$ Analytic rank: $$1$$ Dimension: $$2$$ Coefficient field: $$\Q(\sqrt{2})$$ Defining polynomial: $$x^{2} - 2$$ Coefficient ring: $$\Z[a_1, a_2]$$ Coefficient ring index: $$1$$ Twist minimal: no (minimal twist has level 35) Fricke sign: $$-1$$ Sato-Tate group: $\mathrm{SU}(2)$ ## $q$-expansion Coefficients of the $$q$$-expansion are expressed in terms of $$\beta = \sqrt{2}$$. We also show the integral $$q$$-expansion of the trace form. $$f(q)$$ $$=$$ $$q + ( -4 + \beta ) q^{2} + ( 1 + 4 \beta ) q^{3} + ( 10 - 8 \beta ) q^{4} + ( 4 - 15 \beta ) q^{6} + ( -24 + 34 \beta ) q^{8} + ( 6 + 8 \beta ) q^{9} +O(q^{10})$$ $$q + ( -4 + \beta ) q^{2} + ( 1 + 4 \beta ) q^{3} + ( 10 - 8 \beta ) q^{4} + ( 4 - 15 \beta ) q^{6} + ( -24 + 34 \beta ) q^{8} + ( 6 + 8 \beta ) q^{9} + ( -7 + 32 \beta ) q^{11} + ( -54 + 32 \beta ) q^{12} + ( 25 - 4 \beta ) q^{13} + ( 84 - 96 \beta ) q^{16} + ( -25 - 44 \beta ) q^{17} + ( -8 - 26 \beta ) q^{18} + ( -18 + 44 \beta ) q^{19} + ( 92 - 135 \beta ) q^{22} + ( -122 - 68 \beta ) q^{23} + ( 248 - 62 \beta ) q^{24} + ( -108 + 41 \beta ) q^{26} + ( 43 - 76 \beta ) q^{27} + ( -13 - 24 \beta ) q^{29} + ( 60 - 180 \beta ) q^{31} + ( -336 + 196 \beta ) q^{32} + ( 249 + 4 \beta ) q^{33} + ( 12 + 151 \beta ) q^{34} + ( -68 + 32 \beta ) q^{36} + ( -282 - 60 \beta ) q^{37} + ( 160 - 194 \beta ) q^{38} + ( -7 + 96 \beta ) q^{39} + ( 164 + 124 \beta ) q^{41} + ( 130 + 68 \beta ) q^{43} + ( -582 + 376 \beta ) q^{44} + ( 352 + 150 \beta ) q^{46} + ( -175 + 132 \beta ) q^{47} + ( -684 + 240 \beta ) q^{48} + ( -377 - 144 \beta ) q^{51} + ( 314 - 240 \beta ) q^{52} + ( 28 + 128 \beta ) q^{53} + ( -324 + 347 \beta ) q^{54} + ( 334 - 28 \beta ) q^{57} + ( 4 + 83 \beta ) q^{58} + 616 q^{59} + ( -168 - 108 \beta ) q^{61} + ( -600 + 780 \beta ) q^{62} + ( 1064 - 352 \beta ) q^{64} + ( -988 + 233 \beta ) q^{66} + ( 76 - 64 \beta ) q^{67} + ( 454 - 240 \beta ) q^{68} + ( -666 - 556 \beta ) q^{69} -952 q^{71} + ( 400 + 12 \beta ) q^{72} + ( 338 + 344 \beta ) q^{73} + ( 1008 - 42 \beta ) q^{74} + ( -884 + 584 \beta ) q^{76} + ( 220 - 391 \beta ) q^{78} + ( 507 - 248 \beta ) q^{79} + ( -727 - 120 \beta ) q^{81} + ( -408 - 332 \beta ) q^{82} + ( -188 - 600 \beta ) q^{83} + ( -384 - 142 \beta ) q^{86} + ( -205 - 76 \beta ) q^{87} + ( 2344 - 1006 \beta ) q^{88} + ( 108 + 44 \beta ) q^{89} + ( -132 + 296 \beta ) q^{92} + ( -1380 + 60 \beta ) q^{93} + ( 964 - 703 \beta ) q^{94} + ( 1232 - 1148 \beta ) q^{96} + ( 1371 - 220 \beta ) q^{97} + ( 470 + 136 \beta ) q^{99} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$2 q - 8 q^{2} + 2 q^{3} + 20 q^{4} + 8 q^{6} - 48 q^{8} + 12 q^{9} + O(q^{10})$$ $$2 q - 8 q^{2} + 2 q^{3} + 20 q^{4} + 8 q^{6} - 48 q^{8} + 12 q^{9} - 14 q^{11} - 108 q^{12} + 50 q^{13} + 168 q^{16} - 50 q^{17} - 16 q^{18} - 36 q^{19} + 184 q^{22} - 244 q^{23} + 496 q^{24} - 216 q^{26} + 86 q^{27} - 26 q^{29} + 120 q^{31} - 672 q^{32} + 498 q^{33} + 24 q^{34} - 136 q^{36} - 564 q^{37} + 320 q^{38} - 14 q^{39} + 328 q^{41} + 260 q^{43} - 1164 q^{44} + 704 q^{46} - 350 q^{47} - 1368 q^{48} - 754 q^{51} + 628 q^{52} + 56 q^{53} - 648 q^{54} + 668 q^{57} + 8 q^{58} + 1232 q^{59} - 336 q^{61} 
- 1200 q^{62} + 2128 q^{64} - 1976 q^{66} + 152 q^{67} + 908 q^{68} - 1332 q^{69} - 1904 q^{71} + 800 q^{72} + 676 q^{73} + 2016 q^{74} - 1768 q^{76} + 440 q^{78} + 1014 q^{79} - 1454 q^{81} - 816 q^{82} - 376 q^{83} - 768 q^{86} - 410 q^{87} + 4688 q^{88} + 216 q^{89} - 264 q^{92} - 2760 q^{93} + 1928 q^{94} + 2464 q^{96} + 2742 q^{97} + 940 q^{99} + O(q^{100})$$

## Embeddings

For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below. For more information on an embedded modular form you can click on its label.

| Label | $$\iota_m(\nu)$$ | $$a_2$$ | $$a_3$$ | $$a_4$$ | $$a_5$$ | $$a_6$$ | $$a_7$$ | $$a_8$$ | $$a_9$$ | $$a_{10}$$ |
|---|---|---|---|---|---|---|---|---|---|---|
| 1.1 | −1.41421 | −5.41421 | −4.65685 | 21.3137 | 0 | 25.2132 | 0 | −72.0833 | −5.31371 | 0 |
| 1.2 | 1.41421 | −2.58579 | 6.65685 | −1.31371 | 0 | −17.2132 | 0 | 24.0833 | 17.3137 | 0 |

## Atkin-Lehner signs

| $$p$$ | Sign |
|---|---|
| $$5$$ | $$1$$ |
| $$7$$ | $$-1$$ |

## Inner twists

This newform does not admit any (nontrivial) inner twists.

## Twists

By twisting character orbit:

| Char | Parity | Ord | Mult | Type | Twist | Dim |
|---|---|---|---|---|---|---|
| 1.a | even | 1 | 1 | trivial | 1225.4.a.m | 2 |
| 5.b | even | 2 | 1 | | 245.4.a.k | 2 |
| 7.b | odd | 2 | 1 | | 175.4.a.c | 2 |
| 15.d | odd | 2 | 1 | | 2205.4.a.u | 2 |
| 21.c | even | 2 | 1 | | 1575.4.a.z | 2 |
| 35.c | odd | 2 | 1 | | 35.4.a.b | 2 |
| 35.f | even | 4 | 2 | | 175.4.b.c | 4 |
| 35.i | odd | 6 | 2 | | 245.4.e.h | 4 |
| 35.j | even | 6 | 2 | | 245.4.e.i | 4 |
| 105.g | even | 2 | 1 | | 315.4.a.f | 2 |
| 140.c | even | 2 | 1 | | 560.4.a.r | 2 |
| 280.c | odd | 2 | 1 | | 2240.4.a.bn | 2 |
| 280.n | even | 2 | 1 | | 2240.4.a.bo | 2 |

By twisted newform orbit:

| Twist | Dim | Char | Parity | Ord | Mult | Type |
|---|---|---|---|---|---|---|
| 35.4.a.b | 2 | 35.c | odd | 2 | 1 | |
| 175.4.a.c | 2 | 7.b | odd | 2 | 1 | |
| 175.4.b.c | 4 | 35.f | even | 4 | 2 | |
| 245.4.a.k | 2 | 5.b | even | 2 | 1 | |
| 245.4.e.h | 4 | 35.i | odd | 6 | 2 | |
| 245.4.e.i | 4 | 35.j | even | 6 | 2 | |
| 315.4.a.f | 2 | 105.g | even | 2 | 1 | |
| 560.4.a.r | 2 | 140.c | even | 2 | 1 | |
| 1225.4.a.m | 2 | 1.a | even | 1 | 1 | trivial |
| 1575.4.a.z | 2 | 21.c | even | 2 | 1 | |
| 2205.4.a.u | 2 | 15.d | odd | 2 | 1 | |
| 2240.4.a.bn | 2 | 280.c | odd | 2 | 1 | |
| 2240.4.a.bo | 2 | 280.n | even | 2 | 1 | |

## Hecke kernels

This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{4}^{\mathrm{new}}(\Gamma_0(1225))$$: $$T_{2}^{2} + 8 T_{2} + 14$$, $$T_{3}^{2} - 2 T_{3} - 31$$, $$T_{19}^{2} + 36 T_{19} - 3548$$.

## Hecke characteristic polynomials

| $p$ | $F_p(T)$ |
|---|---|
| $2$ | $$14 + 8 T + T^{2}$$ |
| $3$ | $$-31 - 2 T + T^{2}$$ |
| $5$ | $$T^{2}$$ |
| $7$ | $$T^{2}$$ |
| $11$ | $$-1999 + 14 T + T^{2}$$ |
| $13$ | $$593 - 50 T + T^{2}$$ |
| $17$ | $$-3247 + 50 T + T^{2}$$ |
| $19$ | $$-3548 + 36 T + T^{2}$$ |
| $23$ | $$5636 + 244 T + T^{2}$$ |
| $29$ | $$-983 + 26 T + T^{2}$$ |
| $31$ | $$-61200 - 120 T + T^{2}$$ |
| $37$ | $$72324 + 564 T + T^{2}$$ |
| $41$ | $$-3856 - 328 T + T^{2}$$ |
| $43$ | $$7652 - 260 T + T^{2}$$ |
| $47$ | $$-4223 + 350 T + T^{2}$$ |
| $53$ | $$-31984 - 56 T + T^{2}$$ |
| $59$ | $$( -616 + T )^{2}$$ |
| $61$ | $$4896 + 336 T + T^{2}$$ |
| $67$ | $$-2416 - 152 T + T^{2}$$ |
| $71$ | $$( 952 + T )^{2}$$ |
| $73$ | $$-122428 - 676 T + T^{2}$$ |
| $79$ | $$134041 - 1014 T + T^{2}$$ |
| $83$ | $$-684656 + 376 T + T^{2}$$ |
| $89$ | $$7792 - 216 T + T^{2}$$ |
| $97$ | $$1782841 - 2742 T + T^{2}$$ |
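A quick consistency check one can run on this data, as an added sketch using SymPy (not part of the LMFDB page): the embedded values of $a_2$ must be the roots of the Hecke kernel polynomial $T^2 + 8T + 14$, and coefficients at coprime indices must multiply, e.g. $a_6 = a_2 a_3$.

```python
import sympy as sp

beta = sp.sqrt(2)
a2, a3, a6 = -4 + beta, 1 + 4 * beta, 4 - 15 * beta   # from the q-expansion above

# a2 is a root of the T_2 kernel polynomial T^2 + 8T + 14
print(sp.simplify(a2**2 + 8 * a2 + 14))   # -> 0

# Hecke multiplicativity at coprime indices: a6 = a2 * a3
print(sp.simplify(a2 * a3 - a6))          # -> 0

# Numerical values match the embeddings table (beta -> +sqrt(2) and -sqrt(2))
print(sp.N(a2), sp.N(-4 - beta))          # -2.58579..., -5.41421...
```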
2022-01-21 14:01:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9703300595283508, "perplexity": 9392.64572152056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303385.49/warc/CC-MAIN-20220121131830-20220121161830-00257.warc.gz"}
https://hal-cea.archives-ouvertes.fr/cea-02979279
# Evaluation of Oxygen Stoichiometry during the Sintering of (U, Pu)O$_2$ Fuel

Abstract: For ceramics such as (U, Pu)O$_2$, diffusion phenomena occurring during sintering are affected by the oxygen content of the atmosphere. The latter imposes the nature and the concentration of the structural defects which govern diffusion mechanisms inside the material. The oxygen partial pressure, pO$_2$, of the sintering gas in equilibrium with MOX pellets needs to be precisely controlled; otherwise a large dispersion in critical parameters for fuel manufacturing could be induced. Among them, the oxygen-over-metal ratio (O/M) after sintering defines many properties of the fuel in operation (thermal conductivity, mechanical properties, ...). SFR fuels have to be hypostoichiometric, with an O/M ratio close to 1.98. To achieve this, it is crucial to understand the relation between the sintering atmosphere and the fuel along the thermal cycle. In this study, oxygen potential monitoring of the sintering gas was carried out by measuring the oxygen partial pressure (pO$_2$) at the outlet of a dilatometer by means of a zirconia probe.

Document type: Conference papers
Submitted on: Tuesday, October 27, 2020
Files: 201400002939.pdf (produced by the author(s))
Identifier: HAL Id: cea-02979279, version 1
Citation: S. Vaudez, J. Léchelle, S. Berzati. Evaluation of Oxygen Stoichiometry during the Sintering of (U, Pu)O$_2$ Fuel. Plutonium Futures: The Science 2014, ANS, Sep 2014, Las Vegas, United States. ⟨cea-02979279⟩
2021-06-14 16:05:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6138733625411987, "perplexity": 10783.381237414118}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487612537.23/warc/CC-MAIN-20210614135913-20210614165913-00518.warc.gz"}
http://kg15.herokuapp.com/abstracts/170
# Ryser's conjecture for multipartite hypergraphs

Ian Wanless, Monash University

Minisymposium: COMBINATORICS

Content: An *$r$-partite hypergraph* has a partition of the vertices into disjoint sets $V_1,\dots,V_r$, and every edge includes exactly one vertex from each $V_i$. A *matching* in a hypergraph $H$ is a set of disjoint edges; the largest size of a matching in $H$ is denoted $\nu(H)$. A *cover* in a hypergraph is a set of vertices which meets every edge; the size of the smallest cover in $H$ is denoted $\tau(H)$. Ryser conjectured that $\tau(H)\le(r-1)\nu(H)$ for $r$-partite hypergraphs. Define $f(r)$ to be the smallest number of edges in an $r$-partite hypergraph $H$ with $\nu(H)=1$ and $\tau(H)\ge r-1$ (if such an $H$ exists!). I will discuss work from two recent papers in which we:

- prove the special case of Ryser's conjecture where $r\le9$ and $H$ is linear (meaning each pair of edges meets at a unique vertex);
- disprove a strengthened version of Ryser's conjecture proposed by Aharoni;
- find the values of $f(6)$ and $f(7)$ and improve the general lower bound on $f(r)$.
2021-04-13 05:10:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9970200061798096, "perplexity": 240.02959108437986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038072082.26/warc/CC-MAIN-20210413031741-20210413061741-00080.warc.gz"}
http://aas.org/archives/BAAS/v26n4/aas185/abs/S10609.html
Session 106 -- Galaxies: Photometry and Spectrophotometry
Display presentation, Thursday, 12, 1995, 9:20am - 6:30pm

## [106.09] Properties of Field Galaxies to $I=22$ from the Medium Deep Survey

A. C. Phillips, D. A. Forbes, C. Gronwall, G. D. Illingworth, D. C. Koo (Lick Obs., UCSC), R. E. Griffiths, K. Ratnatunga (JHU), R. S. Ellis (IoA, Cambridge Univ.), R. F. Green (NOAO), J. P. Huchra (CfA), J. A. Tyson (Bell Labs), R. A. Windhorst (ASU)

We present a complete sample of 72 $I\lesssim 22$ field galaxies from two fields observed with the WFPC-II as part of the Medium Deep Survey. Basic observable parameters ($I$ and $V$ photometry, sizes and intensity profiles) have been measured, and are discussed in the context of morphological type and apparent structural peculiarities. The median redshift of the sample is expected to be $z\sim 0.5$. Despite lookback times approaching half the age of the universe, we find little evidence for strong evolutionary effects. In particular, we see no extreme color gradients in the galaxies. The size-vs-magnitude relationship shows no indication that this distant sample differs strongly from the local galaxy population, beyond changes predicted by passive evolution and assuming a local luminosity function improved from that of Koo, Gronwall & Bruzual (1993).

(This work was supported by NASA/HST grants GO-2684-0*-94A from STScI, which is operated by AURA, Inc., under NASA contract NAS5-26555.)
2015-03-03 09:07:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.675711452960968, "perplexity": 10364.141029689514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463165.18/warc/CC-MAIN-20150226074103-00123-ip-10-28-5-156.ec2.internal.warc.gz"}
https://en.wikipedia.org/wiki/Maximal_lotteries
# Maximal lotteries

Maximal lotteries refers to a probabilistic voting system first considered by the French mathematician and social scientist Germain Kreweras[1] in 1965. The method uses preferential ballots and returns so-called maximal lotteries, i.e., probability distributions over the alternatives that are weakly preferred to any other probability distribution. Maximal lotteries satisfy the Condorcet criterion,[2] the Smith criterion,[2] reversal symmetry, polynomial runtime, and probabilistic versions of reinforcement,[3] participation,[4] and independence of clones.[3]

Maximal lotteries are equivalent to mixed maximin strategies (or Nash equilibria) of the symmetric zero-sum game given by the pairwise majority margins. As such, they have a natural interpretation in terms of electoral competition between two political parties.[5] Moreover, they can be computed using linear programming. The voting system that returns all maximal lotteries is axiomatically characterized as the only one satisfying probabilistic versions of population-consistency (a weakening of reinforcement) and composition-consistency (a strengthening of independence of clones).[3] A social welfare function that top-ranks maximal lotteries is characterized using Arrow's independence of irrelevant alternatives and Pareto efficiency.[6] Maximal lotteries satisfy a strong notion of Pareto efficiency and a weak notion of strategyproofness.[7] In contrast to random dictatorship, maximal lotteries do not satisfy the standard notion of strategyproofness. Also, maximal lotteries are not monotonic in probabilities, i.e., it is possible that the probability of an alternative decreases when this alternative is ranked up. However, the probability of the alternative will remain positive.[8]

Maximal lotteries or variants thereof have been rediscovered multiple times by economists,[9] mathematicians,[2][10] political scientists, philosophers,[11] and computer scientists.[12] In particular, the support of maximal lotteries, which is known as the essential set[13] or the bipartisan set, has been studied in detail.[9][14] Similar ideas appear also in the study of reinforcement learning and evolutionary biology to explain the multiplicity of co-existing species.[15][16]

## Collective preferences over lotteries

The input to this voting system consists of the agents' ordinal preferences over outcomes (not lotteries over outcomes), but a relation on the set of lotteries is constructed in the following way: if $p$ and $q$ are different lotteries over outcomes, $p\succ q$ if the expected value of the margin of victory of an outcome selected with distribution $p$ in a head-to-head vote against an outcome selected with distribution $q$ is positive. While this relation is not necessarily transitive, it does always contain at least one maximal element. It is possible that several such maximal lotteries exist, but unicity can be proven in the case where the margins between any pair of alternatives is always an odd number.[17] This is the case for instance if there is an odd number of voters who all hold strict preferences over the alternatives. Following the same argument, unicity holds for the original "bipartisan set", which is defined as the support of the maximal lottery of a tournament game.[8]
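Since maximal lotteries are maximin strategies of the majority-margin game, they can be computed with an off-the-shelf LP solver. The sketch below is an added illustration (not from the article), assuming SciPy; it solves the worked example of the next section and recovers $p = (3/5, 1/5, 1/5)$:

```python
import numpy as np
from scipy.optimize import linprog

# Pairwise majority margins from the example below; rows/columns ordered (a, b, c).
M = np.array([[0, 1, -1],
              [-1, 0, 3],
              [1, -3, 0]], dtype=float)

n = M.shape[0]
# Variables: x = (p_a, p_b, p_c, v); maximize the game value v <=> minimize -v.
c = np.zeros(n + 1); c[-1] = -1.0
# Guarantee constraints: v - (M^T p)_j <= 0 for every pure column strategy j,
# i.e. p must get at least v against each alternative.
A_ub = np.hstack([-M.T, np.ones((n, 1))])
b_ub = np.zeros(n)
# p is a probability distribution.
A_eq = np.array([[1.0] * n + [0.0]])
b_eq = np.array([1.0])
bounds = [(0, 1)] * n + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[:n])   # -> approximately [0.6, 0.2, 0.2]
```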
## Example

Suppose there are five voters who have the following preferences over three alternatives:

- 2 voters: $a\succ b\succ c$
- 2 voters: $b\succ c\succ a$
- 1 voter: $c\succ a\succ b$

The pairwise preferences of the voters can be represented in the following skew-symmetric matrix, where the entry for row $x$ and column $y$ denotes the number of voters who prefer $x$ to $y$ minus the number of voters who prefer $y$ to $x$ (rows and columns ordered $a, b, c$):

$$\begin{array}{c|ccc} & a & b & c \\ \hline a & 0 & 1 & -1 \\ b & -1 & 0 & 3 \\ c & 1 & -3 & 0 \end{array}$$

This matrix can be interpreted as a zero-sum game and admits a unique Nash equilibrium (or minimax strategy) $p$ where $p(a)=3/5$, $p(b)=1/5$, $p(c)=1/5$. By definition, this is also the unique maximal lottery of the preference profile above. The example was carefully chosen not to have a Condorcet winner. Many preference profiles admit a Condorcet winner, in which case the unique maximal lottery will assign probability 1 to the Condorcet winner.

## References

1. G. Kreweras. Aggregation of preference orderings. In Mathematics and Social Sciences I: Proceedings of the seminars of Menthon-Saint-Bernard, France (1-27 July 1960) and of Gösing, Austria (3-27 July 1962), pages 73-79, 1965.
2. P. C. Fishburn. Probabilistic social choice based on simple voting comparisons. Review of Economic Studies, 51(4):683-692, 1984.
3. F. Brandl, F. Brandt, and H. G. Seedig. Consistent probabilistic social choice. Econometrica, 84(5), pages 1839-1880, 2016.
4. F. Brandl, F. Brandt, and J. Hofbauer. Welfare Maximization Entices Participation. Games and Economic Behavior, 14, pages 308-314, 2019.
5. J.-F. Laslier. Interpretation of electoral mixed strategies. Social Choice and Welfare 17: pages 283-292, 2000.
6. F. Brandl and F. Brandt. Arrovian Aggregation of Convex Preferences. Econometrica. Forthcoming.
7. H. Aziz, F. Brandt, and M. Brill. On the Tradeoff between Economic Efficiency and Strategyproofness. Games and Economic Behavior, 110, pages 1-18, 2018.
8. J.-F. Laslier. Tournament Solutions and Majority Voting. Springer-Verlag, 1997.
9. G. Laffond, J.-F. Laslier, and M. Le Breton. The bipartisan set of a tournament game. Games and Economic Behavior, 5(1):182-201, 1993.
10. D. C. Fisher and J. Ryan. Tournament games and positive tournaments. Journal of Graph Theory, 19(2):217-236, 1995.
11. D. S. Felsenthal and M. Machover. After two centuries should Condorcet's voting procedure be implemented? Behavioral Science, 37(4):250-274, 1992.
12. R. L. Rivest and E. Shen. An optimal single-winner preferential voting system based on game theory. In Proceedings of 3rd International Workshop on Computational Social Choice, pages 399-410, 2010.
13. B. Dutta and J.-F. Laslier. Comparison functions and choice correspondences. Social Choice and Welfare, 16:513-532, 1999.
14. F. Brandt, M. Brill, H. G. Seedig, and W. Suksompong. On the structure of stable tournament solutions. Economic Theory, 65(2):483-507, 2018.
15. B. Laslier and J.-F. Laslier. Reinforcement learning from comparisons: Three alternatives are enough, two are not. Annals of Applied Probability 27(5): 2907-2925, 2017.
16. Jacopo Grilli, György Barabás, Matthew J. Michalska-Smith and Stefano Allesina. Higher-order interactions stabilize dynamics in competitive network models. Nature 548: 210-214, 2017.
17. Gilbert Laffond, Jean-François Laslier and Michel Le Breton. A theorem on two-player symmetric zero-sum games. Journal of Economic Theory 72: 426-431, 1997.
2020-01-26 23:27:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 19, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8420807123184204, "perplexity": 2889.0362223994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251694071.63/warc/CC-MAIN-20200126230255-20200127020255-00387.warc.gz"}
https://par.nsf.gov/biblio/10319591
Transients from the Cataclysmic Deaths of Cataclysmic Variables

Abstract: We explore the observational appearance of the merger of a low-mass star with a white dwarf (WD) binary companion. We are motivated by recent work finding that multiple tensions between the observed properties of cataclysmic variables (CVs) and standard evolution models are resolved if a large fraction of CV binaries merge as a result of unstable mass transfer. Tidal disruption of the secondary forms a geometrically thick disk around the WD, which subsequently accretes at highly super-Eddington rates. Analytic estimates and numerical hydrodynamical simulations reveal that outflows from the accretion flow unbind a large fraction (≳90%) of the secondary at velocities ∼500-1000 km s$^{-1}$ within days of the merger. Hydrogen recombination in the expanding ejecta powers optical transient emission lasting about a month with a luminosity ≳10$^{38}$ erg s$^{-1}$, similar to slow classical novae and luminous red novae from ordinary stellar mergers. Over longer timescales the mass accreted by the WD undergoes hydrogen shell burning, inflating the remnant into a giant of luminosity ∼300-5000 L$_{\odot}$, effective temperature $T_{\mathrm{eff}} \approx 3000$ K, and lifetime ∼10$^{4}$-10$^{5}$ yr. We predict that ∼10$^{3}$-10$^{4}$ Milky Way giants are CV merger products, more »

NSF-PAR ID: 10319591
Journal Name: The Astrophysical Journal
Volume: 923
Issue: 1
ISSN: 0004-637X
2022-12-01 16:52:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5994653105735779, "perplexity": 3595.049053009675}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710829.5/warc/CC-MAIN-20221201153700-20221201183700-00158.warc.gz"}
https://www.gradesaver.com/textbooks/science/physics/college-physics-4th-edition/chapter-27-conceptual-questions-page-1039/6
## College Physics (4th Edition)

As the frequency $f$ increases, the maximum kinetic energy of the ejected electrons also increases. We can write an expression for the maximum kinetic energy: $K_{max} = hf - \phi$. We can see that as the frequency $f$ increases, the maximum kinetic energy of the ejected electrons also increases. This is quite intuitive, as higher-frequency photons carry more energy than lower-frequency photons.
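To make the relation concrete, here is a small numeric sketch (added here, not from the textbook; the work function value is an assumption chosen for illustration, roughly that of sodium):

```python
# K_max = h*f - phi for the photoelectric effect
h_eV = 4.1357e-15           # Planck's constant in eV*s
phi = 2.28                  # assumed work function in eV (illustrative value)

for f in (6.0e14, 8.0e14):  # two illumination frequencies, in Hz
    K_max = h_eV * f - phi
    print(f, round(K_max, 2))   # K_max grows with f: ~0.20 eV, then ~1.03 eV
```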
2019-11-18 13:16:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8870508074760437, "perplexity": 196.8815202227115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669795.59/warc/CC-MAIN-20191118131311-20191118155311-00511.warc.gz"}
https://techcommunity.microsoft.com/t5/office-365-networking/asking-for-help-connectivity-office-com-survey-on-network/td-p/2079456
Microsoft

# Asking for help: Connectivity.office.com survey on network location (office or residence)

We have a survey on https://connectivity.office.com that we would like to get some support completing. The survey asks what type of location you are running the test from, and we will also capture a few anonymous network measurements that help detect whether this is a residential network or an office network. We would appreciate it if you could sign in to the tool before starting, since we will prioritize those survey results over the anonymous ones; however, your identity information is not associated with the result. Hope you will take a look.
2022-08-09 20:03:15
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8250064253807068, "perplexity": 1114.6290785669041}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571086.77/warc/CC-MAIN-20220809185452-20220809215452-00516.warc.gz"}
https://www.gamedev.net/forums/topic/517966-stl-vector-resize-on-msvc-broken/
STL vector resize on MSVC broken?

Changing the ordering of the resizes below gives me different results: if it's this way it works; if I swap some lines, it results in corrupted data. All arrays start off at size 0 except tuftGuides, which has some data and I expand it here. I read on the Web that resize() to a larger size adds elements while keeping the existing ones, but this is not happening, depending on how I order the calls below >:( The vectors hold either floats or structs of several floats, where the structs have default and copy constructors and assignment operators defined and working fine. There's no issue in Debug build. I'm using Visual Studio 2008. There's no exception thrown. A few of the vectors are class members, and the other few are local to the function.

```cpp
try {
    tuftGuides.resize(numTufts * LAYERS);
    invTuftSz.resize(numTufts);
    sineFactors.resize(numTufts);
    hairSecs.resize(numHairs * LAYERS);
    hairDia.resize(numHairs * LAYERS);
    hairOffsets.resize(numHairs * LAYERS);
    nears.resize(numHairs);
}
catch (...)
.....
```

What do I do?

Reply 1:

Quote: STL vector resize on MSVC broken?

Nope [smile] I suspect that the constructor, assignment operator or destructor for one (or more) of your aggregate types is incorrect. Can you explain how the corrupted data manifests itself?

Reply 2:

Visually. I'm using these vectors to generate fur for rendering. Above the code I posted, I fill part of tuftGuides with Poisson disk sampling, and it's just using push_back() (since I don't know the number of hair positions that will be generated). In the code after what I posted, I derive the other layers. Rearranging, such as putting the hairSecs on top, gives me screwed-up hairs with what are either missing or dark sections along their length, I can't tell exactly. I could post screenshots but I don't think it would be very informative. The vectors are of float, Float2 (see below), and Float3 (similar to Float2).

```cpp
union Float2 {
    inline Float2(void);
    inline Float2(float const, float const = 0.0f);
    inline Float2(float const []);
    inline Float2(Float2 const &);
    inline Float2 &operator=(Float2 const &);

    struct {
        float x;
        float y;
    };
    float a[2];
};

inline Float2::Float2(void) {}

inline Float2::Float2(float const nx, float const ny) : x(nx), y(ny) {}

inline Float2::Float2(float const n[]) : x(n[0]), y(n[1]) {}

inline Float2::Float2(Float2 const &other) : x(other.x), y(other.y) {}

inline Float2 &Float2::operator=(Float2 const &other) {
    if (this == &other)
        return *this;
    x = other.x;
    y = other.y;
    return *this;
}
```

Reply 3:

How large are you making those vectors?

Reply 4:

Couple MB for the largest one.

Reply 5:

Resize uses the default constructor, but it appears your default constructor leaves the union uninitialized. Could this be a problem? Also, I don't think you need to define a copy constructor/assignment operator if all they do is a shallow copy.
2018-02-21 20:28:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2202313244342804, "perplexity": 7432.604748848778}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813803.28/warc/CC-MAIN-20180221202619-20180221222619-00199.warc.gz"}
https://quantumcomputing.stackexchange.com/questions/13195/grover-search-with-different-diffusion-operators/13290#13290
# Grover search with different diffusion operators

I was reading about the Grover search algorithm on https://qiskit.org/textbook/ch-algorithms/grover.html#example. I understood the method but I have a few questions. My question regards the two-qubit case. Does the diffusion operator $D = 2|s\rangle\langle s| - 1$ depend upon the initial state, i.e. $|+\rangle|+\rangle$, and the marked state?

Actually, I was reading the article https://journals.aps.org/pra/pdf/10.1103/PhysRevA.68.022306, which had the equation $$-U_{S_j}\,U_w\,|S_j\rangle = |w\rangle$$ with $U_x = 1 - 2|x\rangle\langle x|$, $S_1 = \left(\frac{|0\rangle + |1\rangle}{\sqrt{2}}\right)^{\otimes 2}$, and $w$ the marked state. The other $S_j$'s can be, for instance, the states $|+\rangle|-\rangle$, $|-\rangle|-\rangle$, $|-\rangle|+\rangle$, etc., with 16 such $S_j$ in total. My question is how one makes the diffusion operator for a state $|+\rangle|-\rangle$. As an example, the table in the article states that for $j = 2$, $S_2 = |+\rangle|-\rangle$ and $$-U_{S_2}\,U_{10}\,|S_1\rangle = -|00\rangle,$$ where $|10\rangle = |w\rangle$ is the marked state. Can somebody explain how this equation came about? Can somebody at least hint at some references?

## 1 Answer

1. It should be clear that the core of the Grover algorithm includes 3 steps: a) prepare the initial state $|s\rangle$; b) apply $U = 1 - 2|\omega\rangle\langle \omega|$; c) apply $D = 2|s\rangle\langle s| - 1$; then repeat steps b and c.
2. In the original Grover algorithm, the diffusion operator is fixed as $D = 2|s\rangle\langle s| - 1$, which you can say depends upon the initial state. Indeed, in the image in the Qiskit textbook you can see the initial state is $|s\rangle = H^{\otimes n}|0\rangle^{\otimes n}$.
3. The paper you reference extends the Grover algorithm; in particular, it extends the diffusion operator from $|s\rangle$ (named $\left|S_{1}\right\rangle$) to another 15 states $\left|S_{j}\right\rangle$.
4. For the equation you mentioned in the case of $j = 2$, the detailed derivation was given as a handwritten image in the original answer; the tensor product formula can be learned from the linked reference.

• Can you explain your simplification of the equation $U_{S_2}U_{10}|S_1\rangle$? Aug 12 '20 at 21:58
• Yes, it's a little bit complicated so I ignored some middle derivations. Basically you can follow my handwritten notes, plugging the $U_{S_{2}}$ and $U_{10}$ of the first row into the equation. The tricky part is how to get the last 3 rows; you can use that tensor product formula and shrink it one by one. – kita Aug 12 '20 at 23:31
• Yes, exactly, that was what I was asking. Aug 13 '20 at 7:34
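Since the answer's handwritten derivation only survives as a missing image, here is a reconstructed version of the $j = 2$ computation (my own working, added here, using only the definitions quoted above with $U_x = 1 - 2|x\rangle\langle x|$):

```latex
\begin{align*}
|S_1\rangle &= \tfrac{1}{2}\left(|00\rangle+|01\rangle+|10\rangle+|11\rangle\right),
\qquad
|S_2\rangle = \tfrac{1}{2}\left(|00\rangle-|01\rangle+|10\rangle-|11\rangle\right),\\
U_{10}\,|S_1\rangle &= |S_1\rangle - 2\,\langle 10|S_1\rangle\,|10\rangle
  = \tfrac{1}{2}\left(|00\rangle+|01\rangle-|10\rangle+|11\rangle\right),\\
\langle S_2|\,U_{10}\,|S_1\rangle &= \tfrac{1}{4}\,(1 - 1 - 1 - 1) = -\tfrac{1}{2},\\
U_{S_2}\,U_{10}\,|S_1\rangle &= U_{10}|S_1\rangle - 2\left(-\tfrac{1}{2}\right)|S_2\rangle
  = U_{10}|S_1\rangle + |S_2\rangle = |00\rangle,\\
\Rightarrow\quad -U_{S_2}\,U_{10}\,|S_1\rangle &= -|00\rangle .
\end{align*}
```

The sign pattern of $|S_2\rangle$ is what rotates the post-oracle state onto a single basis state; with $|S_1\rangle$ in both slots, the same two steps return the marked state itself: $-U_{S_1}U_{10}|S_1\rangle = |10\rangle$.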
2021-12-01 22:20:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 26, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.7966911196708679, "perplexity": 368.56895366464556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964360951.9/warc/CC-MAIN-20211201203843-20211201233843-00443.warc.gz"}
https://www.physicsforums.com/threads/calculating-e-mc-2-using-foot-pounds-of-force.699309/
# Calculating e=mc^2 using foot-pounds of force 1. Jun 29, 2013 ### markteller I have seen the on-line calculators, but would like to see how the formula works in full detail. I have seen the kilograms / meters / second example already. The second part of the question is, what is the actual formula for converting foot-pounds of force to newton meters? Again, the actual details. Thanks... Last edited: Jun 29, 2013 2. Jun 29, 2013 ### DrewD What sort of details are you looking for? For the first part, if you just want to calculate the rest energy, then you plug in $m$ and $c$ and get a number. Deriving the formula requires a bit more effort and can be found easily online. If you choose meters, kilograms, and seconds, then you get the energy in joules. If you want it in ft*lbs then you use the conversion factor. According to the all-powerful Google, it is $1\ \mathrm{J}=0.7375\ \mathrm{ft}\cdot\mathrm{lb}$. If you wanted to, you could multiply the individual conversion factors to get there. That is, you could combine the conversions from meters to feet and from newtons to lbs (of force) and get the same number. Is this what you are asking about, or did I misunderstand your question? 3. Jun 30, 2013 ### markteller Thanks! I am looking to convert 1 kilogram of mass into foot-pounds using e=mc^2 and without any unnecessary conversions. I want to see how the final number is arrived at. I then want to see how foot-pounds are converted back to newton meters, which are more typical for e=mc^2. Again, I know there are online calculators, but they don't educate the mind :) 4. Jun 30, 2013 ### Staff: Mentor In the spirit of teaching a man how to fish versus simply giving him a fish, it looks like you might benefit from a tutorial on "how to convert units". Here's the first one I found with a Google search on that phrase:
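Putting the two steps together, here is a minimal Python sketch (my own illustration of the conversion chain DrewD describes, using the exact definitions of the foot and the pound-force) that computes $mc^2$ for 1 kg and converts it to foot-pounds:

```python
c = 299_792_458.0          # speed of light in m/s (exact by definition)
m = 1.0                    # mass in kg

E_joules = m * c**2        # 1 J = 1 N*m, so this is also in newton-meters

# Build the J -> ft*lbf factor from the individual definitions:
# 1 ft = 0.3048 m (exact) and 1 lbf = 4.4482216152605 N (exact).
J_per_ftlb = 0.3048 * 4.4482216152605    # ~1.3558 J per ft*lbf

E_ftlb = E_joules / J_per_ftlb
print(f"{E_joules:.4e} J = {E_ftlb:.4e} ft*lbf")
# -> 8.9876e+16 J = 6.6288e+16 ft*lbf
```

Note that 1/1.3558 ≈ 0.7376, matching the conversion factor quoted above.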
2016-10-01 12:08:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5432960391044617, "perplexity": 690.9918680649228}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738662856.94/warc/CC-MAIN-20160924173742-00224-ip-10-143-35-109.ec2.internal.warc.gz"}
https://socratic.org/questions/a-block-of-charge-q-and-mass-m-is-connected-to-a-spring-of-constant-k-an-electri#603893
# A block of charge q and mass m is connected to a spring of constant k. An electric field E exists parallel to the ground. The block is released from rest with the spring unstretched. Find the maximum displacement. Apr 29, 2018 $\frac{2 E q}{k}$ #### Explanation: Newton's 2nd law: • $F = m a$ Here: • $F = E q - k x$ The spring linearly opposes displacement from the equilibrium position, hence the negative term and the harmonic oscillation. Hence, equation of motion: $E q - k x = m \ddot{x}$ • $\implies \ddot{x} + \frac{k}{m} x = \frac{E q}{m}$ General solution: • $x = A \cos \omega t + B \sin \omega t + \frac{E q}{k}$, where ${\omega}^{2} = \frac{k}{m}$ With the initial conditions: • $x \left(0\right) = 0$ $\implies x = B \sin \omega t + \frac{E q}{k} \left(1 - \cos \omega t\right)$ • $x ' \left(0\right) = 0$ $x ' = \omega B \cos \omega t + \omega \frac{E q}{k} \sin \omega t \implies B = 0$ So the governing equation is: • $x = \frac{E q}{k} \left(1 - \cos \sqrt{\frac{k}{m}} t\right)$ Because $- 1 \le \cos \theta \le 1$: • $0 \le x \le \frac{2 E q}{k}$, so the maximum displacement is $\frac{2 E q}{k}$.
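A quick numerical cross-check of the result (my own sketch, with arbitrary made-up values for $E$, $q$, $k$ and $m$): integrate $m\ddot{x} = Eq - kx$ over one period and compare the peak displacement with $\frac{2Eq}{k}$.

```python
import numpy as np
from scipy.integrate import solve_ivp

E, q, k, m = 2.0, 0.5, 3.0, 1.5           # hypothetical values

def rhs(t, y):                            # y = [x, v];  m x'' = E q - k x
    x, v = y
    return [v, (E * q - k * x) / m]

T = 2 * np.pi * np.sqrt(m / k)            # one oscillation period
ts = np.linspace(0.0, T, 2001)
sol = solve_ivp(rhs, (0.0, T), [0.0, 0.0], t_eval=ts, rtol=1e-9, atol=1e-12)

print(sol.y[0].max(), 2 * E * q / k)      # both ~0.6667
```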
2022-01-28 02:58:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 14, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8572748303413391, "perplexity": 2657.57419012335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305341.76/warc/CC-MAIN-20220128013529-20220128043529-00685.warc.gz"}
https://eng.libretexts.org/Bookshelves/Civil_Engineering/Book%3A_All_Things_Flow_-_Fluid_Mechanics_for_the_Natural_Sciences_(Smyth)/19%3A_Appendix_H-_Bernoulli's_Equation
# 19: Appendix H- Bernoulli's Equation Recall the momentum equation for a homogeneous, inviscid fluid, written in gravity-aligned coordinates: $\frac{D \vec{u}}{D t}=-g \hat{e}^{(z)}-\vec{\nabla} \frac{p}{\rho_{0}}.$ Using the vector identity $[\vec{u} \cdot \vec{\nabla}] \vec{u} \equiv(\vec{\nabla} \times \vec{u}) \times \vec{u}+\frac{1}{2} \vec{\nabla}(\vec{u} \cdot \vec{u}),$ we can rewrite this as $\frac{\partial \vec{u}}{\partial t}+\vec{\omega} \times \vec{u}+\frac{1}{2} \vec{\nabla}(\vec{u} \cdot \vec{u})=-g \hat{e}^{(z)}-\vec{\nabla} \frac{p}{\rho_{0}},$ where $$\vec{\omega} = \vec{\nabla} \times\vec{u}$$ is the vorticity. Next, note that the vertical unit vector is the gradient of the vertical coordinate: $$\hat{e}^{(z)} = \vec{\nabla} z$$. We now substitute this and collect all of the terms that can be expressed as gradients: $\frac{\partial \vec{u}}{\partial t}+\vec{\omega} \times \vec{u}=-\vec{\nabla}\left(\frac{1}{2}(\vec{u} \cdot \vec{u})+g z+\frac{p}{\rho_{0}}\right),$ or $\frac{\partial \vec{u}}{\partial t}+\vec{\omega} \times \vec{u}=-\vec{\nabla} B,$ where $B=\frac{1}{2}(\vec{u} \cdot \vec{u})+g z+\frac{p}{\rho_{0}}$ is called the Bernoulli function. Now assume that the flow is in steady state, i.e., $$\partial\vec{u}/\partial t = 0$$. Then $\vec{\nabla} B=\vec{u} \times \vec{\omega}.$ This tells us that the gradient of the Bernoulli function is perpendicular to both $$\vec{u}$$ and $$\vec{\omega}$$, and therefore that $$B$$ does not vary in the direction of either of those vectors. In other words, in steady flow of a homogeneous, inviscid fluid, • B is uniform along a vortex filament, and • a fluid particle maintains a constant value of $$B$$ (since $$\vec{u}\cdot\vec{\nabla}B = DB/Dt = 0$$) as it travels. The second point, often called Bernoulli's Law, famously explains how an airplane flies. Because the upper and lower surfaces of the wing are convex, flow past them is forced to speed up, so that the first term in $$B$$ increases. Variation in the second term is negligible, so the third term must decrease to maintain a constant value of $$B$$, i.e., the pressure must drop. Wings are designed with the upper surface more convex than the lower, so that the pressure drop is greater. The resulting pressure difference exerts a net upward force ("lift") on the wing. 19: Appendix H- Bernoulli's Equation is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Bill Smyth via source content that was edited to conform to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
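As a rough numerical illustration of the wing argument (my own sketch; the speeds and density below are invented, not from the text): keeping $$B$$ constant along each streamline and neglecting the $$gz$$ term, faster flow over the upper surface must come with lower pressure, and the pressure difference is the lift per unit wing area.

```python
rho0 = 1.2       # air density in kg/m^3 (assumed)
u_far = 60.0     # free-stream speed in m/s (assumed)
u_top = 70.0     # speed over the more convex upper surface (assumed)
u_bot = 62.0     # speed under the lower surface (assumed)

# Constant B (with gz neglected) gives p/rho0 + u^2/2 = const, so the
# pressure drop relative to the free stream is rho0*(u^2 - u_far^2)/2:
dp_top = 0.5 * rho0 * (u_top**2 - u_far**2)
dp_bot = 0.5 * rho0 * (u_bot**2 - u_far**2)

lift_per_area = dp_top - dp_bot           # net upward force per unit area
print(f"lift per unit area ~ {lift_per_area:.0f} N/m^2")   # ~634 N/m^2
```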
2022-07-04 21:53:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9108192920684814, "perplexity": 372.90004117843944}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104496688.78/warc/CC-MAIN-20220704202455-20220704232455-00502.warc.gz"}
http://www.gradesaver.com/textbooks/math/calculus/calculus-8th-edition/chapter-8-further-applications-of-integration-8-1-arc-length-8-1-exercises-page-589/21
Calculus 8th Edition $\sqrt{2} + \ln{(1+\sqrt{2})}$ $y = \frac{1}{2} x^{2}$ then $y' = x$ and $1+(dy/dx)^{2} = 1 + x^{2}$ So $L = \int^{1}_{-1} \sqrt{1+x^{2}} dx = 2 \int^{1}_{0} \sqrt{1+x^{2}}dx = 2[\frac{x}{2} \sqrt{1+x^{2}} + \frac{1}{2} \ln(x+\sqrt{1+x^{2}})]^{1}_{0}$ $= \sqrt{2} + \ln{(1+\sqrt{2})}$
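A quick numeric cross-check of this arc length (my own addition, not part of the textbook solution):

```python
import math
from scipy.integrate import quad

L_num, _ = quad(lambda x: math.hypot(1.0, x), -1.0, 1.0)  # sqrt(1 + x^2)
L_exact = math.sqrt(2) + math.log(1 + math.sqrt(2))

print(L_num, L_exact)   # both ~2.2955871
```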
2018-04-19 19:57:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9892194867134094, "perplexity": 423.777178980429}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937016.16/warc/CC-MAIN-20180419184909-20180419204909-00458.warc.gz"}
https://labs.tib.eu/arxiv/?author=J.P.A.M.%20de%20Andr%C3%A9
• We describe and report the status of a neutrino-triggered program in IceCube that generates real-time alerts for gamma-ray follow-up observations by atmospheric-Cherenkov telescopes (MAGIC and VERITAS). While IceCube is capable of monitoring the whole sky continuously, high-energy gamma-ray telescopes have restricted fields of view and in general are unlikely to be observing a potential neutrino-flaring source at the time such neutrinos are recorded. The use of neutrino-triggered alerts thus aims at increasing the availability of simultaneous multi-messenger data during potential neutrino flaring activity, which can increase the discovery potential and constrain the phenomenological interpretation of the high-energy emission of selected source classes (e.g. blazars). The requirements of a fast and stable online analysis of potential neutrino signals and its operation are presented, along with first results of the program operating between 14 March 2012 and 31 December 2015. • We have conducted three searches for correlations between ultra-high energy cosmic rays detected by the Telescope Array and the Pierre Auger Observatory, and high-energy neutrino candidate events from IceCube. Two cross-correlation analyses with UHECRs are done: one with 39 cascades from the IceCube 'high-energy starting events' sample and the other with 16 'high-energy track' events. The angular separation between the arrival directions of neutrinos and UHECRs is scanned over. The same events are also used in a separate search using a maximum likelihood approach, after the neutrino arrival directions are stacked. To estimate the significance we assume UHECR magnetic deflections to be inversely proportional to their energy, with values $3^\circ$, $6^\circ$ and $9^\circ$ at 100 EeV to allow for the uncertainties on the magnetic field strength and UHECR charge. A similar analysis is performed on stacked UHECR arrival directions and the IceCube sample of through-going muon track events which were optimized for neutrino point-source searches. • T2K has performed the first measurement of $\nu_\mu$ inclusive charged current interactions on carbon at neutrino energies of ~1 GeV, where the measurement is reported as a flux-averaged double differential cross section in muon momentum and angle. The flux is predicted by the beam Monte Carlo and external data, including the results from the NA61/SHINE experiment. The data used for this measurement were taken in 2010 and 2011, with a total of $10.8 \times 10^{19}$ protons on target. The analysis is performed on 4485 inclusive charged current interaction candidates selected in the most upstream fine-grained scintillator detector of the near detector. The flux-averaged total cross section is $\langle\sigma_{CC}\rangle_\phi = (6.91 \pm 0.13\,\text{(stat)} \pm 0.84\,\text{(syst)}) \times 10^{-39}\ \text{cm}^2/\text{nucleon}$ for a mean neutrino energy of 0.85 GeV. • The T2K Collaboration reports evidence for electron neutrino appearance at the atmospheric mass splitting, $|\Delta m_{32}^2| = 2.4 \times 10^{-3}\ \text{eV}^2$. An excess of electron neutrino interactions over background is observed from a muon neutrino beam with a peak energy of 0.6 GeV at the Super-Kamiokande (SK) detector 295 km from the beam's origin. Signal and background predictions are constrained by data from near detectors located 280 m from the neutrino production target. We observe 11 electron neutrino candidate events at the SK detector when a background of $3.3 \pm 0.4\ \text{(syst.)}$ events is expected.
The background-only hypothesis is rejected with a p-value of 0.0009 ($3.1\sigma$), and a fit assuming $\nu_\mu \to \nu_e$ oscillations with $\sin^2(2\theta_{23}) = 1$, $\delta_{CP} = 0$ and $|\Delta m_{32}^2| = 2.4 \times 10^{-3}\ \text{eV}^2$ yields $\sin^2(2\theta_{13}) = 0.088^{+0.049}_{-0.039}\ \text{(stat.+syst.)}$. • We report a measurement of muon-neutrino disappearance in the T2K experiment. The 295-km muon-neutrino beam from Tokai to Kamioka is the first implementation of the off-axis technique in a long-baseline neutrino oscillation experiment. With data corresponding to $1.43 \times 10^{20}$ protons on target, we observe 31 fully-contained single muon-like ring events in Super-Kamiokande, compared with an expectation of $104 \pm 14\ \text{(syst)}$ events without neutrino oscillations. The best-fit point for two-flavor $\nu_\mu \to \nu_\tau$ oscillations is $\sin^2(2\theta_{23}) = 0.98$ and $|\Delta m^2_{32}| = 2.65 \times 10^{-3}\ \text{eV}^2$. The boundary of the 90% confidence region includes the points $(\sin^2(2\theta_{23}), |\Delta m^2_{32}|) = (1.0,\ 3.1 \times 10^{-3}\ \text{eV}^2)$, $(0.84,\ 2.65 \times 10^{-3}\ \text{eV}^2)$ and $(1.0,\ 2.2 \times 10^{-3}\ \text{eV}^2)$.
2020-03-31 16:04:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.720481276512146, "perplexity": 3594.63961942348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370502513.35/warc/CC-MAIN-20200331150854-20200331180854-00342.warc.gz"}
http://teilchen.at/planet
Particle Physics Planet September 21, 2018 Christian P. Robert - xi'an's og Riddler collector Once in a while a fairly standard problem makes it to the Riddler puzzle of the week. Today, it is the coupon collector problem, explained by W. Huber on X validated. (W. Huber happens to be the top contributor to this forum, with over 2000 answers, and the highest reputation closing on 200,000!) With nothing (apparently) unusual: coupons [e.g., collecting cards] come in packs of k=10 with no duplicate, and there are n=100 different coupons. What is the expected number of packs one has to collect before getting all of the n coupons? W. Huber provides R code to solve the recurrence on the expectation, obtained by conditioning on the number m of different coupons already collected, e(m,n,k), and hence on the remaining number to collect, with a hypergeometric distribution for the number of new coupons in the next pack. Returning 25.23 packs on average. As is well known, the average number of packs to complete one's collection with the final missing card is expensively large, with more than 5 packs necessary on average. The probability distribution of the required number of packs has actually been computed by Laplace in 1774 (and then again by Euler in 1785). The n-Category Cafe Cartesian Double Categories In general, there are two kinds of bicategories: those like $\mathrm{Cat}$ and those like $\mathrm{Span}$. In the $\mathrm{Cat}$-like ones, the morphisms are "categorified functions", which generally means some kind of "functor" between some kind of "category", consisting of functions mapping objects and arrows from domain to codomain. But in the $\mathrm{Span}$-like ones (which includes $\mathrm{Mod}$ and $\mathrm{Prof}$), the morphisms are not "functors" but rather some kind of "generalized relations" (including spans, modules, profunctors, and so on) which do not map from domain to codomain but rather relate the domain and codomain in some way. In $\mathrm{Span}$-like bicategories there is usually a subclass of the morphisms that do behave like categorified functions, and these play an important role. Usually the morphisms in this subclass all have right adjoints; sometimes they are exactly the morphisms with right adjoints; and often one can get away with talking about "morphisms with right adjoints" rather than making this subclass explicit. However, it's also often conceptually and technically helpful to give the subclass as extra data, and arguably the most perspicuous way to do this is to work with a double category instead. This was the point of my first published paper, though others had certainly made the same point before, and I think more and more people are coming to recognize it. Today a new installment in this story appeared on the arXiv: Cartesian Double Categories with an Emphasis on Characterizing Spans, by Evangelia Aleiferi. This is a project that I've wished for a while someone would do, so I'm excited that at last someone has! We know now that various structure on a double category corresponds to similar structure on a bicategory. For instance, a monoidal structure on a (suitably well-behaved) double category induces a monoidal structure on its underlying bicategory. However, the monoidal double category is generally much stricter and easier to work with. Aleiferi's paper is about extending this to the cartesian monoidal case.
A cartesian monoidal double category is easy to define: its diagonal $D \to D \times D$ and projection $D \to 1$ have right adjoints, just as for ordinary categories. It's also easy to say what it means for a $\mathrm{Cat}$-like bicategory to be cartesian monoidal: we can say that its diagonal and projection have right adjoints too, although that's more complicated because the adjoints are generally only pseudofunctors living in a tricategory. But it's not at all obvious what it means for a $\mathrm{Span}$-like bicategory to be "cartesian monoidal". Intuitively, bicategories like $\mathrm{Span}$ itself, or more generally $\mathrm{Span}(E)$ for $E$ a category with finite limits, and $\mathrm{Prof}(V)$ when $V$ is cartesian monoidal, should be "cartesian" — but they are not cartesian monoidal in the $\mathrm{Cat}$-like way. The notion of cartesian bicategory was defined (by Carboni, Walters, Kelly, Verity, and Wood) to capture examples like these, but it is quite complicated. Moreover, to someone familiar with double categories, it is crying out to be reformulated in double-category language (e.g. it requires certain morphisms to have right adjoints, and induces $\mathrm{Cat}$-like cartesian structure on the sub-bicategory of morphisms with right adjoints). In fact, it blows my mind that anyone was able to define the notion of cartesian bicategory without secretly having double categories in their head! Aleiferi has now made a more careful study of cartesian double categories, and shown that they can be used for at least some (which I suspect will eventually become "nearly all") of the same purposes as cartesian bicategories. For instance, here is a theorem from Lack-Walters-Wood Bicategories of spans as cartesian bicategories: Theorem: A bicategory is equivalent to $\mathrm{Span}(E)$, for some category $E$ with finite limits, if and only if it is cartesian, each comonad has an Eilenberg-Moore object, and every map is comonadic. And here is a theorem from Aleiferi's paper: Theorem: A double category is equivalent to $\mathrm{Span}(E)$, for some category $E$ with finite limits, if and only if it is cartesian, fibrant, unit-pure, and has strong Eilenberg-Moore objects for copointed endomorphisms. Even without understanding all the words, the family resemblance should be clear, even if the technicalities are different. On a quick skim of Aleiferi's paper it looks like there is no formal comparison yet between cartesian double categories and cartesian bicategories, but I'm sure that will come. The n-Category Cafe A Pattern That Eventually Fails Sometimes you check just a few examples and decide something is always true. But sometimes even $1.5 \times 10^{43}$ examples is not enough.
You can show that $\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, dt = \frac{\pi}{2} }$ $\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, dt = \frac{\pi}{2} }$ $\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \, dt = \frac{\pi}{2} }$ $\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \, \frac{\sin \left(\frac{t}{301}\right)}{\frac{t}{301}} \, dt = \frac{\pi}{2} }$ and so on. It's a nice pattern. But it doesn't go on forever!
In fact, Greg Egan showed the identity $\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \cdots \, \frac{\sin \left(\frac{t}{100 n +1}\right)}{\frac{t}{100 n + 1}} \, dt = \frac{\pi}{2} }$ holds when $n < 15,341,178,777,673,149,429,167,740,440,969,249,338,310,889$ but fails for all $n \ge 15,341,178,777,673,149,429,167,740,440,969,249,338,310,889.$ It's not as hard to understand as it might seem; it's a special case of the infamous 'Borwein integrals'. The key underlying facts are: • The Fourier transform turns multiplication into convolution. • The Fourier transform of $\sin(cx)/(cx)$ is a step function supported on the interval $[-c,c]$. • The sum $\sum_{k=1}^{n}\frac{1}{100k+1}$ first exceeds $1$ when $n = 15,341,178,777,673,149,429,167,740,440,969,249,338,310,889.$ For Greg's more detailed explanation, based on that of Hanspeter Schmid, and for another famous example of a pattern that eventually fails, go here: Peter Coles - In the Dark Tonight is Culture Night! Just time for a quick post to mention that tonight is Culture Night in Ireland, which means that over 1600 venues around the country are open this evening for free cultural events. Museums, art galleries and other public buildings and spaces will open later this evening to welcome the general public and there are scores of free concerts going on all over the place. There's a useful guide here. There are some events in Maynooth tonight, including one at Maynooth Castle. I would have gone to tonight's free concert at the National Concert Hall. Although it's free you have to book a ticket because the capacity is limited and unfortunately I was too late getting around to doing that so couldn't get in. I'll probably listen to it on the radio tonight instead. I think Culture Night is a great idea, as it encourages people to sample cultural fare they might otherwise not get around to trying, and may boost the audiences for the rest of the year as a result. I wonder if anyone has ever thought of running a Culture Night in, say, Cardiff? Christian P.
Robert - xi'an's og postdoctoral position on the Malaria Atlas Project, Oxford [advert] The Malaria Atlas Project is opening a postdoctoral position in Oxford in geospatial modelling toward collaborating with other scientists to develop probabilistic maps of malaria risk at national and sub-national level to evaluate the efficacy of past intervention strategies and to assist with the planning of future interventions. An understanding of spatiotemporal modelling and expertise in geostatistics, random-field models, or equivalent are essential. An understanding of the epidemiology of a vector-borne disease such as malaria is desirable but not essential. You must have a PhD or equivalent experience in mathematics, statistics, biostatistics, or a similar quantitative discipline. You will contribute to and, as appropriate, lead in the preparation of scientific reports and journal articles for publication of research findings from this work in open access journals. Travel to collaborators in Europe, the United States, Africa, and Asia will be part of the role. This full-time position is fixed-term until 31 December 2019 in the first instance. The closing date for this position will be 12.00 noon on Wednesday 17 October 2018. Emily Lakdawalla - The Planetary Society Blog The day I caught rocket fever On February 6, 2018, I found myself shoulder to shoulder with two of my heroes: Bill Nye on the left, Buzz Aldrin on the right. Our eyes were fixed on the first vertical Falcon Heavy rocket. Figuring the world's most powerful rocket might send me flying backwards once the countdown hit zero, I gripped the railing so tightly I started to lose the feeling in my fingertips. September 20, 2018 Christian P. Robert - xi'an's og red sister [book review] "It is important, when killing a nun, to ensure that you bring an army of sufficient size. For Sister Thorn of the Sweet Mercy convent Lano Tacsis brought two hundred men." If it were a film, this book would be something like Harry Potter meets Clockwork Orange meets The Seven Samurai meets Fight Club! In the sense that it is set in a school (convent) for young girls with magical powers who are trained in exploiting these powers, that the central character has a streak of unbounded brutality at her core, that the training is mostly towards gaining fighting abilities and assassin skills. And that most of the story sees fighting, either at the training level or at the competition level or at the ultimate killing level. As in the previous novels by Mark Lawrence, which I did not complete, the descriptions of fights and deaths therein are quite graphic, and detailed, and obviously gory. But I found myself completely captivated by the story and the universe Lawrence created [with some post-apocalyptic features common with his earlier books] and the group of novices at the centre of the plot [even if some scenes were totally unrealistic within the harsh universe of Red Sister]. Despite the plot sometimes being very weak, or even incoherent. "I've never deleted a page and rewritten it, some authors rewrite whole chapters or remove or add characters. That's going to make it a lengthy process." As the author's warning above makes clear, the style itself is not always great, with too obvious infodumps and repetitions. And some unevenness in the characters that suddenly switch from pre-teens in a boarding school to mature schemers to super-mature strategists, from one page to the next. And [weak spoiler!]
the potential villain is walking with a flashing light on top of her, almost from the start! Still, this book I bought on my last day on Van Isle, in the bookstore-dense town of Sidney (B.C.), kept me hooked for a bit more than a day, from airport waits to sleepless breaks in the plane and the night after at home. And ordering the next volume of the trilogy almost immediately! One reassuring point in the interview with Lawrence is that he wrote the entire trilogy before publishing the first volume, contrary to Robert Jordan, George Martin, or Patrick Rothfuss!, meaning that his readers do not have to enjoy special time-accelerating powers to be certain to reach the date of publication of the next volume. John Baez - Azimuth Patterns That Eventually Fail Sometimes patterns can lead you astray. For example, it's known that $\displaystyle{ \mathrm{li}(x) = \int_0^x \frac{dt}{\ln t} }$ is a good approximation to $\pi(x),$ the number of primes less than or equal to $x.$ Numerical evidence suggests that $\mathrm{li}(x)$ is always greater than $\pi(x).$ For example, $\mathrm{li}(10^{12}) - \pi(10^{12}) = 38,263$ and $\mathrm{li}(10^{24}) - \pi(10^{24}) = 17,146,907,278$ But in 1914, Littlewood heroically showed that in fact, $\mathrm{li}(x) - \pi(x)$ changes sign infinitely many times! This raised the question: when does $\pi(x)$ first exceed $\mathrm{li}(x)$? In 1933, Littlewood's student Skewes showed, assuming the Riemann hypothesis, that it must do so for some $x$ less than or equal to $\displaystyle{ 10^{10^{10^{34}}} }$ Later, in 1955, Skewes showed without the Riemann hypothesis that $\pi(x)$ must exceed $\mathrm{li}(x)$ for some $x$ smaller than $\displaystyle{ 10^{10^{10^{964}}} }$ By now this bound has been improved enormously. We now know the two functions cross somewhere near $1.397 \times 10^{316},$ but we don't know if this is the first crossing! All this math is quite deep. Here is something less deep, but still fun. You can show that $\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, dt = \frac{\pi}{2} }$ $\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, dt = \frac{\pi}{2} }$ $\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \, dt = \frac{\pi}{2} }$ $\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \, \frac{\sin \left(\frac{t}{301}\right)}{\frac{t}{301}} \, dt = \frac{\pi}{2} }$ and so on. It's a nice pattern. But this pattern doesn't go on forever! It lasts a very, very long time… but not forever. More precisely, the identity $\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \cdots \, \frac{\sin \left(\frac{t}{100 n +1}\right)}{\frac{t}{100 n + 1}} \, dt = \frac{\pi}{2} }$ holds when $n < 9.8 \cdot 10^{42}$ but not for all $n.$ At some point it stops working and never works again.
In fact, it definitely fails for all $n > 7.4 \cdot 10^{43}$ The explanation The integrals here are a variant of the Borwein integrals: $\displaystyle{ \int_0^\infty \frac{\sin(x)}{x} \, dx= \frac{\pi}{2} }$ $\displaystyle{ \int_0^\infty \frac{\sin(x)}{x}\frac{\sin(x/3)}{x/3} \, dx = \frac{\pi}{2} }$ $\displaystyle{ \int_0^\infty \frac{\sin(x)}{x}\, \frac{\sin(x/3)}{x/3} \, \frac{\sin(x/5)}{x/5} \, dx = \frac{\pi}{2} }$ where the pattern continues until $\displaystyle{ \int_0^\infty \frac{\sin(x)}{x} \, \frac{\sin(x/3)}{x/3}\cdots\frac{\sin(x/13)}{x/13} \, dx = \frac{\pi}{2} }$ but then fails: $\displaystyle{\int_0^\infty \frac{\sin(x)}{x} \, \frac{\sin(x/3)}{x/3}\cdots \frac{\sin(x/15)}{x/15} \, dx \approx \frac \pi 2 - 2.31\times 10^{-11} }$ I never understood this until I read Greg Egan’s explanation, based on the work of Hanspeter Schmid. It’s all about convolution, and Fourier transforms: Suppose we have a rectangular pulse, centred on the origin, with a height of 1/2 and a half-width of 1. Now, suppose we keep taking moving averages of this function, again and again, with the average computed in a window of half-width 1/3, then 1/5, then 1/7, 1/9, and so on. There are a couple of features of the original pulse that will persist completely unchanged for the first few stages of this process, but then they will be abruptly lost at some point. The first feature is that F(0) = 1/2. In the original pulse, the point (0,1/2) lies on a plateau, a perfectly constant segment with a half-width of 1. The process of repeatedly taking the moving average will nibble away at this plateau, shrinking its half-width by the half-width of the averaging window. So, once the sum of the windows’ half-widths exceeds 1, at 1/3+1/5+1/7+…+1/15, F(0) will suddenly fall below 1/2, but up until that step it will remain untouched. In the animation below, the plateau where F(x)=1/2 is marked in red. The second feature is that F(–1)=F(1)=1/4. In the original pulse, we have a step at –1 and 1, but if we define F here as the average of the left-hand and right-hand limits we get 1/4, and once we apply the first moving average we simply have 1/4 as the function’s value. In this case, F(–1)=F(1)=1/4 will continue to hold so long as the points (–1,1/4) and (1,1/4) are surrounded by regions where the function has a suitable symmetry: it is equal to an odd function, offset and translated from the origin to these centres. So long as that’s true for a region wider than the averaging window being applied, the average at the centre will be unchanged. The initial half-width of each of these symmetrical slopes is 2 (stretching from the opposite end of the plateau and an equal distance away along the x-axis), and as with the plateau, this is nibbled away each time we take another moving average. And in this case, the feature persists until 1/3+1/5+1/7+…+1/113, which is when the sum first exceeds 2. In the animation, the yellow arrows mark the extent of the symmetrical slopes. OK, none of this is difficult to understand, but why should we care? Because this is how Hanspeter Schmid explained the infamous Borwein integrals: ∫sin(t)/t dt = π/2 ∫sin(t/3)/(t/3) × sin(t)/t dt = π/2 ∫sin(t/5)/(t/5) × sin(t/3)/(t/3) × sin(t)/t dt = π/2 ∫sin(t/13)/(t/13) × … × sin(t/3)/(t/3) × sin(t)/t dt = π/2 But then the pattern is broken: ∫sin(t/15)/(t/15) × … × sin(t/3)/(t/3) × sin(t)/t dt < π/2 Here these integrals are from t=0 to t=∞. 
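The sum criterion behind this breakdown (the plateau survives while 1/3 + 1/5 + ⋯ stays at most 1, and the partial sum first exceeds 1 at the 1/15 term) is easy to check numerically. A small sketch of my own, not part of the quoted explanation, using exact fractions:

```python
from fractions import Fraction

total = Fraction(0)
for a in range(3, 17, 2):                 # the half-widths 1/3, 1/5, ..., 1/15
    total += Fraction(1, a)
    status = "pattern holds" if total <= 1 else "pattern FAILS"
    print(f"after adding 1/{a}: sum = {float(total):.6f} -> {status}")
```

The sum is about 0.955 after 1/13 and about 1.022 after 1/15, matching where the Borwein integrals first fall below π/2.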
And Schmid came up with an even more persistent pattern of his own: ∫2 cos(t) sin(t)/t dt = π/2 ∫2 cos(t) sin(t/3)/(t/3) × sin(t)/t dt = π/2 ∫2 cos(t) sin(t/5)/(t/5) × sin(t/3)/(t/3) × sin(t)/t dt = π/2 ∫2 cos(t) sin(t/111)/(t/111) × … × sin(t/3)/(t/3) × sin(t)/t dt = π/2 But: ∫2 cos(t) sin(t/113)/(t/113) × … × sin(t/3)/(t/3) × sin(t)/t dt < π/2 The first set of integrals, due to Borwein, correspond to taking the Fourier transforms of our sequence of ever-smoother pulses and then evaluating F(0). The Fourier transform of the sinc function: sinc(w t) = sin(w t)/(w t) is proportional to a rectangular pulse of half-width w, and the Fourier transform of a product of sinc functions is the convolution of their transforms, which in the case of a rectangular pulse just amounts to taking a moving average. Schmid’s integrals come from adding a clever twist: the extra factor of 2 cos(t) shifts the integral from the zero-frequency Fourier component to the sum of its components at angular frequencies –1 and 1, and hence the result depends on F(–1)+F(1)=1/2, which as we have seen persists for much longer than F(0)=1/2. • Hanspeter Schmid, Two curious integrals and a graphic proof, Elem. Math. 69 (2014) 11–17. I asked Greg if we could generalize these results to give even longer sequences of identities that eventually fail, and he showed me how: you can just take the Borwein integrals and replace the numbers 1, 1/3, 1/5, 1/7, … by some sequence of positive numbers $1, a_1, a_2, a_3 \dots$ The integral $\displaystyle{\int_0^\infty \frac{\sin(x)}{x} \, \frac{\sin(x/a_1)}{x/a_1} \, \frac{\sin(x/a_2)}{x/a_2} \cdots \frac{\sin(x/a_n)}{x/a_n} \, dx }$ will then equal $\pi/2$ as long as $a_1 + \cdots + a_n \le 1,$ but not when it exceeds 1. You can see a full explanation on Wikipedia: • Wikipedia, Borwein integral: general formula. As an example, I chose the integral $\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \cdots \, \frac{\sin \left(\frac{t}{100 n +1}\right)}{\frac{t}{100 n + 1}} \, dt }$ which equals $\pi/2$ if and only if $\displaystyle{ \sum_{k=1}^n \frac{1}{100 k + 1} \le 1 }$ Thus, the identity holds if $\displaystyle{ \sum_{k=1}^n \frac{1}{100 k} \le 1 }$ but $\displaystyle{ \sum_{k=1}^n \frac{1}{k} \le 1 + \ln n }$ so the identity holds if $\displaystyle{ \frac{1}{100} (1 + \ln n) \le 1 }$ or $\ln n \le 99$ or $n \le e^{99} \approx 9.8 \cdot 10^{42}$ On the other hand, the identity fails if $\displaystyle{ \sum_{k=1}^n \frac{1}{100 k + 1} > 1 }$ so it fails if $\displaystyle{ \sum_{k=1}^n \frac{1}{101 k} > 1 }$ but $\displaystyle{ \sum_{k=1}^n \frac{1}{k} \ge \ln n }$ so the identity fails if $\displaystyle{ \frac{1}{101} \ln n > 1 }$ or $\displaystyle{ \ln n > 101}$ or $\displaystyle{n > e^{101} \approx 7.4 \cdot 10^{43} }$ With a little work one could sharpen these estimates considerably, though it would take more work to find the exact value of $n$ at which $\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \cdots \, \frac{\sin \left(\frac{t}{100 n +1}\right)}{\frac{t}{100 n + 1}} \, dt = \frac{\pi}{2} }$ first fails. Peter Coles - In the Dark Seven Years From Swindon I took the above snap this morning walking back to the Science Building. It shows the view from the other side of St Joseph’s Square compared to the picture I posted on Tuesday, i.e. 
towards St Patrick's House rather than away from it. The weather has taken a turn for the worse since Tuesday, and it's decidedly autumnal today but it's still not a bad view to be greeted with on the way to the office. Contrast this with a photograph I took precisely seven years ago today, on September 20th 2011, when I had just arrived in Swindon for a stint on the STFC Astronomy Grants Panel: I'm no longer part of the UK research system so I guess I'll never have to visit Swindon again… Jon Butterworth - Life and Physics What is the universe really made of? The paperback edition of A Map of the Invisible is out now, and to help promote it we made a few videos on some of the themes in the book. Here's the first one: September 19, 2018 Christian P. Robert - xi'an's og peer reviews on-line or peer community? Nature (or more precisely some researchers through Nature, associated with the UK Wellcome Trust, the US Howard Hughes Medical Institute (HHMI), and ASAPbio) has (have) launched a call for publishing reviews alongside accepted papers, one way or another, which is something I (and many others) have supported for quite a while. Including for rejected papers, not only because making these reviews public diminishes on principle the time involved in re-reviewing re-submitted papers but also because this should induce authors to revise papers with obvious flaws and missing references (?). Or abstain from re-submitting. Or publish a rejoinder addressing the criticisms. Anything that increases the communication between all parties, as well as the perspectives on a given paper. (This year, NIPS allows for the posting of reviews of rejected submissions, which I find a positive trend!) In connection with this entry, I am still most sorry that I could not pursue the [superior in my opinion] project of Peer Community in computational statistics, for the time requested by Biometrika editing is just too important [given my current stamina!] for me to handle another journal (or the better alternative to a journal!). I hope someone else can take over the project and create the editorial team needed to run it. CERN Bulletin GAC-EPA The GAC organises drop-in sessions with individual interviews, held on the last Tuesday of each month, except in July and December. The next session will take place on: Tuesday 25 September, 1.30 pm to 4.00 pm, in the Staff Association meeting room. The following sessions will take place on Tuesdays 30 October and 27 November 2018. The Pensioners' Group sessions are open to beneficiaries of the Pension Fund (including surviving spouses) and to all those approaching retirement. We warmly invite the latter to join our group by obtaining the necessary documents from the Staff Association. Information: http://gac-epa.org/ Contact form: http://gac-epa.org/Organization/ContactForm/ContactForm-fr.php CERN Bulletin Interfon Cooperative open to international civil servants. We welcome you to discover the advantages and discounts negotiated with our suppliers either on our website www.interfon.fr or at our information office located at CERN, on the ground floor of bldg. 504, open Monday through Friday from 12.30 to 15.30. CERN Bulletin Offer for our members Our partner FNAC is offering to all our members 10% discount on all the iMacs and Macbooks.
This offer is valid between September 12 and September 30, 2018 upon the presentation of your Staff Association membership card. CERN Bulletin Exhibition Le Chronoscope Images du Temps, Eclats de Temps Thomas Desbrières From 24 September to 5 October CERN Meyrin, Main Building Passionate about science and art, the artist Thomas Desbrières creates fractal art pieces (digital art: images generated from mathematical formulas computed by machine). The Chronoscope is an imaginary scientific instrument whose purpose is to capture images of Time. It poses the questions: What might Time look like? And how could we observe it? The fractal pieces are like the images resulting from this experiment. They are original visions of Time obtained with the Chronoscope. They show complex mechanisms, cycles multiplied to infinity. Their appearance evokes that of strange clocks, whose golden hues also recall the brass of precision instruments. http://www.senarius.fr For more information and access requests: staff.association@cern.ch | +41 22 767 28 19 September 16, 2018 ZapperZ - Physics and Physicists Want To Locate The Accelerometer In Your Smartphone? Rhett Allain has a simple, fun rotational physics experiment that you can perform on your smartphone to locate the position of the accelerometer in that device, all without opening it. Your smart phone has a bunch of sensors in it. One of the most common is the accelerometer. It's basically a super tiny mass connected with springs (not actual springs). When the phone accelerates in a particular direction, some of these springs will get compressed in order to make the tiny test mass also accelerate. The accelerometer measures this spring compression and uses that to determine the acceleration of the phone. With that, it will know if it is facing up or down. It also can estimate how far you move and use this along with the camera to find out where real world objects are, using ARKit. So, we know there is a sensor in the phone—but where is it located? I'm not going to take apart my phone; everyone knows I'll never get it back together after that. Instead, I will find out the location by moving the phone in a circular path. Yes, moving in a circle is a type of acceleration. I'll let you read the article to know what he did, and what you can do yourself. Now, the only thing left is to verify the result. Someone needs to open an iPhone 7 and confirm the location of the accelerometer (do we even know what it looks like in such a device?). Any volunteers? :) Zz. John Baez - Azimuth The 5/8 Theorem This is a well-known, easy group theory result that I just learned. I would like to explain it more slowly and gently, and I hope memorably, than I've seen it done. It's called the 5/8 theorem. Randomly choose two elements of a finite group. What's the probability that they commute? If it exceeds 62.5%, the group must be abelian! This was probably known for a long time, but the first known proof appears in a paper by Erdös and Turán. It's fun to lead up to this proof by looking for groups that are "as commutative as possible without being abelian". This phrase could mean different things. One interpretation is that we're trying to maximize the probability that two randomly chosen elements commute. But there are two simpler interpretations, which will actually help us prove the 5/8 theorem. How big can the center be?
How big can the center of a finite group be, compared to the whole group? If a group $G$ is abelian, its center, say $Z,$ is all of $G.$ But let's assume $G$ is not abelian. How big can $|Z|/|G|$ be? Since the center is a subgroup of $G,$ we know by Lagrange's theorem that $|G|/|Z|$ is an integer. To make $|Z|/|G|$ big we need this integer to be small. How small can it be? It can't be 1, since then $|Z| = |G|$ and $G$ would be abelian. Can it be 2? No! This would force $G$ to be abelian, leading to a contradiction! The reason is that the center is always a normal subgroup of $G$, so $G/Z$ is a group of size $|G/Z| = |G|/|Z|$. If this is 2 then $G/Z$ has to be $\mathbb{Z}/2.$ But this is generated by one element, so $G$ must be generated by its center together with one element. This one element commutes with everything in the center, obviously… but that means $G$ is abelian: a contradiction! For the same reason, $|G|/|Z|$ can't be 3. The only group with 3 elements is $\mathbb{Z}/3,$ which is generated by one element. So the same argument leads to a contradiction: $G$ is generated by its center and one element, which commutes with everything in the center, so $G$ is abelian. So let's try $|G|/|Z| = 4.$ There are two groups with 4 elements: $\mathbb{Z}/4$ and $\mathbb{Z}/2 \times \mathbb{Z}/2.$ The second, called the Klein four-group, is not generated by one element. It's generated by two elements! So it offers some hope. If you haven't studied much group theory, you could be pessimistic. After all, $\mathbb{Z}/2 \times \mathbb{Z}/2$ is still abelian! So you might think this: "If $G/Z \cong \mathbb{Z}/2 \times \mathbb{Z}/2,$ the group $G$ is generated by its center and two elements which commute with each other, so it's abelian." But that's false: even if two elements of $G/Z$ commute with each other, this does not imply that the elements of $G$ mapping to these elements commute. This is a fun subject to study, but the best way for us to see this right now is to actually find a nonabelian group $G$ with $G/Z \cong \mathbb{Z}/2 \times \mathbb{Z}/2$. The smallest possible example would have center $\mathbb{Z}/2,$ and indeed this works! Namely, we'll take $G$ to be the 8-element quaternion group $Q = \{ \pm 1, \pm i, \pm j, \pm k \}$ where $i^2 = j^2 = k^2 = -1$ $i j = k, \quad j k = i, \quad k i = j$ $j i = -k, \quad k j = -i, \quad i k = -j$ and multiplication by $-1$ works just as you'd expect, e.g. $(-1)^2 = 1$ You can think of these 8 guys as the unit quaternions lying on the 4 coordinate axes. They're the vertices of a 4-dimensional analogue of the octahedron. Here's a picture by David A. Richter, where the 8 vertices are projected down from 4 dimensions to the vertices of a cube: The center of $Q$ is $Z = \{ \pm 1 \},$ and the quotient $Q/Z$ is the Klein four-group, since if we mod out by $\pm 1$ we get the group $\{1, i, j, k\}$ with $i^2 = j^2 = k^2 = 1$ $i j = k, \quad j k = i, \quad k i = j$ $j i = k, \quad k j = i, \quad i k = j$ So, we've found a nonabelian finite group with 1/4 of its elements lying in the center, and this is the maximum possible fraction! How big can the centralizer be? Here's another way to ask how commutative a finite group $G$ can be, without being abelian. Any element $g \in G$ has a centralizer $C(g),$ consisting of all elements that commute with $g.$ How big can $C(g)$ be? If $g$ is in the center of $G,$ then $C(g)$ is all of $G.$ So let's assume $g$ is not in the center, and ask how big the fraction $|C(g)|/|G|$ can be.
In other words: how large can the fraction of elements of $G$ that commute with $g$ be, without it being everything? It's easy to check that the centralizer $C(g)$ is a subgroup of $G.$ So, again using Lagrange's theorem, we know $|G|/|C(g)|$ is an integer. To make the fraction $|C(g)|/|G|$ big, we want this integer to be small. If it's 1, everything commutes with $g.$ So the first real option is 2. Can we find an element of a finite group that commutes with exactly 1/2 the elements of that group? Yes! One example is our friend the quaternion group $Q.$ Each non-central element commutes with exactly half the elements. For example, $i$ commutes only with its own powers: $1, i, -1, -i.$ So we've found a finite group with a non-central element that commutes with 1/2 the elements in the group, and this is the maximum possible fraction! What's the maximum probability for two elements to commute? Now let's tackle the original question. Suppose $G$ is a nonabelian group. How can we maximize the probability for two randomly chosen elements of $G$ to commute? Say we randomly pick two elements $g,h \in G.$ Then there are two cases. If $g$ is in the center of $G$ it commutes with $h$ with probability 1. But if $g$ is not in the center, we've just seen it commutes with $h$ with probability at most 1/2. So, to get an upper bound on the probability that our pair of elements commutes, we should make the center $Z \subset G$ as large as possible. We've seen that $|Z|/|G|$ is at most 1/4. So let's use that. Then with probability 1/4, $g$ commutes with all the elements of $G,$ while with probability 3/4 it commutes with 1/2 the elements of $G.$ So, the probability that $g$ commutes with $h$ is $\frac{1}{4} \cdot 1 + \frac{3}{4} \cdot \frac{1}{2} = \frac{2}{8} + \frac{3}{8} = \frac{5}{8}$ Even better, all these bounds are attained by the quaternion group $Q.$ 1/4 of its elements are in the center, while every element not in the center commutes with 1/2 of the elements! So, the probability that two elements in this group commute is 5/8. So we've proved the 5/8 theorem and shown we can't improve this constant. Further thoughts I find it very pleasant that the quaternion group is "as commutative as possible without being abelian" in three different ways. But I shouldn't overstate its importance! I don't know the proof, but the website groupprops says the following are equivalent for a finite group $G$: • The probability that two elements commute is 5/8. • The inner automorphism group of $G$ has 4 elements. • The inner automorphism group of $G$ is $\mathbb{Z}/2 \times \mathbb{Z}/2.$ Examining the argument I gave, it seems the probability 5/8 can only be attained if $|Z|/|G| = 1/4$ and $|C(g)|/|G| = 1/2$ for every $g \notin Z.$ So apparently any finite group with inner automorphism group $\mathbb{Z}/2 \times \mathbb{Z}/2$ must have these other two properties as well! There are lots of groups with inner automorphism group $\mathbb{Z}/2 \times \mathbb{Z}/2.$ Besides the quaternion group, there's one other 8-element group with this property: the group of rotations and reflections of the square, also known as the dihedral group of order 8. And there are six 16-element groups with this property: they're called the groups of Hall–Senior class two. And I expect that as we go to higher powers of two, there will be vast numbers of groups with this property. You see, the number of nonisomorphic groups of order $2^n$ grows alarmingly fast.
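Before those counts, a quick computational aside: the 5/8 value just derived for the quaternion group can be confirmed by brute force. A minimal sketch of my own (not from the post), encoding elements of $Q$ as sign–axis pairs:

```python
from itertools import product

AX = "1ijk"
# (sign, axis) of a*b for each pair of axes, following ij = k, jk = i, ki = j
MUL = {("1","1"): (1,"1"), ("1","i"): (1,"i"), ("1","j"): (1,"j"), ("1","k"): (1,"k"),
       ("i","1"): (1,"i"), ("i","i"): (-1,"1"), ("i","j"): (1,"k"), ("i","k"): (-1,"j"),
       ("j","1"): (1,"j"), ("j","i"): (-1,"k"), ("j","j"): (-1,"1"), ("j","k"): (1,"i"),
       ("k","1"): (1,"k"), ("k","i"): (1,"j"), ("k","j"): (-1,"i"), ("k","k"): (-1,"1")}

def mul(a, b):
    """Multiply two quaternion-group elements given as (sign, axis) pairs."""
    (sa, xa), (sb, xb) = a, b
    s, x = MUL[(xa, xb)]
    return (sa * sb * s, x)

Q8 = [(s, x) for s in (1, -1) for x in AX]
commuting = sum(mul(g, h) == mul(h, g) for g, h in product(Q8, repeat=2))
print(commuting / len(Q8) ** 2)   # -> 0.625, i.e. exactly 5/8
```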
There's 1 group of order 2, 2 of order 4, 5 of order 8, 14 of order 16, 51 of order 32, 267 of order 64… but 49,487,365,422 of order 1024. Indeed, it seems ‘almost all’ finite groups have order a power of two, in a certain asymptotic sense. For example, 99% of the roughly 50 billion groups of order ≤ 2000 have order 1024. Thus, if people trying to classify groups are like taxonomists, groups of order a power of 2 are like insects. In 1964, the amusingly named pair of authors Marshall Hall Jr. and James K. Senior classified all groups of order $2^n$ for $n \le 6.$ They developed some powerful general ideas in the process, like isoclinism, which I don't want to explain here, but which involves the quotient $G/Z$ that I've been talking about. So, though I don't understand much about this, I'm not completely surprised to read that any group of order $2^n$ has commuting probability 5/8 iff it has ‘Hall–Senior class two’. There's much more to say. For example, we can define the probability that two elements commute not just for finite groups but also compact topological groups, since these come with a god-given probability measure, called Haar measure. And here again, if the group is nonabelian, the maximum possible probability for two elements to commute is 5/8! There are also many other generalizations. For example Guralnick and Wilson proved: • If the probability that two randomly chosen elements of $G$ generate a solvable group is greater than 11/30 then $G$ itself is solvable. • If the probability that two randomly chosen elements of $G$ generate a nilpotent group is greater than 1/2 then $G$ is nilpotent. • If the probability that two randomly chosen elements of $G$ generate a group of odd order is greater than 11/30 then $G$ itself has odd order. The constants are optimal in each case. I'll just finish with two questions I don't know the answer to: • For exactly what set of numbers $p \in (0,1]$ can we find a finite group where the probability that two randomly chosen elements commute is $p?$ If we call this set $S$ we've seen $S \subseteq (0,5/8] \cup \{1\}$ But does $S$ contain every rational number in the interval (0,5/8], or just some? Just some, in fact—but which ones? It should be possible to make some progress on this by examining my proof of the 5/8 theorem, but I haven't tried at all. I leave it to you! • For what properties P of a finite group is there a theorem of this form: "if the probability of two randomly chosen elements generating a subgroup of $G$ with property P exceeds some value $p,$ then $G$ must itself have property P"? Is there some logical form a property can have, that will guarantee the existence of a result like this? References Here is a nice discussion, where I learned some of the facts I mentioned, including the proof I gave: • MathOverflow, 5/8 bound in group theory. Here is an elementary reference, free online if you jump through some hoops, which includes the proof for compact topological groups, and other bits of wisdom: • W. H. Gustafson, What is the probability that two group elements commute?, American Mathematical Monthly 80 (1973), 1031–1034. For example, if $G$ is finite simple and nonabelian, the probability that two elements commute is at most 1/12, a bound attained by $\mathrm{A}_5.$ Here's another elementary article: • Desmond MacHale, How commutative can a non-commutative group be?, The Mathematical Gazette 58 (1974), 199–202.
If you get completely stuck on Puzzle 1, you can look here for some hints on what values the probability that two elements commute can take… but not a complete solution! The 5/8 theorem seems to have first appeared here: • P. Erdös and P. Turán, On some problems of a statistical group-theory, IV, Acta Math. Acad. Sci. Hung. 19 (1968) 413–435. September 15, 2018 Jon Butterworth - Life and Physics Rising up to the challenge: My Brexit plan Even a stopped clock gives the right time twice a day. And the brexit ultras and associated careerists are correct that the so-called “Chequers” proposal is indeed “worse than status quo”. Damning indeed if you take “Marguerita Time” into consideration. … Continue reading September 14, 2018 ZapperZ - Physics and Physicists Bismuthates Superconductors Appear To Be Conventional A lot of people overlooked the fact that during the early days of the discovery of high-Tc superconductors, there was another “family” of superconductors beyond just the cuprates (i.e. those compounds having copper-oxide layers). These compounds are called bismuthates, where instead of having copper-oxide layers, they have bismuth-oxide layers. Otherwise, their crystal structures are similar to the cuprates. They didn’t make that much noise at that time because Tc for this family of materials tends to be lower than in the cuprates. And, even back then, there was already evidence that the bismuthate superconductors might be “boring”, i.e. the results that they produced looked like those of a conventional superconductor. This is supported by several experiments, including a tunneling experiment[1] that showed that the phonon density of states obtained from tunneling data matches the one obtained from neutron scattering. Now it seems that there is more evidence that the bismuthates are conventional BCS superconductors, and it comes from an ARPES experiment[2]. There had been no ARPES measurements done on bismuthates before this because it had been a serious challenge to get a single crystal of this compound large enough to perform such an experiment. But obviously, large-enough single crystals have now been synthesized. In this latest experiment, they look at the band structure of this compound, and extract, among other things, the strong electron-phonon coupling that matches the superconducting gap. This strongly indicates that phonons are the “glue” in the superconducting mechanism for this compound. So this adds another piece of the puzzle for the whole mystery of the origin of superconductivity in the cuprates. Certainly, having a similar layered crystal structure does not rule out being a conventional superconductor. Yet, the cuprates have very different behavior when we perform tunneling and ARPES experiments, and they certainly have higher Tc’s. The mystery continues. Zz. [1] Q. Huang et al. Nature v347, p369 (1990). [2] CHP. Wen et al. PRL 121, 117002 (2018). https://arxiv.org/abs/1802.10507 September 13, 2018 ZapperZ - Physics and Physicists Human Eye Can Detect Cosmic Radiation Well, not in the way you think. I recently found this video of an appearance of astronaut Scott Kelly on The Late Show with Stephen Colbert. During this segment, he talked about the fact that when he went to sleep on the Space Station and closed his eyes, he occasionally detected flashes of light. He attributed it to the cosmic radiation passing through his body, and his eyes in particular.
Check out the video at minute 3:30. My first inclination is to say that this is similar to how we detect neutrinos, i.e. the radiation particles interact with the medium in his eyes, either the vitreous or the medium that makes up the lens, and this interaction causes the ejection of a relativistic electron and, subsequently, Cerenkov radiation. The Cerenkov radiation is then detected by the eye. Of course, there are other possibilities, such as the cosmic particle exciting an atom or molecule in a collision, which then causes a light emission. But Scott Kelly mentioned that these flashes appeared like fireworks. So my guess here is that it is more of a very short cascade of events, and probably the Cerenkov light scenario. This, BTW, is almost how we detect neutrinos, especially at Super Kamiokande and all the neutrino detectors around the world. Neutrinos come into the detector, and those that interact with the medium inside the detector (water, for example), cause the emission of relativistic electrons that move faster than the speed of light inside the medium. (For electrons in water, whose refractive index is about 1.33, this requires a kinetic energy above roughly 0.26 MeV.) This creates the Cerenkov radiation, and typically, the light is bluish white. It’s the same glow that you see if you look into a pool of fuel rods in a nuclear reactor. So there! You can detect something with your eyes closed! Zz. September 12, 2018 John Baez - Azimuth Noether’s Theorem I’ve been spending the last month at the Centre for Quantum Technologies, getting lots of work done. This Friday I’m giving a talk, and you can see the slides now: • John Baez, Getting to the bottom of Noether’s theorem. Abstract. In her paper of 1918, Noether’s theorem relating symmetries and conserved quantities was formulated in terms of Lagrangian mechanics. But if we want to make the essence of this relation seem as self-evident as possible, we can turn to a formulation in terms of Poisson brackets, which generalizes easily to quantum mechanics using commutators. This approach also gives a version of Noether’s theorem for Markov processes. The key question then becomes: when, and why, do observables generate one-parameter groups of transformations? This question sheds light on why complex numbers show up in quantum mechanics. At 5:30 on Saturday October 6th I’ll talk about this stuff at this workshop in London: The Philosophy and Physics of Noether’s Theorems, 5-6 October 2018, Fischer Hall, 1-4 Suffolk Street, London, UK. Organized by Bryan W. Roberts (LSE) and Nicholas Teh (Notre Dame). This workshop celebrates the 100th anniversary of Noether’s famous paper connecting symmetries to conserved quantities. Her paper actually contains two big theorems. My talk is only about the more famous one, Noether’s first theorem, and I’ll change my talk title to make that clear when I go to London, to avoid getting flak from experts. Her second theorem explains why it’s hard to define energy in general relativity! This is one reason Einstein admired Noether so much. I’ll also give this talk at DAMTP—the Department of Applied Mathematics and Theoretical Physics, in Cambridge—on Thursday October 4th at 1 pm. The organizers of the London workshop on the philosophy and physics of Noether’s theorems have asked me to write a paper, so my talk can be seen as the first step toward that. My talk doesn’t contain any hard theorems, but the main point—that the complex numbers arise naturally from wanting a correspondence between observables and symmetry generators—can be expressed in some theorems, which I hope to explain in my paper.
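Since the abstract is terse, here is the one-line version of the Poisson-bracket formulation it refers to (my paraphrase of standard material, not a quote from the slides). If $H$ generates time evolution, an observable $Q$ evolves by $\dot{Q} = \{Q,H\},$ so $Q$ is conserved iff $\{Q,H\} = 0.$ But the bracket is antisymmetric, so this is the same equation as $\{H,Q\} = 0,$ which says $H$ is invariant under the one-parameter group of transformations generated by $Q.$ Conserved quantities and symmetry generators are thus literally the same thing, and replacing $\{\cdot,\cdot\}$ by $\frac{1}{i\hbar}[\cdot,\cdot]$ carries the statement over to quantum mechanics.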
September 10, 2018 Lubos Motl - string vacua and pheno Why string theory is quantum mechanics on steroids In many previous texts, most recently in the essay posted two blog posts ago, I expressed the idea that string theory may be interpreted as the wisdom of quantum mechanics that is taken really seriously – and that is applied to everything, including the most basic aspects of the spacetime, matter, and information. People like me are impressed by the power of string theory because it really builds on quantum mechanics in a critical way to deduce things that would have been impossible before. On the contrary, morons typically dislike string theory because their mezzoscopic peabrains are already stretched to the limit when they think about quantum mechanics – while string theory requires the stretching to go beyond these limits. Peabrains unavoidably crack and morons, writing things that are not even wrong about their trouble with physics, end up lost in math. Other physicists have also made the statement – usually in less colorful ways – that string theory is quantum mechanics on steroids. It may be a good idea to explain what all of us mean – why string theory depends on quantum mechanics so much and why the power of quantum mechanics is given the opportunity to achieve some new amazing things within string theory. At the beginning, I must say that the non-experts (including many pompous fools who call themselves "experts") usually overlook the whole "beef" of string theory just like they overlook the "beef" of quantum mechanics. They imagine that quantum mechanics "is" a new equation, Schrödinger's equation, that plays the same role as Newton's, Maxwell's, Einstein's, and other equations. But quantum mechanics is much more – and much more universal and revolutionary – than another addition to classical physics. The actual heart of quantum mechanics is that the objects in its equations are connected to the observations very differently than the classical counterparts have been. In the same way, they imagine that string theory is a theory of a new random dynamical object, a rubber band, and they imagine either downright classical vibrating strings or quantum mechanical strings that just don't differ from other quantum mechanical objects. But this understanding doesn't go beyond the (unavoidably oversimplified) name of string theory. If you analyze the composition of the term "string theory" as a linguist, you may think it's just a "theory of some strings". But that's not really the lesson one should draw. The real lesson is that if certain operations are done well with particular things, one ends with some amazing set of equations that may explain lots of things about the Universe. Strings are exceptionally powerful – and only exceptionally powerful – at the quantum level. And the point of string theory isn't that it's a theory of another object. The point is that string theory is special among theories that would initially look "analogous". Why is it special? And why is the magic of string theory so intertwined with quantum mechanics? Discrete types of Nature's building blocks For centuries, people knew something about chemistry. Matter around us is made of compounds which are mixtures of elements – such as hydrogen, helium, lithium, and I am sure you have memorized the rest. The number of types of atoms around us is finite. If arbitrarily large nuclei were allowed or stable, it would be countably infinite. But the number would still be discrete – not continuous. 
For more than a century, people have known that the elements are probably made out of identical atoms. Each element has its own kind of atoms. The concept of atoms was first promoted by Democritus in ancient Greece. But in chemistry, atoms became more specific. Sometime in the late 19th and early 20th century, people began to understand that the atom isn’t as indivisible as the Greek name suggested. It is composed of a dense nucleus and electrons that live somewhere around the nucleus. The nucleus was later found to be composed of protons and neutrons. The quantum mechanics of 1925 allowed physicists to study the quantized motion of electrons around the nuclei – and the motion of the electrons is the crucial thing that determines the energy levels of all atoms and, consequently, their chemical properties. In the 1960s, protons and neutrons were found to be composite as well. First, matter was composed of atoms – different kinds of building blocks for every element. Later, matter was reduced to bound states of electrons, protons, and neutrons. Later, protons and neutrons were replaced with quarks while electrons remained and became an important example of leptons, a group of fermions that is considered “on par” with quarks. The Standard Model deals with fermions, namely quarks and leptons, and bosons, namely the gauge bosons and the Higgs boson. The bosons are particularly capable of mediating forces between all the fermions (and bosons). But even in this “nearly final” picture, there are still finitely many – but relatively numerous – species of elementary particles. Their number is slightly lower than the number of atoms that were considered indivisible a century earlier. But the difference isn’t too big – neither qualitatively nor quantitatively. We have dozens of types of basic “atoms” or “elementary particles” and each of them must be equipped with some properties (yes, the properties of elementary particles in the Standard Model look more precise and fundamental than the properties of the atoms of the elements used to). The different particle species amount to many independent assumptions about Nature that have to be added to the mix to build a viable theory. Can we do better? Can we derive the species from a smaller number of assumptions – and from one kind of matter? String theory – let’s assume that Nature is described by a weakly-coupled heterotic string theory (closed strings only), to make it simpler – describes all elementary particles, bosons and fermions, as discrete energy eigenstates of a vibrating closed string. All interactions boil down to splitting and merging of these oscillating strings. Quantum mechanics is needed for the energy levels to be discrete – just like in the case of the energy levels of atoms. But for the first time, there is only one underlying building block in Nature, a vibrating closed string. Like in atomic and molecular physics, quantum mechanics is needed for the discrete – finite or countable – number of species of small bound objects that exist. Also, the number of spacetime dimensions was always arbitrary in classical physics.
When constructing a theory, you had to assume a particular number – in other words, you had to add the coordinates $$t,x,y,z$$ to your theory manually, one by one – and because the choice of the spacetime dimension was one of the first steps in the construction of any theory, there was no way to treat the theories in different spacetime dimensions simultaneously, and there was consequently no conceptual way to derive the right spacetime dimension. In string theory, it’s different because even the spacetime dimensions – scalar fields on the world sheet – are “things” that contribute to various quantities (such as the conformal anomaly) and string theory is therefore capable of picking the preferred (critical) dimension of the spacetime. Even the individual spacetime dimensions are sort of made of the “same convertible stuff” within string theory. This would be unthinkable in classical physics. Prediction of gravity and other special forces: state-operator correspondence String theory is not only the world’s only known theory that allows Einsteinian gravity in $$D\geq 4$$ to co-exist with quantum mechanics. String theory makes the Einsteinian gravity unavoidable. It predicts gravitons, spin-two particles that interact in agreement with the equivalence principle (all objects accelerate at the same acceleration in a gravitational field). Why is it so? I gave an explanation e.g. in 2007. It is because a particular energy level of the vibrating closed string looks like a spin-two massless particle and it may be shown that the addition of a coherent state of such “graviton strings” into a spacetime is equivalent to the change of the classical geometry on which all other objects – all other vibrating strings – propagate. In this way, the dynamical curved geometry (or at least any finite change of it) may be literally built out of these gravitons. (Similarly, the addition of strings in another mode, the photon mode, may have an effect that is indistinguishable from the modification of the background electromagnetic field, and the same is true for all other low-energy fields, too.) Why is it so? What is the most important “miracle” or property of string theory that allows this to work? I have picked the state-operator correspondence. And the state-operator correspondence is an entirely quantum mechanical relationship – something that wouldn’t be possible in a classical world. What is the state-operator correspondence? Consider a closed string. It has some Hilbert space. In terms of energy eigenstates, the Hilbert space has a zero mode described by the usual $$x_0,p_0$$ degrees of freedom that make the string behave as a quantum mechanical particle. And then the strings may be stretched and the amount of vibrations may be increased by adding oscillators – excitations by creation operators of many quantum harmonic oscillators. So a basis vector in this energy basis of the closed string’s Hilbert space is e.g. $\alpha^\kappa_{-2}\alpha^\lambda_{-3} \tilde \alpha^\mu_{-4} \tilde\alpha_{-1}^\nu \ket{0; p^\rho}.$ What is this state? It looks like a momentum eigenstate of a particle whose spacetime momentum is $$p^\rho$$. However, for a string, the “lightest” state with this momentum is just a ground state of an infinite-dimensional harmonic oscillator. We may excite that ground state with the oscillators $$\alpha$$. These excitations are vaguely analogous to the kicking of the electrons in the atoms from the ground state to higher states, e.g. from $$1s$$ to $$2p$$.
Those oscillators without a tilde are left-moving, those with a tilde are right-moving waves on the string. The (negative) subscript labels the number of periods along the closed string (which Fourier mode we pick). The superscript $$\kappa$$ etc. labels in which transverse spacetime direction the string's oscillation is increased. The total squared mass is given by $$2+3=4+1$$ in some string units. The sum of the tilded and untilded subscripts must be equal (five, in this case) for the "beginning" of the closed string to be immaterial, technically because $$L_0-\tilde L_0 = 0$$. Great. This was a basis of the closed string's Hilbert space. But we may also discuss the linear operators on that Hilbert space. They're constructed as functionals of $$X^\kappa(\sigma)$$ and $$P^\kappa(\sigma)$$ – I am omitting some extra fields (ghosts) that are needed in some descriptions, plus I am omitting a discussion about the difference between transverse and longitudinal directions of the excitations etc. – there are numerous technicalities you have to master when you study string theory at the expert level but they don't really affect the main message I want to convey. OK, the Hilbert space is infinite-dimensional but its dimension $$d$$ must be squared, to get $$d^2$$, if you want to quantify the dimension of the space of matrices on that space, OK? A matrix is "larger" than a column vector. The number $$d^2$$ looks much higher than $$d$$ but nevertheless, for $$d=\infty$$, as long as it is the right "stringy infinity", there exists a very natural one-to-one map between the states and the local operators. Let me immediately tell you what is the operator corresponding to the state above:$(\partial_z)^2 X^\kappa (\partial_z)^3 X^\lambda (\partial_{\bar z})^4 X^\mu (\partial_{\bar z})^1 X^\nu \exp(ip\cdot X(\sigma))$ There should be some normal ordering here. All the four operators $$X^{\kappa,\lambda,\mu,\nu}$$ are evaluated at the point of the string $$\sigma$$, too. You see that the superscripts $$\kappa,\lambda,\mu,\nu$$ were copied to natural places, the subscripts $$2,3,4,1$$ were translated to powers of the world sheet derivative with respect to $$z$$ or $$\bar z$$, the holomorphic or antiholomorphic complex coordinates on the Euclideanized worldsheet. Tilded and untilded oscillators were translated to the holomorphic and antiholomorphic derivatives. An exponential of $$X^\rho$$ operator was inserted to encode the ordinary "zero mode", particle-like total momentum of the string. And the total operator looks like some very general product of a function of $$X^\rho$$ – the imaginary exponentials are a good basis, ask Mr Fourier why it is so – and its derivatives (of arbitrarily high orders). By the combination of the "Fourier basis wisdom" and a simple decomposition to monomials, every function of $$X^\rho$$ and its worldsheet derivatives may be expanded to a sum of such terms. The map between operators and states isn't quite one-to-one. We only considered "local operators at point $$\sigma$$ of the string" where the value of $$\sigma$$ remains unspecified. But the "number of possible values of $$\sigma$$" looks like a smaller factor than the factor $$d$$ that distinguishes $$d,d^2$$, the dimension of the Hilbert space and the space of operators, so the state-operator correspondence is "almost" a one-to-one map. Such a map would be unthinkable in classical physics. In classical physics, a pure state would be a point in the phase space. 
On the other hand, the observable of classical physics is any coordinate on the phase space – such as $$x$$ or $$p$$ or $$ax^2+bp^2$$. Is there a canonical way to assign a coordinate on the phase space – a scalar function on the phase space – to a particular point $$(x,p)$$ on that space? There’s clearly none. These mathematical objects carry completely different information – and the choice of the coordinate depends on much more information. You would have a chance to map a probability distribution (another scalar function) on the phase space to a general coordinate on the phase space – except that the former is non-negative. But that map wouldn’t be shocking in quantum mechanics, either, because the probability distribution is upgraded to a density matrix, which is a matrix similar to the observables. The magic of string theory is that there is a dictionary between pure states and operators. This state-operator correspondence is important – it is a part of the most conceptual proof of the string theory’s prediction of the Einsteinian gravity. Why does the state-operator correspondence exist? What is the recipe underlying this magic? Well, you can prove the state-operator correspondence by considering a path integral on an infinite cylinder. By conformal transformations – symmetries of the world sheet theory – the infinite cylinder may be mapped to the plane with the origin removed (concretely via $z = e^{\tau + i\sigma}$, which sends the infinite past $\tau\to-\infty$ of the cylinder to the origin of the plane). The boundary conditions on the tiny removed circle at the origin (boundary conditions rephrased as a linear insertion in the path integral) correspond to a pure state; but the specification of these boundary conditions must also be equivalent to a linear action at the origin, i.e. a local operator. Another “magic player” that appeared in the previous paragraph – a chain of my explanations – is the conformal symmetry. A solution to the world sheet theory works even if you conformally transform it (a conformal transformation is a diffeomorphism that doesn’t change the angles even if you keep the old metric tensor field). Conformal symmetries exist even in purely classical field theories. Lots of the self-similar or scale-invariant “critical” behavior exhibits the conformal symmetry in one way or another. But what’s cool about the combination of conformal symmetry and quantum mechanics is that a particular, fully specified pure state (and the ground state of a string or another object, e.g. the spacetime vacuum) may be equivalent to a particular state of the self-similar fog. The combination of quantum mechanics and conformal symmetry is therefore responsible for many nontrivial abilities of string theory such as the state-operator correspondence (see above) or holography in the AdS/CFT correspondence. At the classical level, the conformal symmetry of the boundary theory is already isomorphic to the isometry group of the AdS bulk. But that wouldn’t be enough for the equivalence between “field theory” in spacetimes of different dimensions. Holography, i.e. the ability to remove the holographic dimension in quantum gravity, may only exist when the conformal symmetry exists within a quantum mechanical framework. Dualities, unexpected enhanced symmetries, unexpected numerous descriptions The first quantum mechanical X-factor of string theory is the state-operator correspondence and its consequences – either on the world sheet (including the prediction of forces mediated by string modes) or in the boundary CFT in the holographic AdS/CFT correspondence.
To make the basic skeleton of this blog post simple, I will only discuss the second class of stringy quantum muscles as one package – the unexpected symmetries, enhanced symmetries, and numerous descriptions. For some discussion of the enhanced symmetries, try e.g. this 2012 blog post. In theoretical physicists’ jargon, dualities are relationships between seemingly different descriptions that shouldn’t represent the same physics but, for some deep, nontrivial, and surprising reasons, describe completely equivalent physical behavior, including the quantitative properties such as the mass spectrum of some bound states etc. The enhanced symmetries such as the $$SU(2)$$ gauge group of the compactification on a self-dual circle (under T-duality) are a special example of dualities, too. The action of this $$SU(2)$$, except for the simple $$U(1)$$ subgroup, looks like some weird mixing of states with different winding numbers etc. Nothing like that could be a symmetry in classical physics. In particular, we need quantum mechanics to make the momenta quantized – just like the winding numbers (the integer saying how many times a string is wound around a non-contractible circle in the spacetime) are quantized – if we want to exchange momenta and windings as in T-duality. But within string theory, those symmetries become possible. Many stringy vacua have larger symmetry groups than expected classically. You may identify 16+16 fermions on the heterotic string’s world sheet and figure out that the theory will have an $$SO(16)\times SO(16)$$ symmetry. But if you look carefully, the group is actually enhanced to an $$E_8\times E_8$$. Similarly, a string theory on the Leech lattice could be expected to have a Conway group of symmetries – the isometry group of such a lattice – but instead, you get a much cooler, larger, and sexier monster group of symmetries, the largest sporadic finite group. Two fermions on the world sheet may be bosonized – they are equivalent to one boson. This is also a simple example of a “stringy duality” between two seemingly very different theories. The conformal symmetry and/or the relative scarcity of the number of possible conformal field theories may be used in a proof of this equivalence. Wess-Zumino-Witten models involving strings propagating on group manifolds are equivalent to other “simple” theories, too. I don’t want to elaborate on all the examples – their number is really huge and I have discussed many of them in the past. They may often be found in different chapters of string theory textbooks. Here, I want to emphasize their general spirit and where this spirit comes from. Quantum mechanics is absolutely essential for this phenomenon. Why is it so? Why don’t we see almost any of these enhanced symmetries, dualities, and equivalences between descriptions in classical physics? An easy answer is unlikely to be a rigorous proof but it may be rather apt, anyway. My simplest explanation would be: You don’t see dualities and other things in classical physics because classical physics allows you the “infinite sharpness and resolution” which means that if two things look different, they almost certainly are different. (Well, some symmetries do exist classically. For example, Maxwell’s equations – with added magnetic monopoles or subtracted electric charges – have the symmetry of exchanging the electric fields with the magnetic fields, $$\vec E\to \vec B$$, $$\vec B\to -\vec E$$.
This is a classical seed of the stringy S-dualities – and of stringy T-dualities if the electromagnetic duality is performed on a world sheet. But quantum mechanics is needed for the electromagnetic duality to work in the presence of particles with well-defined non-zero charges in the S-duality case; and in the presence of quantized stringy winding charges in the T-duality example because the T-dual momenta have to be quantized as well.) On the other hand, quantum mechanics brings you the uncertainty principle which introduces some fog and fuzziness. The objects don’t have sharp boundaries and shapes given by ordinary classical functions. Instead, the boundaries are fuzzy and may be interpreted in various ways. It doesn’t mean that the whole theory is ill-defined. Quantum mechanics is completely quantitative and allows arbitrarily high precision. Instead, the quantum mechanical description often leads to a discrete spectrum and allows you to describe all the “invariant” properties of an energy-like operator by its discrete spectrum – by several or countably many eigenvalues. And there are many classical models whose quantization may yield the same spectrum. The spectrum – perhaps with an extra information package that is still relatively small – may capture all the physically measurable, invariant properties of the physical theory. We may see the seed of this multiplicity of descriptions in basic quantum mechanics. The multiplicity exists because there are many – and many clever – unitary transformations on the Hilbert space and many bases and clever bases we may pick. The Fourier-like transformation from one basis to another makes the theory look very different than before. Such integral transformations would be very unnatural in classical physics because they would map a local theory to a non-local one. But in quantum mechanics, both descriptions may often be equally local. OK, so string theory, due to its being a special theory that maximizes the number of clever ways in which the novel features of quantum mechanics are exploited, is the world champion in predicting things that were believed to be “irreducible assumptions whose ‘why’ questions could never be answered by science” and allowing new perspectives to look at the same physical phenomena. String theory allows us to derive the spacetime dimension, the spectrum of elementary particles (given some discrete information about the choice of the compactification, a vacuum solution of the stringy equations), and it allows you to describe the same physics by bosonized or fermionized descriptions, descriptions related by S-dualities, T-dualities (including mirror symmetries), U-dualities, string-string-dualities which exhibit enhanced gauge symmetries, holography as in the AdS/CFT correspondence, the matrix model description representing any system as a state of bound D-branes with off-diagonal matrix entries for each coordinate, the ER-EPR correspondence for black holes, and many other things. If you feel why quantum mechanics smells like progress relative to classical physics, string theory should smell like progress relative to the previous quantum mechanical theories because the “quantum mechanical thinking” is applied even to things that were envisioned as independent classical assumptions. That’s why string theory is quantum mechanics squared, quantum mechanics with an X-factor, or quantum mechanics on steroids.
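To make the momentum-winding exchange above concrete, here is a tiny Python sketch (my own toy illustration, in $\alpha'=1$ units, ignoring the level-matching constraint; the function name is made up) of the closed bosonic string spectrum on a circle of radius $R$:

# m^2 = (n/R)^2 + (w*R)^2 + 2*(N + Ntilde - 2): n is the quantized momentum,
# w the winding number, N and Ntilde the left/right oscillator levels.
def mass_squared(n, w, R, N=0, Ntilde=0):
    return (n / R)**2 + (w * R)**2 + 2 * (N + Ntilde - 2)

# T-duality: swapping momentum with winding while inverting the radius
# leaves every mass in the spectrum unchanged.
R = 1.7
assert all(
    abs(mass_squared(n, w, R) - mass_squared(w, n, 1 / R)) < 1e-12
    for n in range(-3, 4) for w in range(-3, 4)
)

The swap $(n, w, R) \to (w, n, 1/R)$ only makes sense because both $n$ and $w$ are integers – and the quantization of the momentum $n$ is a quantum mechanical fact, which is exactly the point made above.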
Deep thinkers who have loved the quantum revolution and who have looked into string theory carefully are likely to end up loving string theory, and those who have had psychological problems with quantum mechanics must have even worse problems with string theory. Throughout the text above, I have repeatedly said that “quantum mechanics is applied to new properties and objects” within string theory. When I was proofreading my remarks, I felt uneasy about these formulations because the comment about the “application” indicates that we just wanted to use quantum mechanics more universally and seriously, and it was guaranteed that we could have done so. But this isn’t the case. The existence of string theory (where the deeper derivations of seemingly irreducible classical assumptions about the world may arise) is a sort of a miracle, much like the existence of quantum mechanics itself. (Well, a miracle squared.) Before 1925, people didn’t know quantum mechanics. They didn’t know it was possible. But it was possible. Quantum mechanics was discovered as a highly constrained, qualitatively different replacement for classical physics that nevertheless agrees with the empirical data – and allows us to derive many more things correctly. In the same way, string theory is a replacement for local quantum field theories that works in almost the same way but not quite. Just like quantum mechanics allows us to derive the spectrum and states of atoms from a deeper starting point, string theory allows us to derive the properties of elementary particles and even the spacetime dimension and other things from an even deeper, more fundamental starting point. Like quantum mechanics itself, string theory feels like something important that wasn’t invented or constructed by humans. It pre-existed and it was discovered. Jon Butterworth - Life and Physics Ten years after the “Big Bang” Ten years ago it was Wednesday, and at 10:28 in the morning Geneva time the first protons had just made the 27 km journey through the Large Hadron Collider at CERN. The media referred to it as “Big Bang Day”, and … Continue reading September 04, 2018 Clifford V. Johnson - Asymptotia Beach Scene… The working title for this was “when you forget to bring your camera on holiday...” but I know you won’t believe that's why I drew it! (This was actually a quick sketch done at the beach on Sunday, with a few tweaks added over dinner and some shadows added using iPad.) I'm working toward doing finish work on a commissioned illustration for a magazine (I'll tell you more about it when I can – check instagram, etc., for updates/peeks), and am finding my drawing skills very rusty – so opportunities to do sketches, whenever I can find them, are very welcome. The post Beach Scene… appeared first on Asymptotia. Jon Butterworth - Life and Physics Anti-protons, Dark Matter and Helium First post of “Postcards from the Energy Frontier” at the Cosmic Shambles Network. A new measurement at CERN tells us something about the way particles travel through interstellar space. Which in turn may help a satellite on the International Space … Continue reading September 01, 2018 Jon Butterworth - Life and Physics Geneva Monopoly Just returned from a couple of weeks at CERN. Saw this in Geneva and had to buy it – you can probably tell why.
So, CERN is Oxford Street (which, for those of you who don’t know London, is much … Continue reading August 31, 2018 Lubos Motl - string vacua and pheno Light Stückelberg bosons deported to the swampland Conjecture would also imply that photons have to be strictly massless I am rather happy about the following new hep-th preprint that adds 21 pages of somewhat nontrivial thoughts to some heuristic arguments that I always liked to spread. Just to be sure, Harvard’s Matt Reece released his paper Photon Masses in the Landscape and the Swampland. What’s going on? Quantum field theory courses usually start with scalar fields and the Klein-Gordon Lagrangian. At some moment, people want to learn about some empirically vital quantum field, the electromagnetic field, whose Lagrangian is ${\mathcal L}_\gamma = -\frac 14 F_{\mu\nu} F^{\mu\nu}.$ The action is invariant under $$U(1)$$ gauge transformations, which is why 3+1 polarizations of the $$A_\mu$$ field are reduced to the $$(D-2)$$ i.e. two transverse physical polarizations of the spin-1 photon. Are there also massive spin-one bosons? Yes, there are, e.g. W-bosons and Z-bosons that were discovered at CERN more than 30 years ago. The addition of masses naively corresponds to a simple mass term ${\mathcal L}_{\rm mass} = \frac {m^2}{2} A_\mu A^\mu.$ A problem is that this term isn’t gauge-invariant. So the theory must be defined without the gauge invariance and we can’t consistently reduce the 3+1 polarizations (including one, time-like polarization that has the wrong sign of the norm so it would lead to negative probabilities) to 3 (for a massless photon, 2) polarizations. However, the Standard Model allows massive spin-1 bosons by the Higgs mechanism. The fundamental Lagrangian actually is gauge-invariant and the gauge-invariance-violating mass term above isn’t included directly. Instead, it is generated from the Higgs field’s vacuum expectation value $$\langle h\rangle = v$$ through the interactions of the gauge field $$W_\mu$$ or $$Z_\mu$$ with the Higgs field that is included in the Higgs boson’s kinetic term $$\partial_\mu h \cdot\partial^\mu h$$ once the partial derivatives are replaced with the covariant derivatives. These covariant derivatives $$D_\mu=\partial_\mu - i g A_\mu$$ are not only allowed but needed to construct gauge-invariant kinetic terms. So the W-bosons and Z-bosons get their masses via the interaction with the Higgs boson (that’s also true for the fermions – leptons and quarks). This is the pretty way to generate masses of spin-1 bosons. It is exploited by the Standard Model and the Higgs mechanism is the last big clear discovery of experimental particle physicists. So massive gauge bosons automatically point to the Higgs mechanism. But then there’s the “ugly” way – and I’ve always considered it an ugly way – to make spin-1 bosons massive, the Stückelberg mechanism. The mass term for the photons is rewritten as ${\mathcal L}_{\rm mass} = \frac 12 f_{\theta}^2 (\partial_\mu \theta - eA_\mu)^2.$ We added a new scalar field $$\theta$$ and preserved the gauge invariance $$A_\mu\to A_\mu +(1/e)\partial_\mu \alpha$$ but the new scalar field must also transform under it, $$\theta\to \theta+\alpha$$. Because we have the same “amount” of gauge invariance as we have in the massless photons, but there is one scalar field added, we end up with 3 physical polarizations of the massive particle instead of the massless photon’s two polarizations.
They're the ordinary three spatial or transverse polarizations of a massless vector particle, $$x,y,z$$. One may gauge-fix the Stückelberg action by setting $$\theta=0$$ which reduces the system to the Proca action for the "regular" massive spin-1 boson. But the advantage of the Stückelberg form is that you know how to write down the field's interactions with others in a gauge-invariant way. The mass of the (Swiss) Ernst Stückelberg's boson is $$m_A = ef_\theta$$. You may send it zero either by sending the gauge coupling $$e\to 0$$ or sending $$f_\theta\to 0$$ or some combination of both. Note that $$e\to 0$$ is something that the weak gravity conjecture labels dangerous and, under certain assumptions, forbidden. OK, this kind of a description of a massive spin-1 boson doesn't seem to be exploited by the Standard Model. It's ugly because the scalar field transforms in a suicidal way and the theory doesn't point to any non-Abelian gauge symmetry and other pretty things. In principle, people would always say that the photon that we know and love (and especially see) can in principle be massive, thanks to a Stückelberg mechanism. Well, I always protested when someone presented it as a real possibility. If a photon were massive, we still know that the mass must be much smaller than the inverse radius of the Earth – because we know that the magnetic fields around the Earth behave as those in the proper massless electromagnetism, not in some Proca-Yukawa way. And if the photon were massive but this light, it would at least amount to a new, unsubstantiated fine-tuning. It's more likely and we are encouraged to assume that the photon is exactly massless. Reece places this "negative sentiment" of mine into a potentially axiomatic if not provable framework. He argues that the limit of the very light photon is "very far in the configuration space" and in consistent theories of quantum gravity, the swampland reasoning implies the existence of some light enough particles (well, a whole tower of them) and/or other reasons why the effective field theory has to break at relatively low energy scales. Quantitatively, Reece claims that the effective field theory has to break above$\Lambda_{UV} = \sqrt{ \frac{m_\gamma M_{\rm Planck}}{e} }.$ Well, the theory would have to break down earlier, at the scale $$e^{1/3} M_{\rm Planck}$$, if the latter scale were even lower. At any rate, using the scale in the displayed equation above, we know that the photon mass is rather tiny (recall my comments about the geomagnetic field etc.) and the geometric average with the Planck mass sends us to an atomic physics scale where QED still seems OK, and that's how the massive photon hypothesis could be strictly refuted. We're not quite sure about any of these swampland-based principles but I tend to think that many of them, when properly formulated, are right and powerful. I find this picture intriguing. Lots of the constructions in effective field theory, like the Stückelberg masses, looked ugly and heuristically "less consistent" to the people who had as good a taste as your humble correspondent. Finally, we may be becoming able to clearly articulate the arguments showing that this "feeling of reduced consistency" is not just some emotion. When coupled to quantum gravity, these ugly scenarios could indeed be strictly forbidden. Quantum gravity and/or string theory could only allow the solutions that seemed "more pretty" than their ugly competitors. 
And you could stop issuing politically correct disclaimers such as “we are assuming that the photon mass is exactly zero; if it had a nonzero mass, we would have to revise the whole analysis”. Reece’s paper has no direct relationship to the de Sitter vacua and the cosmological controversies. But if it’s right or at least accepted, it clearly strengthens the Vafa Team in that dispute. There are really two different sketches of the general spirit of stringy research in the future. In Team Stanford’s plan, we’re satisfied with some Rube Goldberg-style construction, we don’t know which one (or which class) is the right one, we get used to it, and we train ourselves to be happy that we won’t learn anything new. On the other hand, in Team Vafa’s plan for the future, string theory research continues to make actual progress, trying to answer well-defined questions about the world around us that weren’t previously answered, such as “Can some massive bosons we will produce have Stückelberg masses? Is our photon allowed by string theory to be massive?” Truly curious physicists simply want new answers like that to be found. It may be impossible to answer some of these questions, especially if our vacuum is a relatively random one in a set of vacua that have different properties. But this possibility is not a proven fact and even if it is true for some properties, it is not true for all questions. We can’t ever accept the belief that all questions that haven’t been answered so far will remain unanswered forever! That would be a clear religious attitude that stops progress in science – and that could have stopped it at every moment in the past. Harvard’s Reece sketched some arguments that may prohibit Stückelberg masses in quantum gravity and you – I am primarily talking about you, dear reader in Palo Alto – had better think about it and decide whether he’s right or not. In some technical questions within the de Sitter controversy, I am uncertain, and so are others. But I am certain about certain principles of the scientific method. The real pleasure of science is to find ways to answer questions – to discriminate between possible answers. Many people in Northern California (which includes Palo Alto) may have adopted a non-discrimination approach to society and science (all people and answers and vacua are equally good) – but without discrimination, there is no science. August 30, 2018 ZapperZ - Physics and Physicists Where Do Elementary Particle Names Come From? In this video, Fermilab’s Don Lincoln focuses less on the physics and more on the history and classification of our current Standard Model of elementary particles. Zz. August 29, 2018 Lubos Motl - string vacua and pheno Team Stanford launches Operation Barbarossa against quintessence The disagreement between Team Stanford – which defends its paradigm with a large landscape of de Sitter solutions of string theory – and Team Vafa – which suggests that de Sitter spaces could be banned due to general stringy “swampland” principles (and which proposes quintessence as an alternative) – has been seemingly confined to short enough exchanges in the questions-and-answers periods of various talks. The arguments couldn’t have been properly analyzed and compared in such a limited context. In science, it is better to write them down. You may look at these arguments and equations for hours – and so can your antagonists – which usually increases the quality of the analyses.
Team Stanford clearly believes that the de Sitter vacua are here to stay, the criticisms are wrong, and quintessence has fatal problems. But can they back these opinions by convincing arguments? Today, in the list of new hep-th preprints, we received an avalanche of papers that say something about the de Sitter-vs-quintessence controversy in string theory. Using the [numbers] from the daily ordering of papers, we talk about the following papers: [3] De Sitter vs Quintessence in String Theory (by Cicoli+4, 49 pages) [4] A comment on effective field theories of flux vacua (by Kachru+Trivedi, 22 pages) [15] dS Supergravity from 10d (by Kallosh+Wrase, 18 pages) [16] de Sitter Vacua with a Nilpotent Superfield (by Kallosh+3, 6 pages) [18] The landscape, the swampland and the era of precision cosmology (by Akrami+3, 43 pages) I have omitted Tadashi Takayanagi’s paper(s) although one of them also talks about de Sitter spaces. First, concerning the affiliations: I include all of the collaborations in “Team Stanford” because they defend de Sitter solutions in string theory. But the first paper is really international (Bologna-Boulder-India-Cambridge), the second paper is Stanford-Bombay, the third paper is Stanford-Vienna, the fourth paper is Stanford-Brown-Leuven (Belgium), and the last paper is Stanford-Leiden (the Netherlands). Well, you may hopefully see that Stanford is overrepresented in these papers. Moreover, it seems to play the role of the “headquarters” of this campaign. And the first paper among the five, which is the only Stanford-free one, is arguably the least combative one, too. ;-) I think it’s fair to say that the stringy landscape picture of cosmology has been the greatest source of pride for Stanford’s theoretical physicists in the recent 15 years. At some human level, we could understand why they could be anxious if someone were basically saying that those 15 years revolved around a mistake or some sloppiness. But the pride doesn’t imply that those papers were right and safe, of course. Now, the number of papers – five – is rather large and the salvos had to be at least partially coordinated. Can the colleagues be expected to swallow a reasonably high percentage of the content? Wasn’t the number of papers chosen to be high to simply intimidate the opposition? To replace the quality of the arguments with the quantity of papers? I am not saying that. I am just asking. The high number of papers leads me to similar feelings as the proposed large number of de Sitter vacua. Less is sometimes more. Let’s talk about the separate papers. The middle paper, one by Kallosh and Wrase, claims that the anti-D3-branes in the KKLT “uplifting” procedure may be replaced by anti-D5, D6, D7, or D9-branes, too. That seems like a bold statement to me. If this were the case, why wouldn’t KKLT have noticed these four new possible dimensions right away? Fifteen years ago, I was surely asking the question why anti-D3-branes were used and not some branes of other dimensions and I was surely given a – not so convincing – answer implying that it had to be anti-D3-branes. If one says that 4 possible dimensionalities of the antibranes are just as OK, and one does so 15 years after the game-changing paper was released, it doesn’t exactly help the trustworthiness of either paper. I would probably choose to disbelieve the new Kallosh-Wrase paper.
One general problem with this paper (but, to some extent, with many other papers and perhaps with Team Stanford's papers in general) is that it seems to be a supergravity paper, not a full-blown stringy paper. And I think it's fair to describe both Kallosh and Wrase as supergravity experts, not string theory experts. Shouldn't a full-blown string theory expert validate claims that D-branes may be used in a certain new way? My answer is that he or she should. At their supergravity level of analysis, many things are possible and they may change the dimensionality of the uplifting antibrane. Great. But have they actually demonstrated that string theory allows such solutions, especially the new ones? I don't think that they have made the full-blown string analysis. Whatever is intrinsically stringy is treated in a sloppy way. For example, search for an "open string" in the Kallosh-Wrase paper. You will get three hits – and all of them just say that they have ignored the open string moduli. The more stringy a given concept or structure is, the more it is ignored in this paper. Again, I think that this criticism applies to most of the Team Stanford papers in general. But the whole point of the Vafa Team is to carefully study the fine, characteristically stringy features, phenomena, and constraints that are completely invisible at the level of supergravity – i.e. at the level of effective field theory. I have doubts about every particular, precise enough "swampland statement" made by Vafa or any disciple (including our "weak gravity conjecture" group). On the other hand, I have no doubts that it is extremely important to appreciate that string theory is not just supergravity and most of the particular low-energy supergravity-based effective field theories have no consistent quantum gravity or stringy completion. Kallosh and Wrase – and, as I said, much of the Team Stanford – seem to use string theory as the "ultimate justification of the 'anything goes' paradigm in supergravity". You may do anything you want in supergravity, add any string-inspired object, fluxes, branes, whatever you like, and then you use the term "string theory" as if it were the ultimate and universal justification of the validity of all such constructions. For them, string theory is just a knife that always unties your hands. Like with Elon Musk's promises, anything goes with string theory. OK, I am sure that this is just a wrong usage or interpretation of "string theory". String theory offers some new tools, new objects, new transitions, phenomena, and relationships between the objects. But string theory also – and maybe primarily – brings us new constraints, new bans, new universal, and particular predictions. For me, string theory may have produced new ingredients and possibilities but it's still primarily a theory that has a greater predictive power than the effective quantum field theory. It's clearly a sloppy, skewed way to use string theory if someone only uses string theory as the "source of many new objects and possibilities" – and not as a "book full of new constraints, universal laws and principles, and previously impossible predictions for particular situations". (There has been a community of "extremely applied" string theorists – whom I would surely call non-string theorists – who have used the term "string theory" as an excuse for really non-standard pieces of physics including the Lorentz symmetry violation and the violation of the equivalence principle. 
I believe that string theory is, on the contrary, a solid framework that bans or at least greatly discourages such experiments.) Because we are discussing the question whether the carefully and accurately studied string/M-theory allows de Sitter vacua, the KKLT construction, and similar things, another supergravity-level sloppy analysis just cannot possibly be relevant for the big question defining the Team Stanford vs Team Vafa controversy. To resolve this controversy, one simply needs a higher stringy precision of the arguments. The paper by Kallosh and Wrase doesn't have it and it's questionable whether they could make such an analysis in any other paper. OK, let's now look at the fourth paper among the five about a "nilpotent superfield". The new paper is a response to a 2017 paper Towards de Sitter from 10D by Moritz, Retolaza, and Westphal. OK, those authors have claimed that the KKLT didn't work because during the uplift, there was a stronger backreaction than previously thought and the compactification remains AdS and doesn't become dS. In the new paper, they claim that the nilpotent superfield as a SUSY breaking tool isn't compatible with the nonlinearly realized SUSY. But that doesn't really matter because even if one allows it, they do get a de Sitter, not anti de Sitter. I would believe that one of these groups must admit defeat soon enough because the claims and arguments look rather straightforward. Now, let's turn our attention to the new Kachru-Trivedi paper. It's written as a "positive paper" on effective field theories of the KKLT-style flux vacua. I haven't read the paper in its entirety but the abstract and the general organization of the paper does suggest that they're reviewing the thoughts that have been around from the KKLT. It seems to me that concerning the validity and existence of the effective field theories for the stringy situations, they always rely on field-theory-based, e.g. Wilsonian arguments. I am not persuaded that this is good enough. String theory may invalidate the effective field theories by making sure that an energy-$$E$$ effective theory isn't a local quantum field theory at all. What really bothers me is the superficial approach of Kachru and Trivedi to the arguments given by the Vafa Team: A recent paper [46 Obied Ooguri Spodyneiko Vafa], motivated largely by no-go theorems with limited applicability to a partial set of classical ingredients, made a provocative conjecture implying that quantum gravity does not support de Sitter solutions. [Footnote about two previous papers saying similar things.] Our analysis – and more importantly, effective field theory applied to the full set of ingredients available in string theory – is in stark conflict with this conjecture. This leads us to believe that the conjecture is false. Do Kachru and Trivedi consider this non-technical, judgmental paragraph to be enough to deal with the proposed alternative picture? OK, let's rephrase what they are saying: We may repeat what we said 15 years ago. We may pay no attention whatsoever to the detailed arguments given by the Vafa Team. We don't need to be impartially interested in the validity of the proposed new principles, inequalities, and no-go theorems. We just don't want to learn any and we prefer to believe that no such new insights exist. 
Instead, it's enough to dismiss all these papers with a simple slogan, with slurs such as "provocative" that make the Vafa Team look limited while we look unlimited, repeat that everything we have ever claimed to be true must be true, and that's enough to "prove" that we are right and they are wrong. I am sorry but it doesn't seem enough to me. The claim that the Vafa Team's statements are limited to a "partial set of classical ingredients" while Team Stanford is better because effective field theory is "applied to all ingredients available in string theory" seems utterly demagogic to me. KKLT and followers have used lots of ingredients from string theory but there's no proof that they are "all" ingredients of string theory. New ingredients kept on emerging and we still can't prove that we know "all of them" because we don't have a universal definition of string theory. Moreover, the high number of such ingredients makes it more likely, and not less likely, that one of them breaks down and invalidates the KKLT construction as a whole. So they have many, not "all", ingredients of string theory and this fact makes their construction more vulnerable, not less so! And the very conclusion that "Vafa seems to disagree with something we wrote so he must be wrong" simply looks childish. This is not a rational way to argue. Vafa might say exactly the same thing – he doesn't – but none of these two stubborn propositions would imply a convincing argument in one way or another. Finally, we have the first paper by Cicoli et al.; and the last, fifth paper by Akrami et al. These two papers explicitly claim to discuss the "de Sitter versus quintessence" controversy in string theory. The Akrami paper seems to have one proposed counterargument against quintessence that Akrami et al. are proud about and that they want to be carefully read by the reader. What is it? They pick the constant $$c$$ from the Vafa Team inequality, claim that it should be equal to one (or being of order one, they're not sure), and then they claim that the cosmological observations rule out $$c\gt 1$$ at the 3-sigma i.e. 99.7% level. I am sorry but the right value of $$c$$ isn't really known, at least not too reliably, so they can't determine the statistical significance well, either. The right $$c$$ could be $$1/3$$ and there would be no exclusion at all. It seems to me that this overemphasis on the $$c\sim 1$$ "prediction" and its weak exclusion by the observational data is their strongest argument. If that's so, I find it extremely weak. Even if the calculation of the 3-sigma confidence level were solid, which it doesn't seem to be at all, it is still just a 3-sigma confidence. A few years ago, the LHC diphoton bump was "discovered" at four sigma and it was fake. A potential universal new principle of string theory is a different caliber. In my list of priorities, if I become sufficiently certain about a new universal principle of physics, it may beat even 5-sigma deviations from the predictions. Finally, the first, Stanford-free paper is less arrogant than the Stanfordful papers. They prefer the de Sitter, KKLT-style models because they look concrete, there seems to be a calculational control, and it's apparently getting better with time. Quintessence is more "challenging" and requires more fine-tuning, we read. Well, Vafa et al. disagree with the second point, probably both points. At any rate, they're potentially subjective. 
You can't use your feelings that something is "challenging" – without any particular argument or quantification of the "challenge" – as a persuasive argument against an alternative theory. So I am afraid that this Cicoli et al. paper is going to be too vague when arguing against the alternative picture based on the new general principles proposed by the Vafa Team. One problem, as I have mentioned, is that these two paradigms are very different from each other. They have completely different advantages, very different numbers of required solutions or corners of the stringy configuration space, different importance of the precision needed to analyze things, and so on. Depending on one's philosophy, the prior probabilities assigned to these paradigms may be very different. The probability ratio may very well be more extreme than 300-to-1 in either direction – which makes some 3-sigma empirical arguments weaker than weak tea. In the end, it should be possible to resolve the controversy. But one simply needs to study the purely stringy effects in these compactifications (or would-be vacua) more accurately or more reliably than ever before. This increased control over the stringy effects or the increased reliability of the stringy arguments is probably necessary for any progress in resolving this open question. I haven't read the papers in their entirety but I am afraid it is obvious that they haven't really made any progress in resolving the actual disagreement. They are basically repeating the things that were done before and that's not a good path to progress. And that's the memo. August 24, 2018 Clifford V. Johnson - Asymptotia Science Friday Book Club Wrap! Don't forget, today live on Science Friday we (that's SciFri presenter Ira Flatow, producer Christie Taylor, astrophysicist Priyamvada Natarajan, and myself) will be talking about Hawking's "A Brief History of Time" once more, and also discussing some of the physics discoveries that have happened since he wrote that book. We'll be taking (I think) callers' questions too! Also we've made recommendations for further reading to learn more about the topics discussed in Hawking's book. -cvj (P.S. The picture above was one I took when we recorded for the launch of the book club, back in July. I used the studios at Aspen Public Radio.) The post Science Friday Book Club Wrap! appeared first on Asymptotia. John Baez - Azimuth Compositionality – Now Open For Submissions Our new journal Compositionality is now open for submissions! It’s an open-access journal for research using compositional ideas, most notably of a category-theoretic origin, in any discipline. Topics may concern foundational structures, an organizing principle, or a powerful tool. Example areas include but are not limited to: computation, logic, physics, chemistry, engineering, linguistics, and cognition. Compositionality is free of cost for both readers and authors. CALL FOR PAPERS We invite you to submit a manuscript for publication in the first issue of Compositionality (ISSN: 2631-4444), a new open-access journal for research using compositional ideas, most notably of a category-theoretic origin, in any discipline. To submit a manuscript, please visit http://www.compositionality-journal.org/for-authors/. SCOPE Compositionality refers to complex things that can be built by sticking together simpler parts. We welcome papers using compositional ideas, most notably of a category-theoretic origin, in any discipline.
This may concern foundational structures, an organising principle, a powerful tool, or an important application. Example areas include but are not limited to: computation, logic, physics, chemistry, engineering, linguistics, and cognition. Related conferences and workshops that fall within the scope of Compositionality include the Symposium on Compositional Structures (SYCO), Categories, Logic and Physics (CLP), String Diagrams in Computation, Logic and Physics (STRING), Applied Category Theory (ACT), Algebra and Coalgebra in Computer Science (CALCO), and the Simons Workshop on Compositionality. SUBMISSION AND PUBLICATION Submissions should be original contributions of previously unpublished work, and may be of any length. Work previously published in conferences and workshops must be significantly expanded or contain significant new results to be accepted. There is no deadline for submission. There is no processing charge for accepted publications; Compositionality is free to read and free to publish in. More details can be found in our editorial policies at http://www.compositionality-journal.org/editorial-policies/.
STEERING BOARD
John Baez, University of California, Riverside, USA
Bob Coecke, University of Oxford, UK
Kathryn Hess, EPFL, Switzerland
Steve Lack, Macquarie University, Australia
Valeria de Paiva, Nuance Communications, USA
EDITORIAL BOARD
Corina Cirstea, University of Southampton, UK
Ross Duncan, University of Strathclyde, UK
Andree Ehresmann, University of Picardie Jules Verne, France
Tobias Fritz, Max Planck Institute, Germany
Neil Ghani, University of Strathclyde, UK
Dan Ghica, University of Birmingham, UK
Jeremy Gibbons, University of Oxford, UK
Nick Gurski, Case Western Reserve University, USA
Helle Hvid Hansen, Delft University of Technology, Netherlands
Chris Heunen, University of Edinburgh, UK
Aleks Kissinger, Radboud University, Netherlands
Joachim Kock, Universitat Autonoma de Barcelona, Spain
Martha Lewis, University of Amsterdam, Netherlands
Samuel Mimram, Ecole Polytechnique, France
Simona Paoli, University of Leicester, UK
Dusko Pavlovic, University of Hawaii, USA
Christian Retore, Universite de Montpellier, France
Peter Selinger, Dalhousie University, Canada
Pawel Sobocinski, University of Southampton, UK
David Spivak, MIT, USA
Jamie Vicary, University of Birmingham and University of Oxford, UK
Simon Willerton, University of Sheffield, UK
Sincerely, The Editorial Board of Compositionality
August 20, 2018 Clifford V. Johnson - Asymptotia And So it Begins… It’s that time of year again! The new academic year’s classes begin here at USC today. I’m already snowed under with tasks I must get done, several with hard deadlines, and so am feeling a bit bogged down already, I must admit. Usually I wander around the campus a bit and soak up the buzz of the new year that you can pick up in all the campus activity swarming around. But instead I sit at my desk, prepping my syllabus, planning important dates, adjusting my calendar, exchanging emails, (updating my blog), and so forth. I hope that after class I can do the wander. What will I be teaching this semester? The second part of graduate electromagnetism, as I often do. Yes, in a couple of hours, I’ll be again (following Maxwell) pointing out a flaw in one of the equations of electromagnetism (Ampere’s), introducing the displacement current term, and then presenting the full completed set of the equations – Maxwell’s equations, one of the most beautiful sets of equations ever to have been written down.
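For reference – and this is just the standard textbook form in SI units, a reminder rather than anything new – the completed set reads$\nabla\cdot \vec E = \frac{\rho}{\varepsilon_0},\qquad \nabla\cdot \vec B = 0,\qquad \nabla\times \vec E = -\frac{\partial \vec B}{\partial t},\qquad \nabla\times \vec B = \mu_0 \vec J + \mu_0\varepsilon_0 \frac{\partial \vec E}{\partial t},$where the final $$\mu_0\varepsilon_0\,\partial\vec E/\partial t$$ piece is precisely the displacement-current term that repairs Ampère's law.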
(And if you wonder about the use of the word beautiful here, I can happily refer you to look at The Dialogues, starting at page 15, for a conversation about that very issue…!) Speaking of books, if you’ve been part of the Science Friday Summer reading adventure, reading Hawking’s A Brief History of Time, you should know that I’ll be back on the show on Friday talking with Priyamvada Natarajan, producer Christie Taylor, and presenter Ira Flatow about the book one more time. There may also be an opportunity to phone in with questions! And do look at their website for some of the extra material they’ve been posting about the book, including extracts from last week’s live tweet Q&A. Anyway, I’d better get back to prepping my class. I’ll be posting more about the semester (and many other matters) soon, so do come back. The post And So it Begins… appeared first on Asymptotia. August 18, 2018 Lubos Motl - string vacua and pheno Quintessence is a form of dark energy Tristan asked me what I thought about Natalie Wolchover's new Quanta Magazine article, Dark Energy May Be Incompatible With String Theory, exactly when I wanted to write something. Well, first, I must say that I already wrote a text about this dispute, Vafa, quintessence vs Gross, Silverstein, in late June 2018. You may want to reread the text because the comments below may be considered "just an appendix" to that older text. Since that time, I exchanged some friendly e-mails with Cumrun Vafa. I am obviously more skeptical towards their ideas than they are but I think that I have encountered some excessive certainty from some of their main critics. Wolchover's article sketches some basic points about this rather important disagreement about cosmology among string theorists. But there are some very unfortunate details. The first unfortunate detail appears in the title. Wolchover actually says that "dark energy might be incompatible with string theory". That's the statement she seems to attribute to Cumrun Vafa and co-authors. But that misleading formulation is really invalid – it's not what Cumrun is saying. Here, the misunderstanding may be blamed on some sloppy "translation" of the technical terms that has become standard in the pop science press – and the excessively generalized usage of some jargon. OK, what's going on? First of all, the Universe is expanding, isn't it? We're talking about cosmology, the big bang theory (which I don't capitalize – to make sure that I am not talking about the sitcom), and the expansion of the Universe was already seen in the 1920s although people only became confident about it some 50 years ago. In the late 1990s, it was observed that the expansion wasn't slowing down, as widely expected, but speeding up. The accelerated expansion may be explained by dark energy. Dark energy is anything that is present everywhere in the vacuum and that tends to accelerate the expansion of the Universe. Dark energy, like dark matter, is invisible to optical telescopes (that's why both of them are called dark). But unlike dark matter which has (like all matter or dust) the pressure $$p=0$$, the dark energy has nonzero pressure, namely $$p\lt 0$$ or $$p\approx -\rho$$ where $$\rho$$ is the energy density. That's how dark energy and dark matter differ; dark energy's negative pressure is needed for its ability to accelerate the expansion of the Universe.
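A quick way to see why a negative pressure is exactly what accelerated expansion requires – this is just the standard second Friedmann (acceleration) equation of FRW cosmology, in $$c=1$$ units – is$\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\,(\rho + 3p),$where $$a(t)$$ is the scale factor of the Universe. Any component with $$p\lt -\rho/3$$ makes the right-hand side positive and therefore accelerates the expansion; dark energy with $$p\approx -\rho$$ does so comfortably, and a strict cosmological constant is exactly the special case $$p=-\rho$$.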
Dark energy is supposed to be a rather general, umbrella term that may be represented by several known, slightly different theoretical concepts described by equations of physics. So far, the by far most widespread and "canonical" or "minimalist" kind of dark energy was the cosmological constant. That's really a number that is independent of space and especially time (it's why it's called a constant) which Einstein added to his original equations of the general theory of relativity. Einstein's original goal was to allow the size of the Universe to be stable in time – because his equations seemed to imply that the Universe's size should evolve, much like the height of a freely falling apple. It just can't sit at a constant value – just like the apple usually doesn't sit in the air in the middle of the room. But the expansion of the Universe was discovered. Einstein could have predicted it because it follows from the simplest form of Einstein's equations, as I said. That could have earned him another Nobel prize when the expansion was seen by Hubble. (Well, Einstein's stabilization by the cosmological constant term wouldn't really work even theoretically, anyway. The balance would be unstable, tending to turn into an expansion or an implosion, like a pencil standing on its tip. Any tiny perturbation would be enough for this instability to grow exponentially.) That's probably the main reason why Einstein labeled the introduction of the cosmological constant term "the greatest blunder of his life". Well, it wasn't the greatest blunder of his life: the denial of quantum mechanics and state-of-the-art physics in general in the last 30 years of his life was almost certainly a greater blunder. In the late 1990s, the Universe's expansion was seen to accelerate which is why it seemed obvious that Einstein's blunder wasn't a blunder at all, let alone the worst one: the cosmological constant term seems to be there and it's responsible for the acceleration of the Universe. Suddenly, Einstein's cosmological term (with a different numerical value than Einstein needed – but one that is of the same order) seemed like a perfect, minimalistic explanation of the accelerated expansion. Recall that Einstein's equations say$G_{\mu\nu} +\Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}.$ Note that even in the complicated SI units, there is no $$\hbar$$ here – Einstein's general relativity is a classical theory that doesn't depend on quantum mechanics at all. Here, $G_{\mu\nu} = R_{\mu\nu} - \frac{1}{2} R g_{\mu\nu}$ is the Einstein curvature tensor, constructed from the Ricci tensor and the Ricci scalar $$R$$. It's some function of the metric and its first and especially second partial derivatives in the spacetime. On the right-hand side of Einstein's equations, $$T_{\mu\nu}$$ is the stress-energy tensor that knows about the sources, the density of mass/energy and momentum and their flow. The $$\Lambda g_{\mu\nu}$$, a simple term that adds an additional mixture of the metric tensor to Einstein's equations, is the cosmological constant term. It naturally reappeared in the late 1990s. It's a rather economical theory. The term doesn't have to be there but in some sense, it's even "simpler" than Einstein's tensor, so why should it be absent? And it seems to explain the accelerated expansion, so we need it. The theory is really natural which is why the standard cosmological model was the $$\Lambda$$CDM model, i.e.
a big bang theory with the cold dark matter (CDM) and the cosmological constant term $$\Lambda$$. What about string theory? String theory really predicts gravity. You may derive Einstein's equations, including the equivalence principle, from the vibrating strings. Einstein's theory of gravity is a prediction of string theory, which is still one of the main reasons to be confident that string theory is on the right track to find a deeper or final theory in physics, to say the least. Aside from gravitons and gravity (and Einstein's equations that may be derived from string theory for this force), string theory also predicts gauge fields and matter fields such as leptons and quarks. They have their (Dirac, Maxwell...) equations and their stress-energy tensors also enter as terms in $$T_{\mu\nu}$$ on the right-hand side of Einstein's equations. String theory demonstrably predicts Einstein's equations as the low-energy limit for the massless, spin-two field (the graviton field) that unavoidably arises as a low-lying excitation of a vibrating string. To some extent, this appearance of Einstein's equations is guaranteed by consistency of the theory (or by the relevant gauge invariance, namely the diffeomorphisms) – and string theory is consistent (which is a highly unusual, and probably unprecedented, virtue of string theory among quantum mechanical theories dealing with massless spin-two fields). Does string theory also predict the cosmological constant term, one that Einstein originally included in the equations? At this level, the answer is unquestionably Yes and Cumrun Vafa and pals surely agree. To say the least, string theory predicts lots of vacua with a negative value of the cosmological constant, the anti de Sitter (AdS) vacua. In fact, those are the vacua where the holographic principle of quantum gravity may be shown rather rigorously – holography takes the form of Maldacena's AdS/CFT correspondence. There are lots of Minkowski, $$\Lambda=0$$, vacua in string theory. And there are also lots of AdS, $$\Lambda\lt 0$$, vacua in string theory. I think that the evidence is clear and no one who is considered a real string theorist by most string theorists disputes the statement that both groups of vacua, flat Minkowski vacua and AdS vacua, are predicted by string theory. The real open question is whether string theory allows the existence of $$\Lambda \gt 0$$ (de Sitter or dS) vacua. Those seem to be needed to describe the accelerated expansion of the Universe in terms of the cosmological constant. After 2000, the widespread view – if counted by the number of heads or number of papers – was that string theory allowed a positive cosmological constant. Even though I still find de Sitter vacua in string theory plausible, I believe that it's fair to say that the frantic efforts to spread this de Sitter view – and write papers about de Sitter in string theory – may be described as a sign of group think in the community. There have always been reasons to doubt whether string theory allows de Sitter vacua at all. At the end of the last millennium, Maldacena and Nunez wrote a paper with a no-go theorem. It was mostly based on supergravity, a supersymmetric extension of Einstein's general relativity and a low-energy limit of superstring theories, but people generally believed that this approximation of string theory was valid in the context of the proof.
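To convey at least the flavor of such no-go theorems – what follows is my schematic paraphrase, not the precise Maldacena-Nunez statement, and the exponents and conventions vary – consider a warped compactification with the metric $$ds^2 = e^{2A(y)}\hat g_{\mu\nu}dx^\mu dx^\nu + g_{mn}(y)dy^m dy^n$$. The higher-dimensional Einstein equations may be massaged into a relation of the schematic form$\nabla_y^2\, e^{2A} \sim e^{2A}\,\hat R_4 + (\text{terms that are non-negative for classical two-derivative supergravity sources}),$and because the integral of a Laplacian over a smooth compact internal manifold vanishes, the four-dimensional curvature is forced to obey $$\hat R_4\leq 0$$ – i.e. no de Sitter. The KKLT-style constructions evade this conclusion exactly through ingredients that violate the positivity assumption, notably orientifold planes with their negative tension, together with quantum and stringy corrections.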
Sociologically, you may also want to know that in the 1990s, Edward Witten was "predicting" that the cosmological constant had to be exactly zero (and a symmetry-like principle would be found that implies the vanishing value). He was motivated by the experience with string theory. Even before Maldacena and Nunez and lots of similar work, it looked very hard to establish de Sitter, $$\Lambda \gt 0$$ vacua in string theory. However, some of these problems could have been – and were – considered just technical difficulties. Why? Because if the cosmological constant is positive, you don't have any time-like Killing vectors and there can be no unbroken spacetime supersymmetry. Controlled stringy calculations only work when the spacetime supersymmetry is present (and guarantees lots of cancellations etc.) which is why people were willing to think that the difficulties in finding de Sitter vacua in string theory were only technical difficulties – caused by the hard calculations in the case of a broken supersymmetry. However, aside from Maldacena-Nunez, we got additional reasons to think that string theory might prohibit de Sitter vacua in general. Cumrun Vafa's Swampland – the term for an extension of the (nice stringy) landscape that also includes effective field theories that string theory wouldn't touch, not even with a long stick – implies various general (sometimes qualitative, sometimes quantitative) predictions of string theory that hold in all the stringy vacua, despite their high number. Along with his friend Donald Trump, Cumrun Vafa has always wanted to drain the swamp. ;-) The Swampland program has produced several, more or less established, general laws of string theory – that may also be considered consequences of a consistent theory of quantum gravity. Wolchover mentions that the most well-established example of a Swampland law is our "weak gravity conjecture". Gravity (among elementary particles) is much weaker than other forces in our Universe – and in fact, it probably has to be the case in all Universes that are consistent at all. The Swampland business contains many other laws like that, some of them are more often challenged than the weak gravity conjecture. Cumrun Vafa and his co-authors have presented an incomplete sketch of a proof that de Sitter vacua could be banned in string theory for Swampland reasons – for similar general reasons that guarantee that gravity is the weakest force. This assertion is unsurprisingly disputed by lots of people, especially people around Stanford, because Stanford University (with Linde, Kallosh, Susskind, Kachru, Silverstein, and many others) has been the hotbed of the "standard stringy cosmology" after 2000. They wrote lots of papers about cosmology, starting from the KKLT paper, and the most famous ones have thousands of citations. At some level, authors of such papers may be tempted to think that their papers just can't be wrong. But even the main claims of papers with thousands of citations ultimately may be wrong, of course. Sadly, I must say that some of this Stanford environment likes to use group think – and arguments about authorities and number of papers – that resembles the "consensus science" about global warming. Sorry, ladies and gentlemen, but that's not how science works. Doubts about the KKLT construction are reasonable because the KKLT and similar papers still build on certain assumptions and approximations.
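Before returning to those doubts, it may help to have the conjectured inequalities explicitly on the table (in reduced Planck units with $$M_{Pl}=1$$; the precise coefficients vary from paper to paper). The weak gravity conjecture roughly demands that a consistent $$U(1)$$ gauge theory coupled to quantum gravity contains a charged state with $$q\gtrsim m$$, so that gravity is the weaker force acting on it. The de Sitter conjecture of Obied, Ooguri, Spodyneiko, and Vafa analogously postulates that every scalar potential $$V$$ that string theory can produce obeys$|\nabla V| \geq c\, V$for some universal positive constant $$c$$ of order one – this is the inequality, and the order-one constant $$c$$, that the Akrami et al. discussion above revolved around. A de Sitter vacuum would be a point with $$V\gt 0$$ and $$\nabla V = 0$$, which violates the inequality instantly.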
I am confident it is correct to say that the authors of some of the critical papers questioning the KKLT (especially the final, de Sitter "uplift" of some intermediate AdS vacua, an uplift that is achieved by the addition of some anti-D3-branes) are competent physicists – at least "basically indistinguishable" in competence from the Stanford folks. See e.g. Thomas Van Riet's TRF guest blog from November 2014 (time is fast, 1 year per year). Cumrun Vafa et al. don't want to say that string theory has been ruled out. Instead, they say that in string theory, the observed dark energy is represented by quintessence which is just a form of dark energy (read the first sentence of the Wikipedia article I just linked to) – and that's why Wolchover's title that "dark energy is incompatible with string theory" is so misleading. I think that the previous sentence is enough for everyone to understand the main unfortunate terminological blunder in Wolchover's article. Cumrun and pals say that dark energy is described by quintessence, a form of dark energy, in string theory. They don't say that dark energy is impossible in string theory. Wolchover's blunder may be blamed upon the habit of considering the phrase "dark energy" to be the pop science equivalent of the "cosmological constant". Well, they are not quite equivalent and to understand the proposals by Cumrun Vafa et al., the difference between the terms "dark energy" and "cosmological constant" is absolutely paramount. Quintessence is a word that sounds philosophical if not spiritual but in cosmology, it's just a fancy word for an ordinary time-dependent generalization of the cosmological constant – one that results from the potential energy of a new, inflaton-like scalar field. String theory often predicts many scalar fields, some of which may play the role of the inflaton; others – similar ones – may be the quintessence that fills our Universe with the dark energy which is responsible for the accelerated expansion. Now, the disagreement between "Team Vafa" and "Team Stanford" may be described as follows: Team Stanford uses the seemingly simplest description, one using Einstein's old cosmological constant. It's really constant, string theory allows it, and elaborate – but not quite exact – constructions with antibranes exist in the literature. They use lots of sophisticated equations, do many details very accurately and technically, but the question whether these de Sitter vacua exist remains uncertain because approximations are still used. Team Stanford ignores the uncertainty and sometimes intimidates other people by sociology – by a large number of authors who have joined this direction. The cosmological constant may be positive, they believe, and there are very many ways, like the notorious number $$10^{500}$$, to obtain de Sitter vacua in string theory. We may live in one of them. Because of the high number, the predictive power of string theory may be reduced and some form of the multiverse or even the anthropic principle may be relevant. Team Vafa uses the next-to-simplest description of dark energy, quintessence, which is a scalar field. This scalar field evolves and the potential normally needs to be fine-tuned even more than the cosmological constant.
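For concreteness – this is just the standard quintessence lore, nothing specifically stringy – a spatially homogeneous scalar field $$\phi$$ with a potential $$V(\phi)$$ acts as a fluid with the equation of state$w = \frac{p}{\rho} = \frac{\frac{1}{2}\dot\phi^2 - V(\phi)}{\frac{1}{2}\dot\phi^2 + V(\phi)},$so when the potential energy dominates over the kinetic energy, $$w\to -1$$ and the field mimics a cosmological constant. The price is that the field must still be slowly rolling today, which requires an absurdly tiny effective mass, roughly $$m_\phi\sim\sqrt{|V''|}\lesssim H_0\sim 10^{-33}\,{\rm eV}$$.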
But Team Vafa says that due to some characteristically stringy relationships, the new, added fine-tuning is actually not independent of the old one, the tuning of the apparently tiny cosmological constant, so from this viewpoint, their picture might actually be as bad (or as good) as the normal cosmological constant. The very large hypothetical landscape may be an illusion – all these constructions may be inconsistent and therefore non-existent, due to subtle technical bugs overlooked by the approximations or, equivalently, due to very general Swampland-like principles that may be used to kill all these hypothetical vacua simultaneously. Team Vafa doesn't have too many fancy mathematical calculations of the potential energy and it doesn't have a very large landscape. So in this sense, Team Vafa looks less technical and more speculative than Team Stanford. But one may argue that Team Stanford's fancy equations are just a way to intimidate the readers and they don't really increase the probability that the stringy de Sitter vacua exist. These are just two very different sketches of how dark energy is actually incorporated in string theory. They differ in some basic statements, in the expectation of how technical an adequate paper answering the question should be, and in many other respects. I think we can't be certain which of them, if any, is right – even though Team Stanford would be tempted to disagree. But their constructions simply aren't watertight and they look arbitrary or contrived from many points of view. And yes, as you could have figured out, I do have some feeling that the way of argumentation by Team Stanford has always been similar to the "consensus science" behind the global warming hysteria. Occasional references to the "consensus" and a large number of papers and authors – and equations that seem complicated but, if you think about their implications, don't really settle the basic question (whether the de Sitter vacua – or the dangerous global warming – exist at all). Team Vafa proposes a new possibility and I surely believe it deserves to be considered. It's "controversial" in the sense that Team Stanford is upset, especially some of the members such as E.S. But I dislike Wolchover's subtitle: A controversial new paper argues that universes with dark energy profiles like ours do not exist in the “landscape” of universes allowed by string theory. What's the point of labeling it "controversial"? It may still be right. Strictly speaking, the KKLT paper and the KKLT-based constructions by Team Stanford are controversial as well. These a priori labels just don't belong in science reporting, I think – they belong in the reporting on pseudosciences such as the global warming hysteria. Reasonable people just don't give a damn about these labels. They care about the evidence. Cumrun Vafa is a top physicist, he and pals have proposed some ideas and presented some evidence, and this evidence hasn't really been killed by solid counter-evidence as of now. Incidentally, after less than two months, Team Vafa already has 23+19 citations. So it doesn't look like some self-evidently wrong crackpot papers, like papers claiming that the Standard Model is all about octonions. I was also surprised by another adjective used by Wolchover: In the meantime, string theorists, who normally form a united front, will disagree about the conjecture. Do they form a united front? What is it supposed to mean and what's the evidence that the statement is correct, whatever it means?
Are all string theorists members of Marine Le Pen's National Front? Boris Pioline could be one but I think that even he is not. ;-) String theorists are theoretical physicists at the current cutting edge of fundamental physics and they do the work as well as they can. So when something looks clearly proven by some papers, they agree about it. When something looks uncertain, they are individually uncertain – and/or they disagree about the open questions. When a possible new loophole is presented that challenges some older lore or no-go not-yet-theorems, people start to think about the new possibilities and usually have different views about them, at least for a while. What is Wolchover's "front" supposed to be "united" for or against? String theorists are united in the sense that they take string theory seriously. Well, that's a tautology. They wouldn't be called string theorists otherwise. String theory also implies something so they of course take these implications – as far as they're clearly there – seriously. But is there any valid, non-tautological content in Wolchover's statement about the "united front"? It's complete nonsense to say that string theorists are "more united as a front" than folks in any other typical scientific discipline that does things properly. String theorists have disagreed about numerous things that didn't seem settled to some of them. I could list many technical examples but one recent example is very conceptual – the firewall by the late Joe Polchinski and his team. There were sophisticated constructions and equations in the papers by Polchinski et al. but the existence of the firewalls obviously remained disputed, and I think that almost all string theorists think that firewalls don't exist in any useful operational sense. But they followed the papers by Polchinski et al. to some extent. Polchinski and others weren't excommunicated for heresy in any sense – despite the fact that the statement "the black holes don't have any interior at all" would unquestionably be a radical change of the lore. This disagreement about the representation of dark energy within string theory is as deep and far-reaching as the firewall wars. Again, I still assign a probability above 50% to the basic picture of Team Stanford which leads to a cosmological constant from string theory. But I don't think it has been proven (I have issued a similar warning about $$P\neq NP$$ and other things). I have communicated with many apparently smart and technically powerful folks who had sensible arguments against the validity of the basic conclusions of the KKLT. I am extremely nervous about the apparent efforts of some Stanford folks to "ban any disagreement" about the KKLT-based constructions, a ban that would be "justified" by the existence of many papers and their mutual citations. That's not how actual science may progress for a very long time. If folks like Vafa have doubts about de Sitter vacua in string theory and all related constructions, and they propose quintessence models that could be more natural than once believed (despite the simple reasons why quintessence would have been dismissed by string theorists, including myself, just a few years ago), they must have the freedom – not just formally, but also in practice – to pursue these alternative scenarios, regardless of the number of papers in the literature that take KKLT for granted!
Only when the plausibility and attractiveness of these ideas really disappears according to the body of experts could it make sense to suggest that Vafa seems to be losing. These two pictures offer very different sketches of how the real world is realized within string theory. Indeed, the string phenomenological communities that would work on these two possibilities could easily evolve into "two separated species" that can't talk to each other usefully (although both of them would still be trained with the help of the same textbooks up to a basic textbook of string theory). But as long as we're uncertain, this splitting of the research among several different possibilities is simply the right thing that should happen. Putting all the eggs in one basket when we're not quite sure about the right basket would simply be wrong. Wolchover also mentions the work of Dr Wrase. I haven't read that so I won't comment. But I will comment on some remarks by Matt Kleban (trained at Team Stanford, now NYU) such as Maybe string theory doesn’t describe the world. [Maybe] dark energy has falsified it. Well, that's nice. String theory is surely falsifiable and such things might happen which would be a big event. But I think it's obvious that Kleban isn't really taking the side of the string theory critics. Instead this statement – that dark energy may have falsified string theory – is a subtle demagogic attack against Team Vafa which is whom he actually cares about (he doesn't care about Šm*its). Effectively, Matt is trying to compare Vafa et al. to Šmoits. If the dark energy in string theory doesn't work in the Stanford way, I will scream and cry, Matt says, and you will give it up. Matt knows that the real people whom he cares about wouldn't consider string theory ruled out for similar reasons so he's effectively saying that they shouldn't buy Team Vafa's claims, either. Sorry, Matt, but that's demagogy. Team Vafa doesn't really claim that they have falsified string theory. There is a genuine new possibility whether you like to admit it or not. Also, Matt expressed his attacks against Team Vafa using a different verbal construction: He stresses that the new swampland conjecture is highly speculative and an example of “lamppost reasoning,”... Cute, Matt. I always love when people complain about lamppost reasoning. I've had funny discussions with both Brian Greene and Lisa Randall about this phrase before they published their popular books. Lisa felt very entertained when I said it was actually rational to spend more time looking under the lamppost. But it is rational. I must explain the proverb here. There exists some mathematical set of possibilities in theoretical physics or string theory but only some of them have been discovered or understood, OK? So we call those things that have been understood or studied (intensely enough) "the insights under the lamppost". Now, the "lamppost reasoning" is a criticism used by some people who accuse others of a specific kind of bias. What is this sin or bias supposed to be? Well, the sin is that these people only search for their lost keys under the lamppost. Now, this is supposed to be funny and immediately mock the perpetrators of the "sin" and kill their arguments. If you lose your keys somewhere, it's a matter of luck whether the keys are located under a lamppost, where you could see them, or elsewhere, where you couldn't. So obviously, you should look for the keys everywhere, including places that aren't illuminated by the lamp, Kleban and Randall say, among others.
But there's a problem with this recommendation. You can't find the keys in the dark too easily – because you don't see anything there. Perhaps if you sweep the whole surface with your fingers. But it's harder and the dark area may be very large. If you want to increase the probability that you find something, you should appreciate the superiority of vision and primarily look at the places where you can see something! You aren't guaranteed to find the keys but your probability of finding them per unit time may be higher because you can see there. And there might even exist reasons why the keys are even more likely to be under the lamppost. When you were losing them, you probably preferred to walk in places where you could see, too. You may have lost them while checking the content of your wallet, and you were more likely to do it under the lamppost. So that's why you were more likely to be under the lamppost at that time, too! Similarly, when God was creating the world, assuming Her similar mathematical skills, She was likely to start with discovering things that were relatively easy for us to discover and clarify, too. So She was more likely to drop our Universe under the lamppost, too, and that's why it's right to focus our attention there, too. For a researcher, it's damn reasonable to focus on things that are easier to understand properly. The two situations (keys, physics) aren't quite analogous but they're close enough. My claim is even clearer in the metaphorical "lamppost" of physics. If you want to settle a question, such as the existence of de Sitter vacua, you simply have to build primarily on the concepts – both general principles and the particular constructions – that have been understood well enough. You can't build on the things that are completely unknown. And if you build on things that are only known vaguely or with a lot of uncertainty, you can be misled easily! So in some sense, I am saying that you should look for your keys under the lamppost, and then increase the sensitivity of your retinas and extend the range you have control over. That's how knowledge normally grows – but there always exist regions in the space of ideas and facts that aren't understood yet. The suggestion that claims in physics may be supported by constructions that are either completely unknown or badly understood is just ludicrous. Such suggestions may sound convincing to their advocates because the keys may be anywhere – the keys may be in the dark. But in the dark of ignorance, science can't be applied and we must appreciate that all our scientific conclusions may only be based on the things that have been illuminated – all of our legitimate science is built out of the insights about the vicinity of the lamppost. Whoever claims to have knowledge derived from the dark is a charlatan – sorry but it's true, Lisa and Matt! In this particular case, it's totally sensible for Team Vafa to evaluate the experience with the known constructions of the vacua and conclude that it seems rather convincing that no de Sitter vacua exist in string theory and the existing counterexamples are fishy and likely to be inconsistent. This evidence is circumstantial because it builds on the "set of constructions" that have been studied or illuminated – constructions under the lamppost – but that's still vastly better than if you make up your facts and make far-reaching claims about the "world in the dark" that we have no real evidence of! You surely expect comparisons to politics as well.
I can't avoid the feeling that the Team Stanford claim that de Sitter vacua simply have to exist is just another example of some egalitarianism or non-discrimination. Like men and women, anti de Sitter and de Sitter vacua must be treated as equal. But sorry to say, like men and women, de Sitter and anti de Sitter vacua are simply not equal. The constructions of these two classes within string theory look very different and unlike the anti de Sitter vacua, it's plausible and at least marginally compatible with the evidence that the de Sitter vacua don't exist at all. A Palo Alto leftist could prefer a non-discrimination policy but the known facts, evidence, and constructions surely do discriminate between de Sitter and anti de Sitter spaces – and Team Vafa, like any honest scientist who actually cares about the evidence, assigns some importance to this highly asymmetric observation! Lubos Motl - string vacua and pheno Search for ETs is more speculative than modern theoretical physics Edwin has pointed out a new tirade against theoretical physics, Theoretical Physics Is Pointless without Experimental Tests, that Abraham Loeb published on the pages of Scientific American, which used to be an OK journal some 20 years ago. The title itself seems plagiarized from Deutsche Physik or Aryan Physics – which may be considered ironic for Loeb, who was born in Israel. And in fact, like his German role models, Loeb indeed tries to mock Einstein as well – and blame his mistakes on the usage of thought experiments: Einstein made great discoveries based on pure thought, but he also made mistakes. Only experiment and observation could determine which was which. Albert Einstein is admired for pioneering the use of thought experiments as a tool for unraveling the truth about the physical reality. But we should keep in mind that he was wrong about the fundamental nature of quantum mechanics as well as the existence of gravitational waves and black holes... Loeb has a small, unimportant plus for acknowledging that Einstein was wrong on quantum mechanics. However, as an argument against theoretical physics based on thought experiments and on the emphasis on the patient and careful mental work in general, the sentences above are at most demagogic. The fact that Einstein was wrong about quantum mechanics, gravitational waves, or black holes doesn't imply anything wrong about the usage of thought experiments and other parts of modern physics. There's just no way to credibly show such an implication. Other theorists have used better thought experiments, have thought about them more carefully, and some of them have correctly figured out that quantum mechanics had to be right and gravitational waves and black holes had to exist. The true fathers of quantum mechanics, especially Werner Heisenberg, were really using Einstein's new approach based on thought experiments, principles, and just like Einstein, they carefully tried to remove the assumptions about physics that couldn't have been operationally established (such as the absolute simultaneity killed by special relativity; and the objective existence of values of observables before an observation, killed by quantum mechanics). Note that gravitational waves as well as black holes were detected many decades after their theoretical discovery. The theoretical discoveries almost directly followed from Einstein's equations. So Einstein's mistakes meant that he didn't trust (his) theory enough.
It surely doesn't mean and cannot mean that Einstein trusted theories and theoretical methods too much. Because Loeb has drawn this wrong conclusion, it's quite strong evidence of a defect in Loeb's central processing unit. The title may be interpreted in a way that makes sense. Experiments surely matter in science. But everything else that Loeb is saying is just wrong and illogical. In particular, Loeb wrote this bizarre paragraph about Galileo and timing: Similar to the way physicians are obliged to take the Hippocratic Oath, physicists should take a “Galilean Oath,” in which they agree to gauge the value of theoretical conjectures in physics based on how well they are tested by experiments within their lifetime. Well, I don't know how I could judge theories according to experiments that will be done after I die, after my lifetime. That's clearly impossible so this restriction is vacuous. On the other hand, is it OK to judge theories according to experiments that were done before our lifetimes or before physicists' careers? You bet. Experimental or empirical facts that have been known for a long time are still experimental or empirical facts. In most cases, they may be repeated today, too. People often don't bother to repeat experiments that re-establish well-established truths. But these old empirical facts are still crucial for the work of every theorist. They are sufficient to determine lots of theoretical principles. You know, it's correct to say that science is a dialogue between the scientist and Nature. But this is only true in the long run. It doesn't mean that every day or every year, both of them have to speak. If Nature doesn't want to speak, She has the right to stay silent. And She often stays silent even if you complain that She doesn't have the right. She ignores your restrictions on Her rights! So at the LHC after the Higgs boson discovery, Nature chose to remain silent so far – or She kept on saying "the Standard Model will look fine to you, human germ". You can't change this fact by some wishful thinking about "dialogues". Theorists just didn't get new post-Higgs data from the LHC because so far, there are no new data at the LHC. They need to keep on working which makes it obvious that they have to use older facts and new theoretical relationships between them, new hypotheses etc. In the absence of new experimental data, it is obvious that theorists' work has to be overwhelmingly theoretical or, in Loeb's jargon, it has to be a monologue! When Nature has something new and interesting to say (through experiments), Nature will say it. But theorists can't be silent or "doing nothing" just because Nature is silent these years! Only a complete idiot may fail to realize these points – i.e. agree with Loeb. What Loeb actually wants to say is that a theorist should be obliged to plan the experiments that will settle all his theoretical ideas within his lifetime. But that's not possible. The whole point of scientific research in physics is to study questions about the laws of Nature that haven't been answered yet. And because they haven't been answered yet, people don't know and can't know what the answer will be – or even when it will be found. An experimenter (or a boss or a manager of an experimental team) may try to plan what the experiment will do, when it will do these things, and what answers it could provide us with. Even this planning sometimes goes wrong, there are delays etc. But this is not the main problem here.
The real problem is that the result of a particular experiment almost never directly answers the real question that people want answered. An experiment is often just a step towards adjusting our opinions about a question – and whether this step is a big or small one depends on what the experimental outcome actually is, and this is not known in advance. Loeb has mentioned examples of such questions himself. People actually wanted to know whether there were black holes and gravitational waves. But a fixed experiment with a fixed budget, predetermined sensitivity etc. simply cannot be guaranteed to produce the answer. That's the crucial point that kills Loeb's Aryan Physics as a proposed (not so) new method to do science. For example, both gravitational waves and black holes are rather hard to see. Similarly, the numerical value of the cosmological constant (or vacuum energy density) is very small. It's this smallness that has implied that one needed a long – and impossible to plan – period of time to discover these things experimentally. Because black holes, gravitational waves, and a positive cosmological constant needed fine gadgets – and it was not known in advance how fine they had to be – does it mean that the theorists should be banned from studying these questions and concepts? The correct answer is obviously No – while Loeb's answer is Yes. Almost all of theoretical physics is composed of such questions. We just can't know in advance how much time will be needed to settle the questions we care about (and, as Edwin emphasized, there is nothing special about the timescale given by "our lifespan"). We can't know what the answers will be. We can't know whether the evidence that settles these questions will be theoretical in character, dependent on somewhat new experimental tools, or dependent on completely new experimental tools, discoveries, and inventions. None of these things about the future flow of evidence can be known now (otherwise we could settle all these things now!) which is why it's impossible for these unknown answers to influence what theorists study now! The influences that Loeb demands would violate causality. If the theorists knew in advance when the answer would be obtained, they would really have to know what the answer is – as I mentioned above, the confirmation of a null hypothesis always means that the answer to the interesting qualitative question was postponed. But then the whole research would be pointless. So if science followed Loeb's Aryan Physics principles, it would be pointless! The real science follows the scientific method. Scientists must make decisions and draw conclusions, often conclusions blurred by some uncertainty, right now, based on the facts that are already known right now – not according to some 4-year plans, 5-year plans, or 50-year plans. And if their research depends on some assumptions, they have to articulate them and go through the possibilities (ideally all of them). It's also utterly demagogic for him to talk about the "Galilean Oath" because Galileo Galilei disagreed with ideas that were very similar to Loeb's. In particular, Galileo has never avoided the formulation of hypotheses that could have needed a long time to be settled. One example where he was wrong was Galileo's belief that comets were atmospheric phenomena. That belief looks rather silly to me (didn't they already observe the periodicity of some comets, by the way?) but the knowledge was very different then. Science needed a long time to really settle the question.
But more generally, Galileo did invent lots of conjectures and hypotheses because those were the real new concepts that became widespread once he started the new method, the scientific method. Google search for "Galileo conjectured" or "Galileo hypothesized". Of course you get lots of hits. As e.g. Feynman said in his simple description, the scientific method to search for new laws works as follows: First, we guess the laws. Then we compute consequences. And then we compare the consequences to the empirical data. Note the order of the steps: the guess must be at the very beginning, scientists must be free to present all such possible hypotheses and guesses, and the computation of the consequences must still be close to the beginning. Loeb proposes something entirely different. He wants some planning of future experiments to be placed at the beginning, and this planning should restrict what the physicists are allowed to think about in the first place. Sorry, that wouldn't be science and it couldn't have produced interesting results, at least not systematically. And these restrictions are indeed completely analogous to the bogus restrictions that the church officials – and later various philosophers etc. – tried to place on the scientific research. Like Loeb, the church hierarchy also wanted the evidence to be direct in all cases. But one of the ingenious insights by Galileo was that he realized that the evidence may often be indirect or very indirect but one may still learn a great deal of insights out of it. The simplest example of this "direct vs indirect" controversy is the telescope. Galileo has improved the telescope technology and made numerous new observations – such as those of the Jovian moons. The church hierarchy actually disputed that those satellites existed because the observation by telescopes wasn't direct enough for them. It took many years before people realized how incredibly idiotic such an argument was. It would be a straight denial of the evidence. The telescopes really see the same thing as the eyes when both see something. Sometimes, telescopes see more details than the eyes – so they must be considered nothing other than improved eyes. The observations from eyes and telescopes are equally trustworthy. But telescopes have a better resolution. The laymen trust telescopes today even though the telescope observations are "indirect" ways to see something. But the tools to observe and deduce things in physics have become vastly more indirect than they were in Galileo's lifetime. And most laymen – including folks like Loeb – simply get lost in the long chains of reasoning. That's one reason why many people distrust science. Because they haven't verified them individually (and most laymen wouldn't be smart or patient enough to do so), they believe that the long chains of reasoning and evidence just cannot work. But they do work and they are getting longer. The importance of reasoning and theory-based generalizations was increasing much more quickly during Newton's lifetime – and it kept on increasing at an accelerating rate. Newton united the celestial and terrestrial gravity, among other things. The falling apple and the orbiting Moon move because of the very same force that he described by a single formula. Did he have a "direct proof" that the apple is doing the same thing in the Earth's gravitational field as the Moon? Well, you can't really have a direct proof of such a statement – which could be described as a metaphor by some.
His theory was natural enough and compatible with the available tests. Some of these tests were quantitative yet not guaranteed at the beginning. So of course they increased the probability that the unification of celestial and terrestrial gravity was right. But whether such confirmations would arise, how strong and numerous they would be, and when they would materialize just isn't known at the beginning. The risk for physics stems primarily from mathematically beautiful “truths,” such as string theory, accepted prematurely for decades as a description of reality just because of their elegance. OK, this criticism of "elegance" is mostly a misinterpretation of pop science. Scientists sometimes describe their feelings – how good their brains feel when things fit together. Sometimes they only talk about these emotional things in order to find some common ground with a journalist or another layman. But in the end, this type of beauty or elegance is very different from the beauty or elegance experienced by the laymen or artists. The theoretical physicists' version of beauty or elegance reflects some rather technical properties of the theories and the statement that these traits increase the probability that the theory is right may be pretty much proven. But even if you disagree with these proofs, it doesn't matter because the scientific papers simply don't use the beauty or elegance arguments prominently. When you read a new paper about some string dualities, string vacua, or anything of the sort, you don't really read "this would be beautiful, and therefore the value of some quantity is XY". Only when there are some calculations of XY do the authors claim that there is some evidence. Otherwise they call their propositions conjectures or hypotheses. And sometimes they use these words that remind us of the uncertainty even when there is a rather substantial amount of evidence available, too. But the uncertainty is unavoidable in science. A person who feels sick whenever there is some uncertainty just cannot be a scientist. Despite the uncertainty, a scientist has to determine what seems more likely and less likely right now. When some things look very likely, they may be accepted as facts on a preliminary basis. Some other people's belief in these propositions may be weaker – and they may claim that the proposition was accepted prematurely. But in the end, some preliminary conclusions are being made about many things. Science just couldn't possibly work without them. By the way, I forgot to discuss the subtitle of Loeb's article: Our discipline is a dialogue with nature, not a monologue, as some theorists would prefer to believe Note that he emphasizes that theoretical physics is "his discipline". It sounds similar to Smolin's fraudulent claims that he was a "string theorist". Smolin isn't a string theorist and doesn't have the intellectual abilities to ever become a string theorist. Whether Loeb is a theoretical physicist is at least debatable. He's the boss of Harvard's astronomy department. The word "astrophysicist" would surely be defensible. But the phrase "theoretical physicist" isn't quite the same thing. I hope that you remember Sheldon Cooper's explanation of the difference between a rocket scientist and a theoretical physicist. Why doesn't Missy just tell them that Sheldon is a toll taker at the Golden Gate Bridge?
;-) Given Loeb's fundamental problems with the totally basic methodology of theoretical physics – including thought experiments and long periods of careful and patient thinking uninterrupted by experimental distractions – I think it is much more reasonable to say that Loeb clearly isn't a theoretical physicist so his subtitle is a fraudulent effort to claim some authority that he doesn't possess. OK, Loeb tried to hijack Galileo's name for some delusions about (or against) modern physics that Galileo would almost certainly disagree with. Galileo wouldn't join these Aryan-Physics-style attacks on theoretical physics. At some level, we may consider him a founder of theoretical physics, too. SETI vs string theory But my title refers to a particular bizarre coincidence in Loeb's criticism of theorists' thinking that could be experimentally inaccessible for the rest of our (or some living person's?) lifetimes. He wants the experimental results right now, doesn't he? A funny thing is that Loeb is also a key official at the Breakthrough Starshot Project, Yuri Milner's $100 million kite to be sent to greet the oppressed extraterrestrial minorities who live near Alpha Centauri, the nearest star to us other than the Sun. String theory is too speculative for him but the discussions with the ETs are just fine, aren't they? Loeb seems aware of the ludicrous situation in which he has maneuvered himself: At the same time, many of the same scientists that consider the study of extra dimensions as mainstream regard the search for extraterrestrial intelligence (SETI) as speculative. This mindset fails to recognize that SETI merely involves searching elsewhere for something we already know exists on Earth, and by the knowledge that a quarter of all stars host a potentially habitable Earth-size planet around them. From his perspective, the efforts to chat with the extraterrestrial aliens are less speculative than modern theoretical physics. Wow. Why is it so? His argument is cute as well. SETI is just searching for something that is known to exist – intelligent life. However, the thing that just searches for something that is known to exist – intelligent life – would have the acronym SI only and it would be completely pointless because the answer is known. SETI also has ET in the middle, you know, which stands for "extraterrestrial". And Loeb must have overlooked these two letters altogether. It is not known at all whether there are other planets where intelligent life exists, and if they exist, what their density, age, longevity, appearance, and degree of similarity to the life on Earth might be. It's even more unknown or speculative how these hypothetical ETs, if they exist near Alpha Centauri, would react to Milner's kite. We couldn't even reliably predict how our civilization would react to a similar kite that would arrive on Earth. How could we make realistic plans about the reactions of a hypothetical extraterrestrial civilization? On the other hand, string theory is just a technical upgrade of quantum field theory – one that looks unique even 50 years after the birth of string theory. Quantum field theory and string theory yield basically the same predictions for the doable experiments, quantum field theory is demonstrably the relevant approximation of stringy physics, and this approximation has been successfully compared to the empirical data. Everything seems to work.
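To illustrate in what sense quantum field theory is the relevant low-energy approximation – a standard schematic form, with conventions and the omitted sectors varying among textbooks – the massless NS-NS fields of the superstring are governed, at the leading order of the $$\alpha'$$ and loop expansions, by the effective action$S = \frac{1}{2\kappa_{10}^2}\int d^{10}x\,\sqrt{-G}\,e^{-2\Phi}\left(R + 4\,\partial_\mu\Phi\,\partial^\mu\Phi - \frac{1}{12}H_{\mu\nu\rho}H^{\mu\nu\rho}\right),$which is an ordinary (super)gravity field theory – Einstein's $$R$$ term included – with calculable stringy corrections on top.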
The extra dimensions are just scalar fields analogous to those that are known to exist that are added on the stringy world sheet (and in this sense, the addition of the extra dimension is as mundane as the addition of an extra flavor of leptons or quarks). We have theoretical reasons to think that the total number of spacetime dimensions should be 10 or 11. Unlike the expectations about the ETs, this is not mere prejudice. There are actually calculations of the critical dimension. Joe Polchinski's "String Theory" textbook contains 7 different calculations of $$D=26$$ for the bosonic string in the first volume; the realistic superstring analogously has $$D=10$$. This is not like saying "there should be cow-like aliens near Alpha Centauri because the stars look alike and I like this assertion". How can someone say that this research of extensions of successful quantum field theories is as speculative as Skyping with extraterrestrial aliens, let alone more speculative than those big plans with the ETs? At some moments, you can see that some people have simply lost it. And Loeb has lost it. It makes no sense to talk to him about these matters. He seems to hate theoretical physics so fanatically that he's willing to team up not only with the Šmoit-like crackpots but also with extraterrestrial aliens in his efforts to fight against modern theoretical physics. Too bad, Mr Loeb, but even if extraterrestrial intelligent civilizations exist, it won't help your case because these civilizations – because of the adjective "intelligent" – know that string theory is right and you are full of šit. And that's the memo. P.S.: I forgot to discuss the "intellectual power" paragraph: Given our academic reward system of grades, promotions and prizes, we sometimes forget that physics is a learning experience about nature rather than an arena for demonstrating our intellectual power. As students of experience, we should be allowed to make mistakes and correct our prejudices. Now, this is a bizarre combination of statements. Loeb says "physics is about" learning, not demonstrating our intellectual power. "Physics is about" is a vague sequence of words, however. We should distinguish two questions: What drives people to do physics? And what decides their success? What primarily drives the essential people to do physics is curiosity. Physicists want to know how Nature works. String theorists want many more detailed questions about Nature to be answered. Their curiosity is real and they don't give a damn whether an ideologue wants to prevent them from studying some questions: the curiosity is real, they know that they want to know, and some obnoxious Loeb-style babbling can't change anything about it. Some people are secondary researchers. They do it because it's a good source of income or prestige or whatever. They study it because others have made it possible, they created the jobs, chairs, and so on. But the primary motivation is curiosity. But then we have the question of whether one succeeds. The intellectual power isn't everything but it's obviously important. Loeb clearly wants to deny this importance – but he doesn't want to do it directly because the statement would sound idiotic, indeed. But why does he feel so uncomfortable about the need for intellectual power in theoretical physics? He presents the intellectual power as the opposite of the validity of physical theories. This contrast is the whole point of the paragraph above. But this contrast is complete nonsense.
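(An aside on the critical-dimension calculations mentioned a few paragraphs above: to give a flavor of what such a calculation looks like, here is the standard central-charge counting, sketched as an illustration rather than reproducing any one of Polchinski's seven derivations. Consistency requires the conformal anomaly on the string world sheet to cancel between the coordinate fields and the ghosts. Each bosonic coordinate contributes central charge $1$ and the $bc$ ghosts contribute $-26$; for the superstring each coordinate multiplet contributes $3/2$ and the $bc$ plus $\beta\gamma$ ghosts contribute $-26+11=-15$:

$$D \cdot 1 - 26 = 0 \;\Rightarrow\; D = 26, \qquad D \cdot \tfrac{3}{2} - 15 = 0 \;\Rightarrow\; D = 10.$$

End of aside.)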
There is no negative correlation between "intellectual power" and "validity of the theories that are found". On the contrary, the correlation is pretty much obviously positive. In the end, his attack against the intellectual power is fully analogous to the statement that ice-hockey isn't about the demonstration of one's physical strength and skills, it's about scoring goals. When some parts are emphasized, the sentence is correct. But not too correct. The demonstration of the physical skills and strength is also "what ice-hockey is about". It's what drives some people. And the skills and strength are needed to do it well, too. The rhetorical exercise "either strength, or goals" – which is so completely analogous to Loeb's "either intellectual power, or proper learning of things about Nature" – is just a road to hell. The only possible implication of such a proposition would be to say that "people without the intellectual power should be made theoretical physicists". Does he really believe this makes any sense? Or why does he mix the validity of theories with the intellectual power in this negative way? Well, let me tell you why. Because he is jealous of some people's intellectual powers, which are superior to his. And he is making the bet – probably correctly – that the readers of Scientific American's pages are dumb enough not to notice that his rant is completely illogical, from the beginning to the end. August 13, 2018 Andrew Jaffe - Leaves on the Line Planck: Demographics and Diversity Another aspect of Planck’s legacy bears examining. A couple of months ago, the 2018 Gruber Prize in Cosmology was awarded to the Planck Satellite. This was (I think) a well-deserved honour for all of us who have worked on Planck during the more than 20 years since its conception, for a mission which confirmed a standard model of cosmology and measured the parameters which describe it to accuracies of a few percent. Planck is the latest in a series of telescopes and satellites dating back to the COBE Satellite in the early 90s, through the MAXIMA and Boomerang balloons (among many others) around the turn of the 21st century, and the WMAP Satellite. (The Gruber Foundation seems to like CMB satellites: COBE won the Prize in 2006 and WMAP in 2012.) Well, it wasn’t really awarded to the Planck Satellite itself, of course: 50% of the half-million-dollar award went to the Principal Investigators of the two Planck instruments, Jean-Loup Puget and Reno Mandolesi, and the other half to the “Planck Team”. The Gruber site officially mentions 334 members of the Collaboration as recipients of the Prize. Unfortunately, the Gruber Foundation apparently has some convoluted rules about how it makes such group awards, and the PIs were not allowed to split the monetary portion of the prize among the full 300-plus team. Instead, they decided to share the second half of the funds amongst “43 identified members made up of the Planck Science Team, key members of the Planck editorial board, and Co-Investigators of the two instruments.” Those words were originally on the Gruber site but in fact have since been removed — there is no public recognition of this aspect of the award, which is completely appropriate as it is the whole team who deserves the award. (Full disclosure: as a member of the Planck Editorial Board and a Co-Investigator, I am one of that smaller group of 43, chosen not entirely transparently by the PIs.)
I also understand that the PIs will use a portion of their award to create a fund for all members of the collaboration to draw on for Planck-related travel over the coming years, now that there is little or no governmental funding remaining for Planck work, and those of us who receive a financial portion of the award will be encouraged to do the same (after, unfortunately, having to work out the tax implications of both receiving the prize and donating it back). This seems like a reasonable way to handle a problem with no real fair solution, although, as usual in large collaborations like Planck, the communications about this left many Planck collaborators in the dark. (Planck also won the Royal Society 2018 Group Achievement Award which, because there is no money involved, could be uncontroversially awarded to the ESA Planck Team, without an explicit list. And the situation is much better than for the Nobel Prize.) However, this seemingly reasonable solution reveals an even bigger, longer-standing, and wider-ranging problem: only about 50 of the 334 names on the full Planck team list (roughly 15%) are women. This is already appallingly low. Worse still, none of the 43 formerly “identified” members officially receiving a monetary prize are women (although we would have expected about 6 given even that terrible fraction). Put more explicitly, there is not a single woman in the upper reaches of Planck scientific management. This terrible situation was also noted by my colleague Jean-Luc Starck (one of the larger group of 334) and Olivier Berné. As a slight corrective to this, it was refreshing to see Nature’s take on the end of Planck dominated by interviews with young members of the collaboration including several women who will, we hope, be dominating the field over the coming years and decades. Axel Maas - Looking Inside the Standard Model Fostering an idea with experience In the previous entry I wrote about how hard it is to establish a new idea if the only existing option for getting experimental confirmation is to become very, very precise. Fortunately, this is not the only option we have. Besides experimental confirmation, we can also attempt to test an idea theoretically. How is this done? The best possibility is to set up a situation in which the new idea creates a most spectacular outcome. In addition, it should be a situation in which older ideas yield a drastically different outcome. This actually sounds easier than it is. There are three issues to be taken care of. The first two have something to do with a very important distinction: that between a theory and an observation. An observation is something we measure in an experiment or calculate if we play around with models. An observation is always the outcome when we set up something initially and then look at it some time later. The theory should give a description of how the initial and the final stuff are related. This means that for every observation we look for a corresponding theory to explain it. Added to this is the modern idea in physics that there should not be a separate theory for every observation. Rather, we would like to have a unified theory, i.e. one theory which explains all observations. This is not yet the case. But at least we have reduced it to a handful of theories. In fact, for anything going on inside our solar system we need so far just two: the standard model of particle physics and general relativity. Coming back to our idea, we now have the following problem.
Since we do a gedankenexperiment, we are allowed to choose any theory we like. But since we are just a bunch of people with a bunch of computers, we are not able to calculate all the possible observations a theory can describe. Not to mention all possible observations of all theories. And it is here that the problem starts. The older ideas still exist because they are not bad, but rather explain a huge amount of stuff. Hence, for many observations in any theory they will still be more than good enough. Thus, to find spectacular disagreement, we do not only need to find a suitable theory. We also need to find a suitable observation to show disagreement. And now enters the third problem: We actually have to do the calculation to check whether our suspicion is correct. This is usually not a simple exercise. In fact, the effort needed can make such a calculation a complete master's thesis. And sometimes even much more. Only after the calculation is complete do we know whether the observation and theory we have chosen were a good choice. Because only then do we know whether the anticipated disagreement is really there. And it may be that our choice was not good, and we have to restart the process. Sounds pretty hopeless? Well, this is actually one of the reasons why physicists are famed for their tolerance for frustration. Because such experiences are indeed inevitable. But fortunately it is not as bad as it sounds. And that has something to do with how we choose the observation (and the theory). This I did not specify yet. And just guessing would indeed lead to a lot of frustration. The thing which helps us hit the right theory and observation more often than not is insight and, especially, experience. The ideas we have tell us about how theories function. I.e., our insights give us the ability to estimate what will come out of a calculation even without actually doing it. Of course, this will be a qualitative statement, i.e. one without exact numbers. And it will not always be right. But if our ideas are correct, it will usually work out. In fact, if we regularly estimated incorrectly, this should require us to reevaluate our ideas. And it is our experience which helps us to get from insights to estimates. This defines our process to test our ideas. And this process can actually be well traced out in our research. E.g. in a paper from last year we collected many such qualitative estimates. They were based on some much older, much cruder estimates published several years back. In fact, the newer paper already included some quite involved semi-quantitative statements. We then used massive computer simulations to test our predictions. They were indeed as well confirmed as possible with the amount of computers we had. This we reported in another paper. This gives us hope to be on the right track. So, the next step is to enlarge our testbed. For this, we have already come up with some first new ideas. However, these will be even more challenging to test. But it is possible. And so we continue the cycle. August 08, 2018 Clifford V. Johnson - Asymptotia Science Friday Book Club Q&A Between 3 and 4 pm Eastern time today (very shortly, as I type!) I’ll be answering questions about Hawking’s “A Brief History of Time” as part of a Live twitter event for Science Friday’s Book Club. See below. Come join in! Hey SciFri Book Clubbers! Do you have had any … Click to continue reading this post The post Science Friday Book Club Q&A appeared first on Asymptotia. August 01, 2018 Clifford V.
Johnson - Asymptotia DC Moments… I'm in Washington DC for a very short time. 16 hours or so. I'd have come for longer, but I've got some parenting to get back to. It feels a bit rude to come to the American Association of Physics Teachers annual meeting for such a short time, especially because the whole mission of teaching physics in all the myriad ways is very dear to my heart, and here is a massive group of people devoted to gathering about it. It also feels a bit rude because I'm here to pick up an award. (Here's the announcement that I forgot to post some months back.) I meant what I said in the press release: It certainly is an honour to be recognised with the Klopsteg Memorial Lecture Award (for my work in science outreach/engagement), and it'll be a delight to speak to the assembled audience tomorrow and accept the award. Speaking in an unvarnished way for a moment, I and many others who do a lot of work to engage the public with science have, over the years, had to deal with not being taken seriously by many of our colleagues. Indeed, suffering being dismissed as not being "serious enough" about our other [...] Click to continue reading this post The post DC Moments… appeared first on Asymptotia. July 26, 2018 Sean Carroll - Preposterous Universe Mindscape Podcast For anyone who hasn’t been following along on other social media, the big news is that I’ve started a podcast, called Mindscape. It’s still young, but early returns are promising! I won’t be posting each new episode here; the podcast has a “blog” of its own, and episodes and associated show notes will be published there. You can subscribe by RSS as usual, or there is also an email list you can sign up for. For podcast aficionados, Mindscape should be available wherever finer podcasts are served, including iTunes, Google Play, Stitcher, Spotify, and so on. As explained at the welcome post, the format will be fairly conventional: me talking to smart people about interesting ideas. It won’t be all, or even primarily, about physics; much of my personal motivation is to get the opportunity to talk about all sorts of other interesting things. I’m expecting there will be occasional solo episodes that just have me rambling on about one thing or another. We’ve already had a bunch of cool guests (the list was embedded in the original post), and there are more exciting episodes on the way. Enjoy, and spread the word! July 20, 2018 Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe) Summer days, academics and technological universities The heatwave in the northern hemisphere may (or may not) be an ominous portent of things to come, but it’s certainly making for an enjoyable summer here in Ireland. I usually find it quite difficult to do any meaningful research when the sun is out, but things are a bit different when the good weather is regular. Most days, I have breakfast in the village, a swim in the sea before work, a swim after work and a game of tennis to round off the evening. Tough life, eh. (Image: Counsellor’s Strand in Dunmore East.) So far, I’ve got one conference proceeding written, one historical paper revamped and two articles refereed (I really enjoy the latter process, it’s so easy for academics to become isolated). Next week I hope to get back to that book I never seem to finish. However, it would be misleading to portray a cosy image of a college full of academics beavering away over the summer.
This simply isn’t the case around here – while a few researchers can be found in college this summer, the majority of lecturing staff decamped on June 20th and will not return until September 1st. And why wouldn’t they? Isn’t that their right under the Institute of Technology contracts, especially given the heavy teaching loads during the semester? Sure – but I think it’s important to acknowledge that this is a very different set-up to the modern university sector, and doesn’t quite square with the move towards technological universities. This week, the Irish newspapers are full of articles depicting the opening of Ireland’s first technological university, and apparently, the Prime Minister is anxious our own college should get a move on. Hmm. No mention of the prospect of a change in teaching duties, or increased facilities/time for research, as far as I can tell (I’d give a lot for an office that was fit for purpose). So will the new designation just amount to a name change? And this is not to mention the scary business of the merging of different institutes of technology. Those who raise questions about this now tend to get dismissed as resistors of progress. Yet the history of merging large organisations in Ireland hardly inspires confidence, not least because of a tendency for increased layers of bureaucracy to appear out of nowhere – HSE anyone? July 19, 2018 Andrew Jaffe - Leaves on the Line (Almost) The end of Planck This week, we released (most of) the final set of papers from the Planck collaboration — the long-awaited Planck 2018 results (which were originally meant to be the “Planck 2016 results”, but everything takes longer than you hope…), available on the ESA website as well as the arXiv. More importantly for many astrophysicists and cosmologists, the final public release of Planck data is also available. Anyway, we aren’t quite finished: those of you up on your roman numerals will notice that there are only 9 papers but the last one is “XII” — the rest of the papers will come out over the coming months. So it’s not the end, but at least it’s the beginning of the end. And it’s been a long time coming. I attended my first Planck-related meeting in 2000 or so (and plenty of people had been working on the projects that would become Planck for a half-decade by that point). For the last year or more, the number of people working on Planck has dwindled as grant money has dried up (most of the scientists now analysing the data are doing so without direct funding for the work). (I won’t rehash the scientific and technical background to the Planck Satellite and the cosmic microwave background (CMB), which I’ve been writing about for most of the lifetime of this blog.) Planck 2018: the science So, in the language of the title of the first paper in the series, what is the legacy of Planck? The state of our science is strong. For the first time, we present full results from both the temperature of the CMB and its polarization. Unfortunately, we don’t actually use all the data available to us — on the largest angular scales, Planck’s results remain contaminated by astrophysical foregrounds and unknown “systematic” errors. This is especially true of our measurements of the polarization of the CMB, which is probably Planck’s most significant limitation.
The remaining data are an excellent match for what is becoming the standard model of cosmology: ΛCDM, or “Lambda-Cold Dark Matter”, which is dominated, first, by a component which makes the Universe accelerate in its expansion (Λ, Greek Lambda), usually thought to be Einstein’s cosmological constant; and secondarily by an invisible component that seems to interact only by gravity (CDM, or “cold dark matter”). We have tested for more exotic versions of both of these components, but the simplest model seems to fit the data without needing any such extensions. We also observe the atoms and light which comprise the more prosaic kinds of matter we observe in our day-to-day lives, which make up only a few percent of the Universe. Altogether, the sum of the densities of these components is just enough to make the curvature of the Universe exactly flat through Einstein’s General Relativity and its famous relationship between the amount of stuff (mass) and the geometry of space-time. Furthermore, we can measure the way the matter in the Universe is distributed as a function of the length scale of the structures involved. All of these are consistent with the predictions of the famous or infamous theory of cosmic inflation, which expanded the Universe when it was much less than one second old by factors of more than 10²⁰. This made the Universe appear flat (think of zooming into a curved surface) and expanded the tiny random fluctuations of quantum mechanics so quickly and so much that they eventually became the galaxies and clusters of galaxies we observe today. (Unfortunately, we still haven’t observed the long-awaited primordial B-mode polarization that would be a somewhat direct signature of inflation, although the combination of data from Planck and BICEP2/Keck gives the strongest constraint to date.) Most of these results are encoded in a function called the CMB power spectrum, something I’ve shown here on the blog a few times before, but I never tire of the beautiful agreement between theory and experiment, so I’ll do it again: (The figure is from the Planck “legacy” paper; more details are in others in the 2018 series, especially the Planck “cosmological parameters” paper.) The top panel gives the power spectrum for the Planck temperature data, the second panel the cross-correlation between temperature and the so-called E-mode polarization, the left bottom panel the polarization-only spectrum, and the right bottom the spectrum from the gravitational lensing of CMB photons due to matter along the line of sight. (There are also spectra for the B mode of polarization, but Planck cannot distinguish these from zero.) The points are “one sigma” error bars, and the blue curve gives the best fit model. As an important aside, these spectra per se are not used to determine the cosmological parameters; rather, we use a Bayesian procedure to calculate the likelihood of the parameters directly from the data.
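(As a parenthetical illustration of what goes into "the blue curve": here is a minimal sketch using the publicly available camb Python package with Planck-like, but not the official best-fit, parameter values. This is my illustration of the kind of theory calculation involved, not code from the Planck pipeline.)

```python
# Sketch: theoretical CMB power spectra with the public `camb` package.
# Parameter values are Planck-like placeholders, not the official best fit.
import camb

pars = camb.CAMBparams()
pars.set_cosmology(H0=67.3, ombh2=0.0224, omch2=0.120, tau=0.054)
pars.InitPower.set_params(As=2.1e-9, ns=0.965)
pars.set_for_lmax(2500, lens_potential_accuracy=1)

results = camb.get_results(pars)
powers = results.get_cmb_power_spectra(pars, CMB_unit='muK')
dl = powers['total']   # columns TT, EE, BB, TE as D_ell = ell(ell+1)C_ell/2pi, in muK^2
print(dl[100])         # the four spectra at ell = 100
```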
On small scales (corresponding to 𝓁>30 since 𝓁 is related to the inverse of an angular distance), estimates of spectra from individual detectors are used as an approximation to the proper Bayesian formula; on large scales (𝓁<30) we use a more complicated likelihood function, calculated somewhat differently for data from Planck’s High- and Low-frequency instruments, which captures more of the details of the full Bayesian procedure (although, as noted above, we don’t use all possible combinations of polarization and temperature data to avoid contamination by foregrounds and unaccounted-for sources of noise). Of course, not all cosmological data, from Planck and elsewhere, seem to agree completely with the theory. Perhaps most famously, local measurements of how fast the Universe is expanding today — the Hubble constant — give a value of H0 = (73.52 ± 1.62) km/s/Mpc (the units give how much faster something is moving away from us, in km/s, for each megaparsec (Mpc) of distance), whereas Planck (which infers the value within a constrained model) gives (67.27 ± 0.60) km/s/Mpc. This is a pretty significant discrepancy — roughly 3.6σ if the two uncertainties are naively treated as independent and Gaussian — and, unfortunately, it seems difficult to find an interesting cosmological effect that could be responsible for these differences. Rather, we are forced to expect that it is due to one or more of the experiments having some unaccounted-for source of error. The term of art for these discrepancies is “tension” and indeed there are a few other “tensions” between Planck and other datasets, as well as within the Planck data itself: weak gravitational lensing measurements of the distortion of light rays due to the clustering of matter in the relatively nearby Universe show evidence for slightly weaker clustering than that inferred from Planck data. There are tensions even within Planck, when we measure the same quantities by different means (including things related to similar gravitational lensing effects). But, just as “half of all three-sigma results are wrong”, we expect that we’ve mis- or under-estimated (or to quote the no-longer-in-the-running-for-the-worst president ever, “misunderestimated”) our errors much or all of the time and should really learn to expect this sort of thing. Some may turn out to be real, but many will be statistical flukes or systematic experimental errors. (If you were looking for a briefer but more technical fly-through of the Planck results — from someone not on the Planck team — check out Renee Hlozek’s tweetstorm.) Planck 2018: lessons learned So, Planck has more or less lived up to its advance billing as providing definitive measurements of the cosmological parameters, while still leaving enough “tensions” and other open questions to keep us cosmologists working for decades to come (we are already planning the next generation of ground-based telescopes and satellites for measuring the CMB). But did we do things in the best possible way? Almost certainly not. My colleague (and former grad student!) Joe Zuntz has pointed out that we don’t use any explicit “blinding” in our statistical analysis. The point is to avoid our own biases when doing an analysis: you don’t want to stop looking for sources of error when you agree with the model you thought would be true. This works really well when you can enumerate all of your sources of error and then simulate them.
In practice, most collaborations (such as the Polarbear team with whom I also work) choose to un-blind some results exactly to be able to find such sources of error, and indeed this is the motivation behind the scores of “null tests” that we run on different combinations of Planck data. We discuss this a little in an appendix of the “legacy” paper — null tests are important, but we have often found that a fully blind procedure isn’t powerful enough to find all sources of error, and in many cases (including some motivated by external scientists looking at Planck data) it was exactly low-level discrepancies within the processed results that have led us to new systematic effects. A more fully-blind procedure would be preferable, of course, but I hope this is a case of the great being the enemy of the good (or good enough). I suspect that those next-generation CMB experiments will incorporate blinding from the beginning. Further, although we have released a lot of software and data to the community, it would be very difficult to reproduce all of our results. Nowadays, experiments are moving toward a fully open-source model, where all the software is publicly available (in Planck, not all of our analysis software was available to other members of the collaboration, much less to the community at large). This does impose an extra burden on the scientists, but it is probably worth the effort, and again, needs to be built into the collaboration’s policies from the start. That’s the science and methodology. But Planck is also important as having been one of the first of what is now pretty standard in astrophysics: a collaboration of many hundreds of scientists (and many hundreds more of engineers, administrators, and others without whom Planck would not have been possible). In the end, we persisted, and persevered, and did some great science. But I learned that scientists need to learn to be better at communicating, both from the top of the organisation down, and from the “bottom” (I hesitate to use that word, since that is where much of the real work is done) up, especially when those lines of hoped-for communication are usually between different labs or Universities, very often between different countries. Physicists, I have learned, can be pretty bad at managing — and at being managed. This isn’t a great combination, and I say this as a middle-manager in the Planck organisation, very much guilty on both fronts. Andrew Jaffe - Leaves on the Line Loncon 3 Briefly (but not brief enough for a single tweet): I’ll be speaking at Loncon 3, the 72nd World Science Fiction Convention, this weekend (doesn’t that website have a 90s retro feel?). At 1:30 on Saturday afternoon, I’ll be part of a panel trying to answer the question “What Is Science?” As Justice Potter Stewart once said in a somewhat more NSFW context, the best answer is probably “I know it when I see it” but we’ll see if we can do a little better than that tomorrow. My fellow panelists seem to be writers, curators, philosophers and theologians (one of whom purports to believe that “the laws of thermodynamics prove the existence of God” — a claim about which I admit some skepticism…) so we’ll see what a proper physicist can add to the discussion. At 8pm in the evening, for participants without anything better to do on a Saturday night, I’ll be alone on stage discussing “The Random Universe”, giving an overview of how we can somehow learn about the Universe despite incomplete information and inherently random physical processes.
There is plenty of other good stuff throughout the convention, which runs from 14 to 18 August. Imperial Astrophysics will be part of “The Great Cosmic Show”, with scientists talking about some of the exciting astrophysical research going on here in London. And Imperial’s own Dave Clements is running the whole (not fictional) science programme for the convention. If you’re around, come and say hi to any or all of us. July 16, 2018 Tommaso Dorigo - Scientificblogging A Beautiful New Spectroscopy Measurement What is spectroscopy? (A) the observation of ghosts by infrared visors or other optical devices (B) the study of excited states of matter through observation of energy emissions If you answered (A), you are probably using a lousy internet search engine; and btw, you are rather dumb. Ghosts do not exist. Otherwise you are welcome to read on. We are, in fact, about to discuss a cutting-edge spectroscopy measurement, performed by the CMS experiment using lots of proton-proton collisions produced by the CERN Large Hadron Collider (LHC). July 12, 2018 Matt Strassler - Of Particular Significance “Seeing” Double: Neutrinos and Photons Observed from the Same Cosmic Source There has long been a question as to what types of events and processes are responsible for the highest-energy neutrinos coming from space and observed by scientists. Another question, probably related, is what creates the majority of high-energy cosmic rays — the particles, mostly protons, that are constantly raining down upon the Earth. As scientists’ ability to detect high-energy neutrinos (particles that are hugely abundant, electrically neutral, very light-weight, and very difficult to observe) and high-energy photons (particles of light, though not necessarily of visible light) has become more powerful and precise, there’s been considerable hope of getting an answer to these questions. One of the things we’ve been awaiting (and been disappointed a couple of times) is a violent explosion out in the universe that produces both high-energy photons and neutrinos at the same time, at a high enough rate that both types of particles can be observed at the same time coming from the same direction. In recent years, there has been some indirect evidence that blazars — narrow jets of particles, pointed in our general direction like the barrel of a gun, and created as material swirls near and almost into giant black holes in the centers of very distant galaxies — may be responsible for the high-energy neutrinos. Strong direct evidence in favor of this hypothesis has just been presented today. Last year, one of these blazars flared brightly, and the flare created both high-energy neutrinos and high-energy photons that were observed within the same period, coming from the same place in the sky. I have written about the IceCube neutrino observatory before; it’s a cubic kilometer of ice under the South Pole, instrumented with light detectors, and it’s ideal for observing neutrinos whose motion-energy far exceeds that of the protons in the Large Hadron Collider, where the Higgs particle was discovered. These neutrinos mostly pass through IceCube undetected, but one in 100,000 hits something, and debris from the collision produces visible light that IceCube’s detectors can record. IceCube has already made important discoveries, detecting a new class of high-energy neutrinos. On Sept 22 of last year, one of these very high-energy neutrinos was observed at IceCube.
More precisely, a muon created underground by the collision of this neutrino with an atomic nucleus was observed in IceCube. To create the observed muon, the neutrino must have had a motion-energy tens of thousands of times larger than the motion-energy of each proton at the Large Hadron Collider (LHC). And the direction of the neutrino’s motion is known too; it’s essentially the same as that of the observed muon. So IceCube’s scientists knew where, on the sky, this neutrino had come from. (This doesn’t work for typical cosmic rays; protons, for instance, travel in curved paths because they are deflected by cosmic magnetic fields, so even if you measure their travel direction at their arrival at Earth, you don’t then know where they came from. Neutrinos, being electrically neutral, aren’t affected by magnetic fields and travel in a straight line, just as photons do.) Very close to that direction is a well-known blazar (TXS-0506), four billion light years away (a good fraction of the distance across the visible universe). The IceCube scientists immediately reported their neutrino observation to scientists with high-energy photon detectors. (I’ve also written about some of the detectors used to study the very high-energy photons that we find in the sky: in particular, the Fermi/LAT satellite played a role in this latest discovery.) Fermi/LAT, which continuously monitors the sky, was already detecting high-energy photons coming from the same direction. Within a few days the Fermi scientists had confirmed that TXS-0506 was indeed flaring at the time — already starting in April 2017 in fact, six times as bright as normal. With this news from IceCube and Fermi/LAT, many other telescopes (including the MAGIC cosmic ray detector telescopes among others) then followed suit and studied the blazar, learning more about the properties of its flare. Now, just a single neutrino on its own isn’t entirely convincing; is it possible that this was all just a coincidence? So the IceCube folks went back to their older data to snoop around. There they discovered, in their 2014-2015 data, a dramatic flare in neutrinos — more than a dozen neutrinos, seen over 150 days, had come from the same direction in the sky where TXS-0506 is sitting. (More precisely, nearly 20 from this direction were seen, in a time period where normally there’d just be 6 or 7 by random chance.) This confirms that this blazar is indeed a source of neutrinos. And from the energies of the neutrinos in this flare, yet more can be learned about this blazar, and how it makes high-energy photons and neutrinos at the same time. Interestingly, so far at least, there’s no strong evidence for this 2014 flare in photons, except perhaps an increase in the number of the highest-energy photons… but not in the total brightness of the source. The full picture, still emerging, tends to support the idea that the blazar arises from a supermassive black hole, acting as a natural particle accelerator, making a narrow spray of particles, including protons, at extremely high energy. These protons, millions of times more energetic than those at the Large Hadron Collider, then collide with more ordinary particles that are just wandering around, such as visible-light photons from starlight or infrared photons from the ambient heat of the universe. The collisions produce particles called pions, made from quarks and anti-quarks and gluons (just as protons are), which in turn decay either to photons or to (among other things) neutrinos.
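(Schematically — this is the standard textbook chain, filling in the steps the paragraph describes rather than anything specific to this analysis — a fast proton strikes an ambient photon, typically exciting a Δ resonance, and the resulting pions decay:

$$p + \gamma \to \Delta^+ \to p + \pi^0 \ \ \text{or}\ \ n + \pi^+, \qquad \pi^0 \to \gamma\gamma, \qquad \pi^+ \to \mu^+ \nu_\mu \to e^+ \nu_e \bar\nu_\mu \nu_\mu,$$

so the neutral pions supply the high-energy photons while the charged pions supply the neutrinos.)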
And it’s those resulting photons and neutrinos which have now been jointly observed. Since cosmic rays, the mysterious high energy particles from outer space that are constantly raining down on our planet, are mostly protons, this is evidence that many, perhaps most, of the highest energy cosmic rays are created in the natural particle accelerators associated with blazars. Many scientists have suspected that the most extreme cosmic rays are associated with the most active black holes at the centers of galaxies, and now we have evidence and more details in favor of this idea. It now appears likely that this question will be answerable over time, as more blazar flares are observed and studied. The announcement of this important discovery was made at the National Science Foundation by Francis Halzen, the IceCube principal investigator, Olga Botner, former IceCube spokesperson, Regina Caputo, the Fermi-LAT analysis coordinator, and Razmik Mirzoyan, MAGIC spokesperson. The fact that both photons and neutrinos have been observed from the same source is an example of what people are now calling “multi-messenger astronomy”; a previous example was the observation in gravitational waves, and in photons of many different energies, of two merging neutron stars. Of course, something like this already happened in 1987, when a supernova was seen by eye, and also observed in neutrinos. But in this case, the neutrinos and photons have energies millions and billions of times larger! July 08, 2018 Marco Frasca - The Gauge Connection ICHEP 2018 The great high-energy physics conference ICHEP 2018 is over and, as usual, I spend some words about it. The big collaborations of CERN presented their latest results. I think the most relevant of these is the evidence ($3\sigma$) that the Standard Model is at odds with the measurement of spin correlations in top-antitop quark pairs. More is given in the ATLAS communiqué. As expected, increasing precision proves to be rewarding. About the Higgs particle, after the important announcement about the existence of the ttH process, both ATLAS and CMS are pursuing further their improvement of precision. About the signal strength they give the following results. For ATLAS (see here) $\mu=1.13\pm 0.05({\rm stat.})\pm 0.05({\rm exp.})^{+0.05}_{-0.04}({\rm sig. th.})\pm 0.03({\rm bkg. th})$ and CMS (see here) $\mu=1.17\pm 0.06({\rm stat.})^{+0.06}_{-0.05}({\rm sig. th.})\pm 0.06({\rm other syst.}).$ The news is that the errors have shrunk and the two results agree. They show a small excess, 13% and 17% above the Standard Model expectation respectively, but the overall result is consistent with the Standard Model. When the signal strength is unpacked into the contributions of the different processes, CMS claims some tensions in the WW decay that should be kept under scrutiny in the future (see here). They presented the results from $35.9{\rm fb}^{-1}$ of data and so there is no significant improvement, for the moment, with respect to the Moriond conference this year. The situation is rather better for the ZZ decay, where no tension appears and the agreement with the Standard Model is there in all its glory (see here). Things are quite different, but not too much, for ATLAS, as in this case they observe some tensions, but these are all below $2\sigma$ (see here). For the WW decay, ATLAS does not see anything above $1\sigma$ (see here).
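As a rough numerical reading of the two signal strengths just quoted, here is a naive inverse-variance combination; the asymmetric errors are symmetrized, the components added in quadrature, and correlations between the two experiments (notably the shared signal-theory uncertainty) are ignored, so treat it as an illustration rather than a real combination.

```python
import math

# Naive combination of the ATLAS and CMS Higgs signal strengths quoted above.
# Asymmetric errors symmetrized, correlations ignored: illustrative only.
def quad(*errs):
    return math.sqrt(sum(e * e for e in errs))

mu_a, s_a = 1.13, quad(0.05, 0.05, 0.045, 0.03)  # stat, exp, sig. th. (symm.), bkg. th.
mu_c, s_c = 1.17, quad(0.06, 0.055, 0.06)        # stat, sig. th. (symm.), other syst.

w_a, w_c = s_a**-2, s_c**-2
mu = (w_a * mu_a + w_c * mu_c) / (w_a + w_c)
sigma = (w_a + w_c) ** -0.5
print(f"naive combined mu = {mu:.2f} +- {sigma:.2f}")      # about 1.15 +- 0.07
print(f"pull from mu = 1:  {(mu - 1) / sigma:.1f} sigma")  # about 2.2 sigma
```

The naive answer, roughly $\mu \approx 1.15 \pm 0.07$, sits about two standard deviations above the Standard Model value $\mu = 1$ — consistent with the "small excess but overall agreement" reading above.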
So, although there is something to keep an eye on as the dataset grows — it will reach $100 {\rm fb}^{-1}$ this year — the Standard Model is in good health with respect to the Higgs sector, even if there is a lot to be answered yet and precision measurements are the main tool. The correlation in the tt pair is absolutely promising and we should hope this will be confirmed as a discovery. July 04, 2018 Tommaso Dorigo - Scientificblogging Chasing The Higgs Self Coupling: New CMS Results Happy Birthday Higgs boson! The discovery of the last fundamental particle of the Standard Model was announced exactly 6 years ago at CERN (well, plus one day, since I decided to postpone to July 5 the publication of this post...). In the Standard Model, the theory of fundamental interactions among elementary particles which enshrines our current understanding of the subnuclear world, particles that constitute matter are fermionic: they have a half-integer value of a quantity we call spin; and particles that mediate interactions between those fermions, keeping them together and governing their behaviour, are bosonic: they have an integer value of spin. June 25, 2018 Sean Carroll - Preposterous Universe On Civility (Image credit: Alex Wong/Getty Images.) White House Press Secretary Sarah Sanders went to have dinner at a local restaurant the other day. The owner, who is adamantly opposed to the policies of the Trump administration, politely asked her to leave, and she did. Now (who says human behavior is hard to predict?) an intense discussion has broken out concerning the role of civility in public discourse and our daily life. The Washington Post editorial board, in particular, called for public officials to be allowed to eat in peace, and people have responded in volume. I don’t have a tweet-length response to this, as I think the issue is more complex than people want to make it out to be. I am pretty far out to one extreme when it comes to the importance of engaging constructively with people with whom we disagree. We live in a liberal democracy, and we should value the importance of getting along even in the face of fundamentally different values, much less specific political stances. Not everyone is worth talking to, but I prefer to err on the side of trying to listen to and speak with as wide a spectrum of people as I can. Hell, maybe I am even wrong and could learn something. On the other hand, there is a limit. At some point, people become so odious and morally reprehensible that they are just monsters, not respected opponents. It’s important to keep in our list of available actions the ability to simply oppose those who are irredeemably dangerous/evil/wrong. You don’t have to let Hitler eat in your restaurant. This raises two issues that are not so easy to adjudicate. First, where do we draw the line? What are the criteria by which we can judge someone to have crossed over from “disagreed with” to “shunned”? I honestly don’t know. I tend to err on the side of not shunning people (in public spaces) until it becomes absolutely necessary, but I’m willing to have my mind changed about this. I also think the worry that this particular administration exhibits authoritarian tendencies that could lead to a catastrophe is not a completely silly one, and is at least worth considering seriously.
More importantly, if the argument is “moral monsters should just be shunned, not reasoned with or dealt with constructively,” we have to be prepared to be shunned ourselves by those who think that we’re moral monsters (and those people are out there). There are those who think, for what they take to be good moral reasons, that abortion and homosexuality are unforgivable sins. If we think it’s okay for restaurant owners who oppose Trump to refuse service to members of his administration, we have to allow staunch opponents of e.g. abortion rights to refuse service to politicians or judges who protect those rights. The issue becomes especially tricky when the category of “people who are considered to be morally reprehensible” coincides with an entire class of humans who have long been discriminated against, e.g. gays or transgender people. In my view it is bigoted and wrong to discriminate against those groups, but there exist people who find it a moral imperative to do so. A sensible distinction can probably be made between groups that we as a society have decided are worthy of protection and equal treatment regardless of an individual’s moral code, so it’s at least consistent to allow restaurant owners to refuse to serve specific people they think are moral monsters because of some policy they advocate, while still requiring that they serve members of groups whose behaviors they find objectionable. The only alternative, as I see it, is to give up on the values of liberal toleration, and to simply declare that our personal moral views are unquestionably the right ones, and everyone should be judged by them. That sounds wrong, although we do in fact enshrine certain moral judgments in our legal codes (murder is bad) while leaving others up to individual conscience (whether you want to eat meat is up to you). But it’s probably best to keep that moral core that we codify into law as minimal and widely-agreed-upon as possible, if we want to live in a diverse society. This would all be simpler if we didn’t have an administration in power that actively works to demonize immigrants and non-straight-white-Americans more generally. Tolerating the intolerant is one of the hardest tasks in a democracy. June 24, 2018 Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe) 7th Robert Boyle Summer School This weekend saw the 7th Robert Boyle Summer School, an annual 3-day science festival in Lismore, Co. Waterford in Ireland. It’s one of my favourite conferences – a select number of talks on the history and philosophy of science, aimed at curious academics and the public alike, with lots of time for questions and discussion after each presentation. (Images: the Irish-born scientist and aristocrat Robert Boyle; Lismore Castle in Co. Waterford, the birthplace of Robert Boyle.) Born in Lismore into a wealthy landowning family, Robert Boyle became one of the most important figures in the Scientific Revolution. A contemporary of Isaac Newton and Robert Hooke, he is recognized the world over for his scientific discoveries, his role in the rise of the Royal Society and his influence in promoting the new ‘experimental philosophy’ in science. This year, the theme of the conference was ‘What do we know – and how do we know it?’.
There were many interesting talks, such as Boyle’s Theory of Knowledge by Dr William Eaton, Associate Professor of Early Modern Philosophy at Georgia Southern University; The How, Who & What of Scientific Discovery by Paul Strathern, author of a great many books on scientists and philosophers such as the well-known Philosophers in 90 Minutes series; Scientific Enquiry and Brain State – Understanding the Nature of Knowledge by Professor William T. O’Connor, Head of Teaching and Research in Physiology at the University of Limerick Graduate Entry Medical School; and The Promise and Peril of Big Data by Timandra Harkness, well-known media presenter, comedian and writer. For physicists, there was a welcome opportunity to hear the well-known American philosopher of physics Robert P. Crease present the talk Science Denial: will any knowledge do? The full programme for the conference can be found here. All in all, a hugely enjoyable summer school, culminating in a garden party in the grounds of Lismore castle, Boyle’s ancestral home. My own contribution was to provide the music for the garden party – a flute, violin and cello trio, playing the music of Boyle’s contemporaries, from Johann Sebastian Bach to Turlough O’Carolan. In my view, the latter was a baroque composer of great importance whose music should be much better known outside Ireland. (Images from the garden party in the grounds of Lismore Castle.) June 22, 2018 Jester - Resonaances Both g-2 anomalies Two months ago an experiment in Berkeley announced a new ultra-precise measurement of the fine structure constant α using interferometry techniques. This wasn't much noticed because the paper is not on arXiv, and moreover this kind of research is filed under metrology, which is easily confused with meteorology. So it's worth commenting on why precision measurements of α could be interesting for particle physics. What the Berkeley group really did was to measure the mass of the cesium-133 atom, achieving the relative accuracy of 4*10^-10, that is 0.4 parts per billion (ppb). With that result in hand, α can be determined after a cavalier rewriting of the high-school formula for the Rydberg constant: $\alpha^2 = \frac{2 R_\infty}{c}\,\frac{m_{\rm Cs}}{m_e}\,\frac{h}{m_{\rm Cs}}$. Everybody knows the first 3 digits of the Rydberg constant, Ry≈13.6 eV, but actually it is experimentally known with the fantastic accuracy of 0.006 ppb, and the electron-to-atom mass ratio has also been determined precisely. Thus the measurement of the cesium mass can be translated into a 0.2 ppb measurement of the fine structure constant: 1/α=137.035999046(27). You may think that this kind of result could appeal only to a Pythonesque chartered accountant. But you would be wrong. First of all, the new result excludes α = 1/137 at 1 million sigma, dealing a mortal blow to the field of epistemological numerology. Perhaps more importantly, the result is relevant for testing the Standard Model. One place where precise knowledge of α is essential is in the calculation of the magnetic moment of the electron. Recall that the g-factor is defined as the proportionality constant between the magnetic moment and the angular momentum. For the electron we have $\vec\mu_e = g_e \frac{e}{2 m_e}\vec S$, and one defines the anomaly $a_e \equiv (g_e - 2)/2$. Experimentally, $g_e$ is one of the most precisely determined quantities in physics, with the most recent measurement quoting $a_e = 0.00115965218073(28)$, that is 0.0001 ppb accuracy on $g_e$, or 0.2 ppb accuracy on $a_e$. In the Standard Model, $g_e$ is calculable as a function of α and other parameters.
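(A quick numerical sanity check of the rewritten Rydberg relation above, run backwards with CODATA-style constants rather than the Berkeley inputs, which enter through h/m_Cs and the mass ratios; purely illustrative:)

```python
import math

# alpha^2 = 2 * R_infinity * h / (m_e * c): the "high-school" Rydberg relation
# solved for alpha. Constants are rounded CODATA-style values.
R_inf = 10973731.568160   # Rydberg constant, 1/m
h     = 6.62607015e-34    # Planck constant, J*s
m_e   = 9.1093837015e-31  # electron mass, kg
c     = 299792458.0       # speed of light, m/s

alpha = math.sqrt(2 * R_inf * h / (m_e * c))
print(f"1/alpha = {1 / alpha:.9f}")   # ~137.036, as it should be
```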
In the classical approximation $g_e=2$, while the one-loop correction proportional to the first power of α was already known in prehistoric times thanks to Schwinger: $g_e = 2\left(1 + \frac{\alpha}{2\pi} + \dots\right)$. The dots summarize decades of subsequent calculations, which now include O(α^5) terms, that is 5-loop QED contributions! Thanks to these heroic efforts (depicted in the film For a Few Diagrams More - a sequel to Kurosawa's Seven Samurai), the main theoretical uncertainty for the Standard Model prediction of $g_e$ is due to the experimental error on the value of α. The Berkeley measurement allows one to reduce the relative theoretical error on $a_e$ down to 0.2 ppb: $a_e = 0.00115965218161(23)$, which matches in magnitude the experimental error and improves by a factor of 3 the previous prediction based on the α measurement with rubidium atoms. At the spiritual level, the comparison between the theory and experiment provides an impressive validation of quantum field theory techniques up to the 13th significant digit - an unimaginable theoretical accuracy in other branches of science. More practically, it also provides a powerful test of the Standard Model. New particles coupled to the electron may contribute to the same loop diagrams from which $g_e$ is calculated, and could shift the observed value of $a_e$ away from the Standard Model predictions. In many models, corrections to the electron and muon magnetic moments are correlated. The latter famously deviates from the Standard Model prediction by 3.5 to 4 sigma, depending on who counts the uncertainties. Actually, if you bother to eye carefully the experimental and theoretical values of $a_e$ beyond the 10th significant digit you can see that they are also discrepant, this time at the 2.5 sigma level. So now we have two g-2 anomalies! (The summary plot from the original post is omitted here.) If you're a member of the Holy Church of Five Sigma you can almost preach an unambiguous discovery of physics beyond the Standard Model. However, for most of us this is not the case yet. First, there is still some debate about the theoretical uncertainties entering the muon g-2 prediction. Second, while it is quite easy to fit each of the two anomalies separately, there seems to be no appealing model to fit both of them at the same time. Take for example the very popular toy model with a new massive spin-1 Z' boson (aka the dark photon) kinetically mixed with the ordinary photon. In this case Z' has, much like the ordinary photon, vector-like and universal couplings to electrons and muons. But this leads to a positive contribution to g-2, and it does not fit well the $a_e$ measurement, which favors a new negative contribution. In fact, the $a_e$ measurement provides the most stringent constraint in part of the parameter space of the dark photon model. Conversely, a Z' boson with purely axial couplings to matter does not fit the data, as it gives a negative contribution to g-2, thus making the muon g-2 anomaly worse. What might work is a hybrid model with a light Z' boson having lepton-flavor violating interactions: a vector coupling to muons and a somewhat smaller axial coupling to electrons. But constructing a consistent and realistic model along these lines is a challenge because of other experimental constraints (e.g. from the lack of observation of μ→eγ decays). Some food for thought can be found in this paper, but I'm not sure if a sensible model exists at the moment. If you know one you are welcome to drop a comment here or a paper on arXiv. More excitement on this front is in store.
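For readers who want to see the leading terms of the Schwinger series at work, here is a toy cross-check keeping only the first two, analytically known, coefficients and the Berkeley α; the 3-, 4- and 5-loop QED terms plus the hadronic and electroweak pieces, all needed for the full 0.2 ppb comparison, are omitted.

```python
import math

# Electron anomaly a_e = (g_e - 2)/2, truncated at two loops:
# a_e = (alpha/2pi) - 0.328478965*(alpha/pi)^2   (Schwinger; Petermann/Sommerfield)
alpha = 1 / 137.035999046        # Berkeley value quoted above
x = alpha / math.pi
a_e_2loop = 0.5 * x - 0.328478965 * x**2
a_e_exp = 0.00115965218073       # measured value quoted above

print(f"a_e (2-loop QED) = {a_e_2loop:.14f}")
print(f"a_e (experiment) = {a_e_exp:.14f}")
print(f"relative gap ~ {abs(a_e_2loop - a_e_exp) / a_e_exp:.0e}")  # ~1e-5: higher loops matter
```

Already at two loops the prediction agrees with experiment to about ten parts per million, which is why the remaining loop orders — and the precise value of α — are what the game is about.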
The muon g-2 experiment at Fermilab should soon deliver first results which may confirm or disprove the muon anomaly. Further progress with the electron g-2 and fine-structure constant measurements is also expected in the near future. The biggest worry is that, if the accuracy improves by another two orders of magnitude, we will need to calculate six-loop QED corrections... June 16, 2018 Tommaso Dorigo - Scientificblogging On The Residual Brightness Of Eclipsed Jovian Moons While preparing for another evening of observation of Jupiter's atmosphere with my faithful 16" dobsonian scope, I found out that the satellite Io will disappear behind the Jovian shadow tonight. This is a quite common phenomenon and not a very spectacular one, but still quite interesting to look forward to during a visual observation - the moon takes some time to fully disappear, so it is fun to follow the event. This however got me thinking. A fully eclipsed Jovian moon should still be able to reflect back some light picked up from the other, still lit satellites - so it should not, after all, appear completely dark. Can a calculation be made of the effect? Of course - and it's not that difficult. June 12, 2018 Axel Maas - Looking Inside the Standard Model How to test an idea As you may have guessed from reading through the blog, our work is centered around a change of paradigm: that there is a very intriguing structure of the Higgs and the W/Z bosons. And that what we observe in the experiments is actually more complicated than what we usually assume. That they are not just essentially point-like objects. This is a very bold claim, as it touches upon very basic things in the standard model of particle physics, and the interpretation of experiments. However, it is at the same time a necessary consequence if one takes the underlying, more formal theoretical foundation seriously. The reason that there is not a huge clash is that the standard model is very special. Because of this, both pictures give almost the same prediction for experiments. This can also be understood quantitatively. That is what I have written a review about. It can be imagined in this way: the actual particle which we observe, and call the Higgs, is actually a complicated object made from two Higgs particles. However, one of those is so much eclipsed by the other that it looks like just a single one. And a very tiny correction to it. So far, this does not seem to be something that it is necessary to worry about. However, there are many and good reasons to believe that the standard model is not the end of particle physics. There are many, many blogs out there which explain the reasons for this much better than I do. However, our research provides hints that what works so nicely in the standard model may work much less so in some extensions of the standard model. That there the composite nature makes huge differences for experiments. This was what came out of our numerical simulations. Of course, these are not perfect. And, after all, unfortunately we have not yet discovered anything beyond the standard model in experiments. So we cannot test our ideas against actual experiments, which would be the best thing to do. And without experimental support such an enormous shift in paradigm seems to be a bit far-fetched. Even if our numerical simulations, which are far from perfect, support the idea. Formal ideas supported by numerical simulations are just not as convincing as experimental confirmation. So, is this hopeless?
Do we have to wait for new physics to make its appearance? Well, not yet. In the figure above, there was 'something'. So the ideas also make a statement that even within the standard model there should be a difference. The only question is, what is really the value of that 'little bit'? So far, experiments did not show any deviations from the usual picture, so the 'little bit' indeed needs to be rather small. But we have a calculation prescription for this 'little bit' in the standard model. So, at the very least, what we can do is make a calculation of this 'little bit' in the standard model. We should then see if the value of the 'little bit' may already be so large that the basic idea is ruled out, because we are in conflict with experiment. If this is the case, this would raise a lot of questions about the basic theory, but well, experiment rules. And thus we would need to go back to the drawing board and get a better understanding of the theory. Or, we get something which is in agreement with current experiment, because it is smaller than the current experimental precision. But then we can make a statement about how much better experimental precision needs to become to see the difference. Hopefully the answer will not be such that it is out of reach within the next couple of decades. But this we will see at the end of the calculation. And then we can decide whether we will get an experimental test.

Doing the calculations is actually not so simple. On the one hand, they are technically challenging, even though our method for them is rather well under control. They will also not yield perfect results, but hopefully good enough ones. Also, it depends strongly on the type of experiment how simple the calculations are. We took a first few steps, though for a type of experiment not (yet) available - hopefully it will be in about twenty years. There we saw that not only the type of experiment, but also the type of measurement matters. For some measurements the effect will be much smaller than for others. But we are not yet able to predict this before doing the calculation. There, we still need a much better understanding of the underlying mathematics. That we will hopefully gain by doing more of these calculations. This is a project I am currently pursuing with a number of master students for various measurements and at various levels. Hopefully, in the end we get a clear set of predictions. And then we can ask our colleagues at experiments to please check these predictions. So, stay tuned.

By the way: this is the standard cycle for testing new ideas and theories. Have an idea. Check that it fits with all existing experiments. And yes, this may be very, very many. If your idea passes this test: great! There is actually a chance that it can be right. If not, you have to understand why it does not fit. If it can be fixed, fix it, and start again. Or have a new idea. And, at any rate, if it cannot be fixed, have a new idea. When you have got an idea which works with everything we know, use it to make a prediction where you get a difference from our current theories. By this you provide an experimental test, which can decide whether your idea is the better one. If yes: great! You have just rewritten our understanding of nature. If not: well, go back to fix it, or have a new idea. Of course, it is best if we already have an experiment which does not fit with our current theories. But there we are, at this stage, a little short of those. That may change again.
If your theory has no predictions which can be tested experimentally in any foreseeable future - well, how to deal with that is a good question, and there is not yet a consensus on how to proceed.

June 10, 2018 Tommaso Dorigo - Scientificblogging Modeling Issues Or New Physics? Surprises From Top Quark Kinematics Study

Simulation, noun: 1. Imitation or enactment. 2. The act or process of pretending; feigning. 3. An assumption or imitation of a particular appearance or form; counterfeit; sham. Well, high-energy physics is all about simulations. We have a theoretical model that predicts the outcome of the very energetic particle collisions we create in the core of our giant detectors, but we only have approximate descriptions of the inputs to the theoretical model, so we need simulations.

June 09, 2018 Jester - Resonaances Dark Matter goes sub-GeV

It must have been great to be a particle physicist in the 1990s. Everything was simple and clear then. They knew that, at the most fundamental level, nature was described by one of the five superstring theories which, at low energies, reduced to the Minimal Supersymmetric Standard Model. Dark matter also had a firm place in this narrative, being identified with the lightest neutralino of the MSSM. This simple-minded picture strongly influenced the experimental program of dark matter detection, which was almost entirely focused on the so-called WIMPs in the 1 GeV - 1 TeV mass range. Most of the detectors, including the current leaders XENON and LUX, are blind to sub-GeV dark matter, as slow and light incoming particles are unable to transfer a detectable amount of energy to the target nuclei.

Sometimes progress consists in realizing that you know nothing, Jon Snow. The lack of new physics at the LHC invalidates most of the historical motivations for WIMPs. Theoretically, the mass of the dark matter particle could be anywhere between 10^-30 GeV and 10^19 GeV. There are myriads of models positioned anywhere in that range, and it's hard to argue with a straight face that any particular one is favored. We now know that we don't know what dark matter is, and that we had better search in many places. If anything, the small-scale problem of the ΛCDM cosmological model can be interpreted as a hint against the boring WIMPs and in favor of light dark matter. For example, if it turns out that dark matter has significant (nuclear-size) self-interactions, that can only be realized with sub-GeV particles.

It takes some time for experiment to catch up with theory, but the process is already well in motion. There is some fascinating progress on the front of ultra-light axion dark matter, which deserves a separate post. Here I want to highlight the ongoing developments in direct detection of dark matter particles with masses between MeV and GeV. Until recently, the only available constraint in that regime was obtained by recasting data from the XENON10 experiment - the grandfather of the currently operating XENON1T. In XENON detectors there are two ingredients of the signal generated when a target nucleus is struck: ionization electrons and scintillation photons. WIMP searches require both to discriminate signal from background. But MeV dark matter interacting with electrons could eject electrons from xenon atoms without producing scintillation. In the standard analysis, such events would be discarded as background.
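To get a feel for why light dark matter evades the nuclear-recoil channel, here is a back-of-the-envelope sketch (mine, with round numbers): the maximum elastic recoil energy is E_max = 2μ²v²/mN, where μ is the dark matter-nucleus reduced mass and v ~ 10^-3 c is a typical galactic velocity.

```python
# Maximum nuclear recoil energy in elastic dark matter scattering:
# E_max = 2 * mu^2 * v^2 / m_N  (natural units: masses in GeV, v in units of c).

def e_max_keV(m_dm, m_nucleus, v=1e-3):
    mu = m_dm * m_nucleus / (m_dm + m_nucleus)  # reduced mass in GeV
    return 2 * mu**2 * v**2 / m_nucleus * 1e6   # GeV -> keV

m_xe = 122.0  # xenon nucleus mass in GeV (A ~ 131)
for m_dm in [100.0, 10.0, 1.0, 0.1]:
    print(f"m_DM = {m_dm:6.1f} GeV -> E_max = {e_max_keV(m_dm, m_xe):.2e} keV")
```

A 100 GeV WIMP can deposit tens of keV, comfortably above a xenon detector's threshold, while a 100 MeV particle deposits only a fraction of an eV on the nucleus - hopeless, which is why electron scattering and ionization-only signals are the way in at these masses.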
However, this paper showed that, recycling the available XENON10 data on ionization-only events, one can exclude dark matter in the 100 MeV ballpark with a cross section for scattering on electrons larger than ~0.01 picobarn (10^-38 cm^2). This already has non-trivial consequences for concrete models; for example, a part of the parameter space of milli-charged dark matter is currently best constrained by XENON10. It is remarkable that so much useful information can be extracted by basically misusing data collected for another purpose (earlier this year the DarkSide-50 collaboration recast their own data in the same manner, excluding another chunk of the parameter space). Nevertheless, dedicated experiments will soon be taking over. Recently, two collaborations published first results from their prototype detectors: one is SENSEI, which uses 0.1 gram of silicon CCDs, and the other is SuperCDMS, which uses 1 gram of silicon semiconductor. Both are sensitive to eV energy depositions, thanks to which they can extend the search to lower dark matter masses and set novel limits in the virgin territory between 0.5 and 5 MeV. A compilation of the existing direct detection limits is shown in the plot. As you can see, above 5 MeV the tiny prototypes cannot yet beat the XENON10 recast. But that will certainly change as soon as full-blown detectors are constructed, after which the XENON10 sensitivity should be improved by several orders of magnitude.

Should we be restless waiting for these results? Well, for any single experiment the chances of finding nothing are immensely larger than those of finding something. Nevertheless, the technical progress and the widening scope of searches offer some hope that the dark matter puzzle may be solved soon.

June 08, 2018 Jester - Resonaances Massive Gravity, or You Only Live Twice

Proving Einstein wrong is the ultimate ambition of every crackpot and physicist alike. In particular, Einstein's theory of gravitation - general relativity - has been a victim of constant harassment. That is to say, it is trivial to modify gravity at large energies (short distances), for example by embedding it in string theory, but it is notoriously difficult to change its long-distance behavior. At the same time, motivations to keep trying go beyond intellectual gymnastics. For example, the accelerated expansion of the universe may be a manifestation of modified gravity (rather than of a small cosmological constant).

In Einstein's general relativity, gravitational interactions are mediated by a massless spin-2 particle - the so-called graviton. This is what gives it its hallmark properties: the long range and the universality. One obvious way to screw with Einstein is to add mass to the graviton, as entertained already in 1939 by Fierz and Pauli. The Particle Data Group quotes the constraint m ≤ 6*10^−32 eV, so we are talking about a De Broglie wavelength comparable to the size of the observable universe. Yet even that teeny mass may cause massive troubles. In 1970 the Fierz-Pauli theory was killed by the van Dam-Veltman-Zakharov (vDVZ) discontinuity. The problem stems from the fact that a massive spin-2 particle has 5 polarization states (0,±1,±2), unlike a massless one, which has only two (±2). It turns out that the polarization-0 state couples to matter with similar strength as the usual polarization ±2 modes, even in the limit where the mass goes to zero, and thus mediates an additional force which differs from the usual gravity.
One finds that, in massive gravity, light bending would be 25% smaller, in conflict with the very precise observations of stars' deflection around the Sun. vDV concluded that "the graviton has rigorously zero mass". Dead for the first time...

The second coming was heralded soon after by Vainshtein, who noticed that the troublesome polarization-0 mode can be shut off in the proximity of stars and planets. This can happen in the presence of graviton self-interactions of a certain type. Technically, what happens is that the polarization-0 mode develops a background value around massive sources which, through the derivative self-interactions, renormalizes its kinetic term and effectively diminishes its interaction strength with matter. See here for a nice review and more technical details. Thanks to the Vainshtein mechanism, the usual predictions of general relativity are recovered around large massive sources, which is exactly where we can best measure gravitational effects. The possible self-interactions leading to a healthy theory without ghosts have been classified, and go under the name of the dRGT massive gravity.

There is however one inevitable consequence of the Vainshtein mechanism. The graviton self-interaction strength grows with energy, and at some point becomes inconsistent with the unitarity limits that every quantum theory should obey. This means that massive gravity is necessarily an effective theory with a limited validity range and has to be replaced by a more fundamental theory at some cutoff scale Λ. This is of course nothing new for gravity: the usual Einstein gravity is also an effective theory valid at most up to the Planck scale MPl~10^19 GeV. But for massive gravity the cutoff depends on the graviton mass and is much smaller for realistic theories. At best, the cutoff is Λmax = (m^2 MPl)^1/3, which for the allowed graviton mass corresponds to a length scale of roughly 300 km. So the massive gravity theory in its usual form cannot be used at distance scales shorter than ~300 km. For particle physicists that would be a disaster, but for cosmologists this is fine, as one can still predict the behavior of galaxies, stars, and planets. While the theory certainly cannot be used to describe the results of table-top experiments, it is relevant for the movement of celestial bodies in the Solar System. Indeed, lunar laser ranging experiments or precision studies of Jupiter's orbit are interesting probes of the graviton mass.

Now comes the latest twist in the story. Some time ago this paper showed that not everything is allowed in effective theories. Assuming the full theory is unitary, causal and local implies non-trivial constraints on the possible interactions in the low-energy effective theory. These techniques are suitable to constrain, via dispersion relations, derivative interactions of the kind required by the Vainshtein mechanism. Applying them to the dRGT gravity, one finds that it is inconsistent to assume the theory is valid all the way up to Λmax. Instead, it must be replaced by a more fundamental theory already at a much lower cutoff scale, parameterized as Λ = g*^1/3 Λmax (the parameter g* is interpreted as the coupling strength of the more fundamental theory). The allowed parameter space in the g*-m plane is shown in this plot: massive gravity must live in the lower left corner, outside the gray area excluded theoretically, and where the graviton mass satisfies the experimental upper limit m~10^−32 eV. This implies g* ≲ 10^-10, and thus the validity range of the theory is some 3 orders of magnitude lower than Λmax.
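The numbers quoted here are easy to reproduce (a sketch of mine, using the Λmax = (m^2 MPl)^1/3 cutoff mentioned above):

```python
# Validity range of dRGT massive gravity: cutoff scale and corresponding distance.
hbar_c = 1.9732e-7        # hbar*c in eV*m, converts an energy into a length
m_planck = 2.4e27         # reduced Planck mass in eV (~2.4*10^18 GeV)
m_graviton = 1e-32        # graviton mass in eV (experimental upper limit)

lam_max = (m_graviton**2 * m_planck) ** (1 / 3)   # best-case cutoff Lambda_max
g_star = 1e-10                                    # bound from the dispersion relations
lam = g_star ** (1 / 3) * lam_max                 # actual cutoff of the effective theory

print(f"Lambda_max = {lam_max:.1e} eV  ->  {hbar_c / lam_max / 1e3:.0f} km")
print(f"Lambda     = {lam:.1e} eV  ->  {hbar_c / lam / 1e9:.1f} million km")
```

This gives roughly 300 km for Λmax, and just under a million km once the g* suppression is included.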
In other words, massive gravity is not a consistent effective theory at distance scales below ~1 million km, and thus cannot be used to describe the motion of falling apples, GPS satellites or even the Moon. In this sense, it's not much of a competition to, say, Newton. Dead for the second time.

Is this the end of the story? For the third coming we would need a more general theory with additional light particles beyond the massive graviton, which is consistent theoretically in a larger energy range, realizes the Vainshtein mechanism, and is in agreement with the current experimental observations. This is hard but not impossible to imagine. Whatever the outcome, what I like in this story is the role of theory in driving the progress, which is rarely seen these days. In the process, we have understood a lot of interesting physics whose relevance goes well beyond one specific theory. So the trip was certainly worth it, even if we find ourselves back at the departure point.

June 07, 2018 Jester - Resonaances Can MiniBooNE be right?

The experimental situation in neutrino physics is confusing. On one hand, a host of neutrino experiments has established a consistent picture where the neutrino mass eigenstates are mixtures of the 3 Standard Model neutrino flavors νe, νμ, ντ. The measured mass differences between the eigenstates are Δm12^2 ≈ 7.5*10^-5 eV^2 and Δm13^2 ≈ 2.5*10^-3 eV^2, suggesting that all Standard Model neutrinos have masses below 0.1 eV. That is well in line with cosmological observations, which find that the radiation budget of the early universe is consistent with the existence of exactly 3 neutrinos with the sum of the masses less than 0.2 eV. On the other hand, several rogue experiments refuse to conform to the standard 3-flavor picture. The most severe anomaly is the appearance of electron neutrinos in a muon neutrino beam observed by the LSND and MiniBooNE experiments.

This story begins in the previous century with the LSND experiment in Los Alamos, which claimed to observe νμ→νe antineutrino oscillations with 3.8σ significance. This result was considered controversial from the very beginning due to limitations of the experimental set-up. Moreover, it was inconsistent with the standard 3-flavor picture which, given the masses and mixing angles measured by other experiments, predicted that νμ→νe oscillations should be unobservable in short-baseline (L ≲ 1 km) experiments. The MiniBooNE experiment in Fermilab was conceived to conclusively prove or disprove the LSND anomaly. To this end, a beam of mostly muon neutrinos or antineutrinos with energies E~1 GeV is sent to a detector at a distance L~500 meters away. In general, neutrinos can change their flavor with a probability oscillating as P ~ sin^2(Δm^2 L/4E). If the LSND excess is really due to neutrino oscillations, one expects to observe electron neutrino appearance in the MiniBooNE detector, given that L/E is similar in the two experiments. Originally, MiniBooNE was hoping to see a smoking gun in the form of an electron neutrino excess oscillating as a function of L/E, that is, peaking at intermediate energies and then decreasing towards lower energies (possibly with several wiggles). That didn't happen. Instead, MiniBooNE finds an excess increasing towards low energies with a shape similar to that of the backgrounds. Thus the confusion lingers on: the LSND anomaly has neither been killed nor robustly confirmed.

In spite of these doubts, the LSND and MiniBooNE anomalies continue to arouse interest.
This is understandable: as the results do not fit the 3-flavor framework, if confirmed they would prove the existence of new physics beyond the Standard Model. The simplest fix would be to introduce a sterile neutrino νs with a mass in the eV ballpark, in which case MiniBooNE would be observing the νμ→νs→νe oscillation chain. With the recent MiniBooNE update the evidence for electron neutrino appearance increased to 4.8σ, which has stirred some commotion on Twitter and in the blogosphere. However, I find the excitement a bit misplaced. The anomaly is not really new: similar results showing a 3.8σ excess of νe-like events were already published in 2012. The increase of the significance is hardly relevant: at this point we know anyway that the excess is not a statistical fluke, while a systematic effect due to underestimated backgrounds would also lead to a growing anomaly. If anything, there are now fewer reasons than in 2012 to believe in the sterile neutrino origin of the MiniBooNE anomaly, as I will argue in the following.

What has changed since 2012? First, there are new constraints on νe appearance from the OPERA experiment (yes, this OPERA), which did not see any excess νe in the CERN-to-Gran-Sasso νμ beam. This excludes a large chunk of the relevant parameter space corresponding to large mixing angles between the active and sterile neutrinos. From this point of view, the MiniBooNE update actually puts more stress on the sterile neutrino interpretation by slightly shifting the preferred region towards larger mixing angles... Nevertheless, a not-too-horrible fit to all appearance experiments can still be achieved in the region with Δm^2~0.5 eV^2 and the mixing angle sin^2(2θ) of order 0.01.

Next, the cosmological constraints have become more stringent. The CMB observations by the Planck satellite do not leave room for an additional neutrino species in the early universe. But for the parameters preferred by LSND and MiniBooNE, the sterile neutrino would be abundantly produced in the hot primordial plasma, thus violating the Planck constraints. To avoid this, theorists need to deploy a battery of tricks (for example, large sterile-neutrino self-interactions), which makes realistic models rather baroque.

But the killer punch is delivered by disappearance analyses. Benjamin Franklin famously said that only two things in this world were certain: death and probability conservation. Thus whenever an electron neutrino appears in a νμ beam, a muon neutrino must disappear. However, the latter process is severely constrained by long-baseline neutrino experiments, and recently the limits have been further strengthened thanks to the MINOS and IceCube collaborations. A recent combination of the existing disappearance results is available in this paper. In the 3+1 flavor scheme, the probability of a muon neutrino transforming into an electron one in a short-baseline experiment is P(νμ→νe) ≈ 4 |Ue4|^2 |Uμ4|^2 sin^2(Δm41^2 L/4E), where U is the 4x4 neutrino mixing matrix. The Uμ4 matrix element also controls the νμ survival probability, P(νμ→νμ) ≈ 1 - 4 |Uμ4|^2 (1 - |Uμ4|^2) sin^2(Δm41^2 L/4E). The νμ disappearance data from MINOS and IceCube imply |Uμ4| ≲ 0.1, while |Ue4| ≲ 0.25 from solar neutrino observations. All in all, the disappearance results imply that the effective mixing angle sin^2(2θ) controlling the νμ→νs→νe oscillation must be much smaller than the 0.01 required to fit the MiniBooNE anomaly. The disagreement between the appearance and disappearance data had already existed before, but was actually made worse by the MiniBooNE update.
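The clash is easy to see numerically (a sketch; in the 3+1 scheme the effective appearance amplitude is sin^2(2θμe) = 4 |Ue4|^2 |Uμ4|^2):

```python
# Effective nu_mu -> nu_e mixing allowed by disappearance data vs. what MiniBooNE needs.
U_mu4 = 0.1    # upper limit from nu_mu disappearance (MINOS, IceCube)
U_e4 = 0.25    # upper limit from solar neutrino observations

sin2_2theta_allowed = 4 * U_e4**2 * U_mu4**2
print(f"allowed: sin^2(2theta) <= {sin2_2theta_allowed:.1e}")  # 2.5e-03
print(f"needed : sin^2(2theta) ~  {0.01:.1e}")                 # MiniBooNE/LSND fit
```

Even with both matrix elements pushed to their upper limits, the appearance amplitude falls short of the fit by a factor of a few, and global fits make the tension considerably worse.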
So the hypothesis of a 4th sterile neutrino does not stand up to scrutiny as an explanation of the MiniBooNE anomaly. It does not mean that there is no other possible explanation (more sterile neutrinos? non-standard interactions? neutrino decays?). However, any realistic model will have to delve deep into the crazy side in order to satisfy the constraints from other neutrino experiments, flavor physics, and cosmology. Fortunately, the current confusing situation should not last forever. The MiniBooNE photon background from π0 decays may be clarified by the ongoing MicroBooNE experiment. On the timescale of a few years the controversy should be closed by the SBN program in Fermilab, which will add one near and one far detector to the MicroBooNE beamline. Until then... years of painful experience have taught us to assign a high prior to the Standard Model hypothesis. Currently, by far the most plausible explanation of the existing data is an experimental error on the part of the MiniBooNE collaboration.

June 01, 2018 Jester - Resonaances WIMPs after XENON1T

After today's update from the XENON1T experiment, the situation on the front of direct detection of WIMP dark matter is as follows. A WIMP can be loosely defined as a dark matter particle with mass in the 1 GeV - 10 TeV range and significant interactions with ordinary matter. Historically, WIMP searches have stimulated enormous interest because this type of dark matter can be easily realized in models with low-scale supersymmetry. Now that we are older and wiser, many physicists would rather put their money on other realizations, such as axions, MeV dark matter, or primordial black holes. Nevertheless, WIMPs remain a viable possibility that should be further explored.

To detect WIMPs heavier than a few GeV, currently the most successful strategy is to use huge detectors filled with xenon atoms, hoping one of them is hit by a passing dark matter particle. XENON1T beats the competition from the LUX and Panda-X experiments because it has a bigger tank. Technologically speaking, we have come a long way in the last 30 years. XENON1T is now sensitive to 40 GeV WIMPs interacting with nucleons with a cross section of 40 yoctobarn (1 yb = 10^-12 pb = 10^-48 cm^2). This is 6 orders of magnitude better than what the first direct detection experiment in the Homestake mine could achieve back in the 80s. Compared to last year, the limit is better by a factor of two at the most sensitive mass point. At high mass the improvement is somewhat smaller than expected due to a small excess of events observed by XENON1T, which is probably just a 1 sigma upward fluctuation of the background.

What we are learning about WIMPs is how they can (or cannot) interact with us. Of course, at this point in the game we don't see qualitative progress, but rather incremental quantitative improvements. One possible scenario is that WIMPs experience one of the Standard Model forces, such as the weak or the Higgs force. The former option is strongly constrained by now. If WIMPs interacted in the same way as our neutrinos do, that is by exchanging a Z boson, they would have been found in the Homestake experiment already. XENON1T is probing models where the dark matter coupling to the Z boson is suppressed by a factor cχ ~ 10^-3 - 10^-4 compared to that of an active neutrino. On the other hand, dark matter could be participating in weak interactions only by exchanging W bosons, which can happen for example when it is a part of an SU(2) triplet.
In the plot you can see that XENON1T is approaching but not yet excluding this interesting possibility. As for models using the Higgs force, XENON1T is probing the (subjectively) most natural parameter space, where WIMPs couple with order-one strength to the Higgs field.

And the arms race continues. The search in XENON1T will go on until the end of this year, although at this point a discovery is extremely unlikely. Further progress is expected on a timescale of a few years thanks to the next-generation xenon detectors XENONnT and LUX-ZEPLIN, which should achieve yoctobarn sensitivity. DARWIN may be the ultimate experiment along these lines, in the sense that it will reach the irreducible background from atmospheric neutrinos (and there is no prefix smaller than yocto anyway), after which new detection techniques will be needed. For dark matter masses closer to 1 GeV, several orders of magnitude of pristine parameter space will be covered by the SuperCDMS experiment. Until then we are kept in suspense. Is dark matter made of WIMPs? And if yes, does it stick out above the neutrino sea?

Tommaso Dorigo - Scientificblogging MiniBoone Confirms Neutrino Anomaly

Neutrinos, the most mysterious and fascinating of all elementary particles, continue to puzzle physicists. 20 years after the experimental verification of a long-debated effect whereby the three neutrino species can "oscillate", changing their nature by turning one into the other as they propagate in vacuum and in matter, the jury is still out to decide what really is the matter with them. And a new result by the MiniBoone collaboration is stirring the waters once more.

May 26, 2018 Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe) A festschrift at UCC

One of my favourite academic traditions is the festschrift, a conference convened to honour the contribution of a senior academic. In a sense, it’s academia’s version of an Oscar for lifetime achievement, as scholars from all around the world gather to pay tribute to their former mentor, colleague or collaborator. Festschrifts tend to be very stimulating meetings, as the diverging careers of former students and colleagues typically make for a diverse set of talks. At the same time, there is usually a unifying theme based around the specialism of the professor being honoured.

And so it was at NIALLFEST this week, as many of the great and the good from the world of Einstein’s relativity gathered at University College Cork to pay tribute to Professor Niall O’Murchadha, a theoretical physicist in UCC’s Department of Physics noted internationally for seminal contributions to general relativity. Some measure of Niall’s influence can be seen from the number of well-known theorists at the conference, including major figures such as Bob Wald, Bill Unruh, Edward Malec and Kip Thorne (the latter was recently awarded the Nobel Prize in Physics for his contribution to the detection of gravitational waves). The conference website can be found here and the programme is here.

University College Cork: probably the nicest college campus in Ireland

As expected, we were treated to a series of high-level talks on diverse topics, from black hole collapse to analysis of high-energy jets from active galactic nuclei, from the initial value problem in relativity to the search for dark matter (slides for my own talk can be found here).
To pick one highlight, Kip Thorne’s reminiscences of the forty-year search for gravitational waves made for a fascinating presentation, from his description of early designs of the LIGO interferometer to the challenge of getting funding for early prototypes - not to mention his prescient prediction that the most likely chance of success was the detection of a signal from the merger of two black holes. All in all, a very stimulating conference. Most entertaining of all were the speakers’ recollections of Niall’s working methods and his interaction with students and colleagues over the years. Like a great piano teacher of old, one great professor leaves a legacy of critical thinkers dispersed around the world, and their students in turn inspire the next generation!

May 21, 2018 Andrew Jaffe - Leaves on the Line Leon Lucy, R.I.P.

I have the unfortunate duty of using this blog to announce the death a couple of weeks ago of Professor Leon B Lucy, who had been a Visiting Professor working here at Imperial College from 1998. Leon got his PhD in the early 1960s at the University of Manchester, and after postdoctoral positions in Europe and the US, worked at Columbia University and the European Southern Observatory over the years, before coming to Imperial. He made significant contributions to the study of the evolution of stars, understanding in particular how they lose mass over the course of their evolution, and how very close binary stars interact and evolve inside their common envelope of hot gas.

Perhaps most importantly, early in his career Leon realised how useful computers could be in astrophysics. He made two major methodological contributions to astrophysical simulations. First, he realised that by simulating randomised trajectories of single particles, he could take into account more physical processes that occur inside stars. This is now called “Monte Carlo Radiative Transfer” (scientists often use the term “Monte Carlo” - after the European gambling capital - for techniques using random numbers). He also invented the technique now called smoothed-particle hydrodynamics, which models gases and fluids as aggregates of pseudo-particles, now applied to models of stars, galaxies, and the large-scale structure of the Universe, as well as many uses outside of astrophysics.

Leon’s other major numerical contributions comprise advanced techniques for interpreting the complicated astronomical data we get from our telescopes. In this realm, he was most famous for developing the methods, now known as Lucy-Richardson deconvolution, that were used for correcting the distorted images from the Hubble Space Telescope, before NASA was able to send a team of astronauts to install corrective optics in the early 1990s. For all of this work Leon was awarded the Gold Medal of the Royal Astronomical Society in 2000. Since then, Leon kept working on data analysis and stellar astrophysics - even during his illness, he asked me to help organise the submission and editing of what turned out to be his final papers, on extracting information on binary-star orbits and (a subject dear to my heart) the statistics of testing scientific models.

Until the end of last year, Leon was a regular presence here at Imperial, always ready to contribute an occasionally curmudgeonly but always insightful comment on the science (and sociology) of nearly any topic in astrophysics. We hope that we will be able to appropriately memorialise his life and work here at Imperial and elsewhere. He is survived by his wife and daughter.
He will be missed.

May 14, 2018 Sean Carroll - Preposterous Universe Intro to Cosmology Videos

In completely separate video news, here are videos of lectures I gave at CERN several years ago: “Cosmology for Particle Physicists” (May 2005). These are slightly technical - at the very least they presume you know calculus and basic physics - but are still basically accurate despite their age. Update: I originally linked these from YouTube, but apparently they were swiped from this page at CERN, and have been taken down from YouTube. So now I’m linking directly to the CERN copies. Thanks to commenters Bill Schempp and Matt Wright.

May 10, 2018 Sean Carroll - Preposterous Universe User-Friendly Naturalism Videos

Some of you might be familiar with the Moving Naturalism Forward workshop I organized way back in 2012. For two and a half days, an interdisciplinary group of naturalists (in the sense of “not believing in the supernatural”) sat around to hash out the following basic question: “So we don’t believe in God, what next?” How do we describe reality, how can we be moral, what are free will and consciousness, those kinds of things. Participants included Jerry Coyne, Richard Dawkins, Terrence Deacon, Simon DeDeo, Daniel Dennett, Owen Flanagan, Rebecca Newberger Goldstein, Janna Levin, Massimo Pigliucci, David Poeppel, Nicholas Pritzker, Alex Rosenberg, Don Ross, and Steven Weinberg.

Happily we recorded all of the sessions to video, and put them on YouTube. Unhappily, those were just unedited proceedings of each session - so ten videos, at least an hour and a half each, full of gems but without any very clear way to find them if you weren’t patient enough to sift through the entire thing. No more! Thanks to the heroic efforts of Gia Mora, the proceedings have been edited down to a number of much more accessible and content-centered highlights. There are over 80 videos (!), with a median length of maybe 5 minutes, though they range up to about 20 minutes and down to less than one. Each video centers on a particular idea, theme, or point of discussion, so you can dive right into whatever particular issues you may be interested in. Here, for example, is a conversation on “Mattering and Secular Communities,” featuring Rebecca Goldstein, Dan Dennett, and Owen Flanagan. The videos can be seen on the workshop web page, or on my YouTube channel. They’re divided into categories. A lot of good stuff in there. Enjoy!
2018-09-22 09:08:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 162, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 173, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5540596842765808, "perplexity": 953.4467551779973}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158279.11/warc/CC-MAIN-20180922084059-20180922104459-00201.warc.gz"}
https://www.princetoninstruments.com/learn/raman/introduction-to-raman-spectroscopy
# Introduction to Raman Spectroscopy

Application Notes

## Raman Scattering

Raman spectroscopy is an optical scattering technique that is widely used for the identification of materials and the characterization of their properties. It is commonly applied in material science, chemistry, physics, life science and medicine, the pharmaceutical and semiconductor industries, process and quality control, and forensics.

Raman scattering is an inelastic scattering process, meaning the incoming light undergoes a change in color and is scattered with a different energy. The Raman process specifically describes the interaction of incident light with molecular vibrations and rotations in a material. Light can either excite vibrations and lose energy (redshift) or pick up energy from vibrations already present (which are typically thermally excited). As the shift in energy depends mostly on the material composition and structure, and not on the wavelength of the excitation light, Raman spectroscopy measures the energy shift of the Raman-scattered light relative to the incident light energy, which is characteristic of the sample being investigated.

Raman spectroscopy is extremely adaptable to different experimental configurations, from compact handheld instruments to very high-resolution multistage lab systems. It is also adaptable to different samples in solid, liquid or gas phases, from solid-state crystals to proteins in the body. Raman scattering is non-destructive and requires little to no sample preparation.

Raman scattering is named after Chandrasekhara Venkata Raman and was discovered in 1928 by Raman and Kariamanikkam Srinivasa Krishnan and, in parallel, by Grigory Landsberg and Leonid Mandelstam. The actual breakthrough for Raman spectroscopy as an analytical technique came with the invention of the laser in the 1960s. The early Raman scattering experiments used filtered sunlight or light from atomic emission lamps as incident light sources. The Raman scattering process, however, is extremely weak; laser light can be produced with much higher intensity, leading to stronger Raman signals. Today Raman spectroscopy is a ubiquitous technique thanks to the availability of affordable laser sources and high-performance filters.

### The Raman Spectrum

The output of a Raman spectroscopic measurement is a Raman spectrum, which contains several components due to the different ways light scatters from the sample. Spectral lines produced by Raman scattering correspond to the different vibrational modes of the sample material or molecule. They are distributed around a spectral line at the excitation laser wavelength that is due to elastic Rayleigh scattering. Raman spectroscopy traditionally uses units of wavenumbers (cm-1) to measure the energy shift of the Raman bands relative to the laser line. If the Raman scattering signal is located at wavelength λRaman [nm], the Raman shift [cm-1] is related to the wavelength of the excitation laser λLaser [nm] by:

$\text{Raman shift}\,[\mathrm{cm^{-1}}] = 10^{7}\left(\frac{1}{\lambda_{\mathrm{Laser}}\,[\mathrm{nm}]}-\frac{1}{\lambda_{\mathrm{Raman}}\,[\mathrm{nm}]}\right)$

Raman lines with higher energy than the laser line (lower wavelength) are referred to as anti-Stokes lines; there the scattered light gains energy from interacting with existing vibrations in the sample. Raman lines with lower energy (higher wavelength), the Stokes lines, occur when the incident light loses energy by exciting molecular vibrations.
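As a small illustration of this relation, here is a helper (a sketch) that converts a measured wavelength into a Raman shift:

```python
def raman_shift_cm1(lambda_laser_nm, lambda_raman_nm):
    """Raman shift in cm^-1 from the laser and scattered wavelengths in nm."""
    return 1e7 * (1.0 / lambda_laser_nm - 1.0 / lambda_raman_nm)

# Example: the ~520 cm^-1 silicon phonon line excited at 532 nm.
print(raman_shift_cm1(532.0, 547.0))  # ~ +516 cm^-1 (Stokes side)
print(raman_shift_cm1(532.0, 518.0))  # ~ -508 cm^-1 (anti-Stokes side)
```

Positive shifts correspond to Stokes scattering and negative shifts to anti-Stokes scattering, following the sign convention of the formula above.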
Energy in molecular vibrations is excited thermally, so as the temperature of the sample decreases, the intensity of the anti-Stokes bands decreases as well, since less vibrational energy is present in the material. Specific energy ranges in the Raman spectrum are often referred to as the low-frequency or THz Raman region (below 200 cm-1), the fingerprint region (up to 1800 cm-1) and the high-wavenumber region (above 2800 cm-1).

### Material Identification and Quantification

Material identification is one of the most important applications of Raman spectroscopy. The vibrations of a material are determined by its specific arrangement of molecular bonds and symmetries. Raman spectra are therefore characteristic of a specific material, forming a unique fingerprint that allows for identification of that material or detection of contamination in it. In a well-calibrated measurement, Raman spectroscopy can be used to quantify the amount of material, as the intensity of the Raman lines is proportional to the concentration of the analyte in the probe volume. A simple application is the determination of the mixture ratio of two substances by comparing the relative strength of specific Raman bands in the spectrum. The sensitivity of Raman spectroscopy is extremely high and allows for detection down to trace levels of a material - anything from contamination in a chemical solvent to a biomarker in a cell or traces of explosives on a sample, applications where Raman spectroscopy is widely used.

### Raman Spectroscopy for Internal and Environmental Sensing

The nature of the Raman process makes it a great tool for the investigation of the internal properties of materials. The size and position of spectral lines are extremely sensitive to changes in molecules, lattice or crystal structure, or the presence of defects in a material. For example, graphene is a single layer of carbon where the atoms are arranged in a honeycomb structure; it is the building block of graphite (many graphene layers stacked on top of each other). Raman spectroscopy has become the standard tool for identifying the number of graphene layers stacked on top of each other: the so-called 2D band, around 2650 cm-1, changes its structure and symmetry between single and double layers of graphene. The so-called D band around 1300 cm-1, which is absent in pure graphene, has been found to indicate the presence and concentration of defects.

The interactions between the atoms of a material can be very sensitive to the external physical and chemical environment as well. Understanding the influence of any environmental parameter on the Raman spectrum makes a Raman measurement an effective sensing tool for this parameter. Raman as a sensing tool is particularly useful in microscopic and remote observations where environmental parameters cannot be accessed by other techniques. Among the quantities routinely measured by Raman spectroscopy are temperature, pressure, stress/strain and pH values.

### Summary

Raman spectroscopy analyzes the vibrations of molecules and crystals. It is one of the most common spectroscopic techniques for material identification and for determining a material's physical and chemical environment. Raman spectra use units of wavenumbers to measure the energy shift of scattered light relative to the energy of the excitation laser.
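As a worked example of the temperature sensing mentioned above (a sketch, assuming a simple Boltzmann population of the vibrational mode), the anti-Stokes/Stokes intensity ratio of a band at Raman shift ν̃ is approximately ((νL+ν̃)/(νL−ν̃))^4 · exp(−hcν̃/kBT), which can be inverted to read off the sample temperature:

```python
import math

HC_OVER_KB = 1.4388  # h*c/k_B in cm*K (the second radiation constant)

def anti_stokes_ratio(shift_cm1, laser_nm, temperature_K):
    """Approximate anti-Stokes/Stokes intensity ratio of a Raman band."""
    nu_laser = 1e7 / laser_nm  # laser wavenumber in cm^-1
    prefactor = ((nu_laser + shift_cm1) / (nu_laser - shift_cm1)) ** 4
    return prefactor * math.exp(-HC_OVER_KB * shift_cm1 / temperature_K)

# Example: the 520 cm^-1 silicon band with 532 nm excitation at room temperature.
print(anti_stokes_ratio(520.0, 532.0, 300.0))  # ~ 0.1
```

Measuring this ratio and solving for T turns a Raman spectrometer into a non-contact thermometer.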
2020-09-26 07:36:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5818776488304138, "perplexity": 913.9950851081696}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400238038.76/warc/CC-MAIN-20200926071311-20200926101311-00597.warc.gz"}
https://wumbo.net/greek-alphabet/
# Greek Alphabet

The symbols of the Greek alphabet are used throughout math to represent variables, constants, and coefficients within math expressions, formulas, and equations. This page first lists the lower-case Greek symbols and their usage and then lists the upper-case Greek symbols and their usage in math.

Symbol Description

The Greek letter alpha (α) is used in math as a variable. The symbol β is the lower-case Greek letter beta; it is used in mathematics as a variable. The Greek letter chi (χ). The Greek lower-case letter delta (δ) is used in mathematics as a variable. The Greek letter epsilon (ε). The Greek letter eta (η). The Greek letter gamma (γ). The Greek letter iota (ι). The Greek letter kappa (κ). The Greek letter lambda (λ) is a symbol used throughout mathematics, computer science, and physics. Lambda is used to represent the wavelength of a wave when discussing wave forms and equations. In computer science, the symbol is used in the study of “lambda calculus” and anonymous functions. The Greek letter mu (μ) is used in statistics to represent the population mean of a distribution. The Greek letter nu (ν). The Greek letter omega (ω). The Greek letter omicron (ο). The Greek letter phi (φ) is used in geometry as a constant to represent the golden ratio. The Greek letter pi (π) is used in trigonometry as a constant to represent a half-rotation around a circle in radians. The value of π is approximately 3.14159. The symbol appears in multiple geometric formulas. The Greek letter psi (ψ). The symbol ρ is the lower-case Greek letter rho. In mathematics the symbol is used as a variable. In physics, more specifically, the symbol is used to represent density. The Greek letter tau (τ) is used in trigonometry as a constant to represent a full rotation around a circle in radians. The value of τ is approximately 6.28 and can be calculated by dividing any circle's circumference by its radius. The Greek letter theta (θ) is used in mathematics as a variable usually associated with a measured angle. For example, the symbol theta appears in the three main trigonometric functions - sine, cosine, and tangent - as the input variable: in plain language, cos(θ) represents the cosine function, which takes in one argument represented by the variable θ. The Greek letter xi (ξ). The Greek letter zeta (ζ).

Symbol Description

The capital Greek letter alpha (Α) is visually very similar to the upper-case Latin letter A. For that reason, refer to the usage of Capital A for how the symbol appears in math. The capital Greek letter beta (Β) is visually very similar to the upper-case Latin letter B. For that reason, refer to the usage of Capital B for how the symbol appears in math. The capital Greek letter chi (Χ) is visually very similar to the upper-case Latin letter X. For that reason, refer to the usage of Capital X for how the symbol appears in math. The capital Greek letter delta (Δ) is used in mathematics to represent change. Typically, the symbol is used in an expression like this: Δx = x₂ − x₁. The capital Greek letter epsilon (Ε) is visually very similar to the upper-case Latin letter E. For that reason, refer to the usage of Capital E for how the symbol appears in math. The capital Greek letter eta (Η) is visually very similar to the upper-case Latin letter H. For that reason, refer to the usage of Capital H for how the symbol appears in math. The capital Greek letter gamma (Γ). The capital Greek letter iota (Ι) is visually very similar to the upper-case Latin letter I.
For that reason, refer to the usage of Capital I for how the symbol appears in math. The capital Greek letter kappa (Κ). The capital Greek letter lambda (Λ). The capital Greek letter mu (Μ). The capital Greek letter nu (Ν) is visually very similar to the upper-case Latin letter N. For that reason, refer to the usage of Capital N for how the symbol appears in math. The capital Greek letter omega (Ω). The capital Greek letter omicron (Ο). The capital Greek letter phi (Φ). The capital Greek letter pi (Π) is used in math to represent the product operator. Typically, the product operator is used in an expression like this: ∏_(i=1)^n x_i = x_1 · x_2 · ... · x_n. The capital Greek letter psi (Ψ). The capital Greek letter rho (Ρ). The capital Greek letter sigma (Σ) is used in algebra to represent the summation operator. The capital Greek letter tau (Τ). The capital Greek letter theta (Θ). The capital Greek letter upsilon (Υ). The capital Greek letter xi (Ξ). The capital Greek letter zeta (Ζ).
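These characters can also be produced programmatically; for instance, a short Python snippet (a sketch) that looks each letter up by its Unicode name:

```python
import unicodedata

# Note: Unicode officially spells lambda as "LAMDA" in character names.
names = ["ALPHA", "BETA", "GAMMA", "DELTA", "EPSILON", "ZETA", "ETA",
         "THETA", "IOTA", "KAPPA", "LAMDA", "MU", "NU", "XI", "OMICRON",
         "PI", "RHO", "SIGMA", "TAU", "UPSILON", "PHI", "CHI", "PSI", "OMEGA"]

for name in names:
    lower = unicodedata.lookup(f"GREEK SMALL LETTER {name}")
    upper = unicodedata.lookup(f"GREEK CAPITAL LETTER {name}")
    print(f"{name.title():<8} {lower} {upper}")
```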
2020-08-03 17:09:25
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8917303085327148, "perplexity": 1015.8129448676067}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735823.29/warc/CC-MAIN-20200803170210-20200803200210-00491.warc.gz"}
https://zbmath.org/?q=an:0766.30010
## On univalent functions with negative coefficients. (English) Zbl 0766.30010

Let $$H(U)$$ denote the set of all functions $$f$$ holomorphic in the unit disc $$U$$, and let $$D^n$$, $$n\in\mathbb{N}=\{0,1,2,\dots\}$$, be the operator such that $$D^n: H(U)\to H(U)$$ and $$D^0 f(z)=f(z)$$, $$D^1 f(z)=zf'(z)$$, $$D^n f(z)=D(D^{n-1} f(z))$$. The author considers the class $$T_n(\alpha,\beta)$$, $$\alpha\in\langle 0,1)$$, $$\beta\in(0,1\rangle$$, $$n\in\mathbb{N}$$, of functions $$f$$ in $$H(U)$$ of the form $f(z)=z-\sum_{k=2}^\infty a_k z^k, \qquad a_k\geq 0, \qquad k=2,3,\dots$ and such that $$|J_n(f,\alpha;z)|<\beta$$, where $J_n(f,\alpha;z)=\left[\frac{D^{n+1}f(z)}{D^n f(z)}-1\right] \left/\left[\frac{D^{n+1}f(z)}{D^n f(z)}+1-2\alpha\right]\right., \qquad z\in U.$ He obtains many properties of the considered class.

### MSC:

30C45 Special classes of univalent and multivalent functions of one complex variable (starlike, convex, bounded rotation, etc.)

### Keywords:

negative coefficients
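As a quick illustration of how the operator acts (an added remark, not part of the review): for a one-term test function the definitions above give, by induction,

```latex
% Action of D^n on f(z) = z - a z^k (a >= 0, k >= 2):
%   D^1 f(z) = z f'(z) = z - k a z^k,  and by induction
\[
  f(z) = z - a\,z^{k}
  \quad\Longrightarrow\quad
  D^{n} f(z) = z - k^{n}\,a\,z^{k},
\]
% so D^n simply rescales the coefficient of z^k by the factor k^n.
```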
2022-05-27 03:40:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9554323554039001, "perplexity": 221.00072987531428}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662631064.64/warc/CC-MAIN-20220527015812-20220527045812-00774.warc.gz"}
https://abaqus-docs.mit.edu/2017/English/SIMACAEKEYRefMap/simakey-r-beamsection.htm
# *BEAM SECTION

Specify a beam section when numerical integration over the section is required.

This option is used to define the cross-section for beam elements when numerical integration over the section is required (usually because of nonlinear material response in the section).

Related Topics In Other Guides: Using a beam section integrated during the analysis to define the section behavior; About beam modeling; Pipes and pipebends with deforming cross-sections: elbow elements

Products: Abaqus/Standard, Abaqus/Explicit, Abaqus/CAE
Type: Model data
Level: Part, Part instance
Abaqus/CAE: Property module

## Required parameters

ELSET

Set this parameter equal to the name of the element set for which this section is defined.

MATERIAL

Set this parameter equal to the name of the material to be used with this beam section definition.

SECTION

Set this parameter equal to the name of the section type (see Beam cross-section library). The following cross-sections are available for beam elements:

• ARBITRARY, for an arbitrary section.
• BOX, for a rectangular, hollow box section.
• CIRC, for a solid circular section.
• HEX, for a hollow hexagonal section.
• I, for an I-beam section.
• L, for an L-beam section.
• PIPE, for a thin-walled circular section.
• RECT, for a solid, rectangular section.
• THICK PIPE, for a thick-walled circular section (Abaqus/Standard only).
• TRAPEZOID, for a trapezoidal section.

Set SECTION=ELBOW for elbow elements, which are available only in Abaqus/Standard.

## Optional parameters

LUMPED

This parameter is relevant only for linear Timoshenko beam elements in Abaqus/Standard. Set LUMPED=YES (default) to use a lumped mass matrix in frequency extraction and modal analysis procedures. Set LUMPED=NO to use a mass matrix based on a cubic interpolation of deflection and quadratic interpolation of the rotation fields in frequency extraction and modal analysis procedures.

POISSON

Set this parameter equal to the effective Poisson's ratio for the section to provide uniform strain in the section because of strain of the beam axis (so that the beam changes cross-sectional area when it is stretched). The value of the effective Poisson's ratio must be between −1.0 and 0.5. The default is POISSON=0. A value of 0.5 will enforce incompressible behavior of the element. This parameter is used only in large-displacement analyses. It is not used with elbow elements or with element types B23, B33, PIPE21, PIPE22, and the equivalent "hybrid" elements (which are available only in Abaqus/Standard).

ROTARY INERTIA

This parameter is relevant only for three-dimensional Timoshenko beam elements. Set ROTARY INERTIA=EXACT (default) to use the exact rotary inertia corresponding to the beam cross-section geometry in dynamic and eigenfrequency extraction procedures. Set ROTARY INERTIA=ISOTROPIC to use an approximate rotary inertia for the cross-section. In Abaqus/Standard the rotary inertia associated with the torsional mode of deformation is used for all rotational degrees of freedom. In Abaqus/Explicit the rotary inertia for all rotational degrees of freedom is equal to a scaled flexural inertia with a scaling factor chosen to maximize the stable time increment.

TEMPERATURE

Use this parameter to select the mode of temperature and field variable input used on the FIELD, the INITIAL CONDITIONS, or the TEMPERATURE options.
For beam elements set TEMPERATURE=GRADIENTS (default) to specify temperatures and field variables as values at the origin of the cross-section, together with gradients with respect to the 2-direction and, for beams in space, the 1-direction of the section. Set TEMPERATURE=VALUES to give temperatures and field variables as values at the points shown in the beam section descriptions (see Beam cross-section library). For elbow elements set TEMPERATURE=GRADIENTS (default) to specify temperatures and field variables at the middle of the pipe wall and the gradient through the pipe thickness. Set TEMPERATURE=VALUES to give temperatures and field variables as values at points through the section, as shown in Pipes and pipebends with deforming cross-sections: elbow elements.

## Data lines for BOX, CIRC, HEX, I, L, PIPE, RECT, THICK PIPE, and TRAPEZOID sections

First line

1. Beam section geometric data. Values should be given as specified in Beam cross-section library for the chosen section type.
2. Etc.

Second line (optional; enter a blank line if the default values are to be used)

1. First direction cosine of the first beam section axis.
2. Second direction cosine of the first beam section axis.
3. Third direction cosine of the first beam section axis.

The entries on this line must be (0, 0, −1) for planar beams. The default for beams in space is (0, 0, −1) if the first beam section axis is not defined by an additional node in the element's connectivity. See Beam element cross-section orientation for details.

Third line (optional)

1. Number of integration points in the first direction or branch. This number must be an odd number (for Simpson's integration), unless noted otherwise in Beam cross-section library.
2. Number of integration points in the second direction or branch. This number must be an odd number (for Simpson's integration), unless noted otherwise in Beam cross-section library. This entry is needed for the THICK PIPE section, as well as for beams in space.
3. Number of integration points in the third direction or branch. This number must be an odd number (for Simpson's integration), unless noted otherwise in Beam cross-section library. This entry is needed only for I-beams.

## Data lines for ARBITRARY sections

First line

1. Number of segments making up the section.
2. Local 1-coordinate of first point defining the section.
3. Local 2-coordinate of first point defining the section.
4. Local 1-coordinate of second point defining the section.
5. Local 2-coordinate of second point defining the section.
6. Thickness of first segment.

Second line

1. Local 1-coordinate of next section point.
2. Local 2-coordinate of next section point.
3. Thickness of segment ending at this point.

Repeat the second data line as often as necessary to define the ARBITRARY section.

Third line (optional)

1. First direction cosine of the first beam section axis.
2. Second direction cosine of the first beam section axis.
3. Third direction cosine of the first beam section axis.

The entries on this line must be (0, 0, −1) for planar beams. The default for beams in space is (0, 0, −1) if the first beam section axis is not defined by an additional node in the element's connectivity. See Beam element cross-section orientation for details.

## Data lines for ELBOW sections

First line

1. Outside radius of the pipe, r.
2. Pipe wall thickness, t.
3. Elbow torus radius, R, measured to the pipe axis. For a straight pipe, set R = 0.
Second line

Enter the coordinates of the point of intersection of the tangents to the straight pipe segments adjoining the elbow, or, if this section is associated with straight pipes, the coordinates of a point off the pipe axis. The second cross-sectional axis will lie in the plane thus defined, with its positive direction pointing toward this off-axis point.

1. First coordinate of the point.
2. Second coordinate of the point.
3. Third coordinate of the point.

Third line

1. Number of integration points through the pipe wall thickness. This number must be an odd number. (The default is 5.)
2. Number of integration points around the pipe. (The default is 20.)
3. Number of ovalization modes around the pipe (maximum 6). The section can be used with 0 (zero) ovalization modes, in which case uniform radial expansion only is included.
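To make the keyword format concrete, here is a hypothetical input fragment for a solid rectangular section; the element set name BEAMS, the material name STEEL, and all numeric values are placeholder assumptions, not taken from this guide:

```
** Hypothetical example: solid rectangular profile, 0.02 x 0.04
*BEAM SECTION, ELSET=BEAMS, MATERIAL=STEEL, SECTION=RECT, POISSON=0.3
0.02, 0.04
0., 0., -1.
5, 5
```

The three data lines are, in order, the geometric data for the chosen section type, the optional direction cosines of the first beam section axis, and the optional numbers of integration points (odd, for Simpson's integration).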
2022-12-05 17:41:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7681716680526733, "perplexity": 2416.841597118397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711042.33/warc/CC-MAIN-20221205164659-20221205194659-00840.warc.gz"}
https://physics.stackexchange.com/questions/491280/is-the-speed-of-light-in-a-vacuum-constant
# Is the speed of light in a vacuum constant?

The news article is titled “Speed of Light May Not Be Constant, Physicists Say.” It talks about how the speed of light in a vacuum may not be constant and that estimates of the size of the universe might therefore be off. Would this also affect the estimates of the age of the universe, or would the differences in the speed of light be so minute that they would not cause a drastic change?

• As the article says (in a massive understatement), “Some scientists are a bit skeptical, though.” Jul 12 '19 at 20:51
• There is plenty of evidence that it is, and no evidence that it is not. Jul 12 '19 at 20:53
• Possible duplicates: physics.stackexchange.com/q/2230/2451 and links therein. Jul 12 '19 at 21:11

Even if these theories are correct, it's a tiny effect: $$\sim 5\times 10^{-17}~\mathrm{s}/\sqrt{\mathrm{m}}$$ of fluctuation according to one of the papers. For the diameter of the observable universe ($\sim9\times 10^{10}~\mathrm{ly}$), that comes out to about $400~\mathrm{km}$, a fraction $5\times10^{-22}$ of the whole. This is much, much less than the margin of error we already have for the radius of the observable universe, and so would not affect estimates of the universe's size or age at all.
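A quick back-of-the-envelope check of the 400 km figure quoted in the answer (a sketch with rounded constants, not part of the original thread):

```python
c = 3.0e8                   # speed of light, m/s
ly = 9.46e15                # one light-year in meters
D = 9e10 * ly               # diameter of the observable universe, m

sigma_t = 5e-17 * D ** 0.5  # quoted fluctuation: ~5e-17 s per sqrt(meter)
sigma_d = c * sigma_t       # corresponding distance uncertainty, m

print(f"{sigma_d / 1e3:.0f} km")             # ~440 km
print(f"{sigma_d / D:.1e} of the diameter")  # ~5e-22
```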
2021-11-30 06:48:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4687819182872772, "perplexity": 240.69697543525893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358953.29/warc/CC-MAIN-20211130050047-20211130080047-00055.warc.gz"}
http://cms.math.ca/10.4153/CMB-2002-004-6
# Modular Equations and Discrete, Genus-Zero Subgroups of $\mathrm{SL}(2,\mathbb{R})$ Containing $\Gamma(N)$

Published: 2002-03-01
Printed: Mar 2002

• C. J. Cummins

## Abstract

Let $G$ be a discrete subgroup of $\mathrm{SL}(2,\mathbb{R})$ which contains $\Gamma(N)$ for some $N$. If the genus of $X(G)$ is zero, then there is a unique normalised generator of the field of $G$-automorphic functions which is known as a normalised Hauptmodul. This paper gives a characterisation of normalised Hauptmoduls as formal $q$-series using modular polynomials.

MSC Classifications:
11F03 - Modular and automorphic functions
11F22 - Relationship to Lie algebras and finite simple groups
30F35 - Fuchsian groups and automorphic functions [See also 11Fxx, 20H10, 22E40, 32Gxx, 32Nxx]
2014-04-19 19:38:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.486952006816864, "perplexity": 3932.792701768839}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.reddit.com/r/HomeworkHelp/comments/10z5m7/11th_grade_ap_physics_b_2_uniform_circular_motion/
Problem 1

a_cent = v^2 / R = g
v = 2*pi*R / T

In your picture, you can cancel out the m terms. You also didn't square the velocity. Make sure to convert the period, T, into seconds.

Problem 2

The way I learned it was to set centripetal force going outward radially, even though the force acts in an inward direction. For example, spinning a mass tethered to a string on a horizontal plane. The centripetal force is the tension in the string. However, in the force balance, I would say

ΣF = 0 = F_cent - T
F_cent = T

The math would come out right in the end. Here, you have a similar situation. Imagine the centripetal force (or maybe you would call it the centrifugal force?) pushing away from the center of the circle. Imagine a pendulum spinning in a vertical circle. At the top of the circle, the tension in the string will be the lowest because gravity counteracts part of the centrifugal force. At the bottom of the circle, the tension will be the highest because centrifugal force acts in addition to gravity. On the sides, the tension in the string will be average.

Problem 3

Let's call the fifth mass, m5. The two masses in the bottom corners basically cancel each other out. They exert an equal and opposite force on m5. The top two masses exert equal and not-quite-opposite forces on m5. However, you can break those forces up into horizontal and vertical components. As their horizontal distance from m5 is identical, then those effects are equal and opposite and will cancel out. Therefore, you are left with only the vertical components of the gravitational force.

Reply from the original poster: Thank you!
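For Problem 1, a small numeric sketch that combines the two quoted relations to recover the radius at which the centripetal acceleration equals g for a given period (the period value is a placeholder; the original problem's numbers aren't shown in the thread):

```python
import math

g = 9.81  # m/s^2
T = 30.0  # rotation period in seconds (placeholder value)

# a_cent = v^2 / R = g  with  v = 2*pi*R / T
# => (2*pi*R/T)**2 / R = g  =>  R = g * T**2 / (4 * pi**2)
R = g * T ** 2 / (4 * math.pi ** 2)
v = 2 * math.pi * R / T

print(f"R = {R:.1f} m, v = {v:.1f} m/s, check v^2/R = {v**2 / R:.2f} m/s^2")
```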
2017-09-26 07:23:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6614658832550049, "perplexity": 579.3544134509333}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818695113.88/warc/CC-MAIN-20170926070351-20170926090351-00205.warc.gz"}
https://tel.archives-ouvertes.fr/tel-00004729
# Méthodes de vérification de spécifications comportementales : étude et mise en œuvre

Abstract: This work deals with the verification of behavioural specifications for parallel programs, and, more precisely, with the design of efficient algorithms for the comparison of two labelled transition systems modulo a simulation or a bisimulation relation. First, we recall the principle of the classical decision procedures, based on partition refinement algorithms. This approach requires building the transition relations of the two systems before the comparison phase, which constitutes a practical limitation. Consequently, we propose an original algorithm, based on a depth-first traversal of a synchronous product of the two systems, which allows the comparison to be performed "on the fly", without explicitly building or storing the two transition relations. This "on the fly" comparison algorithm has been implemented within the Aldebaran verification tool for various relations: strong bisimulation, observational equivalence, tau*a-bisimulation, delay bisimulation and branching bisimulation, as well as safety equivalence and preorder. Its application to the verification of several Lotos programs confirms the interest of this approach in comparison with the more classical ones. Finally, we are also concerned with diagnostic generation when the two labelled transition systems are not equivalent: the decision procedures implemented within Aldebaran provide a set of discriminating execution sequences, which are minimal with respect to a given order relation.

Document type: Theses
Cited literature: [32 references]
https://tel.archives-ouvertes.fr/tel-00004729
Submitted on: Tuesday, February 17, 2004 - 3:00:42 PM
Last modification on: Friday, November 6, 2020 - 4:13:07 AM

### Identifiers

• HAL Id: tel-00004729, version 1

### Citation

Laurent Mounier. Méthodes de vérification de spécifications comportementales : étude et mise en œuvre. Génie logiciel [cs.SE]. Université Joseph-Fourier - Grenoble I, 1992. Français. ⟨tel-00004729⟩
2020-12-01 12:46:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2657880187034607, "perplexity": 3653.787682462759}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141674082.61/warc/CC-MAIN-20201201104718-20201201134718-00207.warc.gz"}
http://physics.stackexchange.com/tags/fluid-dynamics/new
# Tag Info

I would say pressure is better defined by $$\vec{F} = P \vec{A}.$$ Yes, we are defining a quantity without having it all alone on the left-hand side. And yes, area is a vector. And as you guessed, trying to divide one vector by another leads to trouble, so we won't do it. Let me explain where this comes from and what it is shorthand for. In continuum ...

There are different mathematical ways to define pressure (all equivalent), but perhaps the most common one is using the component of the force normal to the surface. That is why in the definition you are using you are actually dividing scalars. For this way to define pressure, see http://en.wikipedia.org/wiki/Pressure. You can also consider area as a ...

Yes, area is a vector, which is the normal to the surface. ($\vec{A}=A\vec{n}$) $\vec{F} = -P\vec{A}$ In this case P is simply the proportionality constant to the vectors F and A, which also means that F and A have to be in the same direction (F is the normal force and not shear forces). The negative sign accounts for the fact that the force and normal ...

Pressure is a scalar and does not have a direction. This is discussed in some detail in the answers to Define Pressure at A point. Why is it a Scalar?, though this might be a bit technical. When you measure a pressure you are actually measuring the force applied to a surface. For some small bit of surface $\delta {\bf A}$, the force produced on that surface ...

Pressure at a point in a static fluid is independent of direction. http://www.southampton.ac.uk/~jps7/Aircraft%20Design%20Resources/Sydney%20aerodynamics%20for%20students/fprops/statics/node4.html The force exerted by the walls on the liquid will be pointing inwards. Imagine if there is a hole in the container and water is leaking out; it is easy to see ...

A very pragmatic solution would be to introduce a 3rd camera looking from a 3rd angle to make the problem well-defined. This camera doesn't have to be as good or fast a camera as the other ones (assuming that those are highspeed cameras), because you basically only need 1 frame for which you know for sure which of the 2 possibilities it is. This does mean ...

No, it is not equal. Without sucking and without friction losses, the kinetic energy would be equal if the inlet and outlet pressures were equal (at equal cross sections). But with sucking, a part of the kinetic energy is lost in the inelastic process of mixing of two masses. It is like in usual classical mechanics: if one body hits another one and sticks to ...

B will move faster. The reason is that the acceleration, $a$, of A is smaller for two reasons (remember that $F_{applied}-F_{drag}=ma$): 1) the same forward force is applied, so its contribution to the acceleration of the smaller ball will be larger; 2) the drag force on the larger ball, A, will be larger (see Rennie's comment) because the cross sectional ...

I'll just throw some ideas out. Initial angle of tilt: the higher the angle, the higher the back pressure has to get before air will hold the fluid from flowing until it equalizes (glugs). It will have a longer time between "glugs" but this will also tend to slow the flow down. At a very low angle air can enter while fluid exits at the same time ...

I can't answer the question as posed - but I can point you to something related. If you have an (infinite) cylinder perpendicular to a flow, you can calculate the flow around it (see for example this interesting page where these images and equations come from).
[The answer illustrated this with figures of the velocity profile and of the similar pressure profile around the cylinder; the images are not preserved here.] They give ...

Decreasing diameter indeed leads to increasing flow resistance, and that is the dominant factor in lung airway resistance. But branching itself also contributes to airway resistance. When flow is forced to change direction there are energy losses which also cause pressure drop and this adds to the resistance. The trachea is more or less a straight pipe with ...

I believe the question can be rephrased as asking why a single big pipe is better than two smaller pipes with the same area, since that's really what you are asking. The bronchial tree splits into smaller branches, but tries to carry the same amount of air. Now the answer should be obvious: for the same area A = $2\pi r^2$, where $r$ is the radius of ...

If you pump gas along a pipe then the pressure drop per unit length of the pipe depends on the diameter of the pipe. The smaller the pipe the harder it is to pump the gas through it. The pressure drop is given by the Darcy-Weisbach equation: $$\Delta P = f_D \frac{\rho v^2}{2} \frac{\ell}{d}$$ though with the complication that the density of the gas depends ... [a numeric sketch of this equation follows at the end of this list]

I once had the same problem understanding. But one day I realized pressure is just a measure of energy and Bernoulli's law is just another way of expressing the conservation of energy. The analogy: Total Pressure = Dynamic Pressure + Static Pressure >> Total Energy = Kinetic Energy + Potential Energy. In either case we assume no losses, or otherwise ...

Since you know the force on the bottle (roughly 200 N), you would have to get an approximate area over which this force is distributed. You could try spreading some ink on either the bottle or the weight to estimate the contact area. While this is not completely correct (the wall of the bottle does redistribute the pressure on the outside to a larger area ...

Yes, it does protect against G forces because it spreads the pressure on the support surfaces of the body evenly. For example, an interesting article here

The ideal shape for the water to just 'hang' is actually the cube that you've proposed. In this case all of the forces on the water are uniform across the interface. For one region to slip down, another needs to move up (as you've indicated in your diagram). But since every location is experiencing the same forces, there's no reason for any spot to start ...

The definition that you are using is not the most general. If you insist on applying it the way you do, it only applies to a uniform flow in between two counter-moving walls. Then, the velocity profile is indeed linear. A Newtonian fluid is defined by the approximation that local stress (or drag) is proportional to local strain. I would write your equation ...

I'm assuming incompressible fluid here and rigid pipes (no compliance). Before the actual physical split is there any appreciable resistance compared to RB and RA? If not then PA = PB = 19" wg. If there is a significant resistance you need to include it in the model as Rin, and then PA = PB = 19" * (RA || RB)/(Rin + RA || RB). In any event PA = PB.

As another attempt, I calculated the coefficients of the cubic regressions that describe the NIST data. I first calculated the cubic regression coefficients as a function of pressure for viscosity as a function of temperature. In other words, I calculated the coefficients for each isobar. In equation form that is, $$\mu(T) = c_0 + c_1 T + c_2 T^2 + c_3 T^3,$$ with coefficients $c_i$ that depend on pressure ...
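To put numbers on the Darcy-Weisbach equation quoted above (see the pointer there), a minimal sketch; the friction factor and all pipe parameters below are placeholder values chosen only for illustration:

```python
# Darcy-Weisbach: dP = f_D * (rho * v**2 / 2) * (L / d)
f_D = 0.02    # Darcy friction factor (placeholder)
rho = 1000.0  # fluid density, kg/m^3 (water)
v   = 2.0     # mean flow velocity, m/s
L   = 10.0    # pipe length, m
d   = 0.05    # pipe diameter, m

dP = f_D * (rho * v**2 / 2) * (L / d)
print(f"pressure drop: {dP / 1000:.1f} kPa")  # 8.0 kPa
```

Halving the diameter (at the same velocity and friction factor) doubles the drop, which is the sense in which smaller pipes are harder to pump through.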
Steady would mean that flow at a point defined in that coordinate system does not change in time. If you mark a spot on the ground and look at the air flow above it you'll see it change over time. It will start out being still, it will move as the cyclist passes and then become still again. If you look at a spot say 1 m ahead of the cyclist (moving with ...

The nonlinear term, $\left( \mathbf{V} \cdot \nabla \right) \mathbf{V}$, determines the steepening of a wave. This can be balanced/offset by loss terms like dispersion, diffusion, viscosity, resistivity, friction, etc. If the loss term dominates over the nonlinear term, then the wave cannot steepen as there is too much damping. If the loss term balances ...

It depends on the fluid. Consider, for example, an ideal gas at fixed temperature near the surface of the Earth. Does the density vary in such a column? Yes. Let's investigate as follows. Imagine that the column is in the $z$-direction and has cross-sectional area $A$. Let $z=0$ at the ground. Consider a small, vertical "piece" of the column between ...

There are many resources on the web which help to answer this question, but not necessarily very recent ones. Take NACA TN 2674 which shows tufted delta wings, or NACA Research Memorandum L57A30. Generally, a delta wing shows separating flow at the leading edge beginning at moderate angles of attack. This separation leads to the formation of a vortex which ...

This technique is called supercavitation. So far it has only been applied to objects no larger than, say, a torpedo, because it is not easy to produce a stable bubble the size of a ship when moving at high velocities.

I suspect the little bubbles are actually CO2. CO2 is water soluble, and you could find it in most forms of tap water. "Hard" water tends to have high concentrations of CO2 while softer water has less. If indeed this is the case, the bubbles form because it is energetically efficient for them to adhere to your hand, which is a lower energy state for them. I ...

I think that it helps to define appropriate control volumes. See the image below where I define surfaces A and B. [image not preserved in this extraction] Here, we can say that the pressure at A is given by $\rho g h_A$ and the pressure at B is given by $\rho g h_B$, recognizing that $h_a$ and $h_b$ are functions of time. If the tank is open to atmosphere the $P_A$ and $P_B$ terms will be equal ...

I would tell her the bubbles contain water, and that water is sticky. I would remind her that even after she lets water run off her hands by gravity, she still needs to dry them off with a towel (unless you use an electric hand dryer), because some of the water sticks. It's easier to see the foam than it is to see the water, because the foam is puffed up ...

A liquid is not perfectly incompressible - see cavitation in fluids. Still, I think we can assume that here. We can calculate the maximum possible height reachable. If I assume that the wall is perfectly rigid and it cannot move when the wave hits it, and if I also assume that the wave system is perfectly conservative, i.e. no energy is lost as heat or ...

I would recommend that you derive an empirical answer by running a few experiments. Use different colored water, and measure the resulting splatters. Use the results to derive the formula.

The parameter $h$ is the maximum distance that two smoothed particles, $a$ and $b$, can be apart before the interaction between them is negligible for SPH purposes. If the distance $\vert r_a-r_b\vert>h$, then the weight is zero.
For any kernel, the integral over the particular region, e.g. $r\in(-h,\,h)$, is necessarily 1. Since $h$ is an unknown parameter, then ...

How do slight changes in these properties result in a large change in pressure, microscopically? A slight change of volume is not so easy to accomplish for solids - it takes a great force to achieve it. A considerable external force applied by a different body (the wall) needs to be maintained. The pressure is a measure of this force per unit area and since the ...

There's no magic behind it. It was done by non-dimensionalizing the momentum equation in the Navier-Stokes equations. Starting with: $$\frac{\partial u_i}{\partial t} + u_j\frac{\partial u_i}{\partial x_j} = -\frac{1}{\rho}\frac{\partial P}{\partial x_i} + \nu \frac{\partial^2 u_i}{\partial x_j \partial x_j}$$ which is the momentum equation for an incompressible ...

The way it was explained to me: you start by thinking of all the possible factors that could play in drag (size, velocity, density, viscosity, ...); then you do dimensional analysis and find dimensionless combinations - these tend to be "special" since they remain constant over different scales of time and space. Reynolds number is one such combination. The ...

Alright, so here's how I'm thinking about the scenario now. I'd appreciate any comments/suggestions on my logic here. If the bubbles are close, density gradients in each bubble will cause some of the gas to diffuse out into the liquid, and, since the small bubble has a larger pressure, the concentration outside of the smaller bubble will be higher than ...

The intuitive way to think about this is to consider a gas inside a glass container (that cannot expand). If the gas expands, then what must happen as a result? The gas leaks out of the container. Similarly, if I try to put more gas into the container, then the gas compresses. The vector field $\mathbf F$ is what we use to describe the flow of a fluid. The ...

Beautiful article (great graphics - not sure if they are photos or CGI) at http://math.berkeley.edu/~hutching/pub/bubbles.html - not sure if this addresses "your" kind of bubbles, but worth a look anyway. In general surface tension prevents bubbles from becoming a single larger bubble as there is an intermediate phase when the surface would have to be ...

Here we assume that OP is mostly interested in the Eulerian fluid picture (as opposed to the Lagrangian fluid picture). Both fluid pictures are discussed in great detail in Ref. 1. Note however that in the methods of Ref. 1, the mass density $\rho$ is a dynamical variable. The variation of $\rho$ is important in order to obtain a full set of eoms. But OP ...

The key is the Reynolds number, $$Re=\frac{\rho LV}{\mu}=\frac{LV}{\nu}\tag{1}$$ where $L$ and $V$ are characteristic lengths and velocities of the particular problem and $\mu$ & $\nu$ are the dynamic & kinematic viscosities, respectively. If you multiply (1) by $\rho LV/\rho LV$, you get $$Re=\frac{\rho L^2V^2}{\mu LV}$$ The numerator is the ...

Sorry for the enormous delay, I was caught with more work than I thought; here it is. @DanielSank, relativity is not necessary; it would help with what you said, since it would pinpoint exactly what you are calling momentum in your system. My answer would be an extension of DanielSank's comment. When there is the conservation of a continuous quantity, the ...
The term in equation is: $$\frac{\partial u_i}{\partial x_j}\frac{\partial u_j}{\partial x_i}$$ So let's take a step back and think about what kinds of terms can appear in conservation equations. There can be a production term, a transport term, and a dissipation term. The transport term is the $\vec{u}\cdot\nabla q$ term that you noted. When you look at ...

For the sake of the explanation I will assume you mean a gas bubble in a liquid*. David Hammen names a few conditions for a bubble to be spherical; in fact you could summarize these all as: for a bubble to be spherical the surface tension has to dominate over other forces (per unit length). If surface tension is indeed dominant then the pressure in the ...

Rising bubbles of air in a liquid oftentimes are anything but spherical. These bubbles have haphazard shapes because they are rising and because they are interacting with other nearby bubbles. The combination of drag, turbulence, and mutual interactions prevents those bubbles from taking on a nice, simple spherical shape. Here's a rather non-spherical ...

The thing you'll notice about a sphere is that it's symmetrical. Very symmetrical. No matter how you rotate it, it looks the same. The surface tension pulls the surface of the bubble into a shape that has even surface tension over the entire bubble. The shape with even surface tension is a sphere. A sphere has the smallest possible surface area for an ...

It is a case of flow through an orifice. It depends on the shape and area of the orifice, and on the viscosity of the fluid. At a low ratio of pressure to viscosity, flow rate is proportional to pressure. At a high ratio of pressure to viscosity, flow rate is proportional to the square root of pressure. You're going to have to write a differential equation, and ...

The subscript $j$ represents all particles, including $i$, from 1 to $n$. This should be obvious in the example below Equation (9). To give an example, let us consider the distance constraint function $C(\mathbf p_1,\,\mathbf p_2)=\vert\mathbf p_1-\mathbf p_2\vert-d$. The derivatives with respect to the points are $\nabla_{\mathbf p_1}C(\ldots)$ ...

You can get nothing out of equilibrium thermodynamic considerations for the rate at which pressure will equalize. What will matter is the speed of sound in the gas, as that is the rate at which density fluctuations travel in a fluid and, assuming an equation of state, say $p(\rho)=\rho^{\gamma}$, the pressure is then enslaved to the density. So the sound ...

The answer is due to the area-Mach number relation for hydrodynamic shocks. G.B. Whitham has a great book (check out Chapter 8) on all sorts of various waves and has a good discussion of this topic. The idea is that one can define the Mach number as a function of the cross-sectional area of a ray tube. The simple form is $(1/A)\,dA/\ldots$

Unless the two containers are separate, i.e. have a wall sealing them off completely, the right set of tools for this question is fluid dynamics rather than thermodynamics. For the sealed-off problem, assuming ideal gases, the end state for the coupled baths will be that of equal temperature. In that case it is essential you have the right number of ...

The rotation is part of the key to the storm itself. Primarily the pressure and temperature differences are what cause these systems to take the shape and forms that they do.
Once a tropical depression starts to form you can already see rotation in the moisture around the low pressure zone, even though it typically looks nothing like a hurricane. Not ...

Top 50 recent answers are included
2014-10-23 15:34:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7629408240318298, "perplexity": 407.600666584263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413558066932.32/warc/CC-MAIN-20141017150106-00365-ip-10-16-133-185.ec2.internal.warc.gz"}
https://mathoverflow.net/tags/permutations/hot
# Tag Info

Accepted

### Why 'excedances' of permutations?

Mea culpa. Comtet used the term excédence. When writing EC1 I needed an English term for this concept. For some reason I didn't like the word exceedance. I thought it looked better without the double ...

Accepted

### Multiplying all the elements in a group

Yes, your $G!$ (as a set) is always either $[G,G]$ (if the order of $G$ is odd, or its Sylow $2$-subgroup is non-cyclic) or $z[G,G]$ if $G$ has cyclic Sylow $2$-subgroup, where $z$ is the involution ...

### Arranging numbers from $1$ to $n$ such that the sum of every two adjacent numbers is a perfect power

Not an answer, but maybe a start: It is fairly clear why trivial cases like $n=18,$ power$=2$ don't work, after all of the sum-pairs $\neq$ a power of $2$ that are $\leq2n$ are stripped away: ...

Accepted

### Linear permutations commuting with $x\rightarrow x^{-1}$

The answer is no: a linear transformation of $F$ which commutes with $\phi$ is an automorphism of $F$. This is a seemingly inelegant but simple argument. Any map $\psi: F \rightarrow F$ can be ...

### Is the Number of Carries in Integer-Addition Associative?

For any base $b$, if we add $a$ and $c$ with $k$ carries, then $S_b(a+c)=S_b(a)+S_b(c)-(b-1)k$, where $S_b$ denotes the sum of digits. Since the resulting sum is independent of the order of addition, ... [a quick numerical check of this identity appears below]

Accepted

### Is $(\mathbb{R},+)$ isomorphic to a subgroup of $S_\omega$?

See Theorem 4.3 of this paper by De Bruijn. Any abelian group of order $2^\kappa$ can be embedded in $Sym(\kappa)$ when $\kappa$ is infinite. (There is also an addendum to the paper which corrects ...

Accepted

### Constructing permutations avoiding a pattern

As far as "combining pattern foo and bar makes the set empty for large $n$", there is an answer and it is fairly trivial. There are no permutations of length longer than $(k-1)(\ell-1)+1$ ...

### Permutations with all cycles odd length and permutations with all cycles even length

Here is Eytan's proof in more detail. First, there is a canonical way to write the cycle decomposition of a permutation. You order the cycles in descending order based on the largest member they ...

### How many rearrangements must fail to alter the value of a sum before you conclude that none do?

This is really a comment on Joel's answer, but apparently too long. Let P be the forcing which adds a permutation of $\mathbb{N}$ by finite pieces (so $P$ is forcing-equivalent to Cohen forcing). ...

This already fails for the second-smallest sporadic group $M_{12}$. A simple subgroup $G$ of $S_N$ cannot contain an $n$-cycle with $n$ even (unless $|G|=2$...), because then $G \cap A_N$ would be an ...

### Arranging numbers from $1$ to $n$ such that the sum of every two adjacent numbers is a perfect power

This is too long for a comment: Let $G(n,N)$ be Micah's graph with vertices the numbers $1,\dots,n$ and edges $\{i,j\}$ if $i+j$ is a power of $N$. Your condition is satisfied if and only if $G$ contains ...
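A brute-force check of the digit-sum identity $S_b(a+c)=S_b(a)+S_b(c)-(b-1)k$ from the carries answer above (a sketch over small cases, not part of the original page):

```python
def digit_sum(n, b):
    """Sum of the base-b digits of n."""
    s = 0
    while n:
        s += n % b
        n //= b
    return s

def carries(a, c, b):
    """Number of carries when adding a and c in base b."""
    k = carry = 0
    while a or c or carry:
        carry = 1 if a % b + c % b + carry >= b else 0
        k += carry
        a //= b
        c //= b
    return k

assert all(
    digit_sum(a + c, b) == digit_sum(a, b) + digit_sum(c, b) - (b - 1) * carries(a, c, b)
    for b in range(2, 8) for a in range(200) for c in range(200)
)
print("identity verified for all tested cases")
```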
2022-06-30 03:48:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8444211483001709, "perplexity": 311.7475182662414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103661137.41/warc/CC-MAIN-20220630031950-20220630061950-00667.warc.gz"}
https://www.alexbowe.com/articles/
# Articles

## Iterative Tree Traversal

By memorizing a simple implementation of iterative tree traversal we simplify a large number of programming interview questions.

## How to Recover a Bitcoin Passphrase

How I recovered a Bitcoin passphrase by performing a Breadth-First search on typos of increasing Damerau-Levenshtein distances from an initial guess.

## Succinct de Bruijn Graphs

This post will give a brief explanation of a Succinct implementation for storing de Bruijn graphs [http://en.wikipedia.org/wiki/De_Bruijn_graph], which is recent (and continuing) work I have been doing with Sadakane. Using our new structure, we have squeezed a graph for a human genome (which

## Failing at Google Interviews

I've participated in about four sets of Google interviews (of about 3 interviews each) for various positions. I'm still not a Googler though, which I guess indicates that I'm not the best person to give this advice. However, I think it's about time I put in

## FM-Indexes and Backwards Search

Last time (way back in June! I have got to start blogging consistently again) I discussed a gorgeous data structure called the Wavelet Tree [https://alexbowe.com/wavelet-trees/]. When a Wavelet Tree is stored using RRR sequences, it can answer rank and select operations in $\mathcal{O}(\log{A})$ time,

## Wavelet Trees: an Introduction

Today I will talk about an elegant way of answering rank queries on sequences over larger alphabets - a structure called the Wavelet Tree. In my last post [https://alexbowe.com/yarrr-me-hearties] I introduced a data structure called RRR, which is used to quickly answer rank queries on binary sequences, and

## RRR: A Succinct Rank/Select Index for Bit Vectors

This blog post will give an overview of a static bitsequence data structure known as RRR, which answers arbitrary length rank queries in $\mathcal{O}(1)$ time, and provides implicit compression. As my blog is informal, I give an introduction to this structure from a birds eye view. If you

## Generating Binary Permutations in Popcount Order

I've been keeping an eye on the search terms that land people at my site, and although I get the occasional "alex bowe: fact or fiction" and "alex bowe bad ass phd student" queries (the frequency strangely increased when I mentioned this on Twitter [http://www.twitter.com/alexbowe])

## Some Lazy Fun with Streams

Update: fellow algorithms researcher Francisco Claude [http://fclaude.recoded.cl] just posted a great article about using lazy evaluation to solve Tic Tac Toe games in Common Lisp [http://fclaude.recoded.cl/archives/177]. Niki [http://niki.code-karma.com] (my brother) also wrote a post using generators with asynchronous prefetching to hide

## Design Pattern Flash Cards

Last year I studied a subject which required me to memorise design patterns. I tried online flash card web sites, but I was irritated that I didn't own the data I put up (they had no export option). So I wrote something in Python to generate flash cards
2022-08-19 09:08:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21735717356204987, "perplexity": 3467.380154917937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573630.12/warc/CC-MAIN-20220819070211-20220819100211-00758.warc.gz"}
http://tex.stackexchange.com/questions/67661/how-to-properly-remove-the-parentheses-around-the-year-in-authoryear-style
# How to (properly) remove the parentheses around the year in authoryear style?

Some time ago, Alan Munn asked and lockstep eloquently answered a question about removing parentheses from biblatex authoryear style references. Unfortunately, lockstep's solution injects an unwanted \addperiod\space into "dash" references. For example, given Author, A. cited twice:

\documentclass{article}
\usepackage[style=authoryear]{biblatex}
\usepackage{xpatch}
\usepackage{filecontents}
\begin{filecontents}{\jobname.bib}
@misc{A01,author={Author, A.},year={2001},title={Alpha}}
@misc{A02,author={Author, A.},year={2001},title={Beta}}
\end{filecontents}
\nocite{*}
\begin{document}
\printbibliography
\xpatchbibmacro{date+extrayear}{%
  \printtext[parens]%
}{%
  \printtext%
}{}{}
\printbibliography
\end{document}

we get: [screenshot of the resulting bibliographies not preserved here]

I've tried building a solution using constructs like \usebibmacro{bbx:dashcheck} without success. How, then, based on lockstep's nice xpatch-based approach, can I conditionally include \addperiod\space only in the case of "non-dash" references?

- Comment the line \addperiod\space% and it should work. – Marco Daniel Aug 18 '12 at 7:08
- @Marco, no. As in Alan's original question and in lockstep's answer, the \addperiod\space is required after the author field in normal (non-repeated author, "non-dash") cases. To see this more clearly, change the author= field from {Author, A.} to, say, {Author, Anne}. In which case, we want output to look like Author, Anne. 2001a and -- 2001b. Commenting out the \addperiod\space produces Author, Anne 2001a. The question is, how can we conditionally remove the period in the repeated author ("dash") case? – Nikki Aug 18 '12 at 7:53
- I didn't notice this. – Marco Daniel Aug 18 '12 at 7:58

The output of units should be done inside the command \setunit.

\xpatchbibmacro{date+extrayear}{%
  \printtext[parens]%
}{%
2014-07-31 21:58:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8837181329727173, "perplexity": 13314.937634048687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510273676.54/warc/CC-MAIN-20140728011753-00369-ip-10-146-231-18.ec2.internal.warc.gz"}
http://www.cs.columbia.edu/~rocco/papers/icalp09malicious.html
Learning Halfspaces with Malicious Noise. A. Klivans and P. Long and R. Servedio. Journal of Machine Learning Research 10(Dec), 2009, pp. 2715-2740. Preliminary version in 36th International Conference on Automata, Languages and Programming (ICALP), 2009, pp. 609-621.

Abstract: We give new algorithms for learning halfspaces in the challenging \emph{malicious noise} model, where an adversary may corrupt both the labels and the underlying distribution of examples. Our algorithms can tolerate malicious noise rates exponentially larger than previous work in terms of the dependence on the dimension $n$, and succeed for the fairly broad class of all isotropic log-concave distributions. We give $\mathrm{poly}(n, 1/\epsilon)$-time algorithms for solving the following problems to accuracy $\epsilon$:

• Learning origin-centered halfspaces in $\mathbb{R}^n$ with respect to the uniform distribution on the unit ball with malicious noise rate $\eta = \Omega(\epsilon^2/\log(n/\epsilon))$. (The best previous result was $\Omega(\epsilon/(n \log (n/\epsilon))^{1/4})$.)
• Learning origin-centered halfspaces with respect to any isotropic log-concave distribution on $\mathbb{R}^n$ with malicious noise rate $\eta = \Omega(\epsilon^{3}/\log(n/\epsilon))$. This is the first efficient algorithm for learning under isotropic log-concave distributions in the presence of malicious noise.
• We also give a $\mathrm{poly}(n,1/\epsilon)$-time algorithm for learning origin-centered halfspaces under any isotropic log-concave distribution on $\mathbb{R}^n$ in the presence of \emph{adversarial label noise} at rate $\eta = \Omega(\epsilon^{2}/\log(1/\epsilon))$. In the adversarial label noise setting (or agnostic model), labels can be noisy, but not example points themselves. Previous results could handle $\eta = \Omega(\epsilon)$ but had running time exponential in an unspecified function of $1/\epsilon$.

Our analysis crucially exploits both concentration and anti-concentration properties of isotropic log-concave distributions. Our algorithms combine an iterative outlier removal procedure using Principal Component Analysis together with "smooth" boosting.

pdf of conference version
2016-05-05 01:06:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6933521628379822, "perplexity": 1093.0159532688085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860125750.3/warc/CC-MAIN-20160428161525-00217-ip-10-239-7-51.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/help-on-derivation-of-work.673361/
# Help on derivation of work

1. Feb 21, 2013

### sloane729

1. The problem statement, all variables and given/known data

I'm working through a derivation for work in Thornton's Classical Dynamics but I'm stuck at one step.

\begin{align} \vec{F} \cdot d\vec{r} &= m\frac{d\vec{v}}{dt} \cdot \frac{d\vec{r}}{dt}dt = m\frac{d\vec{v}}{dt} \cdot \vec{v}dt \\ &= \frac{m}{2}\frac{d}{dt}(\vec{v}\cdot \vec{v})dt \end{align}

I'm having trouble getting from the last equality on the first line to the second line.

2. Relevant equations

3. The attempt at a solution

I don't know what the mathematical reasoning is.

2. Feb 21, 2013

### MrWarlock616

I don't get it either. Might be a mistake.

3. Feb 21, 2013

### sloane729

Last edited: Feb 21, 2013

4. Feb 21, 2013

### HallsofIvy

Staff Emeritus

You can't say that because you haven't defined "v^2". Since v is a vector, what you really mean is $\vec{v}\cdot\vec{v}= ||\vec{v}||^2$. But, the basic idea is correct: $$\frac{d(\vec{v}\cdot\vec{v})}{dt}= 2\vec{v}\cdot\frac{d\vec{v}}{dt}$$
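The missing step is just the product rule applied to $\vec{v}\cdot\vec{v}$: dividing HallsofIvy's identity by 2 and multiplying by $m\,dt$ recovers the textbook line. A quick symbolic check of that identity (a sketch; the three component functions are arbitrary placeholders):

```python
import sympy as sp

t = sp.symbols('t')
# A generic 3-component velocity with unspecified time dependence.
v = sp.Matrix([sp.Function(f'v{i}')(t) for i in range(3)])

lhs = sp.diff(v.dot(v), t)       # d/dt (v . v)
rhs = 2 * v.dot(sp.diff(v, t))   # 2 v . dv/dt
print(sp.simplify(lhs - rhs))    # prints 0
```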
2017-08-17 20:14:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999704360961914, "perplexity": 2455.1677299902644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103910.54/warc/CC-MAIN-20170817185948-20170817205948-00600.warc.gz"}
https://hiltmon.com/blog/2012/12/21/cd-to-current-finder-path/
# CD to Current Finder Path

To open a new terminal in the current Finder path in OS X, you can use the built-in service (see below on how to enable it). But if you are already in a Terminal session, you need to leave the keyboard, mouse over to Finder, and drag and drop the path back. Here's a trick that gets you to the frontmost Finder path without leaving the Terminal or keyboard.

Add the following to your .bash_profile (the snippet was embedded in the original post; a reconstruction is sketched below). In a new Terminal (or reload the current), you now have the following commands available:

• cdf which will cd you to the frontmost finder path
• cfp which will copy the frontmost finder path onto the pasteboard

I usually leave my current working folder visible in Finder so I can see the files available while working in Terminal. To get back to that folder, a quick cdf and I'm there.

### OR: Enable the Service

To open a new terminal window at the current Finder path from Finder, you need the New Terminal at Folder service enabled. Make sure it's checked in System Preferences / Keyboard / Keyboard Shortcuts / Services under Files and Folders. Once enabled, you can right click on any folder in Finder, choose Services / New Terminal at Folder to open a new Terminal at that location.
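The post's original gist is not preserved in this extraction. Below is a minimal reconstruction under the assumption that it queried Finder via AppleScript, which is the standard approach for this trick; the `_finder_path` helper name and the exact function bodies are my sketch, not necessarily the author's:

```bash
# Sketch: ask Finder (via AppleScript) for the POSIX path of its front window.
_finder_path() {
  osascript -e 'tell application "Finder" to POSIX path of (target of front window as alias)' 2>/dev/null
}

# cd to the frontmost Finder path
cdf() {
  local target
  target=$(_finder_path)
  [ -n "$target" ] && cd "$target"
}

# copy the frontmost Finder path onto the pasteboard
cfp() {
  _finder_path | pbcopy
}
```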
2017-09-22 09:36:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26030871272087097, "perplexity": 3396.1904787315284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818688932.49/warc/CC-MAIN-20170922093346-20170922113346-00459.warc.gz"}
http://sro.sussex.ac.uk/id/eprint/73720/
# Probing star formation and ISM properties using galaxy disk inclination I: Evolution in disk opacity since z~0.7

Leslie, S K, Sargent, M T, Schinnerer, E, Groves, B, van der Wel, A, Zamorani, G, Fudamoto, Y, Lang, P and Smolčić, V (2018) Probing star formation and ISM properties using galaxy disk inclination I: Evolution in disk opacity since z~0.7. Astronomy and Astrophysics, 615 (A7). pp. 1-20. ISSN 0004-6361

Disk galaxies at intermediate redshift ($z\sim0.7$) have been found in previous work to display more optically thick behaviour than their local counterparts in the rest-frame B-band surface brightness, suggesting an evolution in dust properties over the past $\sim$6 Gyr. We compare the measured luminosities of face-on and edge-on star-forming galaxies at different wavelengths (ultraviolet (UV), mid-infrared (MIR), far-infrared (FIR), and radio) for two well-matched samples of disk-dominated galaxies: a local Sloan Digital Sky Survey (SDSS)-selected sample at $z\sim0.07$ and a sample of disks at $z\sim0.7$ drawn from the Cosmic Evolution Survey (COSMOS). We have derived correction factors to account for the inclination dependence of the parameters used for sample selection. We find that typical galaxies are transparent at MIR wavelengths at both redshifts and that the FIR and radio emission is also transparent as expected. However, reduced sensitivity at these wavelengths limits our analysis; we cannot rule out opacity in the FIR or radio. Ultraviolet attenuation has increased between $z\sim0$ and $z\sim0.7$, with the $z\sim0.7$ sample being a factor of $\sim$3.4 more attenuated. The larger UV attenuation at $z\sim0.7$ can be explained by more clumpy dust around nascent star-forming regions. There is good agreement between the fitted evolution of the normalisation of the $\mathrm{SFR}_{\mathrm{UV}}$ versus $1-\cos(i)$ trend (interpreted as the clumpiness fraction) and the molecular gas fraction/dust fraction evolution of galaxies found out to $z<1$.
2022-09-29 09:07:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46888014674186707, "perplexity": 5412.380680132345}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00032.warc.gz"}
http://www.physicsforums.com/showpost.php?p=3656415&postcount=2
Recognitions: Homework Help

Quote by Ad123q:

Hi, I was wondering if anyone could give me a hand. I need to prove that the Cayley transform operator given by U=(A-i)(A+i)^-1 is unitary, i.e. that UU*=U*U=I, where U* is the adjoint of U (I am given also that A=A* in the set of bounded operators over a Hilbert space H).

My solution so far, is this correct?

U=(A-i)(A+i)^-1, so (U)(x) = (A-i)((A+i)^-1)x (U acting on an x).

Then (Ux,y) = {INTEGRAL} (A-i)((A+i)^-1)x y(conjugate) dx   (1)
= {INTEGRAL} x (A-i)((A+i)^-1)(both conjugate) y(all three conjugate) dx   (2)
= (x,U*y)

and so deduce (U*)(y) = (A+i)((A-i)^-1)y, and so the adjoint of U is U*=(A+i)(A-i)^-1.

It can then be checked that UU*=U*U=I.

How do you conclude this from your expression for U*? Btw, instead of using the integral, can't you simply use the properties of the adjoint operator? That is, $(AB)^*=B^*A^*$ and $(A^{-1})^*=(A^*)^{-1}$?

Quote by Ad123q:

As you can see my main query is the mechanism of finding the adjoint of U for the given U. For clarity, in step (1) it is just the y which is conjugated, and in step (2) it is (A-i)(A+i)^-1 which is conjugated and then also the whole of (A-i)((A+i)^-1)y which is also conjugated. Sorry if my notation is confusing, if unsure just ask. Thanks for your help in advance!
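A sketch of the purely algebraic route suggested in the reply (my completion of the hint, not part of the original thread): since $A=A^*$, we have $(A+i)^* = A-i$ and $(A-i)^* = A+i$, so

$$U^* = \big((A+i)^{-1}\big)^*(A-i)^* = \big((A+i)^*\big)^{-1}(A+i) = (A-i)^{-1}(A+i),$$

which agrees with the poster's $U^* = (A+i)(A-i)^{-1}$ because $A+i$ and $A-i$ commute (both are polynomials in $A$). Then

$$U^*U = (A-i)^{-1}(A+i)(A-i)(A+i)^{-1} = (A-i)^{-1}(A-i)(A+i)(A+i)^{-1} = I,$$

and $UU^* = I$ follows in the same way.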
2013-05-18 19:32:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9196174144744873, "perplexity": 939.1399213996394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382764/warc/CC-MAIN-20130516092622-00068-ip-10-60-113-184.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/952702/understanding-polynomial-regression
# Understanding polynomial regression

I'm looking for a good tutorial on how to calculate a "line of best fit" for non-linear data. I found this site: http://easycalculation.com/statistics/learn-regression.php which gives a very good tutorial on calculating a linear equation, but I can't seem to find a similar guide for non-linear data. The closest I could find was this: http://www.arachnoid.com/sage/polynomial.html which starts out promising, but I began to understand less and less as it continued (compared to linear regression, where the most difficult concept to grasp was squares and sums). I have very little mathematical education, so that's the stumbling block here. Are there any simple means of calculating polynomial regression (I believe that's the term), or is it probably above my head if I don't understand the second link? (I'm creating a program to calculate and use the equation) Thank you

• Regression will require at least a basic knowledge of linear algebra. – Jemmy Sep 30 '14 at 15:44
• So if I don't know what that is, I'm probably out of luck for the time being? I managed to easily get a program working for linear regression; does that require knowledge of linear algebra? And after more searching, I found this had2know.com/academics/quadratic-regression-calculator.html , but it requires using matrices. I understand matrices conceptually, but have never done any math on them. – Carcigenicate Sep 30 '14 at 15:47
• You're throwing around the word "nonlinear" too much. Fitting a polynomial by least squares is linear regression. It is a mistake to think that the reason linear regression is called that is that one is fitting a line. See this earlier question: math.stackexchange.com/questions/75959/… – Michael Hardy Sep 30 '14 at 16:00
• Thank you. Like I said, I have very little math schooling (nothing past Applied Grade 12); I'm guessing on the terminology. All I know is I want an equation to fit to data that doesn't necessarily form a straight line. – Carcigenicate Sep 30 '14 at 16:02

Let $$X = \begin{bmatrix} 1 & x_1 & x_1^2 \\ \vdots & \vdots & \vdots \\ 1 & x_n & x_n^2 \end{bmatrix}.$$ Let $$Y = \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix}.$$ Then the three entries in the $3\times 1$ matrix $(X^T X)^{-1}X^TY$ are the least-squares estimates of the coefficients in $y = \alpha+\beta x+\gamma x^2$.
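A short numeric sketch of this recipe using made-up data points; solving the normal equations with `solve` is the numerically safer equivalent of forming $(X^TX)^{-1}$ explicitly:

```python
import numpy as np

# Hypothetical (x_i, y_i) data to fit with y = alpha + beta*x + gamma*x^2
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.9, 7.2, 13.1, 20.8, 31.2])

# Design matrix with columns [1, x, x^2], exactly as in the answer
X = np.column_stack([np.ones_like(x), x, x**2])

# Least-squares coefficients from the normal equations X^T X b = X^T y
alpha, beta, gamma = np.linalg.solve(X.T @ X, X.T @ y)
print(f"y ~= {alpha:.3f} + {beta:.3f} x + {gamma:.3f} x^2")
```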
2019-07-22 04:05:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.584888756275177, "perplexity": 286.4038367238463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527474.85/warc/CC-MAIN-20190722030952-20190722052952-00526.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-connecting-concepts-through-application/chapter-6-logarithmic-functions-chapter-review-exercises-page-543/52
## Intermediate Algebra: Connecting Concepts through Application $x = -3$ $\log_4 (x+4) + \log_4 (x+7) = 1$ $\log_4 (x+4)(x+7) = 1$ $(x+4)(x+7) = 4^{1}$ $(x+4)(x+7) = 4$ $x(x+7)+4(x+7) = 4$ $x^{2} + 7x + 4x+28 = 4$ $x^{2} + 11x + 28 - 4 = 0$ $x^{2} + 11x + 24 = 0$ $x^{2} + 8x + 3x + 24 = 0$ $x(x+8) + 3(x+8) = 0$ $(x+3)(x+8) = 0$ $x = -3, -8$ $x = -3$ Check: When $x= -3$ $\log_4 (x+4) + \log_4 (x+7) \overset{?}{=} 1$ $\log_4 ((-3)+4) + \log_4 ((-3)+7) \overset{?}{=} 1$ $\log_4 (1) + \log_4 (4) \overset{?}{=} 1$ $0 + 1 \overset{?}{=} 1$ $1 = 1$ When $x = -8$ $\log_4 (x+4) + \log_4 (x+7) \overset{?}{=} 1$ $\log_4 (-8+4) + \log_4 (-8+7) \overset{?}{=} 1$ $\log_4 (-4) + \log_4 (-1) \overset{?}{=} 1$ These do not exist, since the logarithm of a negative number is undefined. So $x = -8$ is rejected, and $x = -3$ is the only solution.
2018-08-19 12:45:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9194747805595398, "perplexity": 132.1227712051879}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215077.71/warc/CC-MAIN-20180819110157-20180819130157-00526.warc.gz"}
https://www.eolymp.com/en/contests/30353/problems/354087
# Teleportation One of the farming chores Farmer John dislikes the most is hauling around lots of cow manure. In order to streamline this process, he comes up with a brilliant invention: the manure teleporter! Instead of hauling manure between two points in a cart behind his tractor, he can use the manure teleporter to instantly transport manure from one location to another. Farmer John's farm is built along a single long straight road, so any location on his farm can be described simply using its position along this road (effectively a point on the number line). A teleporter is described by two numbers x and y, where manure brought to location x can be instantly transported to location y, or vice versa. Farmer John wants to transport manure from location a to location b, and he has built a teleporter that might be helpful during this process (of course, he doesn't need to use the teleporter if it doesn't help). Please help him determine the minimum amount of total distance he needs to haul the manure using his tractor. #### Input One line contains four integers: a and b describing the start and end locations, followed by x and y describing the teleporter. All positions are integers in the range 0 ... 100, and they are not necessarily distinct from each other. #### Output Print a single integer giving the minimum distance Farmer John needs to haul manure in his tractor. #### Example In this example, the best strategy is to haul the manure from position 3 to position 2, teleport it to position 8, then haul it to position 10. The total distance requiring the tractor is therefore 1 + 2 = 3. Time limit 1 second Memory limit 128 MiB Input example #1 3 10 8 2 Output example #1 3 Source 2018 USACO February, Bronze
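Since the whole task reduces to comparing three candidate routes (haul directly, or haul to one teleporter endpoint and out of the other), a short sketch of a solution in Python might look like this (my addition, not part of the original problem page):

```python
# Read a, b (start/end) and x, y (teleporter endpoints)
a, b, x, y = map(int, input().split())

# Either haul directly, or enter the teleporter at x and exit at y, or vice versa
direct = abs(a - b)
via_xy = abs(a - x) + abs(y - b)
via_yx = abs(a - y) + abs(x - b)

print(min(direct, via_xy, via_yx))
```

On the sample input `3 10 8 2` this prints `3`, matching the expected output.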
2023-02-09 00:39:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25903892517089844, "perplexity": 2193.764866238068}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500983.76/warc/CC-MAIN-20230208222635-20230209012635-00040.warc.gz"}
http://mathematica.stackexchange.com/questions/16619/matrixform-explanation-as-why-row-extract-is-displayed-as-a-column
# MatrixForm explanation as to why row extract is displayed as a column? I have a 5 x 5 matrix: But after doing a row extract, why is it displaying as a column? rowSpread2 = cdsSpread5yrs[[2]]; - What you extract has the form {x1,..,x5} and that by default is displayed as a column, you can put braces around it if you wish to display it as a row {cdsSpread5yrs[[2]]}//MatrixForm or cdsSpread5yrs[[{2}]]//MatrixForm – ssch Dec 19 '12 at 15:45 Makes it confusing to know whether I am dealing with a column or row... – sebastian c. Dec 19 '12 at 16:56 You might find this tutorial useful: reference.wolfram.com/mathematica/tutorial/… – chuy Dec 19 '12 at 17:14 Think of it this way: a matrix is a rectangular set of elements: m = {{a, b, c}, {d, e, f}}, and the first row m[[1,All]] has the list of elements {a,b,c}, while the first column m[[All,1]] has the list of elements {a,d} Now if I ask Mathematica to display both {a,b,c} and {a,d} in MatrixForm, how on earth should it know whether I got those lists of elements from a row or a column? Or I could have just typed them in. What it needs to do is to interpret them either as a column (eg {{a},{b},{c}}) or as a row (eg {{a,b,c}}). The default is to interpret it as a column. What you can do is that when you need to extract stuff, write it out: matrix = {{a, b, c}, {d, e, f}}; matrix [[{2} , All]] (* => {{d, e, f}}, which is all columns in row 2 *) matrix [[All , {2}]] (* => {{b}, {e}}, which is all rows in column 2 *) This way you retain the information of whether it's a column or a row you are dealing with. (And yes, of course, in the first case [[{2},;;]] the last part is redundant, as it's the default). - thanks @jVincent, I think your reply is the best so far. Nice and simple. – sebastian c. Dec 19 '12 at 17:42 @NasserM.Abbasi This isn't a trick. – jVincent Dec 19 '12 at 17:55 @jVincent Should m[[All,1]] not give {a,d} instead of {a,b} – Lou Dec 19 '12 at 18:36 @Lau Yes, Thank you. – jVincent Dec 19 '12 at 19:00 @NasserM.Abbasi No offence taken! I just wanted to clarify that this isn't some clever trick that makes it come out like one would want it. It's the intended behavior and quite consistent. I sometimes feel like MATLAB's defaults are a trick based on the assumption that people mainly deal with 2D matrices, thus they don't have lists of numbers, only rows or columns. – jVincent Dec 19 '12 at 19:03 Well, to answer your comment about the extra work, I just wanted to say that Mathematica is really a very flexible language (maybe too flexible :) ). If you do not like something, you can always write a little code to customize things. Seeing the excellent solution by jVincent below, I thought I should rewrite everything again to make this an easier and more directed answer. To obtain the same display as one can with Matlab, follow these 2 simple steps ### A simple list is neither a row nor a column A simple list is just a collection of elements and cannot be transposed like a row/column. Indeed, you can see for yourself that it does not have a second singleton dimension, which is necessary for a row/column vector. Dimensions[a = Range@5] (* {5} *) Transpose@a (* Transpose::nmtx: The first two levels of the one-dimensional list {1,2,3,4,5} cannot be transposed. >> *) (Note: these apply to ragged lists too, but I'll not address that here.)
Contrast this with the behaviour for a row/column vector: Dimensions[a = {Range@5}] (* {1, 5} *) Transpose@a (* {{1}, {2}, {3}, {4}, {5}} *) Dimensions@% (* {5, 1} *) You can see that these have the second dimension and can be transposed back and forth. However, you can: ### Use Part directly to get the column/row vector As I mentioned above, when you do a[[All, 1]] what you're really asking for are the elements in the first position in all the sublists. However, if you instead wrap {} around your index 1, then as the documentation says, you get back a list of the parts. This list of parts introduces the second singleton dimension that then transforms the simple list into a corresponding row/column vector as the case may be. For example: a = Range@9 ~Partition~ 3; a[[All, {1}]] // MatrixForm (* This is a column vector *) a[[{1}, All]] // MatrixForm (* This is a row vector *) -
2016-05-01 13:42:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5669565796852112, "perplexity": 1265.608274649763}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860116173.76/warc/CC-MAIN-20160428161516-00025-ip-10-239-7-51.ec2.internal.warc.gz"}
https://www.kullabs.com/classes/subjects/units/lessons/notes/note-detail/10271
Notes on Magnification and Image | Grade 10 > Science > Light | KULLABS.COM Real Image and Virtual Image Real image is that image which can be obtained on a screen. It is formed by the actual intersection of the refracted rays. It is usually formed by a convex lens. Virtual image is that image which cannot be obtained on a screen. It is formed by the apparent intersection of the refracted rays. It is usually formed by a concave lens. Differences between Real Image and Virtual Image The major differences between real image and virtual image are: S. No. Real Image S. No. Virtual Image 1. It is formed at the point where the refracted rays meet. 1. It is formed at the point where the refracted rays appear to meet. 2. It is always inverted. 2. It is always erect. 3. The image is usually formed on the other side of the lens, behind it. 3. The image is always formed on the same side of the lens as the object. 4. Its size depends on the distance of the object from the optical center of the lens. 4. Its size is larger in the convex lens and smaller in the concave lens. 5. The image can be obtained on a screen. 5. The image cannot be obtained on the screen. Magnification (m) The size of the image obtained by the lens depends upon the distance of the object from the lens. If the object is placed near the lens, the image is magnified, and if the object is placed far from the lens, the image is diminished. Thus, we can define the magnification of a lens as the ratio of the height of the image to the height of the object. Mathematically, Magnification = $$\frac {height\:of\:image\:(I)}{height\:of\:object\:(O)}$$ $$\therefore$$ m = $$\frac IO$$ Explanation: In the above figure, the two triangles $$\triangle$$OCD and $$\triangle$$OAB are similar, since all the angles of the two triangles are equal. We have: $$\frac {CD}{AB}$$ = $$\frac {OC}{OA}$$ According to the definition of magnification: Magnification = $$\frac {CD}{AB}$$ Or, m = $$\frac {distance\:of\:image\:from\:lens\:(v)}{distance\:of\:object\:from\:lens\:(u)}$$ i.e. m = $$\frac vu$$ Therefore, magnification is also calculated by the ratio of image distance (v) to object distance (u). Interpretation of Magnification 1. If magnification (m) is equal to 1 (m = 1), then the height of the image (I) is equal to the height of the object (O) i.e. I = O. 2. If magnification (m) is less than 1 (m ˂ 1), then the height of the image is smaller than the height of the object. 3. If magnification (m) is greater than 1 (m ˃ 1), then the height of the image is larger than the height of the object. 4. If magnification (m) is negative, the image is virtual and erect. 5. If magnification (m) is positive, the image is real and inverted. Hence, we can conclude that magnification shows how much smaller or larger an image is than the object. To prove: $$\frac IO$$ = $$\frac vu$$ Let an object AB be placed on the principal axis of the convex lens, perpendicular to the principal axis and beyond 2F. A ray BP, parallel to the principal axis, passes through F after refraction, and another ray BO passes undeviated through the optical center O. These two refracted rays PB’ and OB’ meet at B’. Hence, B’ is the real image of B, and A’B’ is the real image of AB. In $$\triangle$$ABO and $$\triangle$$A’B’O, we have; 1. $$\angle$$BAO = $$\angle$$B’A’O [$$\because$$ both being 90⁰] 2.
$$\angle$$BOA = $$\angle$$B’OA’ [$$\because$$ vertically opposite angles] 3. $$\angle$$ABO = $$\angle$$A’B’O [$$\because$$ remaining angles of each triangle] $$\therefore$$ $$\triangle$$ABO and $$\triangle$$A’B’O are similar. Hence, $$\frac {A’B’}{AB}$$ = $$\frac {OA’}{OA}$$ i.e. $$\frac {height\:of\:image}{height\:of\:object}$$ = $$\frac {image\:distance}{object\:distance}$$ $$\therefore$$ $$\frac IO$$ = $$\frac vu$$ Proved. Relation between Object distance, Image distance, and Focal length If u, v and f represent the object distance, image distance and focal length of a lens respectively, the relation between them is given by the formula: $$\frac 1f$$ = $$\frac 1u$$ + $$\frac 1v$$ This is called the lens formula. For simplicity, real distances are taken as positive and virtual distances as negative. Hence, the focal length of a convex lens is taken as positive and the focal length of a concave lens as negative. 1. An image that can be obtained on a screen is said to be a real image. It is always inverted. 2. An image that cannot be obtained on a screen is said to be a virtual image. It is always erect. 3. Magnification of a lens can be defined as the ratio of the height of the image to the height of the object, i.e. Magnification = $$\frac {height\:of\:image\:(I)}{height\:of\:object\:(O)}$$ 4. Magnification can also be calculated as the ratio of image distance (v) to object distance (u). Mathematically, Magnification = $$\frac {image\:distance\:(v)}{object\:distance\:(u)}$$ 5. Lens formula: $$\frac 1f$$ = $$\frac 1u$$ + $$\frac 1v$$. ### Very Short Questions Magnification of an image is defined as the ratio of the height of the image to the height of the object. Mathematically, m = $$\frac {height\:of\:image\:(I)}{height\:of\:object\:(O)}$$ Real image is defined as the image that can be obtained on a screen. It is formed by the actual intersection of the refracted rays, as in a convex lens. Virtual image is defined as the image which cannot be obtained on a screen. It is formed by the apparent intersection of the rays, as in a concave lens. S. No. Real Image S.No. Virtual Image 1. It is formed at that point where the refracted rays actually meet. 1. It is formed at that point where the refracted rays appear to meet. 2. It is always inverted. 2. It is always erect. 3. It is always formed on the other side of the lens, behind it. 3. It is always formed on the same side of the lens as the object. 4. Its size depends on the distance of the object from the optical centre of the lens. 4. Its size varies in concave and convex lenses: it is larger in a convex lens and smaller in a concave lens. 5. It can be obtained on a screen. 5. It cannot be obtained on a screen.
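As a quick worked example (my own numbers, not from the original note), take an object placed u = 30 cm from a convex lens of focal length f = 10 cm. With the sign convention above, the lens formula gives $$\frac 1v$$ = $$\frac 1f$$ - $$\frac 1u$$ = $$\frac {1}{10}$$ - $$\frac {1}{30}$$ = $$\frac {1}{15}$$, so v = 15 cm, and the magnification is m = $$\frac vu$$ = $$\frac {15}{30}$$ = 0.5. Since m is positive and less than 1, the image is real, inverted and half the size of the object.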
2020-03-29 12:24:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8036313056945801, "perplexity": 777.7275870464086}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370494331.42/warc/CC-MAIN-20200329105248-20200329135248-00036.warc.gz"}
https://www.freemathhelp.com/forum/threads/49121-Find-length-of-diagonal-across-8-5-by-11-inch-sheet
# Thread: Find length of diagonal across 8.5-by-11 inch sheet 1. ## Find length of diagonal across 8.5-by-11 inch sheet Standard letter paper is 8.5 inches wide and 11 inches long. To the nearest tenth of an inch, what is the diagonal distance d across the paper? 2. Hint: Apply the Pythagorean Theorem. Eliz. 3. ## Find length of diagonal across 8.5-by-11 inch sheet Can somebody other than Eliz be more willing to show some work so I can see if I did it correctly? Eliz. Thanks for the "tip" but you didn't really help much. 4. SaeLK, if you're unfamiliar with the Pythagorean Theorem, then you need classroom help...or at least (if you're not too busy or tired) look it up: use Google Have a nice day. 5. Originally Posted by SaeLk Can somebody other than Eliz be more willing to show some work so I can see if I did it correctly? If you've done the exercise, then please show your work. We'll be glad to check to see if you did this correctly. Note: Since you used some method other than the Pythagorean Theorem (or, which is the same thing, the Distance Formula), please state what method you used when you reply. Thank you. Eliz. 6. $a^{2}+b^{2}=c^{2}$ I'll let you write the letters on the paper and maybe figure out how to fold it just right. 7. ## Find length of diagonal across 8.5-by-11 inch sheet 8.5^2 in + 11^2 in. a^2 + b^2 = d ? d = 193.25 or d = 13.901..... 72.25 + 121 = 193.25 This method I used to get the answer. But my first one was 11 in - 8.5 in = 2.5 in, then 11 + 2.5 = 13.5 in was my answer. I'm not sure which answer is the correct one, or even if I have a correct answer. 8. Hey, well done Saelk! The math gurus like to see: let d = diagonal; then: d^2 = 8.5^2 + 11^2 d^2 = 72.25 + 121 d^2 = 193.25 d = sqrt(193.25) d = ~13.90143.... d = 13.9 (nearest 10th) Now take a sheet of 8.5 by 11, draw the diagonal, then measure it with your ruler: 13.9 looks good ? And if you CUT along the diagonal, you're left with 2 triangles, same size and with right angle, right?
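For what it's worth, the same check is a one-liner in Python (my addition, not part of the original thread):

```python
import math

# Diagonal of an 8.5 x 11 inch sheet via the Pythagorean theorem
print(round(math.hypot(8.5, 11), 1))  # 13.9
```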
2018-05-26 02:19:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6484767198562622, "perplexity": 1709.7280558109894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867277.64/warc/CC-MAIN-20180526014543-20180526034543-00227.warc.gz"}
https://www.physicsforums.com/threads/why-does-this-clearly-solve-the-heat-equation.841779/
# Why does this "clearly" solve the heat equation? 1. Nov 6, 2015 So one of my least favorite things that textbooks do is using the words "clearly", "it should be obvious", etc. In my PDEs class, we've started the Fourier Transform, and I missed the first day of it so I am trying to read through my book. Regarding the heat equation on an infinite domain, it tells me this: From our previous experience, we note that the expression $\sin{\frac{n\pi x} L } e^{-k(\frac{n\pi} L)^2t}$ solves the heat equation [$u_t=k\cdot u_{xx}$] for integer n, as well as $\cos{\frac{n\pi x} L } e^{-k(\frac{n\pi} L)^2t}$. In fact, it is clear that $$u(x,t)=e^{-i\omega x}e^{-k\omega^2t}$$ solves [the heat equation as well], for arbitrary ω both positive and negative. It's not "clear" to me why this happens, so I tried 'deriving' this form for a bit by using $\omega=\frac{n\pi}L$ and writing both of the trig functions in their exponential forms $$\sin x = \frac 1 2(e^{ix}-e^{-ix})$$ $$\cos x = \frac 1 2(e^{ix}+e^{-ix})$$ (and with terms like $e^{i\omega x}$ as well) and added, multiplied, etc., but to no avail. To be clear (no pun intended), I know that the $e^{-k\omega^2t}$ term is the same as the exponential term in both of the expressions which solve the equation, but I'm failing to see where $e^{-i\omega x}$ came into play. Any advice as to how I can figure this out? (If possible, please give me some advice on 'deriving' it myself before giving a full answer?) 2. Nov 7, 2015 ### SteamKing Staff Emeritus If you assume that $u (x, t) = \sin({\frac{n\pi x} L }) ⋅ e^{-k(\frac{n\pi} L)^2t}$, then all you need to do is calculate $u_t$ and $u_{xx}$ and substitute these back into the original heat equation, [$u_t=k\cdot u_{xx}$]. You don't have to convert sine or cosine into their exponential equivalents to do this, just use plain old partial differentiation with the product rule. 3. Nov 7, 2015 I know how to plug it in, but I don't see where the solution that is the product of the exponentials comes from. How was it found? 4. Nov 7, 2015 ### SteamKing Staff Emeritus I really can't say. However, your textbook says, "From our previous experience, we note that ...", which leads me to believe that there is an earlier section in your text where such a solution was developed. Perhaps separation of variables was used, a common technique which is utilized to solve PDEs like this heat equation. For this PDE, note that one side of the equation involves a derivative w.r.t. t, while the other side involves a second derivative w.r.t. x. The exponential function is solely a function of t while the trig function involves only x. If you differentiate sine or cosine twice, you get the same function back, just like you get back an exponential function if you differentiate an exponential function, plus some multiplicative constants, of course. These facts would suggest that a trial solution of the form $u(x,t) = \sin(Ax) \cdot e^{Bt}$ with the appropriate choice of the constants A and B would solve $u_t = k \cdot u_{xx}$. 5. Nov 7, 2015 ### maka89 1. Fourier transform the equation over x, to get an ODE (with t as the variable) for the Fourier transform of the solution. 2. Solve this ODE. Get a nice expression for the Fourier transform of the final solution. 3. Inverse Fourier transform this expression. Step 1 gives: $\hat{u}_t = -k\omega^2\hat{u}$. I have used the property of the Fourier transform: $\mathcal{F}(\frac{du}{dx}) = (i\omega)\hat{u}$. Step 2 is solving this first-order ODE for $\hat{u}(\omega,t)$.
Step 3 is applying the inverse Fourier transform to this solution. $u(x,t) = \frac{1}{2\pi}\int^\infty_{-\infty} \exp(i\omega x)\,\hat{u}(\omega,t)\, d\omega$. You may need the following formula: $\int^\infty_{-\infty} e^{-(ax^2+bx+c)} dx = \sqrt{\frac{\pi}{a}} e^\frac{b^2-4ac}{4a}$ EDIT: Not implying that this should be "clearly obvious", though ^^ Last edited: Nov 7, 2015 6. Nov 8, 2015 ### Staff: Mentor $$u(x,t)=e^{i\omega x}e^{-k\omega^2 t}=(\cos(\omega x)+i\sin(\omega x))e^{-k\omega^2 t}=\cos(\omega x)e^{-k\omega^2 t}+i\sin(\omega x)e^{-k\omega^2 t}$$ Each term of the final expression satisfies the differential equation individually.
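One way to confirm the claim without grinding through the algebra by hand is to let a computer algebra system check the residual; a minimal SymPy sketch (my addition, not from the thread):

```python
import sympy as sp

x, t, w, k = sp.symbols("x t omega k", real=True, positive=True)

# The proposed solution u(x, t) = e^(-i*w*x) * e^(-k*w^2*t)
u = sp.exp(-sp.I * w * x) * sp.exp(-k * w**2 * t)

# Residual of the heat equation u_t - k*u_xx; it should simplify to 0
residual = sp.diff(u, t) - k * sp.diff(u, x, 2)
print(sp.simplify(residual))  # -> 0
```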
2018-07-22 04:12:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.897550106048584, "perplexity": 441.4847620640822}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593004.92/warc/CC-MAIN-20180722022235-20180722042235-00057.warc.gz"}
http://su.diva-portal.org/smash/record.jsf?pid=diva2%3A1195402&dswid=6755
Cosmic Dawn in a Fuzzy Universe: Constraining the nature of Dark Matter with 21 cm Cosmology Stockholm University, Faculty of Science, Department of Astronomy. 2017 (English) Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis ##### Abstract [en] The cold dark matter (CDM) paradigm underlying the standard $\Lambda$CDM model of cosmology is successful on large scales but faces potential problems on small scales, partly related to a seeming overproduction of dwarf galaxies. This could be alleviated in exotic dark matter models that suppress small-scale structure formation. One such attractive model is known as fuzzy dark matter (FDM). FDM posits that dark matter is composed of ultra-light bosons with masses $m_{\rm FDM} \sim 10^{-22}$ eV. With such light particle masses, quantum effects become important. More specifically, a pressure-like term appears in the equations of motion that counteracts gravitational collapse on small scales. Because small galaxies form first in CDM, it follows that the early history of galaxy formation predicted by FDM should be markedly different. One novel way to probe this effect would be to use the 21 cm line of hydrogen, which acts as a sensitive probe of the epoch of reionization (EoR) and Cosmic Dawn, when the first galactic sources of X-rays started to reheat the intergalactic medium (IGM). In this thesis, the evolution of the 21 cm signal has been simulated for both CDM and FDM. These simulations indicate that the fluctuations in the 21 cm signal amenable to future observations are extremely weak ($\ll$ 1 mK), and probably unobservable, for FDM at high redshifts $z \sim 15-16$ compared to CDM (which tends to yield signals with amplitudes $\gg$ 1 mK). This is mainly due to the delayed galaxy formation in FDM, resulting in delayed Lyman-$\alpha$ coupling of the 21 cm spin temperature to the kinetic temperature of the IGM. A robust prediction from all FDM scenarios explored in this thesis is that any detection of a signal at $z \sim 15-16$ would rule out interesting particle masses for FDM, and would be evidence for CDM-like structure formation. Future work that properly models ionization fluctuations during the EoR could also yield strong predictions at lower redshifts. 2017, p. 85 ##### National Category Astronomy, Astrophysics and Cosmology ##### Identifiers OAI: oai:DiVA.org:su-154861. DiVA, id: diva2:1195402 ##### Examiners Available from: 2018-10-30 Created: 2018-04-05 Last updated: 2018-10-30. Bibliographically approved #### Open Access in DiVA ##### File information File name: FULLTEXT01.pdf. File size: 50300 kB. Checksum SHA-512: c09c58700dc755f70430ba0a590934718f5a38280819be014aaf3729c8016913591a0a7c9801ccc2f692b6c28807ec48557cefc442cabbf77fac1d6efb7bfe14. Type: fulltext. Mimetype: application/pdf ##### By organisation Department of Astronomy ##### On the subject Astronomy, Astrophysics and Cosmology The number of downloads is the sum of all downloads of full texts.
It may include e.g. previous versions that are now no longer available. Total: 225 hits
2019-10-23 19:55:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 7, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5875465273857117, "perplexity": 8465.808624699439}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987835748.66/warc/CC-MAIN-20191023173708-20191023201208-00219.warc.gz"}
https://qanda.ai/en/solutions/3nxZrxcs4X-3)-26c-02-4)-d-(-1)-13
Symbol Problem 3) $2.6 = c - 0.2$ 4) $d - \left(-1\right) = 13$ 5) $e - \dfrac {1}{4} = \dfrac {1}{4}$ 6) $f - \dfrac {1}{2} = -1$ Lower secondary school, Mathematics. Search count: 1,548 Solution by Qanda teacher - MM
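The solution content itself is not reproduced in the record; working each one-step equation out directly (my own computation, not the original teacher's post) gives $c = 2.6 + 0.2 = 2.8$, $d = 13 - 1 = 12$, $e = \dfrac{1}{4} + \dfrac{1}{4} = \dfrac{1}{2}$ and $f = -1 + \dfrac{1}{2} = -\dfrac{1}{2}$.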
2021-02-27 22:22:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9279176592826843, "perplexity": 11369.908707728273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178359497.20/warc/CC-MAIN-20210227204637-20210227234637-00214.warc.gz"}
http://informationtransfereconomics.blogspot.com/2013/12/plucking-rgdp-growth.html
## Monday, December 23, 2013 ### Plucking RGDP growth I mentioned in this post that I had hypothesized earlier that the RGDP growth was operating like a bound; I decided to re-do some of the graphs from the second link as well as this link using a "plucking" framework. So first is the implementation -- instead of showing S(y) as a path (where y is the time variable) through NGDP-MB space fit to a line, I instead fit to a linear bound. The result is below (the bound S is shown as a line tangent to the black empirical path): The expected RGDP along S(y) is shown in black on this graph (the empirical RGDP is shown in green and the model calculation along the path is shown in blue): Already you can see some hint of an upper bound (the recessions all appear to be sharp downward falls with overcompensating rises afterwards). We'll take out the trend and look at the deviations from it to make this a little clearer (the recessions are shown in red): If we excise the recessions, it becomes even clearer (and the distribution looks more like random fluctuations around a mean): Here is the original distribution of fluctuations (black line) and with the recessions excised (purple): The distribution becomes noticeably more symmetric without the recessions (with a mean just below the trend, lending support to the plucking model). It is still not a normal distribution as it is much narrower than would be expected; it is not as narrow as e.g. a Cauchy distribution. We can also look at deviations from inflation in this plucking framework: Inflation, it seems, deviates systematically in both directions, in particular being unexpectedly low during the 1960s and rising during the oil shocks of the 1970s. The deviation from the expected inflation accounts for most of the deviation from NGDP growth: In the posts linked above, I pointed to the lack of inflation in the 1960s being a mystery (which shows up as lower NGDP growth than expected). The plucking model translated into an ideal information transfer bound (IS ≤ ID, with IS being the information received by the supply and ID being the information transmitted by the demand) could give a potential explanation. The US economy increased information transfer efficiency from IS ~ say 10% of ID to IS ~ say 50% of ID in the immediate post-war period (the numbers 10% and 50% are for concreteness; I don't know what the exact values are or even if they can be determined). While this didn't affect real growth very strongly if at all, it manifested as low inflation (and hence low NGDP growth). After about 1980 we reached the bound given by S(y). (Per the fit above, is the fit a bound at IS = ID? Or is it just IS ~ ID? Again, I can't answer that.) From that point on we had the "Great Moderation", where inflation and NGDP followed the expected path given by the bound S(y) until the "Great Recession", a major fall in information transfer efficiency. One final note is that all the graphs here are only slight changes from the graphs in the posts linked above. #### 1 comment: 1. The distribution of ΔRGDP above makes me think of the fluctuation theorem: http://en.wikipedia.org/wiki/Fluctuation_theorem In particular, results like this: http://physics.aps.org/articles/v2/43
2017-10-19 03:39:14
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8078555464744568, "perplexity": 1363.6976847348828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823220.45/warc/CC-MAIN-20171019031425-20171019051425-00121.warc.gz"}
http://blog.alexsanjoseph.com/posts/osx-reinstall-from-scratch/
# Intro So my (ostensibly high-end) Macbook Pro has been crawling the last few months whenever I connect it to external monitors. Generally it is still usable, however, after a lot of experimentation, I have found that like many things in life I have to make a tradeoff between two out of three things - • Get on a zoom call (especially with video/screen sharing). • Connect multiple monitors. • Charge the laptop. If I do all three of the above, within less than 10 minutes, my mac will start overheating and the dreaded kernel_task issue kicks in, meaning that my computer is literally unusable. I had tried a thousand things and failed miserably and so decided to take the plunge this weekend to go for the nuclear option, and do a factory reset of my system to see if that helps. I did basic reading up and figured out the plan (backup important stuff, go to recovery, erase system disk, reinstall, restore backups, install only the things I care about, thereby getting rid of any cruft that has accumulated over time) and was ready for the adventure. After doing the backups, I reached recovery mode by restarting and holding CMD + R until the recovery screen appeared. # The two mistakes of my life The first step was to erase the system disk. Here is where I goofed up the first time. Instead of just erasing the system partition, the recovery prompt helpfully gave me an option to delete the "partition group", which I accidentally clicked yes to. I was hoping an act of such great gravity would have at least some sort of a dry run, or a further confirmation, but the system was happy enough to go ahead and delete on that single click. Everything. Although I didn't realize it at the time, at this point I had ended up deleting not just the system partition, but also the recovery partition that would help me do the reinstall! I had done the proverbial cutting-the-branch-you're-sitting-on act! I suspect it would still have been possible to reinstall the system as is at the time (maybe not?) since the recovery was still loaded in memory, but I decided to try and see what would happen if I restarted the system right about now. My argument was - "hey, what's the worst that could happen, I could always have a power issue causing the system to restart, so might as well try and see it through". Well, turns out that I'd know soon enough. # Panic! After the restart, I didn't do the recovery trick and sure enough the MacBook reached the Startup Disk Not Found screen as expected. All was good and expected. However, on trying to enter the recovery, instead of the familiar recovery screen, I was met with a black screen with a spinning globe icon with NOTHING except an option to connect to Wifi. That's when I realized that something was drastically wrong. I tried connecting to my Wifi, which it seemed to connect to, but absolutely nothing changed - I didn't get a new message and the globe continued to spin. I waited a little, to no avail. There were no other buttons to click, nothing much to do other than watch the globe spin. It didn't look like it was doing anything in the background either, since I still had the option to change my WiFi and reconnect to a different one just like before. The next option was to try restarting the system again, in different recovery modes (who knew the Mac had so many startup key combinations!), while frantically trying to figure out what to do if the recovery didn't work.
After downloading the system image for Big Sur on my spouse's Macbook and ordering a new USB drive because I hadn't used one in eons, I was back to trying to figure out if I could do anything more while I was waiting. # Debugging However, it turned out that while the globe was quietly spinning, it was doing something in the background, and in the 15 minutes when I was figuring out alternate options, I had got an error screen on the system. ## Error 2002f The error message was succinct, and read apple.com/support - 2002f. Doing the usual debugging by trawling stackoverflow/stackexchange (useful!) and apple forums (which suggest resetting NVRAM and PRAM for any possible Apple issue, starting from getting kicked out of Eden) yielded that this issue was because the system couldn't hit Apple servers. One helpful suggestion floating around was to reset the DNS IP on the router. As futile as it seemed at that point, I logged into the router settings, and after a lot of digging, changed the DNS from the default to the Google DNS (8.8.8.8 and 8.8.4.4). Restarted the router, restarted recovery, and reconnected to the Wifi. The change was instant. While the recovery system had kept spinning even after connecting to Wifi many times before, this time it immediately proceeded to "Starting Internet Recovery. This may take a while". I was on cloud nine (and so was the Macbook, trying to do a cloud recovery)! ## Error 1008f Already my spouse was asking if she could stop downloading the Big Sur image on her system since she didn't want any accidental unnecessary upgrades, her being a Site Reliability Engineer. However, I thought that the danger had not yet passed since it was still just starting the recovery download. And I was right. After a few minutes, I was hit by a new error. apple.com/support - 1008f. Back to the SOF/SE and the support forums and man pages. Turns out the issue can happen because of the subtle differences between CMD + R and CMD + Option + R. The other possibility was that I had to disable Activation Lock by logging into another system, and I had everything set up to be ready for that as well. However, it turned out that changing the startup command to CMD + Option + R actually worked and I was back on the road. With this, the recovery download didn't stop immediately, and I could see from the (lack of) speed of the progress bar that it was actually downloading something. ## Error 2005f The third error was probably the simplest to solve. A quick google search led me to this article and all I had to do was to restart the system pressing the magic buttons Option + Command + P + R. The only issue here is that your hands should be reasonably dextrous to do the command while turning on the machine, performing the equivalent of the twister game with your fingers. ## Error 2004f The final error happened after nearly an hour of the recovery being downloaded. The progress bar stopped at one point and gave up with Error-2004f. Searching online just said that these were internet issues (Lies!) but after the restart, the recovery was smart enough to start from the point where it had last stopped and continue onwards. The issue happened a couple more times, and I had to do a few more manual restarts. However, eventually the recovery system was fully downloaded and the automatic restart took me to the familiar, comforting recovery screen. From thereon, installing Big Sur was an absolute breeze and within a couple of hours, I had my new fresh system ready to go.
# Conclusions I have had similar experiences in the past with Win/Linux, where the only option at such a stage was to manually flash a new recovery/boot system using an external USB. In retrospect, although the UI/UX could have been better, I am mighty impressed by the fact that I was able to restore my Macbook practically from silicon and oxygen, all without having to connect any physical media.
2023-03-25 17:53:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4830467104911804, "perplexity": 1467.8572404336608}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945368.6/warc/CC-MAIN-20230325161021-20230325191021-00152.warc.gz"}
https://puzzling.stackexchange.com/questions/402/what-is-the-solution-to-the-generic-round-table-problem
# What is the solution to the generic round table problem? This question was asked about a group of people whom a king wishes to play a cruel game with. Going around in a circle, the king kills every other person. The question concerned how one could compute the answer without executing the sequence; writing $n=2^m+p$, where $n$ is the number of people and $2^m$ is the highest power of two not exceeding $n$, the survivor turns out to be the person in seat $2p+1$. What happens if the king kills every third person? Fourth person? For convenience, let $s$ be the number of seats skipped. I've worked out the first few digits in the sequence where $s=3$, as $1, 2, 2, 1, 4, 1, 4, 7$, but I don't see a pattern here. I imagine I'd have to generate quite a number of these to see the pattern. The first few digits where $s=4$ are $1, 2, 3, 2, 1, 5, 2, 6, 1$. How does one generate this sequence in a generic way for any $s$? As mentioned in the previous thread, this is the (generalized) Josephus problem, known as such because the oldest known reference (at least in Western history) is by the historian Flavius Josephus. There is no known general closed form. You can compute the position of the survivor by recurrence: $$p_s^n = \begin{cases} 1 + (p_s^{n-1} - 1 + s) \bmod n & \text{if } n \gt 1 \\ 1 & \text{if } n = 1 \end{cases}$$ where $p_s^n$ is the position of the survivor for the $s$-step Josephus problem with $n$ people. The Online Encyclopedia of Integer Sequences should always be your first hit when you have a sequence of integers. It has a bunch of entries tagged “Josephus”, including • A006257: survivor for $s=2$ • A054995: survivor for $s=3$ • A088333: survivor for $s=4$ • A181281: survivor for $s=5$ • A032434: $p_s^n+1$ for $s \le n$, in lexicographic order of $(s,n)$ The reference for A054995 gives a formula for $s=3$: $$p_3^n = 3n+1 - \lfloor K(3) \cdot (3/2)^{\lceil L(3) \rceil} \rfloor$$ where $L(3) = \dfrac{\log((2n+1)/K(3))}{\log(3/2)}$ and $K(3) \approx 1.62227050288476731595695$ as seen in A083286. See “Functional iteration and the Josephus problem” by A. M. Odlyzko and H. S. Wilf for a proof and generalization. There is also a proof with didactic commentary in Concrete Mathematics (Graham, Knuth and Patashnik, 1994 (2nd ed.)) §3.3. Note that this doesn't really help to compute the values, because computing a precise enough value of $K(3)$ is as hard as computing $p_3^n$.
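The recurrence translates directly into a linear-time loop; here is a minimal Python sketch (my addition, not part of the original thread):

```python
def josephus_survivor(n: int, s: int) -> int:
    """1-based seat of the survivor when every s-th person is removed."""
    p = 1                      # base case: one person left
    for m in range(2, n + 1):  # rebuild the circle up to n people
        p = 1 + (p - 1 + s) % m
    return p

# s = 3 reproduces the sequence from the question: 1, 2, 2, 1, 4, 1, 4, 7
print([josephus_survivor(n, 3) for n in range(1, 9)])
```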
2019-12-12 15:56:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5695169568061829, "perplexity": 428.86755884575746}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540544696.93/warc/CC-MAIN-20191212153724-20191212181724-00268.warc.gz"}
https://gujcet.in/Gujarati/Maths/Chapter-2/71/MCQs?q=9aZHDjblmRk=
### MCQs of Inverse Trigonometric Functions (ત્રિકોણમિતીય પ્રતિવિધેયો) Showing 1 to 10 out of 139 Questions 1. $\sin\left(3\sin^{-1}\frac{1}{3}\right)$ = _____ . (a) $\frac{23}{27}$ (b) $\frac{1}{3}$ (c) $\frac{27}{23}$ (d) $\frac{2\sqrt{3}}{9}$ 2. If $\sin^{-1}x=\frac{\pi }{7}$ for some $x$, then $\cos^{-1}x$ = _____. (a) $\frac{3\pi }{14}$ (b) $\frac{5\pi }{14}$ (c) $\frac{\pi }{14}$ (d) $\frac{6\pi }{7}$ 3. = _____ . (a) 15 (b) 6 (c) 13 (d) 25 4. = _____ . (a) $\frac{\pi }{6}$ (b) $\frac{5\pi }{6}$ (c) $-\frac{\pi }{6}$ (d) $\frac{7\pi }{6}$ 5. The domain of is _____. (a) (b) (c) (d) $\left[-1,1\right]$ 6. The range of $\tan^{-1}$ is _____. (a) (b) R (c) (d) 7. The value of is _____. (a) $-\frac{\pi }{3}$ (b) $\frac{\pi }{3}$ (c) $\frac{4\pi }{3}$ (d) $\frac{2\pi }{3}$ 8. = _____. (a) $\frac{\pi }{6}$ (b) $\frac{\pi }{3}$ (c) $\frac{\pi }{2}$ (d) $\frac{3\pi }{2}$ 9. The value of $\sin^{-1}\left(\sin\frac{5\pi }{3}\right)$ is _____. (a) $-\frac{\pi }{3}$ (b) $\frac{5\pi }{3}$ (c) $\frac{\pi }{3}$ (d) $\frac{2\pi }{3}$ 10. = _____ . (a) $\frac{4}{9}$ (b) $\frac{1}{3}$ (c) 0 (d) $-\frac{1}{3}$ Showing 1 to 10 out of 139 Questions
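As a quick check of two of the items above (my own working, not part of the question bank): using $\sin 3\theta = 3\sin\theta - 4\sin^3\theta$ with $\theta = \sin^{-1}\frac{1}{3}$ gives $\sin\left(3\sin^{-1}\frac{1}{3}\right) = 3 \cdot \frac{1}{3} - 4 \cdot \frac{1}{27} = \frac{23}{27}$, option (a) of question 1; and since $\frac{5\pi }{3}$ is coterminal with $-\frac{\pi }{3}$, $\sin^{-1}\left(\sin\frac{5\pi }{3}\right) = -\frac{\pi }{3}$, option (a) of question 9. (Several of the expressions in the other items failed to extract from the original page and are left blank above.)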
2022-08-09 20:55:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 33, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9451376795768738, "perplexity": 2607.361158202289}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571086.77/warc/CC-MAIN-20220809185452-20220809215452-00192.warc.gz"}
https://math.stackexchange.com/questions/2336592/mathbbq-mathbbz-is-an-injective-cogenerator-of-mathbbz-mod
# $\mathbb{Q}/ \mathbb{Z}$ is an injective cogenerator of $\mathbb{Z}-Mod$ Let $R$ be a ring. An injective left $R$-module $E$ is called an injective cogenerator if $Hom_R(M,E) \not =0$ for $0 \not =M \in R-Mod$. I have seen that $\mathbb{Q}/ \mathbb{Z}$ is an injective cogenerator of $\mathbb{Z}-Mod$ (here $\mathbb{Q}$ denotes the ring of rational numbers and $\mathbb{Z}$ denotes the ring of integers): Since $\mathbb{Q}/ \mathbb{Z}$ is divisible as a $\mathbb{Z}$-module and all divisible $\mathbb{Z}$-modules are injective, we have that $\mathbb{Q}/ \mathbb{Z}$ is injective. Then how to get that $\mathbb{Q}/ \mathbb{Z}$ is a cogenerator? • Can you show that $\hom_{\mathbb Z}(A,\mathbb Q/\mathbb Z)$ is nonzero when $A$ is finitely generated? Jun 26, 2017 at 10:51 • Not quite. Since $\mathbb Z$ is a PID, every ideal of $\mathbb Z$ is of the form $a\mathbb Z$, and the map sending $1$ to $a$ is an isomorphism between $\mathbb Z$ and $a\mathbb Z$. Alternatively, every subgroup of a free group is free. In either case, $\ker f$ is free, so $\hom(\ker f,\mathbb Q/\mathbb Z)$ will be a direct product (not necessarily sum if no longer finitely generated) of copies of $\mathbb Q/\mathbb Z$. However, we aren't dealing with vector spaces, and so even if the "rank" were the same, injective and surjective maps wouldn't necessarily be the same thing. Jun 26, 2017 at 19:46 • For example, the map $\mathbb Q/\mathbb Z \to \mathbb Q/\mathbb Z$ defined by $x\mapsto 2x$ is surjective (because $\mathbb Q/\mathbb Z$ is divisible) but not injective (because $1/2$ maps to $0$). Jun 26, 2017 at 20:24 • @Aaron Actually, we don't need to show that $Hom(A,Q/Z) \not =0$ for all finitely generated modules $A$. We only need to show that $Hom(S, Q/Z) \not = 0$ for all simple $Z$-modules $S$. We can get this the same way as above: since $S$ is simple, there is an epimorphism $f: Z \rightarrow S$. Then since $\ker f$ is a left ideal of $Z$, it is of the form $\ker f =bZ$ for some $b \in Z$. Now the natural embedding $i : bZ \rightarrow Z$ can't induce an isomorphism $Hom(i,Q/Z): Hom(Z, Q/Z) \rightarrow Hom(bZ,Q/Z)$. For example, take $f:Z \rightarrow Q/Z$ as $f(x)=x/b$; then the composition $f i =0$ Jun 27, 2017 at 6:43
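For completeness, here is the standard way to pass from cyclic modules to arbitrary ones, sketched in my own words rather than taken from the thread: given $0 \not = M$, pick $0 \not = x \in M$. The cyclic subgroup $\mathbb{Z}x$ is isomorphic either to $\mathbb{Z}$ or to $\mathbb{Z}/n\mathbb{Z}$ for some $n > 1$, and in either case it admits a nonzero homomorphism to $\mathbb{Q}/\mathbb{Z}$: send $x$ to $\frac{1}{2} + \mathbb{Z}$ in the first case, and to $\frac{1}{n} + \mathbb{Z}$ in the second, which is well defined because $n \cdot \frac{1}{n} \in \mathbb{Z}$. By the injectivity of $\mathbb{Q}/\mathbb{Z}$, this map extends along the inclusion $\mathbb{Z}x \hookrightarrow M$, so $Hom_{\mathbb{Z}}(M, \mathbb{Q}/\mathbb{Z}) \not = 0$.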
2022-06-27 05:22:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9729800224304199, "perplexity": 118.88072319090794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103328647.18/warc/CC-MAIN-20220627043200-20220627073200-00615.warc.gz"}
https://domino.mpi-inf.mpg.de/internet/reports.nsf/c125634c000710d0c12560400034f45a/409c326025fde668c12560410006d4df?OpenDocument
max planck institut informatik MPI-I-93-162 On multi-party communication complexity of random functions Grolmusz, Vince MPI-I-93-162. December 1993, 10 pages. | Status: available - back from printing | Abstract in LaTeX format: We prove that almost all Boolean functions have a high $k$--party communication complexity. The 2--party case was settled by {\it Papadimitriou} and {\it Sipser}. Proving the $k$--party case needs a deeper investigation of the underlying structure of the $k$--cylinder--intersections; (the 2--cylinder--intersections are the rectangles). \noindent First we examine the basic properties of $k$--cylinder--intersections, then an upper estimation is given for their number, which facilitates proving the lower--bound theorem for the $k$--party communication complexity of randomly chosen Boolean functions. In the last section we extend our results to the $\varepsilon$--distributional communication complexity of random functions. MPI-I-93-162.pdf (6426 KBytes) URL to this document: http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1993-162 BibTeX @TECHREPORT{Grolmusz93d, AUTHOR = {Grolmusz, Vince}, TITLE = {On multi-party communication complexity of random functions}, TYPE = {Research Report}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
2019-10-22 03:19:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9275776147842407, "perplexity": 5060.008749236765}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987798619.84/warc/CC-MAIN-20191022030805-20191022054305-00356.warc.gz"}
https://www.meritnation.com/ask-answer/question/define-frequency-modulation-write-the-advantages-of-frequenc/communication-systems/8720091
# Define frequency modulation. Write the advantages of frequency modulation over amplitude modulation.

Frequency modulation (FM): the frequency of the carrier wave varies in accordance with the modulating (message) signal, while the amplitude of the carrier stays constant.

In AM, information is carried in the form of amplitude variations. Stray electric fields in the channel may disturb the amplitude of the signal; these disturbances are called noise. In FM transmission, the message is carried in the form of frequency variations of the carrier wave. During transmission, noise gets amplitude-modulated onto the signal, changing the amplitude of the carrier waves; but the message, being encoded in the frequency, is not affected. That is why an FM signal is less susceptible to noise than an AM signal.
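As an illustration of the definition (ours, not part of the original answer), here is a small NumPy sketch that builds an FM signal next to an AM signal; the carrier frequency, deviation constant and tone are arbitrary choices:

import numpy as np

fs = 44100                      # sample rate (Hz), arbitrary
t = np.arange(0, 0.01, 1/fs)    # 10 ms of time
fc, kf = 10_000, 5_000          # carrier frequency and frequency deviation (Hz)
m = np.sin(2*np.pi*440*t)       # message: a 440 Hz tone

# FM: the instantaneous frequency is fc + kf*m(t), so the phase is the
# integral of that; the amplitude of the carrier stays constant.
phase = 2*np.pi*fc*t + 2*np.pi*kf*np.cumsum(m)/fs
fm = np.cos(phase)

# AM for comparison: the message rides on the carrier's amplitude,
# which is exactly where channel noise does its damage.
am = (1 + 0.5*m) * np.cos(2*np.pi*fc*t)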
2020-02-17 03:24:58
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8563790917396545, "perplexity": 1473.2621428294983}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875141653.66/warc/CC-MAIN-20200217030027-20200217060027-00067.warc.gz"}
https://skyciv.com/docs/skyciv-retaining-wall/articles/retaining-wall-sliding-calculation-example/
SkyCiv Documentation
Your guide to SkyCiv software - tutorials, how-to guides and technical articles

# Retaining Wall Sliding Calculation Example

## How to Calculate the Factor of Safety Against Sliding for a Retaining Wall – Reinforced Concrete Cantilever

This retaining wall sliding calculation example is a simple guide on how to calculate the Factor of Safety against sliding in a retaining wall as part of the stability checks. The sliding check is performed to ensure that the friction force between the wall and the substructure soil is enough to prevent the retaining wall from sliding under the horizontal loads arising from the retained soil's active pressure. The friction force that prevents the wall from sliding is the total vertical load multiplied by the Soil-Concrete Friction Coefficient defined for the substructure soil material, and the sliding force is the resultant of the retained soil's lateral pressure and the pressure associated with the surcharge. That said, the calculation process is detailed in the following:

### Input data:

Stem
• Height: 3.124 m
• Width: 0.305 m
• Offset: 0.686 m

Base
• Width: 2.210 m
• Thickness: 0.381 m

Concrete
• Unit weight: 23.58 kN/m3

Active and Passive Soil
• Unit weight: 18.85 kN/m3
• Friction Angle: 35 degrees

Substructure Soil
• Unit weight: 18.85 kN/m3
• Friction Angle: 35 degrees
• Soil-Concrete Friction Coefficient: 0.55
• Allowable bearing pressure: 143.641 kPa

Surcharge
• Value: 17.237 kPa

Soil Layers:
• Active: 3.505 m
• Passive: 0.975 m
• Substructure: 0.792 m

All the loads associated with the retaining wall sliding calculation are shown in the loading diagram (picture not reproduced in this extract).

### Sliding force

As mentioned, the sliding force is the sum of the resultant horizontal force from the active soil pressure on the active soil side and the resultant horizontal force from the presence of the surcharge.
In order to calculate the lateral earth pressure due to the retained soil's active pressure and the surcharge's resultant lateral pressure, it is necessary to calculate the Rankine active earth-pressure coefficient from the active soil's friction angle $\phi$: $$K_a = \frac{1-\sin(\phi_{soil,\;active})}{1+\sin(\phi_{soil,\;active})}$$ $$K_a = \frac{1-\sin(35º)}{1+\sin(35º)} = 0.271$$ With that result, it is now possible to calculate the horizontal load resulting from the lateral active pressure that the retained soil exerts: $$H_{active} = \frac{1}{2} \cdot \gamma_{soil,\;active} \cdot (stem_{height} + base_{thickness})^{2} \cdot K_a$$ $$H_{active} = \frac{1}{2} \cdot 18.85\;kN/m^3 \cdot (3.505\;m)^{2} \cdot 0.271$$ $$H_{active} = 31.377\;kN/m$$ For calculating the horizontal force related to the surcharge presence, an equivalent soil height is calculated first, and then the actual force: $$h_{soil,\;eq} = \frac{surcharge_{value}}{\gamma_{soil,\;active}} = \frac{17.237 \;kPa}{18.85 \;kN/m^3}$$ $$h_{soil,\;eq} = 0.914 \; m$$ $$H_{surcharge} = \gamma_{soil,\;active} \cdot h_{soil,\;eq} \cdot (stem_{height} + base_{thickness}) \cdot K_a$$ $$H_{surcharge} = 18.85\;kN/m^3 \cdot 0.914 \; m \cdot 3.505 \; m \cdot 0.271$$ $$H_{surcharge} = 16.372\;kN/m$$ With those two loads calculated, it is now possible to calculate the sliding force by summing the two loads up: $$\Sigma{H} = H_{active} + H_{surcharge} = 31.377\;kN/m + 16.372\;kN/m$$ $$\Sigma{H} = 47.749 \; kN/m$$ ### Friction force In order to calculate the friction force that resists the sliding force, it is necessary to evaluate the total vertical load first: $$W_{stem} = \gamma_{concrete} \cdot (stem_{height} \cdot stem_{width}) = 23.58 \;kN/m^3 \cdot 3.124\;m \cdot 0.305\;m$$ $$W_{stem}= 22.467\;kN/m$$ $$W_{base} = \gamma_{concrete} \cdot (base_{thickness} \cdot base_{width}) = 23.58 \;kN/m^3 \cdot 0.381\;m \cdot 2.210\;m$$ $$W_{base}= 19.855\;kN/m$$ $$W_{active} = \gamma_{soil,\;active} \cdot (stem_{height}\cdot (base_{width}-stem_{offset}-stem_{width}))$$ $$W_{active} = 18.85 \;kN/m^3 \cdot 3.124\;m \cdot (2.210-0.686-0.305)\;m$$ $$W_{active} = 71.784\;kN/m$$ $$W_{surcharge} = surcharge_{value} \cdot (base_{width}-stem_{offset}-stem_{width})$$ $$W_{surcharge} = 17.237 \;kPa \cdot (2.210-0.686-0.305)\;m$$ $$W_{surcharge} = 21.012\;kN/m$$ Having all those individual loads, it is now possible to calculate the friction force: $$\mu \cdot \Sigma{W} = \mu \cdot (W_{stem}+W_{base}+W_{active}+W_{surcharge})$$ $$\mu \cdot \Sigma{W} = 0.55 \cdot (22.467+19.855+71.784+21.012)\;kN/m$$ $$\mu \cdot \Sigma{W} = 0.55 \cdot 135.12\;kN/m$$ $$\mu \cdot \Sigma{W} = 74.315\;kN/m$$ ### Factor of Safety against sliding Finally, the Factor of Safety against sliding for the retaining wall is the ratio between the friction and the sliding forces. ACI 318 recommends the factor of safety to be greater than or equal to $$1.5$$: $$FS = \frac{\mu \cdot \Sigma{W}}{\Sigma{H}}$$ $$FS = \frac{74.315\;kN/m}{47.749 \; kN/m}= 1.556 \ge 1.5$$  PASS! ## Retaining Wall Calculator SkyCiv offers a free Retaining Wall Calculator that will check sliding in retaining walls and perform a stability analysis on your retaining walls. The paid version also displays the full calculations, so you can see step by step how to calculate the stability of a retaining wall against overturning, sliding and bearing! Oscar Sanchez Product Developer BEng (Civil)
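For readers who want to reproduce the numbers above, here is a short, self-contained Python check of the worked example (the variable names are ours, not SkyCiv's):

import math

# Geometry (m) and material properties from the worked example
stem_h, stem_w, stem_off = 3.124, 0.305, 0.686
base_w, base_t = 2.210, 0.381
gamma_soil, phi = 18.85, math.radians(35)   # kN/m^3, friction angle
gamma_conc = 23.58                          # kN/m^3
surcharge = 17.237                          # kPa
mu = 0.55                                   # soil-concrete friction coefficient

H = stem_h + base_t                         # total pressure height = 3.505 m
heel = base_w - stem_off - stem_w           # heel length over the active soil

Ka = (1 - math.sin(phi)) / (1 + math.sin(phi))   # ~0.271
H_active = 0.5 * gamma_soil * H**2 * Ka          # ~31.377 kN/m
H_surcharge = surcharge * H * Ka                 # equals gamma*h_eq*H*Ka
sliding = H_active + H_surcharge                 # ~47.75 kN/m

W = (gamma_conc * stem_h * stem_w       # stem
     + gamma_conc * base_t * base_w     # base
     + gamma_soil * stem_h * heel       # soil resting on the heel
     + surcharge * heel)                # surcharge on the heel
friction = mu * W                       # ~74.3 kN/m

FS = friction / sliding
print(f"FS = {FS:.3f}")                 # ~1.556 >= 1.5, so the check passes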
2022-09-27 14:52:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3610382676124573, "perplexity": 3483.8222739438256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00055.warc.gz"}
https://www.studyadda.com/question-bank/critical-thinking_q1/1405/103703
• question_answer In the case of the homologous series of alkanes, which one of the following statements is incorrect?    [JIPMER 2000] A) The members of the series are isomers of each other B) The members of the series have similar chemical properties C) The members of the series have the general formula $C_{n}H_{2n+2}$, where n is an integer D) The difference between any two successive members of the series corresponds to 14 units of relative atomic mass
The incorrect statement is A: the members of a homologous series are homologues, not isomers, since isomers must share the same molecular formula and successive alkanes do not. The other statements are correct; in particular, any two successive members of the homologous series differ by a $-CH_{2}-$ unit, i.e., the molecular weights of every two adjacent members differ by 14 $(CH_{2}=12+2=14)$.
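A quick numeric illustration of statement D (ours; using atomic masses C = 12 and H = 1):

# Molecular mass of the alkane C_n H_(2n+2): 12n + (2n + 2)
masses = [12*n + (2*n + 2) for n in range(1, 6)]   # CH4, C2H6, C3H8, ...
print(masses)                                      # [16, 30, 44, 58, 72]
print([b - a for a, b in zip(masses, masses[1:])]) # [14, 14, 14, 14]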
2020-09-22 15:05:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8390271067619324, "perplexity": 573.7859552741912}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400206133.46/warc/CC-MAIN-20200922125920-20200922155920-00581.warc.gz"}
https://thomasedwardriley.github.io/xpsi/prior.html
# Prior

Instances of Prior are called by worker sampling processes for evaluation of a joint prior distribution.

class xpsi.Prior.Prior(parameters=None, *hyperparameters)[source]
The joint prior distribution of parameters (including hyperparameters). Methods both to evaluate the distribution (required by MCMC) and to draw uniformly from the distribution (required for nested sampling; by default this is also used to initialise an ensemble of MCMC chains). The distribution must be integrable (proper) and is thus usually bounded (compactly supported on a space). In the __call__() method the parameter values are checked against the hard parameter bounds of the prior support.
Note: If you wish to check bounds manually, implement the __init__() and __call__() methods, and do not access the default code (e.g., via super()), or only use the default code for some parameters.
Parameters: parameters (obj) – An optional instance of ParameterSubspace. hyperparameters (obj) – Positional arguments that are hyperparameters (parameters of the prior distribution), or iterables of hyperparameters.

__call__(p=None)[source]
Evaluate the distribution at p and store it as a property.
Parameters: p (list) – Vector of model parameter values, but typically unused. If you use it, handle it in a custom implementation of this method.

draw(ndraws, transform=False)[source]
Draw samples uniformly from the prior via inverse sampling.
Parameters: ndraws (int) – Number of draws.
Returns: (samples, acceptance fraction) of type (ndarray[ndraws, len(self)], float).

estimate_hypercube_frac(ndraws=5)[source]
Estimate, using Monte Carlo integration, the fractional hypervolume within a unit hypercube at which the prior density is finite.
Parameters: ndraws (optional[int]) – Base-10 logarithm of the number of draws from the prior that are required to have finite density.

inverse_sample(hypercube=None)[source]
Draw a sample uniformly from the distribution via inverse sampling. By default, implements a flat density between bounds. If None is in a tuple of bounds, None is assigned to the corresponding coordinate and the user must handle it in a custom subclass.
Parameters: hypercube (iterable) – A pseudorandom point in an n-dimensional hypercube.
Returns: A parameter list.
Note: If you call this base method via the super() built-in, the current parameter values will be cached when new values are assigned. If you then assign to a parameter again, the current value will be automatically cached, thus overwriting the cache established in the body of this present method. If you want to call this present method and then assign again to a parameter, you can restore the cached value so that it is pushed to the cache when you reassign.

inverse_sample_and_transform(hypercube=None)[source]
Inverse sample and then transform. This method is useful for drawing from the prior when overplotting posterior and prior density functions.

static transform(p, **kwargs)[source]
A transformation for post-processing. A subclass can implement this attribute as an instance method: it does not need to be a static method.
Parameters: p (list) – Parameter vector. A key sometimes passed is old_API, which flags whether a transformation needs to account for a parameter vector written to file by an older software version, which might differ due to transformations of parameters defined in the current software version.
Note: As an example, the Riley et al. 2019 (ApJL, 887, L21) samples are in inclination $$i$$ instead of $$\cos(i)$$, which is the current inclination parameter in the API. Therefore the transformation needed depends on the source of the parameter vector. If the vector is from the original sample files, then it needs to be transformed to have the same parameter definitions as the current API. However, when drawing samples from the prior in the current API, no such transformation needs to be performed, because these are the definitions we need to match. Refer to the dummy example code in the method body.
Returns: Transformed vector p, where len(p) > len(self). Return type: list.

unit_hypercube_frac
Get the fractional hypervolume with finite prior density.
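A usage sketch (ours, not part of the upstream documentation): a custom prior typically subclasses Prior and defers to the flat-density defaults where possible. Everything specific below is an assumption made for illustration: the parameter names 'mass' and 'radius', name-based lookup on the parameters subspace, and that the base __call__ returns the evaluated log-density.

import numpy as np
import xpsi

class CustomPrior(xpsi.Prior):
    """A mostly flat joint prior with one non-rectangular constraint."""

    def __call__(self, p=None):
        # The base class checks values against the hard parameter bounds.
        logpdf = super(CustomPrior, self).__call__(p)

        # Hypothetical joint constraint between two hypothetical parameters:
        # reject points where 'mass' exceeds twice 'radius'.
        if self.parameters['mass'].value > 2.0 * self.parameters['radius'].value:
            return -np.inf

        return logpdf

    def inverse_sample(self, hypercube=None):
        # The base class implements inverse sampling of a flat density
        # between bounds; bounds declared as None must be handled here.
        return super(CustomPrior, self).inverse_sample(hypercube)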
2021-04-10 21:44:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5922409296035767, "perplexity": 1745.5987667660745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038059348.9/warc/CC-MAIN-20210410210053-20210411000053-00318.warc.gz"}
https://proofwiki.org/wiki/Vector_Space_of_All_Mappings_is_Vector_Space
# Vector Space of All Mappings is Vector Space ## Theorem Let $\struct {K, +, \circ}$ be a division ring. Let $\struct {G, +_G, \circ}_K$ be a $K$-vector space. Let $S$ be a set. Let $\struct {G^S, +_G', \circ}_K$ be the vector space of all mappings from $S$ to $G$. Then $\struct {G^S, +_G', \circ}_K$ is a $K$-vector space. ## Proof Follows directly from Module of All Mappings is Module and the definition of a vector space: $G^S$ is a module over $K$, and the scalar ring $K$ is a division ring.
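For reference, the operations on $G^S$ are the standard pointwise ones (used implicitly on this page): for all $f, g \in G^S$, $\lambda \in K$ and $s \in S$,

$$(f +_G' g) (s) = f(s) +_G g(s), \qquad (\lambda \circ f)(s) = \lambda \circ f(s).$$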
2020-01-20 08:28:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6284645795822144, "perplexity": 126.90809669862227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250598217.23/warc/CC-MAIN-20200120081337-20200120105337-00000.warc.gz"}
https://deeplearning.neuromatch.io/tutorials/W3D2_DlThinking2/student/W3D2_Tutorial1.html
Tutorial 1: Deep Learning Thinking 2: Architectures and Multimodal DL Thinking

Week 3, Day 2: DL Thinking 2

Content creators: Konrad Kording, Lyle Ungar
Content reviewers: Kelson Shilling-Scrivo
Content editors: Kelson Shilling-Scrivo
Production editors: Gagana B, Spiros Chavlis

Tutorial Objectives

In this tutorial, you will practice thinking like a deep learning practitioner and figure out how to design architectures for different scenarios. By the end of this tutorial, you will be better able to:

• Know how to proceed when low on data
• Have a toolbox of what to do in non-standard situations

We will also continue to see how to get relevant information out of domain experts, arguably the central skill of DL, and how to convert insights about a domain into the logic of actual approaches.

Setup

Install dependencies

# @title Install dependencies
from evaltools.airtable import AirtableForm

Section 1: Intro Deep Learning Thinking 2

Time estimate: ~4 mins

Video 1: Intro to DL Thinking 2

Like Deep Learning Thinking 1 last week, this tutorial is a bit different from the others - there will be no coding! Instead, you will watch a series of vignettes about various scenarios where you want to use a neural network. This tutorial will focus on various architectures and multimodal thinking. Each section below will start with a vignette where either Lyle or Konrad is trying to figure out how to set up a neural network for a specific problem. Try to think of questions you want to ask them as you watch, then pay attention to what questions Lyle and Konrad are asking. Were they what you would have asked? How do their questions help quickly clarify the situation?

Section 2: Getting More Data

Time estimate: ~15 mins

Video 2: Getting More Data Vignette

Konrad wants to build a neural network that classifies images based on the objects contained within them. He needs more data to help him train an accurate network, but buying more images is costly. He needs a different solution.

Think! 1: Designing a strategy to get more data

Given everything you know, how would you design a strategy to get some more data (pairs of images and the label of the object they depict) for the image classification neural network that Konrad is training? Be specific and write down a procedure. Please discuss as a group. If you get stuck, you can uncover the hints below one at a time. Please spend some time discussing before uncovering the next hint, though! You are being real deep learning scientists now, and the answers won't be easy.

Student Response

# @title Student Response
from ipywidgets import widgets

text = widgets.Textarea(
    value='Type your answer here and click on Submit!',
    placeholder='Type something',
    description='',
    disabled=False
)
button = widgets.Button(description="Submit!")
display(text, button)

def on_button_clicked(b):
    print("Submission successful!")

button.on_click(on_button_clicked)

Look at a few photos of dogs (use an image search engine). How are they similar? How are they different? What makes them all dogs?

We don't need to obtain any new images in order to give more examples of each object to our neural network.

Think about color, orientation, flipping, pixel noise, color noise, shearing, contrast, brightness and scaling. (A code sketch of this idea follows these hints.)

Discuss where each of these ideas will break down. Can too much of a good thing be good?
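Hint sketch (ours, not from the tutorial, which names no framework; torchvision is an assumption): a typical augmentation pipeline that applies exactly the kinds of changes listed above.

from torchvision import transforms

# Randomly perturb each training image a little on every epoch, so one
# labeled photo yields many slightly different training examples.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),                      # flipping
    transforms.RandomRotation(degrees=15),                  # orientation
    transforms.ColorJitter(brightness=0.2, contrast=0.2),   # brightness/contrast
    transforms.RandomAffine(degrees=0, shear=10),           # shearing
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),    # scaling (not too much!)
    transforms.ToTensor(),
])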
Instead of collecting new data, we can create multiple examples for the neural network from each of our existing images by changing things like flipping them horizontally, shifting them horizontally or vertically by some number of pixels, scaling them to be larger or smaller (and cropping), rotating them, and changing their contrast and brightness. This is called data augmentation; it is very commonly used and is an important strategy for training neural networks. Importantly, we need to be careful about the amount we change each image - we still want them to be useful training images! Let's say you have a photo of a dog, and you scale it to be 1000x bigger and crop the middle out. You'd have just an image of fur - this would not be very useful as a training example on how to classify dogs. So we want to change factors about the images, but not so much that they are no longer recognizable as the original object.

Video 3: Getting More Data Wrap-up

Check out the paper mentioned in the above video:
• Balestriero, R., Bottou, L., LeCun, Y. (2022). The Effects of Regularization and Data Augmentation are Class Dependent. arXiv: 2204.03632

(Bonus) Think!: Class-based strategies

Discuss how you may want to vary these strategies based on the class of the objects/images.

Section 3: Detecting Tumors - What to do if there still isn't enough data

Time estimate: ~15 mins

Video 5: Detecting Tumors Set-up

Konrad works for a hospital and wants to train a neural network to detect tumors in brain scans automatically. This type of tumor is pretty rare, which is great for humanity but means we only have a few thousand training examples for our neural network. This isn't enough. Even with adding in images of other types of tumors, we don't have enough data. We have a lot of images of other things in ImageNet, like cats and dogs, though! Maybe we can use that?

Think! 2: Designing a strategy for detecting tumors

Given everything you know, how would you design a strategy to be able to train an accurate tumor-detecting neural network? Be specific and write down a procedure. Please discuss as a group. If you get stuck, you can uncover the hints below one at a time. Please spend some time discussing before uncovering the next hint, though! You are being real deep learning scientists now, and the answers won't be easy.

Student Response

# @title Student Response
from ipywidgets import widgets

text = widgets.Textarea(
    value='Type your answer here and click on Submit!',
    placeholder='Type something',
    description='',
    disabled=False
)
button = widgets.Button(description="Submit!")
display(text, button)

def on_button_clicked(b):
    print("Submission successful!")

button.on_click(on_button_clicked)

Data augmentation is always something to consider.

A human learning to detect tumors is not learning how to see from scratch based only on the tumor images. You could use another dataset to help. What properties should such a dataset have?

Even though the images in ImageNet are not of tumors, natural images contain information on aspects of visual objects that are similar to tumors (that they're coherent, locally smooth, etc.).

If you train a neural network on ImageNet first, so that it learns general vision and embeddings of images, what might you want to change when training on the tumor images dataset? (A code sketch of this pattern follows these hints.)

Humans don't learn to see when they learn a new classification task. We already have a trained visual system that is good at processing and learning embeddings for natural images. We can replicate this in neural networks!
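Hint sketch (ours, not from the tutorial; PyTorch and a recent torchvision weights API are assumptions): start from an ImageNet-pre-trained backbone, swap the final layer, and train only the new head - or fine-tune everything.

import torch.nn as nn
import torch.optim as optim
from torchvision import models

# Backbone pre-trained on ImageNet: it already "knows how to see".
model = models.resnet18(weights="IMAGENET1K_V1")

# Option A: freeze the convolutional layers...
for p in model.parameters():
    p.requires_grad = False

# ...and replace the 1000-class ImageNet head with a tumor / no-tumor head.
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the new head's weights are trained on the small tumor dataset.
optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)

# Option B (fine-tuning): skip the freezing loop above and pass
# model.parameters() to the optimizer, usually with a smaller learning rate.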
First, we can train our neural network on ImageNet alone to do object classification. This gives us a neural network that has already learned how to process and embed images. Then, we want to take this neural network and continue to train it on just the tumor classification dataset. We can chop off the existing final layer (that outputs the probabilities of all the ImageNet classes) and train a new one that outputs the probability of there being a tumor in the image. We could either keep all the weights in the convolutional layers fixed, not allowing them to change after the ImageNet training, or fine-tune them. People take both strategies! This whole process is called pre-training: we have pre-trained the neural network on ImageNet before training on our actual task, the detection of tumors. We should mention here that there are many ways of doing this: train the whole network after training on a first task; train the top layers after training the bottom layers; or potentially first do the latter and then the former. Pre-training can be done in many ways - looking for it as an opportunity is important.

Video 6: Detecting Tumors Wrap-up

Check out the paper mentioned in the above video:
• Tschandl, P., Rinner, C., Apalla, Z. et al. (2020). Human–computer collaboration for skin cancer recognition. Nat Med 26: 1229–1234. doi: https://doi.org/10.1038/s41591-020-0942-0

Section 4: Brains on Forrest Gump

Time estimate: ~17 mins

Video 8: Brains on Forrest Gump Set-up

Konrad has a great dataset: he has someone watching all of the movie Forrest Gump, and MRI data (brain imaging) recorded over the whole time the person is watching the movie. So, basically, he has the video stream over time and the brain data over time. He wants to figure out what those two data streams have in common. In other words, he wants to pull the shared information from two data modalities.

Think! 3: Designing a strategy for pulling shared info about brain data and Forrest Gump

Given everything you know, how would you design a strategy to get a shared embedding for the brain and video data? Be specific and write down a procedure. Please discuss as a group. If you get stuck, you can uncover the hints below one at a time. Please spend some time discussing before uncovering the next hint, though! You are being real deep learning scientists now, and the answers won't be easy.

Student Response

# @title Student Response
from ipywidgets import widgets

text = widgets.Textarea(
    value='Type your answer here and click on Submit!',
    placeholder='Type something',
    description='',
    disabled=False
)
button = widgets.Button(description="Submit!")
display(text, button)

def on_button_clicked(b):
    print("Submission successful!")

button.on_click(on_button_clicked)

We want the two datasets to share something. What does that mean?

Where could the vectors $$\bar{X}_1$$ and $$\bar{X}_2$$ come from? How could they relate to the brain data and video data? You may want to use more than one neural network!

What do we want our neural network solution to do here? Is there anything you want it to maximize or minimize?

What happens if we multiply all activities by 2? We need a scale-invariant solution.

The first thing to note is that we want two embeddings, one for the brain data and a second for the video data. The second thing to note is that we want these embeddings to capture shared information between the two. The key is to realize that if both embeddings contain the same information, they should be correlated.
Looking at the formula for the Pearson correlation:

$$\rho = \frac{\operatorname{cov}(X_1, X_2)}{\sqrt{\operatorname{var}(X_1)} \cdot \sqrt{\operatorname{var}(X_2)}}$$

where $$X_1$$ and $$X_2$$ are our two embeddings: to find the correlation between the two embeddings, we take their covariance and normalize it by the product of their standard deviations, giving us our scale-invariant quantity to optimize. Imagine the extreme case where there was no noise and both embeddings extracted the same information. The embeddings would be perfectly correlated with each other. Conversely, if the two embeddings had no shared information, there would be little to no correlation between them. Therefore, by maximizing the correlation between the two embedding spaces, we're maximizing the shared information between the two embeddings.

Another way to think about it is that by maximizing the correlation, we're attempting to have one common embedding for the brain data and the video data. If both networks extract the same information, this will be possible. If the two networks extract slightly different information, the two embeddings will be slightly different. Therefore, the more similar (and thus, more correlated) the embeddings are, the more similar the extracted information.

Video 9: Brains on Forrest Gump Wrap-up

Check out the paper mentioned in the above video:
• Andrew, G., Arora, R., Bilmes, J., Livescu, K. (2013). Deep Canonical Correlation Analysis. Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):1247-1255. url: proceedings.mlr.press/v28/andrew13

Summary

Time estimate: ~2 mins

Video 10: Wrap up of DL thinking

In this set of DL Thinking tutorials, we saw several tricks for doing well when there is very limited data:

• Data augmentation
• Pre-training
• Canonical Correlation Analysis (CCA)

All three can be used in cases where there is limited data available. All three also teach us how the relevant information may be quite clear once we think about it, and how ideas about the world translate into approaches in deep learning.
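To tie the correlation objective above to code (our sketch, not part of the tutorial; a deliberately minimal stand-in for deep CCA with a single correlated dimension per network, and fake data with made-up dimensions):

import torch
import torch.nn as nn

brain_net = nn.Sequential(nn.Linear(1000, 64), nn.ReLU(), nn.Linear(64, 1))
video_net = nn.Sequential(nn.Linear(5000, 64), nn.ReLU(), nn.Linear(64, 1))

def neg_correlation(x1, x2, eps=1e-8):
    # Pearson correlation between two 1-D embeddings, negated so that
    # minimizing the loss maximizes the shared information.
    x1 = x1 - x1.mean()
    x2 = x2 - x2.mean()
    cov = (x1 * x2).mean()
    return -cov / (x1.std() * x2.std() + eps)

brain, video = torch.randn(32, 1000), torch.randn(32, 5000)  # fake paired data
loss = neg_correlation(brain_net(brain).squeeze(), video_net(video).squeeze())
loss.backward()  # both networks are trained to produce correlated embeddings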
2022-08-14 13:18:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.5034717321395874, "perplexity": 1320.368870466381}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572033.91/warc/CC-MAIN-20220814113403-20220814143403-00112.warc.gz"}
http://mathoverflow.net/revisions/101062/list
## Answer 1 [made Community Wiki] It is very common in set theory to prove that a particular model or structure is well-founded by mapping it into a fixed well-founded structure. The point is that if $j:\langle M,{\in^M}\rangle\to \langle N,{\in}\rangle$ is $\in$-preserving, then any instance of ill-foundedness in $M$ would carry over to $N$, which has none; and so $M$ is well-founded. This method is often applied in the context of iterated ultrapower constructions.
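Spelled out (a standard argument, implicit in the answer): if $M$ were ill-founded, there would be a sequence $\langle x_n \mid n \in \omega \rangle$ with

$$x_{n+1} \in^M x_n \quad \text{for all } n \in \omega,$$

and since $j$ is $\in$-preserving, this would yield

$$j(x_{n+1}) \in j(x_n) \quad \text{for all } n \in \omega,$$

an infinite descending $\in$-chain in $N$, contradicting the well-foundedness of $N$.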
2013-05-18 21:31:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3629089891910553, "perplexity": 528.638933696701}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382892/warc/CC-MAIN-20130516092622-00011-ip-10-60-113-184.ec2.internal.warc.gz"}
http://www.talentschildrensmission.com/brandon-ratcliff-utlbe/ed25519-key-generation-d60cd8
Note: This example requires Chilkat v9.5.0.83 or greater. When used with a program known as an SSH agent, SSH keys can allow you to connect to a server, or multiple servers, without having to remember or enter your password for each system. In the case where the user's private key passphrase and user password differ, the pam_ssh module will prompt the user to enter the SSH passphrase after the user password has been entered. If you created your key with a different name, or if you are adding an existing key that has a different name, replace id_ed25519 in the command with the name of your private key file. SSH public-key authentication uses asymmetric cryptographic algorithms to generate two key files – one "private" and the other "public". If you require a different encryption algorithm, select the desired option under the Parameters heading before generating the key pair. See the GNOME Keyring article for further details. SSH keys are always generated in pairs, with one known as the private key and the other as the public key. Because Keychain reuses the same ssh-agent process on successive logins, you should not have to enter your passphrase the next time you log in or open a new terminal. The public key is what is placed on the SSH server, and may be shared … To generate an Ed25519 key: ssh-keygen -t ed25519. Use this if you would like your ssh agent to run when you are logged in, regardless of whether X is running. If your key file is ~/.ssh/id_rsa.pub you can simply enter the following command. Step 3: The PuTTY key generator dialog box will appear on the screen. SSH keys can serve as a means of identifying yourself to an SSH server using public-key cryptography and challenge-response authentication. The additional auth authentication rule added to the end of the authentication stack then instructs the pam_ssh module to try to decrypt any private keys found in the ~/.ssh/login-keys.d directory. Benchmarks of the agl/ed25519 Go package: BenchmarkKeyGeneration 30000 47007 ns/op; BenchmarkSigning 30000 48820 ns/op; BenchmarkVerification 10000 119701 ns/op; ok github.com/agl/ed25519 5.775s - making key generation and signing a rough average of 2x faster, and verification 2.5-3x …
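To make the Ed25519 material on this page concrete, here is a small Python sketch (ours; the page itself discusses Chilkat, PuTTY and OpenSSH, none of which appear below). The first half reproduces by hand the "prune the buffer" step described on this page; the second half generates, signs and verifies with the pyca/cryptography package, which we assume is available:

import hashlib, os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- The clamping ("pruning") step described on this page, by hand ---
seed = os.urandom(32)               # the 32-byte private key
h = hashlib.sha512(seed).digest()   # 64-octet buffer; lower 32 bytes -> scalar
a = bytearray(h[:32])
a[0] &= 0b11111000                  # clear the lowest three bits of the first octet
a[31] &= 0b01111111                 # clear the highest bit of the last octet
a[31] |= 0b01000000                 # set the second-highest bit of the last octet

# --- Key generation, signing and verification via a real library ---
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
sig = private_key.sign(b"example message")
public_key.verify(sig, b"example message")  # raises InvalidSignature if tampered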
If the user's private key passphrase and user password are the same, this should succeed and the user will not be prompted to enter the same password twice. If you use the GNOME desktop, the GNOME Keyring tool can be used as an SSH agent. The public key file shares the same name as the private key except that it is appended with a .pub extension. Fooling Proof-of-Storage Protocols. Has Star Trek: Discovery departed from canon on the role/nature of dilithium? BSD-3-Clause. Add your SSH private key to the ssh-agent. What should I do? See KeePass#Plugin installation in KeePass or install the keepass-plugin-keeagent package. As security features, Ed25519 does not use branch operations and array indexing steps that depend on secret data, so as to defeat many side channel attacks. A private key is a guarded secret and as such it is advisable to store it on disk in an encrypted form. OpenSSH 6.5 added support for Ed25519 as a public key type. 256 is the only valid size for the Ed25519. Prune the buffer: The lowest three bits of the first octet are Place the public key on RHEL 8 server. counters attacks that force the use of a weak key, Podcast 300: Welcome to 2021 with Joel Spolsky. Hash the 32-byte private key using SHA-512, storing the digest in a 64-octet large buffer, denoted h. Only the lower 32 bytes are used for generating the public key. https://www.unixtutorial.org/how-to-generate-ed25519-ssh-key If your username differs on remote machine, be sure to prepend the username followed by @ to the server name. To enable single sign-on behavior at the tty login prompt, install the unofficial pam_sshAUR package. It is using an elliptic curve signature scheme, which offers better security than ECDSA and DSA. Ed25519 ssh keys work on modern systems (OpenSSH 6.7+) and are much shorter than RSA keys. Keychain is a program designed to help you easily manage your SSH keys with minimal user interaction. You have to specify the full path everywhere. the following rfc describes the key-pair generation mechanism for Ed25519; the first two steps are as follows: Hash the 32-byte private key using SHA-512, storing the digest in Also note that the name of your public key may differ from the example given. README for sigtool What is this? ssh-agent is the default agent included with OpenSSH. Help for configuration can be found upstream. So now in your .xinitrc, before calling your window manager, one just needs to export the SSH_ASKPASS environment variable: and your X resources will contain something like: Doing it this way works well with the above method on using ssh-agent as a wrapper program. 90,985 downloads per month Used in 500 crates (109 directly). The pam_ssh project exists to provide a Pluggable Authentication Module (PAM) for SSH private keys. This section provides an overview of a number of different solutions which can be adapted to meet your specific needs. You are advised to accept the default name and location in order for later code examples in this article to work properly. (PowerShell) Generate ed25519 Key and Save to PuTTY Format. export SSH_AUTH_SOCK="$XDG_RUNTIME_DIR"'/keeagent.socket'. Use MathJax to format equations. Furthermore SSH key authentication can be more convenient than the more traditional password authentication. E.g. Example. Both of those concerns are best summarized in libssh curve25519 introduction. 
For X25519, which operates on an equivalent curve Curve25519, the private key is obtained by randomly generating 32 bytes and the first step of using that key is to apply the bit pruning step (clear bits 0, 1, 2 and 255 and set bit 254). ed25519/7C406DB5 is the primary key, and cv25519/DF7B31B1 is encryption subkey. Versions of pam_ssh prior to version 2.0 do not support SSH keys employing the newer option of ECDSA (elliptic curve) cryptography. A notable feature of Keychain is that it can maintain a single ssh-agent process across multiple login sessions. If the ssh server is listening on a port other than default of 22, be sure to include it within the host argument. "[5], On the other hand, the latest iteration of the NSA Fact Sheet Suite B Cryptography[dead link 2020-04-02 ⓘ] suggests a minimum 3072-bit modulus for RSA while "[preparing] for the upcoming quantum resistant algorithm transition".[6]. The above example copies the public key (id_ecdsa.pub) to your home directory on the remote server via scp. Minimum key size is 1024 bits, default is 3072 (see ssh-keygen(1)) and maximum is 16384. Changing the private key's passphrase without changing the key, Copying the public key to the remote server, Using a different password to unlock the SSH key, the same level of security with smaller keys, deprecated and disabled support for DSA keys, difficulty to properly implement the standard, Trusted Platform Module#Securing SSH Keys, GNOME/Keyring#Disable keyring daemon components, this ssh-agent tutorial by UC Berkeley Labs, the below notes on using x11-ssh-askpass with ssh-add, https://github.com/sigmavirus24/x11-ssh-askpass, KDE Wallet#Using the KDE Wallet to store ssh key passphrases, supports being used as an SSH agent by default, https://wiki.archlinux.org/index.php?title=SSH_keys&oldid=647769, Pages or sections flagged with Template:Expansion, GNU Free Documentation License 1.3 or later, to disable the graphical prompt and always enter your passphrase on the terminal, use the, if you do not want to be immediately prompted for unlocking the keys but rather wait until they are needed, use the. Only you, the holder of the private key, will be able to correctly understand the challenge and produce the proper response. The order in which these lines appear is significiant and can affect login behavior. This type of keys may be used for user and host keys. Once you have been authenticated, the pam_ssh module spawns ssh-agent to store your decrypted private key for the duration of the session. Packages providing support for PAM typically place a default configuration file in the /etc/pam.d/ directory. Save your private and public key files, preferably to a thumb drive. Can every continuous function between topological manifolds be turned into a differentiable map? By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy. EdDSA Key Generation Ed25519 and Ed448 use small private keys (32 or 57 bytes respectively), small public keys (32 or 57 bytes) and small signatures (64 or 114 bytes) with high security level at the same time (128-bit or 224-bit respectively). It is a shell script that uses pam_exec. ssh-keygen defaults to RSA therefore there is no need to specify it with the -t option. Then enable or start the service with the --user flag. By using our site, you acknowledge that you have read and understand our Cookie Policy, Privacy Policy, and our Terms of Service. 
It can sign and verify very large files - it prehashes the files with SHA-512 and then signs the SHA-512 checksum. An SSH agent is a program which caches your decrypted private keys and provides them to SSH client programs on your behalf. Move the cursor around in the gray box to fill up the green bar. It is possible — although controversial [8] [9] — to use the same SSH key pair for multiple hosts. Do not forget to include the : at the end of the server address. are there security relevant properties related to that? 2. 1. When the encrypted private key is required, a passphrase must first be entered in order to decrypt it. When ssh-agent is run, it forks to background and prints necessary environment variables. KeeAgent is a plugin for KeePass that allows SSH keys stored in a KeePass database to be used for SSH authentication by other programs. At the same time, it also has good performance. Some vendors also disable the required implementations due to potential patent issues. If someone acquires your private key, they can log in as you to any SSH server you have access to. As long as you hold the private key, which is typically stored in the ~/.ssh/ directory, your SSH client should be able to reply with the appropriate response to the server. Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. ed25519-dalek . Begin by copying the public key to the remote server. Clearing bit 255 ensures that the key is in the range$0..2^{255}-1$where the operations are defined. If you want to unlock the SSH keys or not depending on whether you use your key's passphrase or the (different!) Thanks for contributing an answer to Cryptography Stack Exchange! Creating an ed25519 signature on a message is simple. The simplest way to generate a key pair is to run … 97KB 848 lines. ssh-keygen -t ecdsa -b 521 -C "ECDSA 521 bit Keys" Generate an ed25519 SSH keypair- this is a new algorithm added in OpenSSH. See x11-ssh-askpass(1) for full details. In the PuTTY Key Generator window, click Generate. Although the political concerns are still subject to debate, there is a clear consensus that #Ed25519 is technically superior and should therefore be preferred. Viewed 681 times 3. You may also use the --confhost option to inform keychain to look in ~/.ssh/config for IdentityFile settings defined for particular hosts, and use these paths to locate keys. You can also add an optional comment field to the public key with the -C switch, to more easily identify it in places such as ~/.ssh/known_hosts, ~/.ssh/authorized_keys and ssh-add -L output. We invoke gpg frontend with --edit-key and the key … Works with native SSH agent on Linux/Mac and with PuTTY on Windows. This challenge-response phase happens behind the scenes and is invisible to the user. If the originally chosen SSH key passphrase is undesirable or must be changed, one can use the ssh-keygen command to change the passphrase without changing the actual key. login password, you can modify /etc/pam.d/system-auth to. Note: This example requires Chilkat v9.5.0.83 or greater. The Elliptic Curve Digital Signature Algorithm (ECDSA) was introduced as the preferred algorithm for authentication in OpenSSH 5.7. It is already implemented in many applications and libraries and is the default key exchange algorithm (which is different from key signature) in OpenSSH. 
cleared, the highest bit of the last octet is cleared, and the When prompted for a passphrase, choose something that will be hard to guess if you have the security of your private key in mind. The try_first_pass option is passed to the pam_ssh module, instructing it to first try to decrypt any SSH private keys using the previously entered user password. This challenge is an encrypted message and it must be met with the appropriate response before the server will grant you access. You can also use the same passphrase like any of your old SSH keys. openssl rsa -pubout -in private_key.pem -out public_key.pem Extracting … If you are using earlier versions of pam_ssh you must use either RSA or DSA keys. The lifetime of the unlocked keys is set to 1 hour. In the above example, the first line invokes keychain and passes the name and location of your private key. Why are the lower 3 bits of curve25519/ed25519 secret keys cleared during creation? When using Curve25519, why does the private key always have a fixed bit at 2^254? (PowerShell) Generate an Ed25519 Key Pair. To do so, we need a cryptographically secure pseudorandom number generator (CSPRNG). FIDO devices are supported by the public key types “ecdsa-sk” and “ed25519-sk", along with corresponding certificate types. By default keychain will look for key pairs in the ~/.ssh/ directory, but absolute path can be used for keys in non-standard location. to guard against cutting-edge or unknown attacks and more sophisticated attackers), simply specify the -b option with a higher bit value than the default: Be aware though that there are diminishing returns in using longer keys. It doesn't matter which hash is used in the first step. By contrast, the public key can be shared freely with any SSH server to which you wish to connect. Are the first 4 bytes of a Ed25519 public key random? This is a little annoying, not only when declaring the SSH_ASKPASS variable, but also when theming. This can also be used to change the password encoding format to the new standard. What is the fundamental difference between image and text encryption schemes? A cryptographic token has the additional advantage that it is not bound to a single computer; it can easily be removed from the computer and carried around to be used on other computers.$ ssh-add ~/.ssh/id_ed25519 Add the SSH key to your GitHub account. An SSH key pair can be generated by running the ssh-keygen command, defaulting to 3072-bit RSA (and SHA256) which the ssh-keygen(1) man page says is "generally considered sufficient" and should be compatible with virtually all clients and servers: The randomart image was introduced in OpenSSH 5.1 as an easier means of visually identifying the key fingerprint. Once your private key has been successfully added to the agent you will be able to make SSH connections without having to enter your passphrase. If using Bash: multiple keys can also be used with OpenSSH ed25519 key generation on... Rsa or DSA keys keys work on the token instead of x11-ssh-askpass is.... To 1 hour like any of your private key configuration file in the example below with user. Is required, a passphrase if you are advised to accept the default name and location of private! Bits in length and signatures are twice that size named according to the agent 's cache disk an. File in the ~/.ssh/ directory, but also when theming fault attacks individual of! Which are not mentioned in the PuTTY key generator dialog box will appear longer. 
An SSH key pair serves as a means of identifying yourself to an SSH server using public-key cryptography and challenge-response authentication; it can be more convenient and, used properly, more secure than traditional password authentication. This material assumes you already have a basic understanding of SSH; finer cryptographic details (which can easily be researched elsewhere) are only summarized.

Key pairs are kept in the ~/.ssh/ directory and named according to the key type, for example id_rsa for an RSA private key, with the matching public key carrying a .pub extension. The private key must be kept secret and protected under all circumstances, while the public key may be disclosed freely: the holder of the private key can answer the challenge the server sends, and once the server receives the appropriate response it will grant access. Keys are created by issuing the ssh-keygen command; you can add a comment with your username or email address and set a passphrase, which stores the private key in encrypted form. A key generated with a passphrase will generally be stronger and harder to crack should it fall into the wrong hands. (With PuTTY, if you require a different algorithm, select it under the Parameters heading before generating the key; pressing the Generate button then starts key generation.) Keys must come from a cryptographically secure pseudorandom number generator (CSPRNG), and the same key pair can be reused for multiple hosts. RSA offers the best compatibility of all algorithms but requires larger keys (up to a maximum of 16384 bits). The elliptic curve digital signature algorithm (ECDSA) was introduced as an alternative, but some vendors disable the required implementations due to potential patent issues. Ed25519 keys work on modern systems (OpenSSH 6.7+), are much shorter than RSA keys while still providing sufficient security, and have good performance; the "-o -a 100" option is implied with Ed25519 key generation. To install the public key on a server, copy it over (for example with ssh-copy-id); if the server is listening on a port other than the default of 22, be sure to specify the port before the server address.

Because the passphrase must first be entered in order to decrypt the private key before authentication can proceed, an SSH agent is commonly used: it is a program which caches your decrypted private key for the duration of your session, which means that you only need to provide your passphrase once each time the machine is booted. When started, ssh-agent prints the necessary environment variables, so run it through the eval command. While an agent can be convenient, you should be aware of some of its limitations, which are not mentioned in every guide. Several front-ends exist. keychain can maintain a single ssh-agent process across multiple login sessions; after adding its line to your shell configuration file, simply open a new terminal emulator or log out and back in. x11-ssh-askpass provides a passphrase dialog under X whose appearance can be customized by setting its associated X resources; place the relevant commands before the line which invokes your window manager, and note that other passphrase dialog programs can be used instead of x11-ssh-askpass. The GNOME Keyring tool can serve as an SSH agent, and the KeeAgent plugin for KeePass allows SSH keys stored in a KeePass database to be used for SSH authentication by other programs with minimal user interaction. You can also use the systemd/User facilities to start the agent, running the service with the --user flag. The pam_ssh project exists so that the SSH passphrase can be entered at login in place of, or in addition to, the system password, meaning there is no need to enter the user password separately; to use pam_ssh you must use either RSA or DSA keys, install the unofficial pam_sshAUR package, and add the appropriate rules to the /etc/pam.d/login configuration file, where the order in which these lines appear is significant and can affect login. Development in the pam_ssh project is infrequent.

Alternatively, the private key can be stored on a security token like a smart card or a USB token. The key is then stored securely on the token, and the cryptographic operations (key creation, signing, encryption and decryption) are implemented on the token in a way that does not leak the private key, so it never leaves the device. GnuPG users can instead add an authentication subkey to their key and use it for SSH (a cv25519 subkey, cv25519/DF7B31B1 in the example given, is for encryption, not authentication).

On the cryptography itself: the relevant RFC describes the key-pair generation mechanism for Ed25519 and distinguishes PureEdDSA from HashEdDSA (ed25519ph); also see "High-speed high-security signatures" (20110926). Ed25519 is unique among signature schemes, key generation for Ed25519 vs X25519 differs, and the underlying curve arithmetic has been reported as 20x to 30x faster than Certicom's secp256r1 and secp256k1 curves. Recurring questions include whether Ed25519 and Ed448 scalars need pruning/trimming/clamping and why an Ed25519 secret scalar always has a fixed bit at 2^254; for side-channel hardening, see "Protecting Ed448 against DPA and fault attacks". One signing tool mentioned works much like OpenBSD signify, except written in Golang; for very large files it prehashes the files with SHA-512 and then signs the SHA-512 checksum.
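As a hedged illustration of the workflow described above (the comment string, user, host, key path, and port 2222 are placeholder values, not from the source):

```
# Generate an Ed25519 key pair; -a 100 raises the KDF rounds protecting the
# passphrase, and the new-format encoding (-o) is automatic for this key type.
ssh-keygen -t ed25519 -a 100 -C "user@example.com"

# Start an agent in the current shell and cache the decrypted private key,
# so the passphrase is entered once per session; eval sets the environment
# variables that ssh-agent prints.
eval "$(ssh-agent)"
ssh-add ~/.ssh/id_ed25519

# Copy the public key to a server; -p is only needed when the server listens
# on a port other than the default of 22.
ssh-copy-id -p 2222 user@example.com
```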
2021-07-29 00:18:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1963021606206894, "perplexity": 3883.8085412444793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153803.69/warc/CC-MAIN-20210728220634-20210729010634-00281.warc.gz"}
https://gmatclub.com/forum/if-x-is-an-integer-is-x-gt-1-1-1-2x-1-x-67950.html
# If x is an integer, is $$|x| \gt 1$$ ?

Manager (joined 22 May 2007; posted 26 Jul 2008, 19:24):

If x is an integer, is $$|x| \gt 1$$ ?

1. (1 - 2x)(1 + x) < 0
2. (1 - x)(1 + 2x) < 0

Director (joined 23 Sep 2007; posted 26 Jul 2008, 20:32):

C.

Statement 1 rearranges to $$1 \lt 2x^2 + x$$: x = -1 does not satisfy it, but x = 1 does (giving |x| = 1), and so does x = 2 (giving |x| > 1), so it is insufficient alone.

Statement 2 rearranges to $$1 \lt 2x^2 - x$$: x = 1 does not satisfy it, but x = -1 does (giving |x| = 1), and so does x = -2, so it is likewise insufficient alone.

Also, x cannot be 0 under either statement. Together, x cannot be -1, 0, or 1, so it must be one of the remaining integers, all of which satisfy |x| > 1.

Senior Manager (joined 06 Apr 2008; posted 26 Jul 2008, 21:38):

For statement 1: if x = 0 the inequality is false, and if x = 1 it is true, so |x| > 1 is not determined. For statement 2: if x = 0 the inequality is false, and if x = -1 it is true. Combining both statements, x cannot equal -1, 0, or 1; both inequalities hold for x > 1 or x < -1, so together they are sufficient. Answer: C.
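A quick brute-force check of the case analysis above (not from the thread; the finite range is arbitrary, so this illustrates rather than proves the claim):

```
candidates = range(-100, 101)

# Integers in the window satisfying each statement.
s1 = [x for x in candidates if (1 - 2*x) * (1 + x) < 0]   # statement 1
s2 = [x for x in candidates if (1 - x) * (1 + 2*x) < 0]   # statement 2
both = sorted(set(s1) & set(s2))

print(all(abs(x) > 1 for x in s1))    # False: x = 1 satisfies statement 1 alone
print(all(abs(x) > 1 for x in s2))    # False: x = -1 satisfies statement 2 alone
print(all(abs(x) > 1 for x in both))  # True: combined, |x| > 1 always, answer C
```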
2018-02-24 09:39:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5137226581573486, "perplexity": 3189.6347942556126}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815544.79/warc/CC-MAIN-20180224092906-20180224112906-00156.warc.gz"}
http://www.math.uchicago.edu/~lawler/m312f07.html
### Text

Rudin, Real and Complex Analysis; Notes on Probability.

This is the first quarter of a three-quarter sequence on real and complex analysis intended primarily for first-year graduate students in the department of mathematics (but open to all students with the appropriate background and mathematical maturity). The plan is to have lectures on Mondays and Wednesdays from the first half of Rudin's book and for the Friday lectures to be on probability. There will be weekly homework exercises due on Wednesdays; each assignment will cover material from the lectures of the previous week. There will also be a large problem set at the end of the quarter that will serve as a final exam. Students may work together on homework exercises EXCEPT FOR THE FINAL PROBLEM SET, but must write up their work separately.

Problem Set 1 (due Oct 3)
Problem Set 2 (due Oct 10)
Problem Set 3 (due Oct 17)
Problem Set 4 (due Oct 24) (CORRECTION: In part 3 of the long extra exercise (which is part 4 if you downloaded this a few days ago), we want to show that there is a __G-measurable__ E(X|G) satisfying (1). Without that condition of G-measurability, the problem is really trivial! This then defines E(X|G) for all integrable X. In parts 5 and 6, assume that X is integrable. In part 4, XY must be integrable if X and Y are square integrable.)
Problem Set 5 (due Oct 31)
Problem Set 6 (due Nov 7)
Problem Set 7 (due Nov 14) (revised Nov 9: problem 8 from Rudin removed and a new extra problem added)
Problem Set 8 (due Nov 21)

FINAL PROBLEM SET. REVISED: Nov 26, 12:30 pm. (NOTE: another typo was noticed Monday, Nov 26, 12:30 pm. In part 3 of the last problem the \leq should be a \geq. With \leq the statement is false.) This is due at 11:30 am on Friday, November 30; I will be in the usual classroom to collect the papers. You may not discuss this with other people, but it is open book(s).

Corrections: Exercise 5 had a number of errors in the original version: c_2 should be c_1, and throughout the problem log n should be log log n. The corrected version is above.
2014-10-21 15:08:13
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.800042986869812, "perplexity": 1552.2288833528785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507444493.40/warc/CC-MAIN-20141017005724-00272-ip-10-16-133-185.ec2.internal.warc.gz"}
http://clay6.com/qa/50522/find-the-coordinates-of-the-foci-the-vertices-the-length-of-major-axis-the-
# Find the coordinates of the foci, the vertices, the length of the major axis, the length of the minor axis, the eccentricity and the length of the latus rectum of the ellipse $\frac{x^2}{25} + \frac{y^2}{100} = 1$.

(A) Major axis along the x-axis; foci $(\pm 5\sqrt{3}, 0)$; vertices $(\pm 10, 0)$; length of major axis 20; length of minor axis 10; eccentricity $e = \sqrt{3}/2$; length of latus rectum 5
(B) Major axis along the y-axis; foci $(0, \pm 5\sqrt{3})$; vertices $(0, \pm 10)$; length of major axis 20; length of minor axis 10; eccentricity $e = \sqrt{3}/2$; length of latus rectum 5
(C) Major axis along the y-axis; foci $(0, \pm \sqrt{3})$; vertices $(0, \pm 5)$; length of major axis 5; length of minor axis 10; eccentricity $e = \sqrt{3}/2$; length of latus rectum 5
(D) Major axis along the x-axis; foci $(\pm \sqrt{3}, 0)$; vertices $(\pm 5, 0)$; length of major axis 10; length of minor axis 5; eccentricity $e = 2\sqrt{3}$; length of latus rectum 5

Toolbox:
• If $a > b$, the ellipse $\frac{x^2}{b^2} + \frac{y^2}{a^2} = 1$ has its major axis along the y-axis and its minor axis along the x-axis.
• $c = \sqrt{a^2 - b^2}$, where $(0, \pm c)$ are the foci of the ellipse.
• Coordinates of the vertices are $(0, \pm a)$.
• Length of the latus rectum $= \frac{2b^2}{a}$.
• Eccentricity $e = \frac{c}{a}$.
• Length of the major axis is $2a$; length of the minor axis is $2b$.

Step 1: The given equation is $\frac{x^2}{25} + \frac{y^2}{100} = 1$. Comparing this with the equation of the ellipse $\frac{x^2}{b^2} + \frac{y^2}{a^2} = 1$, we get $a^2 = 100$ and $b^2 = 25$, so the major axis is along the y-axis. Therefore $c = \sqrt{a^2 - b^2} = \sqrt{100 - 25} = \sqrt{75} = 5\sqrt{3}$.

Step 2: The coordinates of the foci are $(0, \pm 5\sqrt{3})$ and the coordinates of the vertices are $(0, \pm 10)$. Length of the major axis $= 2a = 20$; length of the minor axis $= 2b = 10$.

Step 3: Eccentricity $e = \frac{c}{a} = \frac{5\sqrt{3}}{10} = \frac{\sqrt{3}}{2}$, and length of the latus rectum $= \frac{2b^2}{a} = \frac{2 \times 25}{10} = 5$. Hence option (B) is correct.
2017-03-25 13:39:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8825865983963013, "perplexity": 585.9530536330794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188926.39/warc/CC-MAIN-20170322212948-00444-ip-10-233-31-227.ec2.internal.warc.gz"}
https://proxies123.com/tag/sets/
## Fit: How can I fit two sets of phase-shifted data to an envelope?

My situation is similar to having a Cos and a Sin function combined with an envelope profile, but the envelope is offset horizontally by some dt. It would be something like a·Cos(t + dt) + b·Sin(t + dt) = Envelope(t), and I want to find the coefficients and the time shift {a, b, dt}. I have two datasets (lists) for the two inputs and one dataset for the envelope. How can I fit this? Using NonlinearModelFit I cannot fit that many functions at once.

## Time complexity: multiset variant of the subset sum problem

I have been working on the time analysis for a solver I designed for the subset sum problem (multiset variant), and determined that its time complexity depends on the count of repeated elements in the input. The time complexity is

$$O\left(\left(2^{1/2} \cdot 0.75^{\frac{d/2}{n}}\right)^{n}\right) = O\left(2^{n/2} \cdot 0.75^{d/2}\right),$$

where $$d$$ is the number of duplicates in the input instance (assuming both $$n$$ and $$d$$ are even). For example, when $$d = n/2$$:

$$O\left(\left(2^{1/2} \cdot 0.75^{\frac{n/4}{n}}\right)^{n}\right) \approx O\left((1.4142 \cdot 0.93)^{n}\right) \approx O(1.316^{n}).$$

In addition to asking for comments, I am also looking for other known algorithms with similar behavior to compare approaches against (I have searched for them but found nothing so far …).

## graphs: finding a partition with the maximum number of edges between the sets

Given a graph (say, in the form of an adjacency list), is there an algorithm to find a partition of the vertices such that the number of edges between the two sets of the partition is the maximum possible? For example, for the following edge set of a graph with vertex set $$\{1, 2, 3, 4, 5, 6\}$$: $$\{(1, 2), (2, 3), (3, 1), (4, 5), (5, 6), (6, 4)\}$$, a possible "maximum" partition is $$\{\{1, 3, 4, 6\}, \{2, 5\}\}$$, with $$4$$ edges between the sets $$\{1, 3, 4, 6\}$$ and $$\{2, 5\}$$.

## erd – Understanding ternary many-to-many relationship sets

I am new to ER model diagrams, and I am a little confused when it comes to interpreting ternary relationship sets like this one: does it mean that each instance of the Party relationship will have exactly one Fighter, one Wizard and one Healer? If we were simply dealing with binary relationship sets without key or totality constraints, each instance of the relationship would be linked to one entity from each entity set. But in the case above, isn't it possible to have no instance from a particular entity set, e.g. a Party instance with only a Fighter and a Healer (and no Wizard)?

## The Borel class of a countable union of $$G_\delta$$-sets

Question. Suppose a separable metrizable space $$X$$ is the countable union $$X = \bigcup_{n \in \omega} X_n$$ of pairwise disjoint $$G_\delta$$-sets $$X_n$$ in $$X$$ such that each $$X_n$$ is an absolute $$F_{\sigma\delta}$$-set. Is $$X$$ an absolute $$F_{\sigma\delta}$$-set?

## nt.number theory – Eccentricity in the number of representations for sets too large to be Sidon sets

Let $$A = \{a_1 < a_2 < \cdots\}$$ be a set of integers. Let $$r_A(n) = \#\{(a_i, a_j) : a_i + a_j = n\}$$ be the number of representations of $$n$$ as a sum of two elements of $$A$$. In the typical language, $$A$$ is a Sidon set (or $$B_2$$ set) if $$r_A(n) \le 2$$ for all $$n$$. It is known that the maximum size of a Sidon set that is a subset of $$\{1, 2, \dots, N\}$$ is $$\sqrt{N}(1 + o(1))$$.

My question, in general terms, is whether we can measure how often (and by how much) $$r_A(n)$$ must exceed $$2$$ if $$A$$ contains at least $$(1 + \epsilon)\sqrt{N}$$ elements, for some $$\epsilon > 0$$. More specifically, let $$E(A)$$ denote the "eccentricity" of $$A$$, given by

$$E(A) = \sum_n \max\{r_A(n) - 2, 0\}.$$

If $$|A| > (1 + \epsilon)\sqrt{N}$$ for some $$\epsilon > 0$$, must there be some $$\delta > 0$$ such that $$E(A) > \delta N$$?

My impetus for asking this question comes from my attempts to understand the binary digits of $$\sqrt{2}$$. It is currently known that the number of $$1$$s among the first $$N$$ binary digits of $$\sqrt{2}$$ is $$\ge \sqrt{2N}(1 + o(1))$$, and for some infinite sequence of $$N$$ this can be improved to $$\ge \sqrt{8N/\pi}(1 + o(1))$$. However, this bound comes in part from treating the set of indices of $$1$$s as if it were a Sidon set, which it is too big to really be. If one could show that $$E(A) > \delta N$$, then I think a stronger lower bound could be proved.

## set theory: find minimal disjoint sets from a collection of overlapping sets

I have several sets that have overlapping elements.

```
e1, e2, e3, e4
e7,e9,e10
e1,e4
e2,e7
e3,e9
e10,e11,e12
e11,e12
```

I want to split the sets above so that they do not overlap. The output I expect is

```
e1, e4
e2
e3
e7
e9
e10
e11, e12
```

What algorithm or function can I write to do this? (A sketch is given after this entry.)

## plotter – Contour 3D Plot of four data sets

I have four different functions that can generate four different data sets, and I want to generate a single 3D contour of the combined result. I have tried many things and basically get nothing that looks remotely like what I want. Here a, b, c, d are constants, and x and z are the axes. Here is my code, which is quite basic:

```
a = 2/Sqrt[3]*Pi;
b = 3/Sqrt[3]*Pi;
c = 2/Sqrt[2]*Pi;
d = 3/Sqrt[2]*Pi;
x1 = -(x*a);
x2 = -(z*b);
x3 = x*c;
x4 = -(z*d);
data1 = Table[{x1, x2}, {x, -10, 0, 1}, {z, -10, 0, 1}];
data2 = Table[{x3, x4}, {x, 0, 10, 1}, {z, 0, 10, 1}];
Show[ListPlot3D[data1], ListPlot3D[data2, PlotStyle -> Red]]
```

## plotting: PlotRange manually sets only one side of the plot range

You can use `PlotRange -> {Automatic, {1, All}}`. An example:

```
SeedRandom[1]
data = RandomReal[100, {100, 2}];
data = Join[data, {{500, 50}, {50, 500}}]; (* add two outlier points outside the main cluster *)
Row[{
  ListPlot[data, ImageSize -> 300, PlotLabel -> "PlotRange -> Automatic"],
  ListPlot[data, ImageSize -> 300, PlotRange -> All, PlotLabel -> "PlotRange -> All"],
  ListPlot[data, ImageSize -> 300, PlotRange -> {Automatic, {70, All}},
    PlotLabel -> "PlotRange -> {Automatic, {70, All}}"]
  }, Spacer[20]]
```

This works in both directions. You can use `PlotRange -> {Automatic, {All, 50}}` so that the vertical plot range runs from the minimum of the data up to 50:

```
ListPlot[data, ImageSize -> 300, PlotRange -> {Automatic, {All, 50}},
  PlotLabel -> "PlotRange -> {Automatic, {All, 50}}"]
```

## You cannot create new documents within modern document sets in SP Online
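For the set-theory question above ("find minimal disjoint sets"), here is a minimal sketch of one standard approach, grouping elements by the exact collection of input sets they belong to; the function name and the literal input are illustrative, not from the original post:

```
from collections import defaultdict

def disjoint_blocks(sets):
    """Group elements by the exact set of input sets containing them, so the
    returned blocks are disjoint and each input set is a union of blocks."""
    membership = defaultdict(set)        # element -> indices of containing sets
    for i, s in enumerate(sets):
        for e in s:
            membership[e].add(i)
    blocks = defaultdict(list)           # membership signature -> elements
    for e, sig in membership.items():
        blocks[frozenset(sig)].append(e)
    return [sorted(b) for b in blocks.values()]

sets = [{"e1", "e2", "e3", "e4"}, {"e7", "e9", "e10"}, {"e1", "e4"},
        {"e2", "e7"}, {"e3", "e9"}, {"e10", "e11", "e12"}, {"e11", "e12"}]
for block in disjoint_blocks(sets):
    print(",".join(block))
```

This reproduces the expected output in the question: elements such as e1 and e4 always co-occur, so they stay together, while e2, e3, e7, e9 and e10 each end up alone.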
2019-09-19 08:21:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 52, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.379749596118927, "perplexity": 1953.2544243071623}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573465.18/warc/CC-MAIN-20190919081032-20190919103032-00179.warc.gz"}
http://clay6.com/qa/60139/a-conductor-of-length-2-m-carrying-current-of-2-a-is-held-parallel-to-an-in
# A conductor of length $2\;m$ carrying a current of $2\;A$ is held parallel to an infinitely long conductor carrying a current of $10\;A$ at a distance of $100\;mm$. Find the force on the small conductor. $\begin {array} {1 1} 8 \times 10^5 \;N \\ 8 \times 10^4\;N \\ 8 \times 10^{-5}\;N \\ 8 \times 10^{-4}\;N \end {array}$
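The source page gives no worked solution; assuming the standard result for the force on a length $L$ of wire parallel to a long straight current, $F = \frac{\mu_0 I_1 I_2 L}{2\pi d}$, the calculation runs:

$$F = \frac{\mu_0 I_1 I_2}{2\pi d}\, L = \left(2 \times 10^{-7}\,\tfrac{\text{N}}{\text{A}^2}\right) \times \frac{10 \times 2}{0.1} \times 2 = 8 \times 10^{-5}\;\text{N},$$

which matches the third option, $8 \times 10^{-5}\;N$.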
2020-09-19 20:22:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8851620554924011, "perplexity": 222.02048153443212}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400192783.34/warc/CC-MAIN-20200919173334-20200919203334-00746.warc.gz"}